\section{Introduction.}
A kicked system is a Hamiltonian system that
is periodically driven by pulses of infinitesimal duration.
More than thirty years after the invention of the paradigmatic model of such systems,
namely the Kicked Rotor (KR) \cite{kr},
kicked quantum dynamics is still the focus of active research, for two reasons.
On the one hand, it has given birth to an ever increasing list of variants of the basic original prototypes, which have provided formally simple models for the investigation of quantum-classical correspondence and of some general properties of quantum transport. These include dynamical localization \cite{f3}, anomalous diffusion \cite{khar2,khar1,khar3,gubo}, decay from stable phase-space islands \cite{bkm,bkls,SFGR05},
electronic conduction in mesoscopic devices \cite{fbkkrhb,okg,bgr},
nondispersive wave packet dynamics \cite{dlryb}, effects of dissipation on quantum dynamics
\cite{gabriel}, and lately directed transport \cite{TM1,mont,dasum}.
On the other hand, renewed interest on the physical side has been stimulated by experimental
realizations \cite{raizen1b,c,KRexp3,phill}, which are now possible, under
excellent control conditions, thanks to the science and technology of cold and ultra-cold atoms. Unexpected advances of the theory have been prompted
by such experiments. For instance, the so-called Quantum Accelerator Modes (QAM) were discovered in experiments with cold atoms in periodically pulsed optical lattices
\cite{Ox991,Ox992,Ox995,Ox994}. Their underlying theoretical model is
a variant of the Kicked Rotor model, the difference being that, in between kicks,
atoms are subject to gravity.
When the kicking period is close to a half-integer multiple of the Talbot time \cite{BB}, which is a natural time scale for the system, a fraction of atoms steadily accelerates away from the
bulk of the atomic cloud, at a rate and in a direction which depend on various parameter values.
Though QAMs are a somewhat particular phenomenon, their theory
\cite{FGR022,FGR021,GRF05,Rebk} is a vast repertory of classic items of classical and quantum mechanics. QAMs are rooted in subtle aspects of Bloch theory, and are related to the Wannier-Stark resonances of solid-state physics \cite{SFGR05}. They are a purely quantal effect, and yet they are explained in terms of trajectories of certain classical dynamical systems, by means of a ``pseudo-quasi classical" approximation, where the role of the Planck constant is played by a parameter $\epsilon$, which measures the detuning of the kicking period from a half-integer multiple of the Talbot time. This theory hinges on the existence of a ``pseudoclassical limit" for $\epsilon\to 0$. That is, for kicking periods close to half-integer multiples of the Talbot time, the quantum dynamics may formally be obtained by quantizing a classical dynamical system, using $\epsilon$ as the Planck constant. This system is totally unrelated to the classical system that is obtained in the proper classical limit
$\hbar\to 0$.\\
Experimental and theoretical investigations of QAMs are currently focused on novel
research lines: the observation of QAMs in a Bose-Einstein
condensate \cite{S,rafg,hsgc08}, which allows precise control of the initial momentum
distribution, and the analysis of QAMs for special values of the
physical parameters \cite{lemarie,shgc08}, in particular
when the kicking period is close to a rational multiple
of the Talbot time \cite{GS,GR08}.
Theoretical aspects concerning the latter problem are considered in the
present paper.
\\
\begin{figure*}
\includegraphics[width=16cm,angle=0]{figl-pac-tot-ae.eps}
\caption{ Momentum distributions, in the time dependent gauge, after
$t=100$ kicks, for different values of the kicking period near the resonance
$\tau_{res} =3\pi$. Red color corresponds to highest probability.
The vertical dashed line corresponds to the resonant value.
White full lines show the theoretical curves (\ref{acc}), with:
(left) $q=2, (r,s)=(1,1)$ and (right) $q=7, (r,s)=(4,1)$ close to the
higher-order resonance $\tau ^{\rm res}/2\pi =p/q=11/7$. The initial quantum distribution is a Gaussian
wave packet, reproducing the experimental conditions.
The other parameters are: $k=1$ and $g=0.0386$. All our numerical simulations refer to the
choice $V(\theta)=\cos \theta$.}
\label{pac-tot}
\end{figure*}
QAMs are connected with an important feature of the KR model, namely, the KR resonances
\cite{IzShep1}, which occur whenever the kicking period is rationally related to the internal frequencies of the free rotor. The dynamics of the rotor at a quantum resonance is invariant under momentum
translations by multiples of a certain integer. The least positive integer $q$ such
that translation invariance in momentum space
holds is the ``order" of the resonance (sect. \ref{back}).
The half-Talbot time in atom-optics experiments is
the period of the KR resonances of order $q=1$ (i.e. ``principal" resonances),
so the originally observed QAMs are related to KR
resonances of order $1$.
In this paper we consider quantum motion in the vicinity of a higher order KR resonance
($q >1$) in the presence of gravity.
Numerical
(see fig.\ref{pac-tot}) and heuristic indications \cite{GS} suggest that higher-order KR resonances,
too, may give rise to QAMs. This
has been substantiated by a theory \cite{GR08} based on a nontrivial reformulation of the original
pseudo-classical approximation. It has been remarked that, in the case of higher-order resonances,
no pseudoclassical limit exists; a similarity to the quasi-classical analysis
of particles with spin was noted, but not explored.
About the latter general problem \cite{LF91,LF92}, it is known that, although
no single well-defined classical limit exists, and so no global quasi-classical phase-space approximation in terms of a unique classical Hamiltonian flow is possible,
{\it local} quasi-classical approximations are nevertheless still possible, as
provided by bundles of trajectories which
belong to a number of different Hamiltonian systems.
In this paper we develop a formulation of the
problem of QAMs near higher-order resonances in spinor terms.
The quantum evolution at exact resonance is described by a multi-component wave function, that is,
by a spinor of rank $q$, where $q$ is the order of the resonance \cite{IzShep1,CaGua}, and is
generated by a time-independent spinor Hamiltonian
\cite{sokzhicas,SZAC1}. We show that
the small-$\epsilon$ analysis of quantum dynamics is formally
equivalent to a semiclassical approximation for a particle
with spin-orbit coupling.
Thus QAMs near higher order resonances constitute a particular, though
experimentally relevant, model system, in which this crucial theoretical issue can be explored.
The semiclassical theory in
\cite{LF92} is not directly applicable here, because the dynamics is not specified
by a self-adjoint spinor Hamiltonian, but by a spinor unitary propagator instead.
We therefore resort to an ``adiabatic" ansatz, which allows decoupling spin dynamics from orbital motion. In this way we obtain $q$ distinct and independent orbital one-period propagators.
Each of them may be viewed as the quantization of a formally classical dynamical system, given by a map; however, the ``pseudo-Planck constant" $\epsilon$ explicitly appears in such maps, in a form that precludes the existence of an $\epsilon\to 0$ limit for the maps themselves, except for
the $q=1$ case, in which the pseudo-classical
theory of refs. \cite{FGR021,FGR022} is recovered.\\
QAMs, detected by numerical simulations of the exact quantum dynamics near higher order resonances, tightly correspond to stable periodic orbits of the maps. The acceleration of the modes
is expressed in terms of the winding numbers of the corresponding orbits and of the order
of the resonance. Moreover, we derive some
theoretical results, which generalize those obtained in \cite{FGR021,GRF05}
for the principal resonances: a formula for the special values
of quasi-momenta, which dominate the mode, and a classification of detectable
modes by a Farey tree
construction \cite{far816, HW79}, as a function
of the gravity acceleration.
The paper is organized as follows. In sect.\ref{back} the Floquet operator,
describing one-step evolution of a kicked atom in a free-falling frame,
is recalled and the resonant spinor dynamics in the Kicked Particle (KP) model
is briefly reviewed; in sect.\ref{sodec},
the quantum motion in the vicinity of a resonance of arbitrary order is related
to the problem of a particle with spin-orbit coupling.
In sect.\ref{map}, a ``formally" classical description of the orbital dynamics, associated
to the QAMs, is achieved.
Finally, in sect.\ref{exper} connections between
the theoretical results and possible experimental findings are discussed.
\section{Background.}
\label{back}
\subsection{Floquet operator in the ``temporal gauge".}
In the laboratory frame, the quantum dynamics of the atoms moving under the joint action
of gravity and of the kicking potential is ruled by the
time-dependent Hamiltonian (expressed in dimensionless units):
\begin{equation}
\label{ham-l}
\hat H_L(t)=\frac {\hat p^2}{2}-\frac {\eta}{\tau}\hat x+ k V (\hat x)
\sum_{n=-\infty}^{+\infty} \delta (t-n\tau ),
\end{equation}
where $\hat p$ and $\hat x$ are the momentum and position operators.
The potential $V(x)$ is a smooth periodic function of spatial period $2\pi$. Denoting by $M,T, {\mathrm K}, {\mathrm g}$ and $2\pi/G$ the atomic mass, the temporal period of the kicking, the kicking strength, the gravity acceleration and the spatial period of the kicks, respectively, the momentum, position and mass of the atom in (\ref{ham-l})
are rescaled in units of $\hbar G$, $G^{-1}$ and $M$.
The three dimensionless parameters $k,\tau$ and
$\eta$ in (\ref{ham-l}), which fully characterize the dynamics,
are expressed in terms of physical quantities by $k={\mathrm K}/\hbar$,
$\eta =M{\mathrm g}T/(\hbar G)$ and $\tau =\hbar TG^2 /M=4\pi T/T_B$.
$T_B=4 \pi M/(\hbar G^2)$ is the Talbot time \cite{BB} and $g=\eta /\tau$ is the rescaled
gravity acceleration.
Throughout the following $\hbar=1$ is understood.
For $\eta=0$,
the Hamiltonian (\ref{ham-l}) reduces to that of the Kicked Particle (KP) model,
which is a well-known variant of the Kicked Rotor (KR) model, corresponding
to the particular choice $V(x)=\cos (x)$. The KP differs from the KR
because the eigenvalues of the particle momentum are continuous, while those of the angular momentum
of the rotor are discrete. By the Bloch theorem, the invariance of the KP Hamiltonian
under space translations by $2\pi$ implies conservation of the quasi-momentum $\beta$, which,
in the chosen units, is the fractional part of the momentum. The particle
momentum is decomposed as $p=N+\beta$ with
$N\in {\mathbb Z}$ and $0\leq \beta <1$. Conservation of quasi-momentum
enables a Bloch-Wannier fibration of the particle dynamics:
the particle wave function is obtained by a superposition of Bloch waves,
describing the states of independently evolving kicked rotors with different
values of the quasi-momentum (called $\beta$-rotors).
A remarkable feature of Hamiltonian (\ref{ham-l}) is that, unless rescaled gravity $g=\eta/\tau$ assumes exceptional commensurate values, the linear potential term breaks invariance under $2 \pi$ space translations. Such an invariance may be recovered by going to a temporal gauge, where momentum is measured {\em w.r.t.} free fall. This transformation gets rid of the linear term and the new Hamiltonian reads \cite{FGR021}:
\begin{equation}
\label{ham-ff}
\hat H_g (t)=\frac 12 (\hat N +\beta +\frac \eta\tau t)^2+k V(\hat \theta)
\sum_{n=-\infty}^{+\infty} \delta (t-n\tau ).
\end{equation}
where $\theta =x\; {\rm mod}(2\pi)$, $\hat{N}=-id/d\theta$ with
periodic boundary conditions.
The quantum motion of a $\beta$-rotor in the ``temporal gauge" (that is, ``in the falling frame") is described by the following Floquet operator
on $L^2 ({\mathbb T})$ ( ${\mathbb T}$ denotes the 1-torus, parametrized by $\theta\in [-\pi,\pi [$):
\begin{equation}
\label{fo}
\hat U_\beta (n) =e^{-ikV(\hat \theta )}e^{-i\frac {\tau}{2}(\hat N +\beta +\eta n+\frac {\eta}{2})^2}.
\end{equation}
where $n\in{\mathbb Z}$ denotes the number of kicks.
The operator (\ref{fo}) describes evolution from time $t=n\tau$ to time
$t=(n+1)\tau$.
\subsection{Quantum Resonances.}
We consider the problem of Quantum Accelerator Modes in the
vicinity of a generic resonance of the $\beta$-rotor.
The concept of quantum resonance (QR) is reviewed in this subsection.
A QR occurs whenever quantum evolution commutes with a nontrivial group of momentum translations. A momentum translation $\hat{N}\to\hat{N}+\ell$ (recall $\hbar=1$) with $\ell\in{\mathbb Z}$ is described by
the operator $\hat T^{\ell}=e^{i{\ell}\hat\theta}$. In the following we assume $\eta=0$ and then the operator (\ref{fo}) is time-independent. It commutes with $\hat{T}^{\ell}$ if and only if \cite{dd}:
$i)$ $\tau /2\pi =p/q$ with $p,q$ coprime integers;
$ii)$ ${\ell}=rq$ with $r\in {\mathbb N}$; $iii)$ $\beta= \nu/(rp) +rq/2\ ({\rm mod}\ 1)$, with $\nu\in{\mathbb Z}$.
In this paper we
restrict ourselves to ``primary" resonances, i.e. to resonances with $r=1$ and $\ell =q$; in this case,
$q$ defines the order of the resonance. QRs of order 1 are called ``principal resonances".
A theory for QAMs in the vicinity of
principal resonances was proposed in \cite{FGR022,FGR021}. In this paper we consider
quantum resonances of arbitrary order $q\geq 1$. The resonant values
of the kicking period, given by condition $(i)$ and expressed in physical units, coincide with rational multiples of half the Talbot time.
We generically denote by $\hat{U}_{\mbox{\tiny res}}$ the operator (\ref{fo}) at resonance, and by $\beta_0$ the resonant values of quasi-momentum, given by condition $(iii)$ above.
\subsection{Bloch theory and spinors.}
Translation invariance under $\hat{T}^q$ enforces conservation of the Bloch phase $\xi\equiv\theta$ mod $2\pi/q$,
taking values in the Brillouin zone $\mathbb B=[-\pi /q, \pi/q[$. Loosely speaking, this means that
$\theta$ only changes by multiples of $2\pi/q$, so $\xi$ has the meaning of ``quasi-position".
As we show below, a Bloch-Wannier fibration of the rotor dynamics holds
with respect to the quasi-position $\xi$, at a QR.
We use a rescaled quasi-position $\vartheta\equiv q\xi$, and accordingly resize the Brillouin zone
to $[-\pi,\pi[$. In all representations where quasi-position is diagonal, the state $|\psi\rangle$ of the rotor
is described by a
$q$-spinor ${\phi}$, specified by $q$ complex functions
$\phi_l(\vartheta)=\langle \vartheta,l |\psi\rangle$, $(l=0,\ldots,q-1) $.
We shall use a representation where the spinor $\pmb{\phi}(\vartheta)$,
which corresponds to a given rotor wavefunction $\psi(\theta)=\langle\theta|\psi\rangle$, is defined by:
\begin{equation}
\label{scomp2}
\phi _l (\vartheta )
\;=\;\frac {1}{\sqrt {2\pi}}\;\sum _{m\in {\mathbb Z}} \hat
\psi (l+mq) e^{im\vartheta}
\quad\quad\quad l=0,...,q-1.
\end{equation}
where $\hat \psi(n)$, $n\in{\mathbb Z}$ are the Fourier coefficients
of $\psi (\theta)$.
Equation (\ref{scomp2}) defines a unitary map $\mathfrak a$ of $L^2({\mathbb T})$ onto
$L^2({\mathbb T})\otimes{\mathbb C}^q$. Under this map, the (angular) momentum operator $\hat N$ is transformed to:
\begin{equation}
\label{recipe}
\hat N =-i \partial_\theta \;\to \; {\mathfrak a}(\hat N) {\mathfrak a}^{-1}\; =\;
-iq\partial_\vartheta
\otimes {\bf \hat I} + \hat I\otimes {\bf \hat S},
\end{equation}
where $\hat I$ and ${\bf \hat I}$ are the identity operators in $L^2({\mathbb T})$ and in ${\mathbb C}^q$ respectively, and ${\bf \hat S}$ is the spin operator in ${\mathbb C}^q$:
\begin{equation}
\label{bthspin}
{\bf \hat S} =\sum_{l=0}^{q-1} l
| l \rangle \langle l |.\;
\end{equation}
where $|l\rangle$, $l=0,\ldots,q-1$, denotes the canonical basis in ${\mathbb C}^q$.
Thus, in spinor representation, the momentum operator is the sum of the
orbital operator $-iq\partial _\vartheta \otimes {\bf {\hat I}}$
and the spin operator $\hat I \otimes {\bf{\hat S}}$.
In this picture, the rotor is characterized by ``orbital" observables
($\vartheta$, $-i\partial _\vartheta$) and by the spin
observable.
Bold symbols denote vectors and matrices in ${\mathbb C}^q$.
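As a concrete illustration of the fibration (\ref{scomp2}), the spinor components can be assembled numerically from a truncated set of Fourier coefficients of $\psi(\theta)$. A minimal Python sketch (the grid size and the truncation of the Fourier series are arbitrary illustrative choices) is:
\begin{verbatim}
import numpy as np

def spinor_components(psi_hat, q, n_theta=256):
    """Spinor components phi_l(vartheta) of eq. (scomp2).

    psi_hat : dict mapping the integer momentum index n to the Fourier
              coefficient of psi(theta) (a finite, truncated set).
    Returns the grid of rescaled quasi-positions and a (q, n_theta) array."""
    vartheta = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
    phi = np.zeros((q, n_theta), dtype=complex)
    for n, c in psi_hat.items():
        l, m = n % q, n // q         # n = l + m q, with l in {0, ..., q-1}
        phi[l] += c * np.exp(1j * m * vartheta)
    return vartheta, phi / np.sqrt(2 * np.pi)
\end{verbatim}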
\subsection{Resonant spin dynamics.}
At resonance, quasi-position is conserved under the discrete-time evolution defined by (\ref{fo}), so, whenever
it has a definite value $\vartheta$, no ``orbital" motion occurs, and spin alone changes in time. Therefore the evolution is described by a unitary $q\times q$ matrix ${\bf \hat A}(\vartheta)$ such that, as
$\psi(\theta)$ evolves into $\hat U_{\rm res}\psi(\theta)$,
the corresponding spinor $\pmb{\phi}(\vartheta)$ evolves into
the spinor ${\bf \hat A}(\vartheta)\pmb{\phi}(\vartheta)$.
The explicit form of the spin propagator ${\bf \hat A}(\vartheta)$ is easily computed by using (\ref{fo}) under resonance conditions. With the specific choice $V(\theta)=\cos(\theta)$, one finds
(details can be found in appendix \ref{exres}):
\begin{eqnarray}
\label{fibfl}
& & {\bf \hat A}(\vartheta)\;=\;e^{-ik{\bf \hat V}(\vartheta )}e^{-i{\bf \hat G}},\\
\label{fibfl1}
& & {\bf \hat G}\;\equiv\;{\bf \hat G} _{p,q,{\beta _0} }\;=\;\pi \frac pq ({\bf \hat S} +{\beta _0}{\bf \hat I} )^2,\\
\label{fibfl2}
& & {\bf \hat V} (\vartheta )=
\frac 12 \left\{
\sum _{l=0}^{q-2} \left( | l \rangle \langle l+1 | +
| l+1 \rangle \langle l |\right)
+|0\rangle \langle q-1 | e^{i\vartheta } +|q-1 \rangle \langle 0
| e^{-i\vartheta} \right\}.
\end{eqnarray}
\subsection{Bands.}
\begin{figure}
\includegraphics[width=8cm,angle=0]{figl-autov-q2q7-ae.eps}
\caption{ Eigenvalues of the resonant Hamiltonian ${\bf \hat H}^{\rm res} (\vartheta )$ for different values of the kicking constant $k=1$ (red), $3$ (green) and $5$ (blue) and (a) $q=2, p=3$, (b) $q=7, p=11$. }
\label{levels}
\end{figure}
The ``resonant Hamiltonian"
${\bf \hat H}^{\rm res} (\vartheta)$ is a $q\times q$ Hermitian matrix such that:
\begin{equation}
\label{fibhres}
{\bf \hat A}(\vartheta )=e^{-i{\bf \hat H}^{\rm res} (\vartheta)}.\\
\end{equation}
It is uniquely defined, under the condition that its eigenvalues (\emph{i.e.}, the eigenphases of
${\bf \hat A}(\vartheta)$) lie in $[0,2\pi[$.
Explicit calculation of eigenvalues and eigenvectors of
$\hat{\bf A}(\vartheta)$, hence of the resonant Hamiltonian, is trivial for $q=1$, and
is easily performed for $q=2$ in terms of Pauli matrices \cite{SZAC1} (such a
case is reviewed in appendix \ref{appq2}). However, for $q>2$ analytical calculation is prohibitive.\\
Eigenphases of $\hat{\bf A}(\vartheta)$ are
smooth periodic functions of the quasi-position $\vartheta$.
As $\vartheta$ varies in $[-\pi,\pi[$, they sweep bands in the quasi-energy spectrum of the resonant evolution described by $\hat U_{\mbox{\tiny res}}$ \cite{cs}.
They also depend on the kicking strength $k$ and will be denoted by $\omega_l=\omega _l(\vartheta, k)$ in the following ($l=0,...,q-1$).
In the case $q=1$, $\omega _{0}(\vartheta, k)=k\cos (\vartheta)$.
For $q>1$ the eigenvalues are
nontrivial functions of the kick strength $k$. For fixed $q >2$ bandwidths tend to increase with $k$, eventually giving rise to complex patterns of avoided crossings.
Examples of $\vartheta$- and $k$-dependence of eigenphases
are shown in fig.\ref{levels} for $q=2$ (a) and $q=7$ (b). For $q>2$ the bandwidths depend also
on $l$.
In the resonant representation (i.e. in the representation in which the resonant
propagators (\ref{fibhres}) are diagonal), the spinor components (\ref{scomp2}) evolve
independently.
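For illustration, the band structure is straightforward to obtain numerically from eqs. (\ref{fibfl})-(\ref{fibfl2}). A minimal Python sketch follows (grid sizes are arbitrary choices; the eigenphases are simply sorted at each $\vartheta$, so the band labeling may switch at avoided crossings):
\begin{verbatim}
import numpy as np

def spin_propagator(theta, p, q, k, beta0=0.0):
    """q x q unitary A(theta) = exp(-i k V(theta)) exp(-i G), eq. (fibfl)."""
    G_diag = np.pi * (p / q) * (np.arange(q) + beta0) ** 2
    V = np.zeros((q, q), dtype=complex)
    for i in range(q - 1):
        V[i, i + 1] = V[i + 1, i] = 0.5          # nearest-neighbour couplings
    V[0, q - 1] += 0.5 * np.exp(1j * theta)      # boundary terms carry the
    V[q - 1, 0] += 0.5 * np.exp(-1j * theta)     # quasi-position phase
    w, U = np.linalg.eigh(V)                     # V(theta) is Hermitian
    expV = (U * np.exp(-1j * k * w)) @ U.conj().T
    return expV * np.exp(-1j * G_diag)[np.newaxis, :]

def band_eigenphases(p, q, k, beta0=0.0, n_theta=201):
    """Eigenphases omega_l(theta, k) of A(theta), folded into [0, 2 pi)."""
    thetas = np.linspace(-np.pi, np.pi, n_theta)
    bands = np.empty((n_theta, q))
    for i, th in enumerate(thetas):
        ev = np.linalg.eigvals(spin_propagator(th, p, q, k, beta0))
        bands[i] = np.sort(np.mod(-np.angle(ev), 2 * np.pi))
    return thetas, bands

# e.g. q = 7, p = 11, k = 1, with beta0 = nu/p + q/2 (mod 1) for nu = 0
thetas, bands = band_eigenphases(p=11, q=7, k=1.0, beta0=0.5)
\end{verbatim}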
\section{Near-resonant dynamics and spin-orbital decoupling.}
\label{sodec}
We are interested in quantum motion, described by (\ref{fo}), in the vicinity of a
QR, namely when the kicking period is $\tau=2\pi p/q +\epsilon$, where the detuning $\epsilon$ of the period from the resonant period $\tau _{\rm res}=2\pi p/q$ is assumed to be small.
The one-step evolution operator (\ref{fo}) may be factorized as:
\begin{eqnarray}
\label{fodec0}
& & \hat U_\beta (n) =\hat U_{\rm res}\cdot
\hat U_{\rm nr} (n)\\
\label{fodec1}
& & \hat U_{\rm nr}(n)=
e^{-i\left[ \frac {1}{2}\epsilon\hat N^2+ D_n \hat N\right]},
\end{eqnarray}
where $D_n =\tau (\beta + \eta n +{\eta}/{2})-2\pi p {\beta _0}/q.$
\subsection{``Adiabatic" decoupling of spinor and orbital motions.}
Translation invariance (in momentum) is now broken by $\hat U_{\rm nr}$,
so quasi-position is not conserved any more.
The evolution of a spinor ${\phi}\in L^2({\mathbb T})\otimes{\mathbb C}^q$ is ruled by the
time-dependent Schr\"odinger equation:
\begin{eqnarray}
&& i\epsilon \frac {\partial}{\partial t}{\phi }\; =\;
\hat {H} (t ){\phi} \;,
\qquad\qquad\qquad
\hat {H} (t ) =
\epsilon
\hat {H}^{\rm res} \cdot \sum _{n=-\infty}^{+\infty}\delta (t-n)+ \hat {H}_0 (t),
\label{ham-sp}
\\
&& \hat {H}_0 (t)=
\frac 12 \epsilon ^2
(-iq\partial_\vartheta
\otimes {\bf \hat I} + \hat I\otimes {\bf \hat S})^2 +\epsilon D_{[t]} (-iq\partial_\vartheta
\otimes {\bf \hat I} + \hat I\otimes {\bf \hat S})
\label{ham-sp1}.
\end{eqnarray}
where $[t]$ denotes the integer part of $t$ and $(\hat {H}^{\rm res} \phi) (\vartheta)=
{\bf \hat H}^{\rm res} (\vartheta) \pmb{\phi}
(\vartheta)$.
Note that $\hat {H}_0 (t)$ is constant in between kicks.
Both sides of the Schr\"odinger equation have been multiplied by $\epsilon$,
to make it apparent that the detuning $\epsilon$ plays the role of an effective Planck constant
in what concerns the motion between the $\delta$-kicks. \\
The spinor components are mixed by (\ref{ham-sp1}) during the evolution;
we use an ansatz of Born-Oppenheimer type in order to decouple orbital (slow) motion from spin (fast) motion.
The detuning $\epsilon$ also controls the separation between the different time scales
of the system. At exact resonance (i.e. $\epsilon=0$) the decoupling is exact, because motion
is restricted to the eigenspaces of the resonant propagator (\ref{fibhres}).
These subspaces are defined by the spectral decomposition of the resonant Hamiltonian, which we
write in the form:
\begin{equation}
\label{spr}
{\bf \hat H}^{\rm res}{(\vartheta)}\;=\;\sum\limits_{j=0}^{q-1}\;\omega_j(\vartheta,k)\;\hat{\bf P}_j(\vartheta)\;\;\;,\;\;\;
\hat{\bf P}_j(\vartheta)\;=\;|\pmb{\varphi}_j(\vartheta )\rangle\langle\pmb{\varphi}_j(\vartheta)|\;,
\end{equation}
where $\pmb{\varphi}_j(\vartheta )$ is the normalized eigenvector of ${\bf \hat H}^{\rm res}{(\vartheta)}$ which corresponds
to the eigenvalue $\omega _j {(\vartheta,k)}$. For each value of $\vartheta$, the
operators $\hat{\bf P}_j(\vartheta)$ in (\ref{spr}) are projectors in ${\mathbb C}^q$. We denote
by $\hat P_j$ the projectors
in the full Hilbert space
$L^2({\mathbb T})\otimes{\mathbb C}^q$, which act on spinors according to $(\hat{P}_j{\phi})(\vartheta)=\hat{\bf P}_j(\vartheta)\pmb{\phi}(\vartheta)$. The subspaces ${\cal H}_j$ whereupon the ${\hat P}_j$ project are the ``band subspaces" and are not invariant for the full Hamiltonian (\ref{ham-sp}).
By using the ansatz that ``band subspaces" be almost invariant for small $\epsilon$,
we next decouple the (assumedly ``fast") spin variables from the orbital (``slow") ones.
We assume that the decoupled evolution inside the band subspaces provides a good
description of the exact evolution when $\epsilon$ is small, because
the leading error terms are linear in $\epsilon$.
Our approximation
consists in replacing the exact dynamics, ruled by the Hamiltonian in (\ref{ham-sp}),
(\ref{ham-sp1}) by an ``adiabatic" evolution, generated by the
Hamiltonian:
\begin{gather}
\label{bo}
\hat{H}^{\mbox{\tiny diag}}(t)\;=\;\sum\limits_{j=0}^{q-1}\;\hat{P}_j\;\hat{H}(t)\;\hat{P}_j\;=\;
\epsilon \hat {H}^{\rm res} \cdot \sum _{n=-\infty}^{+\infty}\delta (t-n)\;
+\;\sum\limits_{j=0}^{q-1}\;\hat{P}_j\;\hat{H}_0(t)\;\hat{P}_j\;.
\end{gather}
In the case of time-independent Hamiltonians, such a projection on ``band subspaces", aimed at separating fast and slow time scales, is basically a Born-Oppenheimer approximation \cite{adth}.
In the case of kicked dynamics this projection should be performed on the ``effective", time-independent Hamiltonian ${\hat H}_{\mbox{\tiny eff}}$, which
generates over a unit time the same evolution as does the kicked Hamiltonian. The
effective Hamiltonian is not known in closed form, although it can be expressed as an infinite sum
of terms, ordered in powers of $\epsilon$ \cite{SZAC1,danaar}.
Our ansatz is somewhat related to the rough approximation ${\hat H}_{\mbox{\tiny eff}}\simeq
\epsilon \hat {H}^{\rm res} +{\hat H}_0$.
We assume this is valid in some restricted parameter regimes (see further comments in Sect. \ref{map}).
\\
A spinor in ${\cal H}_j$ has the form $\psi(\vartheta)\pmb{\varphi}_j(\vartheta)$ with $\psi\in L^2({\mathbb T})$ and may thus be described by a scalar wavefunction $\psi(\vartheta)$ (the amplitude of the spinor on the $j$-th resonant eigenstate). Evolution inside the band subspace ${\cal H}_j$ is ruled by the ``band Hamiltonian" $\hat{H}_j(t)=\hat{P}_j\;\hat{H}(t)\;\hat{P}_j$ and
direct calculation by using (\ref{ham-sp}) shows that band Hamiltonians have the following form:
\begin{gather}
\label{bo1}
{\hat H}_j(t)\;=\;\;\epsilon\omega_j(\vartheta,k)\sum\limits_{t'=-\infty}^{+\infty}
\delta(t-t')\;+\;\hat{H}_0^{(j)}(t)\;,
\end{gather}
where:
\begin{eqnarray}
\label{matrel1}
&& \hat{H}^{(j)}_0(t)\; = \;-\frac 12 \epsilon ^2 q^2 \partial _\vartheta ^2 -
\left (\epsilon ^2 q^2 \langle \pmb{\varphi}_j | { \pmb{\dot \varphi}_j} \rangle +i\epsilon ^2 q S_j +i \epsilon
q D _{[t]}\right)\partial _\vartheta +\frac 12 \epsilon ^2 \left( S^{''}_j
-q^2 \langle \pmb{\varphi}_j | {\pmb{\ddot \varphi}_j} \rangle
-i 2q S^{'}_{j}\right) +\nonumber \\
&& \qquad\qquad \qquad\qquad +\epsilon D _{[t]} \left( S_j-iq \langle
\pmb{\varphi}_j | {\pmb{\dot \varphi}_j} \rangle \right),
\end{eqnarray}
where dots denote derivatives with respect to $\vartheta$, and
\begin{equation}
\label{ssss}
S_j{(\vartheta)} =\langle \pmb{\varphi}_j {(\vartheta)} | {\hat {\bf S}}| \pmb{\varphi}_j {(\vartheta)} \rangle\;, \quad
S^{'}_j{(\vartheta)} =\langle \pmb{\varphi}_j {(\vartheta)} | {\hat {\bf S}}| \pmb {\dot {\varphi}} _j {(\vartheta)} \rangle, \quad
\quad
S^{''}_j{(\vartheta)} =\langle \pmb{\varphi}_j {(\vartheta)} | {\hat {\bf S}^2}| \pmb{\varphi}_j {(\vartheta)} \rangle\;.
\end{equation}
\subsection{Band Hamiltonians.}
We now note that the problem can be formulated as the evolution of a particle in a fictitious magnetic
field, which takes into account the average effect of the spin degree of freedom on the orbital motion.
We derive a simpler form for the band Hamiltonians (eqs.(\ref{eqsc-eff})).
By the introduction of magnetic vector
and scalar potentials, the operator (\ref{matrel1})
may be written in the form:
\begin{equation}
\label{h1-pot}
\hat {H}_j(t)\;=\;\frac 12 \epsilon ^2 q^2
\left(-i\partial _\vartheta -{\mathcal A}_j{(\vartheta)}\right)^2 +\epsilon
q D _{[t]}
\left(-i\partial _\vartheta -{\mathcal A}_j{(\vartheta)}\right) +\frac 12 \epsilon ^2{\mathcal B}_j{(\vartheta)}\;.
\end{equation}
The ``geometric" vector potential ${\mathcal A}_j {(\vartheta)}$ and the scalar potential ${\mathcal B}_j{(\vartheta)}$ are determined
by the structure of the resonant eigenvectors $\pmb{\varphi}_j(\vartheta)$, via the following relations:
\begin{eqnarray}
\label{ab1}
&& {\mathcal A}_j {(\vartheta)} =i \langle \pmb{\varphi}_j {(\vartheta)} |{\pmb{\dot\varphi}}_j{(\vartheta)} \rangle -\frac 1q S_j {(\vartheta)}\\
\label{ab2}
&& {\mathcal B}_j{(\vartheta)} = S^{''}_j {(\vartheta)} +2q\; \Im S^{'}_j {(\vartheta)} -q^2 {\mathcal A}_j^2 {(\vartheta)}
+ q^2 \langle {\pmb{\dot\varphi}_j} {(\vartheta)} |{\pmb{ \dot\varphi}}_j {(\vartheta)} \rangle.
\end{eqnarray}
Reality of such potentials
follows from (\ref{ssss}) and from the fact that $\langle \pmb{\varphi}_j {(\vartheta)} | {\pmb{ \dot\varphi }}_j {(\vartheta)} \rangle $ is purely imaginary thanks to normalization. The vector potential is
gauge-dependent;
eigenvectors $\pmb{\varphi}_j {(\vartheta)}$ are determined up to arbitrary $\vartheta$-dependent
phase factors and so operator (\ref{h1-pot}) may be further simplified by a gauge transformation,
$
\pmb{\varphi}_j {(\vartheta)} \to \pmb{\varphi}_j {(\vartheta)} e^{i\lambda _j {(\vartheta)}}.
$
Under such a transformation, ${\mathcal A}_j {(\vartheta)}$ changes to
$\tilde {\mathcal A}_j {(\vartheta)} ={\mathcal A}_j {(\vartheta)} -\dot {\lambda} _j {(\vartheta)}$, and ${\cal B}_j{(\vartheta)}$ does not change. The transformation may be chosen so that
\begin{equation}
\label{potco}
\tilde{\cal A}_j{(\vartheta)}\;=\;\mbox{\rm const.}\;=\;-\gamma_{j,q}\;-\varsigma\;\equiv \alpha _j\;,
\end{equation}
where:
$$
\gamma_{j,q}=\frac1{2\pi i}\int_{-\pi}^{\pi}d\vartheta\;\langle\pmb{\varphi}_j{(\vartheta)}|{\pmb{\dot\varphi}}_j{(\vartheta)}\rangle\;\;\;,\;\;
\varsigma=\frac1{2\pi q}\int_{-\pi}^{\pi}d\vartheta\;S_j{(\vartheta)}\;.
$$
This immediately follows from (\ref{ab1}) and from the requirement that eigenvectors
be single-valued. Note that
$2\pi\gamma_{j,q}$ is the geometric (Berry's) phase \cite{berph,bsimon}.
We thus assume ${\tilde {\mathcal A}}_j=\alpha _j$:
this choice corresponds to the Coulomb gauge.\\
In conclusion, in the $j$-th band subspace, the
band dynamics is described by the following
Schr\"odinger equation:
\begin{eqnarray}
\label{eqsc-eff}
&& i\epsilon \frac {\partial}{\partial t}\psi(\vartheta ,t)\; =\;
\hat {H}_j (t)\;\psi(\vartheta ,t),
\nonumber \\
&& \hat {H}_j (t)\;=\;
\epsilon\omega_j(\vartheta,k)\;\sum _{n=-\infty}^{\infty}\delta (t-n)+
\frac 12 \epsilon ^2 q^2\left(-i\partial_\vartheta -\alpha _j\right)^2
+\frac12\epsilon^2{\mathcal B}_j{(\vartheta)} + \epsilon
q D _{[t]}
\left(-i\partial_\vartheta -\alpha _j \right)
\end{eqnarray}
The multicomponent Schr\"odinger equation (\ref{ham-sp}), for the $q$-spinor wave function
$\phi (\vartheta,t)$, is then
reduced to $q$ scalar Schr\"odinger equations (\ref{eqsc-eff}), each of which
determines the independent evolution of a rotor wave function $\psi (\vartheta ,t)$.
\section{Pseudo-classical description of orbital motion.}
\label{map}
We now derive a
description of the dynamics of the orbital observables
($\vartheta$, $-i\partial _\vartheta$), restricted inside each of the band subspaces
${\cal H}_j$, by ``formally" classical equations of motion.
We introduce a ``pseudo-classical'' momentum operator $\hat I$, defined as follows:
\begin{equation}
\label{clm}
\hat I=-i\epsilon \partial_\vartheta\;,
\end{equation}
which differs from the orbital momentum because of the replacement of
the Planck constant ($=1$) by $\epsilon$.
If the same role is granted to $\epsilon$ in eqs.(\ref{eqsc-eff}), then, in classical terms,
the effective
band dynamics in the $j$-th band subspace looks like a rotor dynamics,
with angle coordinate $\vartheta$ and conjugate momentum $I$, ruled by the
kicked Hamiltonian:
\begin{eqnarray}
\label{hamj}
& & H_j (\vartheta , I , t) = \epsilon\omega_j(\vartheta,k)\sum _{t'=-\infty}^{+\infty}
\delta (t-t')+F_j (\vartheta, I,t), \nonumber \\
&&
F_j (\vartheta, I,t)=
\frac 12 q^2 I^2 +D_{[t]} q I -\epsilon q^2 \alpha _j I +
\frac 12 \epsilon^2 {\mathcal B}_j{(\vartheta)}.
\end{eqnarray}
Terms independent of $I$ and $\vartheta$ have been neglected. This Hamiltonian describes a classical kicked dynamics. By dropping terms beyond first order
in $\epsilon$, the map from immediately after the $n$-th kick to immediately after the $(n+1)$-th kick is:
\begin{eqnarray}
\label{map0}
& \vartheta _{n+1} =\vartheta _n + q^2I_n +2\pi\Omega q n +\varrho\;, & \qquad\qquad
{\rm mod} \; (2\pi), \nonumber\\
& I_{n+1} =I_n - \epsilon \dot \omega_j(\vartheta_{n+1},k)\;, & \qquad \qquad
\end{eqnarray}
where $\Omega=\eta\tau/(2\pi)$ and
$\varrho=q(-\epsilon q \alpha _j+\pi \Omega +\tau\beta -2\pi p \beta_0 /q)$.
The meaning of the pseudoclassical map (\ref{map0}) as a description of the nearly resonant quantum dynamics will be discussed in section \ref{meaning}.
\subsection{Pseudo-classical maps and Quantum Accelerator Modes.}
\label{pcmaps}
We now describe how quantum accelerator modes appear in the present framework.
\begin{figure}
\includegraphics[width=8cm,angle=0]{figl-phsp-q2q7q15-ae.eps}
\caption{ Phase portraits of maps (\ref{mapq2}) with $k=1$ and $g=0.0386$ and
(a) $\tau /2\pi=1.455\; (\epsilon =\tilde k =-0.2827),\; \tau\eta=2\pi\Omega =3.2261, (r,s)=(1,1)$,
of map (\ref{mapq2}) with $j=1$;
(b) $\tau /2\pi=1.5025\; (\epsilon =\tilde k=0.0157),\; \tau\eta =3.4401,(r,s)=(23,21)$,
of map (\ref{mapq2}) with $j=0$;
(c) $p/q =11/7, \tau /2\pi=1.5375 \; ( \epsilon =\tilde k =0.2132),\; \tau\eta=3.6023, (r,s)=(4,1)$, of
map (\ref{mapj}) with $j=3$;
(d) $p/q= 22/15, \tau /2\pi=1.485\; (\epsilon =\tilde k = 0.1152),\; \tau\eta =3.3605, (r,s)=(8,1)$,
of map (\ref{mapj}) with $j=15$.
The periodic orbit has period $s=132$ and is associated with 132 stability islands; here $r$ and $s$ are not coprime ($r=1056$, $s=132$). In the inset a magnification of one of the small islands of the chain is shown.
}
\label{phsp-q2p3-q7p11-q15p22}
\end{figure}
The explicit dependence on time of map (\ref{map0}) is removed by changing the momentum variable to
$J_n=q^2 I_n + 2\pi\Omega q n + \varrho$. In the variables $(J,\vartheta)$ the map is $2\pi$-periodic in $J$ and so it may be written as a map on the 2-torus:
\begin{eqnarray}
\label{mapj}
& \vartheta _{n+1} =\vartheta _n + J_n & \qquad\qquad {\rm mod}\; (2\pi), \nonumber\\
& J_{n+1} =J_n - \epsilon q^2
\dot \omega_j(\vartheta,k) + 2\pi \Omega q & \qquad\qquad {\rm mod}\; (2\pi).
\end{eqnarray}
In the case $q=1$, this map reduces to the one introduced in \cite{FGR022} in order to explain the QAMs that had been experimentally observed near principal resonances. For
$q>1$, it has $q$ different versions, labeled by the band index $j=0,\ldots,q-1$. As in the case $q=1$, the stable periodic orbits of each of these versions are expected to give rise to QAMs. Indeed, each stable periodic orbit of map (\ref{mapj}) corresponds to a stable accelerating orbit of map (\ref{map0}), because the difference between momentum $I_n$ and momentum $J_n$ linearly increases with time. More precisely,
let
$(\vartheta _0, J_0)$ be initial conditions for a periodic orbit of period $s$ and winding number $r/s$.
The increment of $J$ after time $ns$ (measured in the number of kicks) is
$2\pi r n$; therefore, the increment of the original momentum variable is:
\begin{eqnarray}
\label{ist}
I _ {sn} - I_0\;=\;a _I s n\;\;\;\;,\;\;\;
a _I =\frac {2\pi}{q} \left(
\frac r{qs} - \Omega \right)\;,
\end{eqnarray}
with $I _0 = (J _0 -\varrho)/q^2$.
This formula (\ref{ist}) yields the acceleration of a stable orbit of the pseudoclassical dynamics (\ref{map0}), and it is precisely such orbits that may give rise to QAMs in the vicinity of resonances of arbitrary order.
As a matter of fact, numerical
simulations reveal QAMs near higher order resonances, in correspondence with
periodic orbits of maps (\ref{mapj}).
In sect. \ref{exper}, we explain how the analysis of the stable periodic orbits of maps (\ref{mapj})
may help to resolve the complex pattern of QAMs presented in fig.\ref{pac-tot}.\\
Thanks to (\ref{recipe}) and (\ref{clm}), the physical momentum $N$ is related to $I$ by $N= q I /\epsilon +j$; therefore, the physical acceleration is given by:
\begin{equation}
\label{acc}
a=\frac {2\pi}{\epsilon }\left( \frac r{qs} - \Omega \right).
\end{equation}
Although the analytical derivation of the maps is based on the resonant Hamiltonian,
which is known in closed form only for $q=1,2$, the practical use of (\ref{mapj}) only
requires the resonant eigenvalues, which can be easily computed
by a numerical diagonalization of a $q\times q$ matrix.
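As an illustration of this procedure, the following sketch (reusing the \texttt{band\_eigenphases} routine outlined at the end of sect. \ref{back}; the finite-difference derivative and the linear interpolation are crude but sufficient for a qualitative phase portrait) iterates the map (\ref{mapj}) for a given band index $j$:
\begin{verbatim}
import numpy as np

def iterate_map_j(theta0, J0, n_kicks, j, thetas, bands, epsilon, q, Omega):
    """Iterate the torus map (mapj), given the band omega_j sampled on
    `thetas` (e.g. the output of band_eigenphases above)."""
    domega = np.gradient(bands[:, j], thetas)      # d omega_j / d theta
    theta, J = theta0, J0
    orbit = []
    for _ in range(n_kicks):
        theta = (theta + J) % (2 * np.pi)
        th = ((theta + np.pi) % (2 * np.pi)) - np.pi   # back to [-pi, pi)
        kick = np.interp(th, thetas, domega)
        J = (J - epsilon * q ** 2 * kick + 2 * np.pi * Omega * q) % (2 * np.pi)
        orbit.append((theta, J))
    return np.array(orbit)
\end{verbatim}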
In fig.\ref{phsp-q2p3-q7p11-q15p22}
examples of phase space of maps
(\ref{mapj}) are shown for $q=2$ (a,b), $q=7$ (c)
and $q=15$ (d), in parameter regimes in which QAMs are present. The plotted
periodic orbits correspond to some of the modes shown in fig.\ref{pac-tot}.
For instance, in fig.\ref{phsp-q2p3-q7p11-q15p22} (a)
the stability island of a fixed point of one of the maps (\ref{mapq2}) is plotted
for $\tau /2\pi =1.455$; this fixed point corresponds to the huge mode on the left side of
fig.\ref{pac-tot}. A distribution of phase-space points, which initially fall inside the stability island,
describes an ensemble of atoms generating the QAM.
Classical structures, like stability islands, may affect the quantum system only if their size is comparable with the effective Planck constant $\epsilon$.
The map (among the $q$ maps of eq. (\ref{mapj})) that contributes most crucially
to determining the observed QAMs is
generally the one with the widest bandwidth.
\subsection {Special values of quasi-momenta.}
A QAM arises when the initial wave packet is centered in momentum around $N_0$,
related to $I _0$ by
\begin{equation}
\label{n0}
N _ 0 = q\frac {I _0} {\epsilon }+j = \frac 1{q\epsilon}( J_0+2\pi n) +q \alpha _j -
\frac {1}{\epsilon}\left( \pi\Omega+\tau\beta -2\pi p{\beta _0} /q\right)+j
\end{equation}
with $n\in {\mathbb Z}$.
As in the case of the principal resonances, we expect that the modes will be especially pronounced when quasimomentum is fine-tuned: in view of (\ref{n0}), such optimal values of $\beta$ are determined by the condition:
\begin{equation}
\label{beta}
\beta _\nu =-\frac {\epsilon}{\tau} (N_0 -j-q \alpha _j +
{\beta _0} )+\frac {J_0+2\pi m}{q\tau}-\frac {\eta}{2}+{\beta _0} \qquad {\rm mod}\; (1),
\end{equation}
with ${\beta _0}=\frac {\nu}{p} +\frac q2$ and $\nu =0,1,..,p-1$. A wave packet initially localized in $N_0 +
\beta_\nu$ will be mostly captured inside a QAM; indeed in this case, the
overlap between the stability island and the initial wave packet is maximal.
Formula (\ref{beta}) is a generalization of the result derived for $q=1$ in \cite{FGR022}
and
experimentally verified in \cite{S}; it reduces to the expression in \cite{FGR022} for $\alpha_0 =0$ (see appendix \ref{appq2-sp}).
This picture
is confirmed by fig.\ref{hus-st}, in which the quantum phase-space
evolution of a $\beta$-rotor, with a quasi-momentum
given by (\ref{beta}), and the pseudoclassical motion are compared. The initial state of the
rotor is a coherent wave packet centered in the $(r,s)=(1,1)$ fixed point, plotted in
fig.\ref{phsp-q2p3-q7p11-q15p22}(a), corresponding to the
$\epsilon$-classical accelerator mode on the left part of fig.\ref{pac-tot}, in the vicinity of
the $q=2$ resonance. The mode moves with an acceleration equal to 0.2988,
according to (\ref{acc}).
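A quick numerical consistency check of (\ref{acc}) against the value just quoted can be done along the following lines (the parameters are those of the $q=2$ mode discussed above):
\begin{verbatim}
import numpy as np

p, q, r, s = 3, 2, 1, 1                   # the (r, s) = (1, 1) mode near p/q = 3/2
g, tau = 0.0386, 1.455 * 2 * np.pi
epsilon = tau - 2 * np.pi * p / q         # detuning, ~ -0.2827
Omega = g * tau ** 2 / (2 * np.pi)        # Omega = eta*tau/(2 pi), with eta = g*tau
a = (2 * np.pi / epsilon) * (r / (q * s) - Omega)
print(round(a, 4))                        # ~ 0.299, compare with 0.2988 quoted above
\end{verbatim}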
\subsection{Validity of the Pseudo-Classical Description.}
\label{meaning}
We now come back to the meaning of the pseudo-classical description, since
the ``pseudoclassical" dynamics (\ref{map0}) still explicitly retains the ``Planck constant" $\epsilon$.
In the case when $q=1$, there is a single resonant eigenvalue,
given by $\omega_0(\vartheta,k)=k\cos(\vartheta)$, so the pseudoclassical
dynamics (\ref{map0}) has a well defined limit
for $\epsilon\to 0$, $k\to\infty$,
$k\epsilon\to\tilde k$ with $|\tilde k|<\infty$.
This limit dynamics was discovered and analyzed in \cite{FGR021,FGR022}.
This is no longer true when $q>1$ and then the relation between the band dynamics and the ``pseudoclassical" dynamics (\ref{map0}) is less transparent. The quantum band-dynamics is still, formally, the quantization of the classical kicked dynamics
(\ref{map0}) using $\epsilon$ as the Planck constant. Nevertheless, the latter dynamics
contains the ``Planck constant" $\epsilon$ in crucial ways,
which preclude existence of a limit for $\epsilon\to 0$.
To see this, note that $\omega_j(\vartheta, k)$ depends on its arguments only through the real variables $u=k\sin(\vartheta/q), v=k\cos(\vartheta/q)$ (cf. the form of the resonant evolution (\ref{evsp3}) in
appendix \ref{exres});
that is,
$\omega_j(\vartheta,k)=G(u,v)$, where $G$ is a smooth oscillatory function, independent of $k$.
Hence,
\begin{equation}
\label{osc}
\epsilon \dot \omega_j (\vartheta ,k)
\;=\;\frac{\epsilon k}{q}\;
\left\{\cos(\vartheta/q)\partial_{u}G-\sin(\vartheta/q)\partial_{v}G\right\}
\end{equation}
Existence of a limit demands $\epsilon k\to\tilde k$; but then, except in the trivial case
$\tilde k=0$, the arguments of the $G$ functions in (\ref{osc}) diverge and so (\ref{osc}) appears to oscillate faster and faster as $\epsilon \to 0$, without a well-defined limit.\\
Nonexistence of a pseudoclassical limit for the quantum dynamics
was established in \cite{GR08},
by a stationary phase approach, with no recourse to the band formalism. It was nonetheless pointed out that, despite
the absence of such a limit, QAMs may be associated with certain rays,
which do correspond to trajectories of some formally classical maps. The meaning of the latter maps is, at most, that of providing local phase-space descriptions near QAMs.
Similar remarks apply in the case of the pseudoclassical maps (\ref{map0}).\\
It is worth recalling that maps (\ref{map0}) were derived from an ansatz, which would be optimally justified if the effective Hamiltonian of kicked dynamics could be replaced by the sum of the free and of the kicking Hamiltonians (sect. \ref{sodec}). This approximation is obviously invalid in a global sense, yet, in ``spinless" cases, it is known to work remarkably well near stable fixed points \cite{SFGR05}; indeed, in the KR case
it yields a pendulum Hamiltonian, which provides a good description of the motion near the stable fixed point of the Standard Map.
This may be seen as a qualitative justification for the use of maps (\ref{map0}), if restricted to the search of QAMs.
\subsection {Case $q=2$.}
While the expressions in subsect. \ref{pcmaps}
are quite general, we can carry out a detailed analysis when $q=2$ and
$V(\theta)=\cos \theta$. In such a case
the eigenvalues $\omega _j (\vartheta , k )$ ($j=0,1$) of the
resonant Hamiltonian can be written down explicitly (see appendix \ref{appq2}):
\begin{equation}
\label{eigq2}
\omega _j (\vartheta ;k)=-{m_p} \left[ \frac {\pi}4+(-1)^j \arccos \left(
\frac {\cos \left(k \cos(\vartheta /2)\right)}{\sqrt 2}\right) \right],
\end{equation}
with ${m_p} =(-1)^{\frac {p+1}{2}}$.
Therefore, our theory produces two maps (\ref{mapj}), which take the form:
\begin{eqnarray}
\label{mapq2}
& \vartheta _{t+1} =\vartheta _t +J_t & \quad\quad\quad {\rm mod}\; (2\pi), \nonumber\\
& J_{t+1} =J_t + 4\pi\Omega - 2(-1)^j{m_p} \tilde k
\sin \left( \frac {\vartheta _{t+1}}2\right)
\frac {\sin \left( k \cos \left( \frac {\vartheta _{t+1}}2
\right) \right)}{\sqrt {1+\sin ^2 \left(
k \cos \left( \frac {\vartheta _{t+1}}2
\right)\right)}}&
\quad\quad\quad {\rm mod} \; (2\pi).
\end{eqnarray}
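Since the band functions are explicit here, the two maps (\ref{mapq2}) can be iterated directly, without any numerical diagonalization. A minimal sketch (with the parameter values of panel (a) of fig.~\ref{phsp-q2p3-q7p11-q15p22}; the grid of initial conditions and the number of iterations are arbitrary choices) is:
\begin{verbatim}
import numpy as np

def step_q2(theta, J, j, k, tilde_k, Omega, m_p):
    """One iteration of the q = 2 torus map (mapq2)."""
    theta = (theta + J) % (2 * np.pi)
    u = k * np.cos(theta / 2)
    kick = 2 * (-1) ** j * m_p * tilde_k * np.sin(theta / 2) \
           * np.sin(u) / np.sqrt(1 + np.sin(u) ** 2)
    J = (J + 4 * np.pi * Omega - kick) % (2 * np.pi)
    return theta, J

# parameters of panel (a): p/q = 3/2, k = 1, g = 0.0386, tau/(2 pi) = 1.455
p, q, k, g = 3, 2, 1.0, 0.0386
tau = 1.455 * 2 * np.pi
epsilon = tau - 2 * np.pi * p / q
tilde_k = epsilon * k
Omega = g * tau ** 2 / (2 * np.pi)
m_p = (-1) ** ((p + 1) // 2)

# crude phase portrait of the j = 1 map
portrait = []
for th0 in np.linspace(-np.pi, np.pi, 15):
    for J0 in np.linspace(0.0, 2 * np.pi, 15, endpoint=False):
        th, J = th0, J0
        for _ in range(300):
            th, J = step_q2(th, J, 1, k, tilde_k, Omega, m_p)
            portrait.append((th, J))
\end{verbatim}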
\begin{figure}
\includegraphics[width=8cm,angle=0]{figl-hus-stato-2-ae.eps}
\caption{ Contour plots at times $t=100$ (a) and $t=200$ (b)
of the Husimi distribution of the wave packet
of a $\beta$-rotor with $\beta =0.1672$, given by (\ref{betaq2})
with $j=1$ and $\nu =0$. The rotor is
initially prepared in a coherent state centered in the $(r,s)=(1,1)$ fixed point of
fig.\ref{phsp-q2p3-q7p11-q15p22}(a).
The black spots in the centers of the contours are an
ensemble of classical phase points, initially distributed in a circle of area $\sim \epsilon$ centered at the
mode. They evolve according to the $\epsilon$-classical dynamics (\ref{mapiq2}) with
$\varrho =-0.5709$. The
other parameter values are
$k=1, \tau =1.455\times 2\pi
(\epsilon =-0.2827)$ and $g=0.0386$.}
\label{hus-st}
\end{figure}
Going back to the time-dependent form, the maps are written as
\begin{eqnarray}
\label{mapiq2}
& \vartheta _{t+1} =\vartheta _t +4I_t +4\pi\Omega t +\varrho
& \quad\quad\quad {\rm mod}\; (2\pi), \nonumber\\
& I_{t+1} =I_t - (-1)^j{m_p} \frac {\tilde k}{2}
\sin \left( \frac {\vartheta _{t+1}}2\right)
\frac {\sin \left( k \cos \left( \frac {\vartheta _{t+1}}2
\right) \right)}{\sqrt {1+\sin ^2 \left(
k \cos \left( \frac {\vartheta _{t+1}}2
\right)\right)}}&
\quad\quad\quad {\rm mod} \; (2\pi),
\end{eqnarray}
with $\varrho =2\left( \epsilon\delta_{j,1}
+\pi\Omega +\tau\beta -\pi p\beta_0 \right)$ and where we have used
$\alpha _j=-\frac 12 \delta_{j,1}$ (see appendix \ref{appq2-sp}).
We remark that in the case $q=2$, avoided crossings between the
eigenvalues (\ref{eigq2}) are absent for arbitrary values of $k$.
\begin{figure}
\includegraphics[width=8cm,angle=0]{figl-beta-q2p3-p-ae.eps}
\caption{ Probability inside a box of extension equal to $L=6\simeq \Delta J q/|\epsilon |$ in momentum, moving according to (\ref{acc}) for $p=3, q=2, (r,s)=(1,1)$ and $\epsilon =-0.2828$, as a function of quasimomentum $\beta$ of the $\beta$-rotor. $\Delta J$ is the size of the island in $J$. Dashed vertical lines refer to special values of quasimomenta, given by formula (\ref{betaq2}) with
${\beta _0} =\nu/3$, $\nu =0,1,2$, $N_0=0, m=0$.
The probability is shown at time $t=100$ (red) and $t=200$ (blue).
The parameter values are the same as in fig.\ref{pac-tot}.}
\label{qmom}
\end{figure}
We may also check the selection criterion for quasimomenta, which in the present case takes the form:
\begin{equation}
\label{betaq2}
\beta _{j, \nu}=-\frac {\epsilon}{\tau} (N_0 +{\beta _0} )+\frac {J_0+2\pi n}{q\tau}-\frac {\eta}{2}+
{\beta _0} \qquad {\rm mod}\; (1),
\end{equation}
with $\beta _0$ given by $(iii)$ in sect. \ref{back} and $\nu=0,...,p-1$.
A scan over possible $\beta$ values reveals that QAMs are indeed greatly enhanced around the values predicted by (\ref{beta}): this is confirmed by fig.\ref{qmom}, in which the
probability transferred to the mode is shown as a function of $\beta$.
\section{Mode spectroscopy and connections with cold atom experiments.}
\label{exper}
\subsection{Farey ordering of QAMs near a fixed resonance.}
We now elucidate how our findings apply to inspection of density plots like the one illustrated in
fig.\ref{pac-tot}. We point out that such a picture is of direct physical significance, since typical experimental protocols maintain $k$ and $g$ fixed, while performing a scan on the pulse period $\tau$. Such a scan, in the present context, has to be carried out around a resonant value, namely
$\tau (\epsilon ) =2\pi p /q +\epsilon$. Density plots of momentum distribution disclose the presence of QAMs, since after a fixed number of kicks their momentum is linearly related to the acceleration (\ref{acc}): $a$ depends on $\epsilon$, through the ``bare" winding number $q\Omega$
\begin{equation}
\label{ep}
q\Omega (\epsilon ) =\frac q{2\pi} g\left( 2\pi \frac pq +\epsilon \right)^2
\end{equation}
and on the ``dressed" winding number of the pseudo classical map $r/s$, which individuates the mode. We denote by $\Omega^*$ the resonant ($\epsilon=0$) value of the ``bare" winding number (notice that for $\epsilon=0$ the maps correspond to pure
rotation in $J$):
\begin{equation}
\label{oms}
\Omega ^ * \equiv q\Omega (0)=2\pi \frac {p^2}q g,
\end{equation}
which is independent of the mapping index $j$. Formula (\ref{oms}) is a generalization
of the analogous result found for $q=1$ \cite{GRF05,all05}.
As analyzed in \cite{GRF05,all05} for
principal resonances, the parameter space of map (\ref{mapj}) is
characterized by the presence of regions (Arnol'd tongues), in which stable periodic
orbits exist.
Close to resonances we expect that the mode-locking structure of the pseudo-classical maps singles out modes whose winding numbers provide rational approximants to $\Omega^*$; at the same time, fat tongues are associated with small $s$ values, so the corresponding modes should be more clearly detectable. This is the physical motivation underlying the Farey organization of the observed modes: whenever we observe two modes labelled by winding numbers $r_1/s_1$ and $r_2/s_2$ ($r_1/s_1 < \Omega^* < r_2/s_2$), the fraction with smallest denominator bracketed by the winding pair is the Farey mediant $(r_1+r_2)/(s_1+s_2)$.
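The Farey composition rule is elementary to implement; for instance, the following lines (a trivial illustration) reproduce the bracketing of the $(23,21)$ mode by the $(11,10)$ and $(12,11)$ modes discussed below:
\begin{verbatim}
from fractions import Fraction

def farey_mediant(r1, s1, r2, s2):
    """Mediant (r1 + r2)/(s1 + s2) of the winding numbers r1/s1 and r2/s2."""
    return r1 + r2, s1 + s2

print(farey_mediant(12, 11, 11, 10))                      # -> (23, 21)
assert Fraction(12, 11) < Fraction(23, 21) < Fraction(11, 10)
\end{verbatim}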
We can now analyze in more detail fig.\ref{pac-tot}, which represents
a numerical simulation of the experimental momentum distribution after $t=100$ kicks, as a function of $\tau$,
for values of the kicking period
around a second-order resonance, namely $\tau ^{\rm res}/2\pi =p/q=3/2$.
All parameters are chosen to be accessible to experiments and the initial atomic distribution
reproduces that employed in \cite{Ox991,Ox992,Ox995,Ox994}:
$g=0.0386$, $k=1$, and the initial state is a mixture of 100 plane waves sampled from a Gaussian distribution of momenta with FWHM $\sim 9$.
Full lines in the figure delineate momentum profiles consistent with the acceleration (\ref{acc}),
with $(r,s)$ given by winding number $r/s$ of corresponding stable periodic orbits of maps (\ref{mapj}).
\begin{figure}
\includegraphics[width=8cm,angle=0]{figl-pac-q2p3-ae.eps}
\caption{ Enlargement of fig.\ref{pac-tot} in the region $1.49\leq\tau/2\pi\leq 1.50625$, around the resonance $\tau_{res} =3\pi$ $(p/q=3/2)$. The momentum distribution is calculated after
$t=200$ kicks. Full lines show the theoretical curves (\ref{acc}): the yellow ones refer to principal convergents of $\Omega ^*$. Starting from the left, the modes correspond to the stable periodic orbits of maps (\ref{mapq2}) with:
$(r,s)=(14,13), (25,23), (12,11), (23,21)$ and $(11,10)$.}
\label{pac-q2p3}
\end{figure}
The value of $\Omega ^*$ and the first few rational approximants (obtained upon successive truncation of the continued fraction expansion), corresponding to detectable modes, are:
\begin{eqnarray}
\label{ostar1}
& & \Omega ^* \simeq 1.0913893 = 1 + [10,1,16,3,3,...]\nonumber\\
& & \frac rs = 1; \; \frac {11}{10};\; \frac {12}{11}; ...
\end{eqnarray}
The first one, $(r,s)=(1,1)$, is shown with a yellow full line on the left of fig.\ref{pac-tot}
and the stability island of the corresponding fixed point is shown
in fig.\ref{phsp-q2p3-q7p11-q15p22} (a).
The second and third are marked by full yellow lines in fig.\ref{pac-q2p3},
which is an enlargement of fig.\ref{pac-tot} in the region $1.49\leq \tau/2\pi\leq 1.50625$, calculated for time $t=200$.
Farey organization is exemplified by the appearance of the $(23,21)$ QAM, whose winding number is the Farey composition of those of the $(11,10)$ and $(12,11)$ modes; the corresponding stable periodic
orbit is plotted in fig.\ref{phsp-q2p3-q7p11-q15p22}(b).
Through the Farey composition law we may also identify observed modes to the right of $\tau_{res}$, as shown in fig.\ref{pac-q2p3}.
\subsection{Visibility of resonances of different order.}
The complexity of mode spectroscopy is further enhanced by the fact that, within
some interval in $\tau$, arbitrarily many different resonant values occur. As a matter of fact it is possible to recognize in
fig.\ref{pac-tot} modes coming from a wide set of resonances:
besides $q=2$ also $q=7, \,15,\,17,\,21,\,36,\,40$ contribute QAMs in the selected range;
this is shown in fig.\ref{pac-tot} for $q=7$ and in fig.\ref{pac-qalto} for the other resonances.
No QAM with $q=13$ could be resolved in the range of fig.\ref{pac-tot}.
Farey composition is still of some use in the identification of the resonances to which modes belong: for instance, the very large mode
on the right of the figure belongs to a QR between $p/q=3/2$ and $p/q=2/1$; applying Farey composition successively, we get the sequence $p/q=5/3, 8/5$ (outside the plotted range in $\tau$)
and then $11/7$, to which the mode belongs.
The accumulation point of the resonance $p/q=11/7$ is $\Omega ^*\simeq 4.1923207 =4+[5,...]$. The mode shown in fig.\ref{pac-tot}
corresponds to the first principal convergent of $\Omega ^*$, i.e. to the fixed point
$(r,s)=(4,1)$, shown in fig.\ref{phsp-q2p3-q7p11-q15p22}(c). The same occurs
for the modes near resonances of higher $q$, shown in fig.\ref{pac-q2p3}.
We remark that a hierarchy in resonant fractions looks more cumbersome than the one considered for winding numbers, as for instance there does not seem to be any straightforward dependence on the size of $q$. Numerical data however suggest that detectable modes appear in
the vicinity of resonances leading to almost integer $\Omega^*$, i.e. when
the fractional part of $\Omega^*$ is
closer to the integers 0 or 1 than to their Farey mediant 1/2.
In these cases, the resonance may display a mode corresponding to a periodic orbit of period 1.
As shown in fig.\ref{pac-qalto},
this condition may be fulfilled for different $p/q$ values. Moreover, the
absence of observable QAMs with $q=13$ in the range of fig.\ref{pac-tot}, even though the $p/q=20/13$ resonance belongs to the plotted $\tau$ range, is consistent with this rough rule of thumb.
Indeed $\Omega^* \simeq 7.4624908 = 7 + [2, 6, 6, ..]$, so its fractional part is closer to 1/2 than to 0 or 1.
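This rule of thumb is easy to check directly from formula (\ref{oms}); the following snippet (using only the value $g=0.0386$ of fig.\ref{pac-tot}) lists $\Omega^*$ and its fractional part for the resonances mentioned in this subsection:
\begin{verbatim}
import numpy as np

g = 0.0386

def omega_star(p, q):
    """Resonant bare winding number, formula (oms)."""
    return 2 * np.pi * p ** 2 / q * g

for p, q in [(3, 2), (11, 7), (22, 15), (25, 17), (28, 19), (31, 21), (20, 13)]:
    w = omega_star(p, q)
    print(f"p/q = {p:2d}/{q:2d}:  Omega* = {w:9.5f},  frac. part = {w % 1:.3f}")
# all fractional parts are close to 0 or 1, except for p/q = 20/13 (~ 0.46)
\end{verbatim}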
\begin{figure}
\includegraphics[width=8cm,angle=0]{figl-pac-qalti-ae.eps}
\caption{ Enlargement of fig.\ref{pac-tot} in the region $1.45\leq\tau/2\pi\leq 1.49$. The momentum distribution is calculated after
$t=200$ kicks. Full lines show the theoretical curves (\ref{acc}); each color refers to a different quantum resonance, namely a different value of $p/q$. Starting from the left, the modes correspond to:
$p/q=3/2$ (theoretical curve not shown), $31/21, 59/40, 28/19, 53/36, 25/17$ and $22/15$. These
resonances $p_n/q_n$
lead to almost integer $\Omega^*$ and they are extracted from the sequence of Farey
fractions obtained starting from $p_0/q_0=1/1$ and $p_1/q_1=3/2$:
$p_8/q_8=22/15,\; \Omega ^*_8\simeq 7.82566 =8-[5,1,2,1,...],\; (r,s)=(8,1)$ (shown in black);
$ p_9/q_9=25/17, \; \Omega ^*_9\simeq 8.9165791=9-[11,1,78,...], \; (r,s)=(9,1)$ (in purple);
$p_{10}/q_{10}=28/19; \; \Omega ^*_{10}\simeq10.0075= 10+[131,...],
\; (r,s)=(10,1)$ (in red); $p_{11}/q_{11}=31/21,\; \Omega ^*_{11}\simeq11.098678=11+[10,7,2,6,1,...],
\; (r,s)=(11,1)$ (in yellow). Further modes are shown in between the mentioned ones:
$p/q=53/36 =25/17\oplus 28/19$ with $\Omega ^* \simeq 18.924152 = 19-[13,5,2,...]$ (shown in pink) and $p/q=59/40 =28/19\oplus 31/21$ with $\Omega ^* \simeq 21.106256 = 21+[9,2,2,3,...]$
(shown in orange). The $(8,1)$-periodic orbit of the resonance $p_8/q_8=22/15$ is plotted
in fig.\ref{phsp-q2p3-q7p11-q15p22}(d).
}
\label{pac-qalto}
\end{figure}
\section{Summary.}
The quantum dynamics of quantum accelerator modes, experimentally observed
by exposing cold atoms to periodic kicks in the direction of the gravitational field, is
theoretically described in terms of spinors, when the pulse period is close to a rational multiple
of a characteristic time of the atoms (Talbot time). The reference model is
a non-trivial variant of the well-known Kicked Rotor in an almost-resonant regime.
If the detuning of
the kicking period from the resonant value is assigned the role of the
Planck constant, the problem is shown to
share similarities with the semiclassical limit of the particle
dynamics in the presence of spin-orbit coupling. The separation of the spinor and orbital
degrees of freedom is based on an ``adiabatic" assumption of Born-Oppenheimer type,
valid for small detunings and for values of the parameters at which the QAMs manifest themselves.
In these parameter regimes, a
description of some properties of the ``slow" orbital motion, by means of formally classical equations,
is finally achieved. Some results of a previously formulated ``pseudo-classical"
theory \cite{FGR021}, restricted to QAMs near principal resonances, are
extended to arbitrary higher order resonances. Potential applications to current experiments on
cold atomic gases are proposed.
L.R. acknowledges useful discussions with Shmuel Fishman.
\section{Introduction and main results}
\smallskip
\subsection{The model}
We consider a $(1+1)$-dimensional model of a polymer
depinned at infinitely many equally spaced horizontal
interfaces. The possible configurations of the polymer
are modeled by the trajectories of the simple random walk $(i,S_i)_{i\geq 0}$, where
$S_0=0$ and
$(S_i-S_{i-1})_{i \geq 1}$ is an i.i.d. sequence of symmetric Bernoulli trials taking values $1$ and $-1$, that is
$P(S_i-S_{i-1}=+1) = P(S_i-S_{i-1}=-1) = \frac 12$.
The polymer receives an energetic penalty $\delta<0$ each time it touches
one of the horizontal interfaces located at heights $\{k T\colon k\in\mathbb{Z}\}$, where $T\in 2\mathbb{N}$
(we assume that $T$ is even for notational convenience).
More precisely, the polymer interacts
with the interfaces through the following Hamiltonian:
\begin{equation}\label{eq:H}
H^T_{N,\delta}(S) \;:= \; \delta\, \sum_{i=1}^{N} \boldsymbol{1}_{\{S_i \,\in\, T\mathbb{Z}\}}
\;=\;\delta\, \sum_{k\in\mathbb{Z}}\sum_{i=1}^{N} \boldsymbol{1}_{\{S_i\,=\,k\, T\}},
\end{equation}
where $N \in \mathbb{N}$ is the number of monomers constituting the polymer.
We then introduce the
corresponding polymer measure $\ensuremath{\boldsymbol{\mathrm{P}}}^T_{N,\delta}$
(see Figure~\ref{fig:1} for a graphical description) by
\begin{equation}\label{eq:model}
\frac{\dd \ensuremath{\boldsymbol{\mathrm{P}}}^T_{N,\delta}}{\dd P}(S) \;:=\;
\frac{\exp\big(H^T_{N,\delta}(S)\big)}{Z^T_{N,\delta}},
\end{equation}
where the normalizing constant
$Z^T_{N,\delta} = E[\exp(H^T_{N,\delta}(S))]$ is called the {\sl partition function}.
\begin{figure}[t]
\includegraphics[width=.84\textwidth]{inter.pdf}
\caption{A typical path of $\{S_n\}_{0 \le n \le N}$
under the polymer measure $\ensuremath{\boldsymbol{\mathrm{P}}}^T_{N,\delta}$, for $N=158$
and $T=16$.
The circles indicate the points where the polymer
touches the interfaces, each of which is penalized by $\delta < 0$.}
\label{fig:1}
\end{figure}
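Although all results below are obtained analytically, the measure \eqref{eq:model} is easy to
explore numerically: for $\delta<0$ the weights $e^{H^T_{N,\delta}(S)}$ are at most one, so a
simple reweighting of random walk paths is stable. The following Python sketch is purely
illustrative (the parameter values \texttt{N}, \texttt{T}, \texttt{delta} are arbitrary and play
no role in the sequel); it estimates the partition function $Z^T_{N,\delta}$ and the size of
$|S_N|$ under the polymer measure.
\begin{verbatim}
import math, random

def polymer_observables(N=1000, T=16, delta=-0.7, n_samples=10000, seed=0):
    """Estimate Z_{N,delta}^T and E|S_N| under the polymer measure by
    reweighting simple random walk paths with exp(H_{N,delta}^T(S))."""
    rng = random.Random(seed)
    sum_w = 0.0        # accumulates exp(H): Monte Carlo estimate of Z
    sum_w_absS = 0.0   # accumulates |S_N| * exp(H)
    for _ in range(n_samples):
        S, H = 0, 0.0
        for _ in range(N):
            S += rng.choice((-1, 1))
            if S % T == 0:          # S_i lies on one of the interfaces T*Z
                H += delta
        w = math.exp(H)             # for delta < 0 the weights are at most 1
        sum_w += w
        sum_w_absS += abs(S) * w
    return sum_w / n_samples, sum_w_absS / sum_w

print(polymer_observables())        # (estimate of Z, estimate of E|S_N|)
\end{verbatim}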
We are interested in the case where the interface spacing
$T=\{T_N\}_{N\geq 1}$ is allowed to vary with the size $N$ of the polymer.
More precisely, we aim at understanding whether and how the
asymptotic behavior of the polymer is modified
by the interplay between the energetic penalty $\delta$ and the growth rate of $T_N$
as $N \to \infty$.
In the {\sl attractive case} $\delta>0$, when the polymer is rewarded
rather than penalized to touch an interface, this question
was answered in depth in a previous paper \cite{cf:CP},
to which we also refer for a detailed discussion on the motivation
of the model and for an overview on the literature (see also \S\ref{sec:slit} below).
In the present paper we extend the analysis to the {\sl repulsive case}
$\delta < 0$, showing that
the behavior of the model is sensibly different from the attractive case.
\smallskip
For the reader's convenience, and in order to get some intuition
on our model, we recall briefly the result obtained in \cite{cf:CP} for $\delta > 0$.
We first set some notation:
given a positive sequence $\{a_N\}_N$, we write
$S_N \asymp a_N$ to indicate that, on the one hand, $S_N / a_N$ is tight
(for every $\gep > 0$ there exists $M > 0$ such that
$\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big( |S_N/a_N| > M \big) \le \gep$ for
large $N$) and, on the other hand, that for some
$\rho \in (0,1)$ and $\eta > 0$ we have $\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}
\big( |S_N/a_N| > \eta \big) \ge \rho$ for large $N$.
This notation captures the rate of asymptotic growth of $S_N$
rather precisely: if $S_N \asymp a_N$ and
$S_N \asymp b_N$, for some $\gep > 0$
we must have $\gep a_N \le b_N \le \gep^{-1} a_N$, for large $N$.
Theorem~2 in \cite{cf:CP} can be read as follows:
for every $\delta >0$ there exists $c_\delta > 0$ such that
\begin{equation} \label{eq:asdelta>0}
S_N \text{ under } \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}
\; \asymp \; \begin{cases}
\sqrt{N} \, e^{-\frac{c_\delta }{2} T_N}\, T_N & \text{if }
T_N - \frac{1}{c_\delta} \log N \to -\infty\\
T_N & \text{if } T_N - \frac{1}{c_\delta} \log N = O(1)\\
1 & \text{if } T_N - \frac{1}{c_\delta} \log N \to +\infty
\end{cases}\,.
\end{equation}
Let us give a heuristic explanation for these scalings.
For fixed $T \in 2\mathbb{N}$, the process $\{S_n\}_{0 \le n \le N}$
under $\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T}$ behaves approximately like a time-homogeneous
Markov process (for a precise statement in this direction
see \S\ref{sec:renewal}). A quantity of basic interest is the
first time $\hat \tau := \inf\{n > 0:\, |S_n| = T \}$ at which
the polymer visits a neighboring interface. It turns
out that for $\delta > 0$ the typical size of $\hat \tau$
is of order $\approx e^{c_\delta T}$, so that until epoch $N$
the polymer will make approximately $N/e^{c_\delta T}$ changes
of interface.
Assuming that these arguments can be applied
also when $T = T_N$ varies with $N$, it follows that
the process $\{S_n\}_{0 \le n \le N}$ jumps from an interface
to a neighboring one a number of times which is
approximately $u_N := N/e^{c_\delta T_N}$.
By symmetry, the probability of jumping to the
neighboring upper interface
is the same as the probability of jumping to the lower one,
hence the index of the last visited interface will be of the order of
the square root of the number of jumps. Therefore,
when $u_N \to \infty$, one expects that
$S_N$ will be typically of order $T_N \cdot \sqrt{u_N}$,
which matches perfectly with the first line of \eqref{eq:asdelta>0}.
On the other hand, when $u_N \to 0$ the polymer will never visit
any interface different from the one located at zero and, because
of the attractive reward $\delta > 0$, $S_N$ will be typically at
finite distance from this interface, in agreement with the
third line of \eqref{eq:asdelta>0}. Finally, when $u_N$ is
bounded, the polymer visits a finite number of different interfaces
and therefore $S_N$ will be of the same order as $T_N$,
as the second line of \eqref{eq:asdelta>0} shows.
\smallskip
\subsection{The main results}
Also in the repulsive case $\delta < 0$ one can perform an
analogous heuristic analysis. The big difference with respect
to the attractive case is the following: under $\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^T$,
the time $\hat \tau$ the polymer needs to jump
from an interface to a neighboring one turns out to be typically of order $T^3$
(see Section~\ref{sec:preliminary}).
Assuming that these considerations can be applied
also to the case when $T = T_N$ varies with~$N$,
we conclude that, under $\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}$, the total number of jumps from
an interface to the neighboring one
should be of order $v_N := N/T_N^3$.
One can therefore conjecture that if $v_N \to +\infty$
the typical size of $S_N$ should be of order $T_N \cdot \sqrt{v_N} = \sqrt{N/T_N}$,
while if $v_N$ remains bounded one should have $S_N \asymp T_N$.
In the case $v_N \to 0$, the polymer will never exit the interval
$(-T_N, +T_N)$. However, guessing the right scaling in this case
requires some care: in fact, due to the
repulsive penalty $\delta < 0$, the polymer will {\sl not} remain close
to the interface located at zero, as was the case for $\delta > 0$,
but it will rather spread in the
interval $(-T_N, +T_N)$. We are therefore led to distinguish two cases:
if $T_N = O(\sqrt{N})$
then $S_N$ should be of order $T_N$, while if $T_N \gg \sqrt{N}$
we should have $S_N \asymp \sqrt{N}$ (of course we write $a_N \ll b_N$
iff $a_N / b_N \to 0$ and $a_N \gg b_N$ iff $a_N / b_N \to +\infty$).
We can sum up these considerations in the following formula:
\begin{equation} \label{eq:asdelta<0}
S_N \; \asymp \; \begin{cases}
\sqrt{N/T_N} & \text{if }\ T_N \ll N^{1/3}\\
T_N & \text{if }\ (const.) N^{1/3} \le T_N \le (const.) \sqrt{N}\\
\sqrt{N} & \text{if }\ T_N \gg \sqrt{N}
\end{cases}\,.
\end{equation}
It turns out that these conjectures are indeed correct:
the following theorem makes this precise, together with
some details on the scaling laws.
\medskip
\begin{theorem} \label{th:main}
Let $\delta<0$ and $\{T_N\}_{N\in\mathbb{N}} \in (2\mathbb{N})^{\mathbb{N}}$
be such that $T_N \to \infty$ as $N\to\infty$.
\begin{enumerate}
\item \label{part:1}
\rule{0pt}{1.3em}If $\,T_N \ll N^{1/3}$, then
$S_N \asymp \sqrt{N/T_N}$. More precisely,
there exist two constants $0 < c_1 < c_2 < \infty$ such that
for all $a,b \in \mathbb{R}$ with $a < b$ we have for $N$ large enough
\begin{equation} \label{eq:infinite}
c_1 \, P\big[ a < Z \le b \big] \;\le\;
\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \left( a <
\frac{S_N}{C_\delta \, \textstyle\sqrt{\frac{N}{T_N}}} \le b \right)
\;\le\; c_2 \, P\big[ a < Z \le b \big] \,,
\end{equation}
where $C_\delta := \pi / \sqrt{e^{-\delta}-1}$
is an explicit positive constant and $Z \sim {\ensuremath{\mathcal N}} (0,1)$.
\item \label{part:2}
\rule{0pt}{1.3em}If $\,T_N \sim (const.) N^{1/3}$, then
$S_N \asymp T_N$. More precisely,
for every $\gep > 0$ small enough there exist constants $M,\eta>0$
such that $\forall N\in\mathbb{N}$
\begin{equation}\label{eq:crit}
\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big(|S_N| \le M \, T_N\big)
\;\ge\; 1-\gep \,, \qquad
\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big(|S_N| \ge \eta \, T_N\big)
\;\ge\; 1-\gep \,.
\end{equation}
\item \label{part:3}
\rule{0pt}{1.3em}If $\,N^{1/3} \ll T_N \le (const.)\sqrt{N}$, then
$S_N \asymp T_N$. More precisely,
for every $\gep > 0$ small enough there exist
constants $L,\eta > 0$ such that $\forall N\in \mathbb{N}$
\begin{equation}\label{eq:supercrit1}
\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big(
0 < |S_n| < T_N \,, \ \forall n \in \{ L, \ldots, N\} \big) \;\ge\; 1- \gep\,,
\qquad \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big( |S_N| \ge \eta \, T_N \big) \;\ge\; 1-\gep \,.
\end{equation}
\item \label{part:4}
\rule{0pt}{1.3em}If $\,T_N \gg \sqrt{N}$, then
$S_N \asymp \sqrt N$. More precisely,
for every $\gep > 0$ small enough there exist
constants $L,M,\eta > 0$ such that $\forall N\in \mathbb{N}$
\begin{equation}\label{eq:supercrit2}
\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big(
0 < |S_n| < M \sqrt{N} \,, \ \forall n \in \{ L, \ldots, N\} \big) \;\ge\; 1-\gep\,,
\qquad \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big( |S_N| \ge \eta \sqrt{N} \big)
\;\ge\; 1-\gep \,.
\end{equation}
\end{enumerate}
\end{theorem}
\medskip
To have a more intuitive view on the scaling behaviors
in \eqref{eq:asdelta<0}, let us consider the concrete example
$T_N \sim (const.) N^a$: in this case we have
\begin{equation} \label{eq:scalings}
S_N \; \asymp \; \begin{cases}
N^{(1-a)/2} & \text{if } 0 \le a \le \frac 13\\
N^a & \text{if } \frac 13 \le a \le \frac 12\\
N^{1/2} & \text{if } a \ge \frac 12
\end{cases}\,.
\end{equation}
As the speed of growth of $T_N$ increases, the scaling of $S_N$
at first decreases (until $a=\frac 13$), reaching a minimum $N^{1/3}$,
after which it increases and reattains the initial value $N^{1/2}$ for $a \ge \frac 12$.
We have thus shown that the asymptotic behavior of our model displays two
transitions, at $T_N \approx \sqrt{N}$ and at $T_N \approx N^{1/3}$.
While the first one is somewhat natural,
in view of the diffusive behavior of the simple random walk,
the transition happening at $T_N \approx N^{1/3}$ is
certainly more surprising and somewhat unexpected.
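As a small worked illustration of \eqref{eq:scalings} (purely illustrative, not used elsewhere),
the following snippet returns the exponent $b$ such that $S_N \asymp N^b$ when
$T_N \sim (const.)\, N^a$, and displays the non-monotone behavior just described.
\begin{verbatim}
def scaling_exponent(a):
    """Exponent b such that S_N is of order N^b when T_N ~ N^a (delta < 0 fixed)."""
    if a <= 1/3:
        return (1 - a) / 2   # regime (1): S_N ~ sqrt(N / T_N)
    if a <= 1/2:
        return a             # regimes (2)-(3): S_N ~ T_N
    return 1/2               # regime (4): S_N ~ sqrt(N)

# the exponent decreases from 1/2 to the minimum 1/3 and then climbs back to 1/2
print([round(scaling_exponent(a), 3) for a in (0.0, 0.2, 1/3, 0.4, 0.5, 0.8)])
\end{verbatim}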
\smallskip
Let us make some further comments on Theorem~\ref{th:main}.
\begin{itemize}
\item About regime (\ref{part:1}), that is when $T_N \ll N^{1/3}$,
we actually conjecture that equation \eqref{eq:infinite} can be strengthened to
a full convergence in distribution:
$S_N/(C_\delta \sqrt {N/T_N}) \Longrightarrow {\ensuremath{\mathcal N}} (0,1)$.
The reason for the slightly weaker result that we present is that
we lack precise renewal theory estimates for a basic renewal process,
that we define in \S\ref{sec:renewal}. As a matter of fact,
using the techniques in \cite{cf:Ney} one can refine
our proof and show that the full convergence in
distribution holds true in the restricted regime $T_N \ll N^{1/6}$,
but we omit the details for conciseness
(see however the discussion following Proposition~\ref{th:bound_renewal}).
\item Equation \eqref{eq:infinite}
implies that the sequence $\{S_N/(C_\delta \sqrt {N/T_N})\}_N$
is {\sl tight}, and that the limit law of any converging subsequence
is absolutely continuous w.r.t. the Lebesgue
measure on $\mathbb{R}$. Moreover, the density of this limit law is bounded above and below
by a multiple of the standard normal density.
\item The case when $T_N \to T \in \mathbb{R}$ as $N\to\infty$ has not been included
in Theorem~\ref{th:main} for the sake of simplicity. However a straightforward
adaptation of our proof shows that in this case equation \eqref{eq:infinite}
still holds true, with $C_\delta$ replaced by a different
($T$-dependent) constant $\widehat C_\delta(T)$.
\item We stress that in regimes (\ref{part:3}) and (\ref{part:4})
the polymer really touches the interface at zero a finite
number of times, after which it does not touch any other interface.
\end{itemize}
\smallskip
\subsection{A link with a polymer in a slit}
\label{sec:slit}
It turns out that our model $\ensuremath{\boldsymbol{\mathrm{P}}}^T_{N,\delta}$ is closely related to a model
which has received quite some attention in the recent physical literature,
the so-called {\sl polymer confined between two attractive walls}
\cite{cf:Brak,cf:Martin,cf:Owczarek}
(also known as polymer in a slit). This is a model for the
steric stabilization and sensitized flocculation of colloidal dispersions
induced by polymers,
which can be simply described as follows:
given $N,T \in 2\mathbb{N}$, take the first $N$ steps of the simple
random walk constrained not to exit the interval $\{0,\ldots,T\}$,
and give each trajectory a reward/penalization $\gamma \in \mathbb{R}$
each time it touches $0$ or $T$ (one can also consider two different
rewards/penalties $\gamma_0$ and $\gamma_T$, but we will stick to the
case $\gamma_0 = \gamma_T = \gamma$). We are thus considering
the probability measure $Q_{N,\gamma}^T$ defined by
\begin{equation} \label{eq:phys}
\frac{\dd Q_{N,\gamma}^T}{\dd P_N^{c,T}}(S)
\;\propto\; \exp\left( \gamma \sum_{i=1}^N
\boldsymbol{1}_{\{S_i = 0 \text{ or } S_i = T\}} \right)
\,,
\end{equation}
where $P_N^{c,T}(\,\cdot\,) := P(\,\cdot\,|\, 0 \le S_i \le T \text{ for all } 0 \le i \le N)$
is the law of the simple random walk {\sl constrained} to stay between
the two walls located at $0$ and $T$.
\begin{figure}[t]
\includegraphics[width=.84\textwidth]{multiuni2.pdf}
\smallskip
\caption{A polymer trajectory in a multi-interface medium transformed,
after reflection on the interfaces $0$ and $T$,
in a trajectory of polymer in a slit. The dotted
lines correspond to the parts of trajectory that appear
upside-down after the reflection.}
\label{fig:multuni1}
\end{figure}
Consider now the simple random walk
{\sl reflected} on both walls $0$ and $T$, which may be defined as
$\{\Phi_T(S_n)\}_{n\ge 0}$, where $(\{S_n\}_{n\ge 0}, P)$ is the
ordinary simple random walk and
\begin{equation*}
\Phi_T(x) \;:=\; \min \big\{\,[x]_{2T}, 2T - [x]_{2T} \big\}\,,
\qquad \text{with} \qquad
[x]_{2T} \;:=\: 2T\, \Big(\frac{x}{2T} - \Big\lfloor \frac{x}{2T} \Big\rfloor\Big)\,,
\end{equation*}
that is, $[x]_{2T}$
denotes the equivalence class of $x$ modulo $2T$ (see Figure~\ref{fig:multuni1}
for a graphical description). We denote by $P_N^{r,T}$ the law of
the first $N$ steps of $\{\Phi_T(S_n)\}_{n\geq 0}$. Of
course, $P_N^{r,T}$ is different from $P_N^{c,T}$: the latter is the uniform measure on
the simple random walk paths $\{S_n\}_{0 \le n \le N}$ that stay in
$\{0,\ldots,T\}$, while under the former each such path has a probability which
is proportional to $2^{{\ensuremath{\mathcal N}} _N}$, where
${\ensuremath{\mathcal N}} _N = \sum_{i=1}^N \boldsymbol{1}_{\{S_i = 0 \text{ or } S_i = T\}}$ is the
number of times the path has touched the walls. In other terms, we have
\begin{equation} \label{eq:phys2}
\frac{\dd P_N^{c,T}}{\dd P_N^{r,T}} (S) \;\propto\;
\exp \left( -(\log 2) \sum_{i=1}^N \boldsymbol{1}_{\{S_i = 0 \text{ or } S_i = T\}} \right) \,.
\end{equation}
If we consider the reflection under $\Phi_T$ of our model, that is
the process $\{\Phi_T(S_n)\}_{0 \le n \le N}$ under $\ensuremath{\boldsymbol{\mathrm{P}}}^T_{N,\delta}$,
whose law will be simply denoted by $\Phi_T(\ensuremath{\boldsymbol{\mathrm{P}}}^T_{N,\delta})$, then we obtain
\begin{equation}\label{eq:phyphy}
\frac{\dd \Phi_T(\ensuremath{\boldsymbol{\mathrm{P}}}^T_{N,\delta})}{\dd P_N^{r,T}} (S) \;\propto\;
\exp \left( \delta \sum_{i=1}^N \boldsymbol{1}_{\{S_i = 0 \text{ or } S_i = T\}} \right) \,.
\end{equation}
At this stage, a look at equations \eqref{eq:phys}, \eqref{eq:phys2} and \eqref{eq:phyphy} points
out the link with our model:
we have the basic identity $Q^T_{N,\delta + \log 2} = \Phi_T(\ensuremath{\boldsymbol{\mathrm{P}}}^T_{N, \delta})$, for
all $\delta \in \mathbb{R}$ and $T,N \in 2\mathbb{N}$. In words, the polymer confined
between two attractive walls is just the reflection of our
model through $\Phi_T$, up to a shift of the pinning intensity by $\log 2$.
This allows a direct translation of all our results in this new framework.
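As a minimal illustration of this correspondence (not needed for the proofs), the following
sketch implements the folding map $\Phi_T$ and checks, on a simulated trajectory, that the
contacts of $S$ with the interfaces $T\mathbb{Z}$ are mapped exactly onto the contacts of
$\Phi_T(S)$ with the walls $\{0,T\}$; this is the elementary mechanism behind the identity
$Q^T_{N,\delta+\log 2} = \Phi_T(\ensuremath{\boldsymbol{\mathrm{P}}}^T_{N,\delta})$.
\begin{verbatim}
import random

def reflect(x, T):
    """Phi_T(x) = min([x]_{2T}, 2T - [x]_{2T}): x folded into {0, ..., T}."""
    y = x % (2 * T)            # representative of x modulo 2T in {0, ..., 2T-1}
    return min(y, 2 * T - y)

def check_contacts(N=1000, T=16, seed=1):
    rng = random.Random(seed)
    S, path = 0, [0]
    for _ in range(N):
        S += rng.choice((-1, 1))
        path.append(S)
    folded = [reflect(x, T) for x in path]
    contacts_multi = sum(1 for x in path[1:] if x % T == 0)      # S_i in T*Z
    contacts_slit = sum(1 for y in folded[1:] if y in (0, T))    # Phi_T(S_i) in {0, T}
    assert all(0 <= y <= T for y in folded)
    assert contacts_multi == contacts_slit
    return contacts_multi

print("contacts along the sampled path:", check_contacts())
\end{verbatim}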
\smallskip
Let us describe in detail a particular issue, namely,
the study of the model $Q^T_{N,\gamma}$
when $T = T_N$ is allowed to vary with $N$
(this is interesting, e.g., in order to interpolate between the two extreme
cases when one of the two quantities $T$ and $N$ tends to $\infty$ before the other).
This problem is considered in \cite{cf:Owczarek}, where the authors obtain
some asymptotic expressions for the partition function
$Z_{n,w}(a,b)$ of a polymer in a slit, in the case of two different rewards/penalties
(we are following their notation, in which
$n=N$, $w = T$, $a = \exp(\gamma_0)$ and $b = \exp(\gamma_T)$)
and with the boundary condition $S_N = 0$.
Focusing on the case $a = b = \exp(\gamma)$,
we mention in particular equations (6.4)--(6.6) in~\cite{cf:Owczarek},
which for $a < 2$ read as
\begin{equation} \label{eq:them1}
Z_{n,w}(a,a) \;\approx\; \frac{(const.)}{n^{3/2}} \, f_{\text{phase}}
\left( \frac{\sqrt n}{w} \right)\,,
\end{equation}
where we have neglected a combinatorial factor $2^n$ (which just comes from
a different choice of notation), and where
the function $f_{\text{phase}}(x)$ is such that
\begin{equation} \label{eq:them2}
f_{\text{phase}}(x) \;\to\; 1 \ \text{ as } \ x \to 0\,, \qquad
f_{\text{phase}}(x) \;\approx\; x^3 \, e^{-\pi^2 x^2 / 2}
\ \text{ as } \ x \to \infty \,.
\end{equation}
The regime $a < 2$ corresponds to $\gamma < \log 2$, hence, in view of
the correspondence $\delta = \gamma - \log 2$ described above, we are exactly in
the regime $\delta < 0$ for our model $\ensuremath{\boldsymbol{\mathrm{P}}}^T_{N, \delta}$.
We recall \eqref{eq:model} and, with the help of equation
\eqref{eq:overandover}, we can express the partition function
with boundary condition $S_N \in (2T)\mathbb{Z}$ as
\begin{equation*}
Z^{T,\, \{S_N\in (2T)\mathbb{Z}\}}_{N,\delta} \;\sim\; O(1)\,
Z^{T,\,\{S_N\in T\mathbb{Z}\}}_{N,\delta} \;\sim\; O(1) \, e^{\phi(\delta,T) N} \,
{\ensuremath{\mathcal P}} _{\delta, T}(N \in \tau) \,,
\end{equation*}
where, with some abuse of notation, we denote by $O(1)$ a quantity which
stays bounded away from $0$ and $\infty$ as $N \to\infty$. In this formula,
$\phi(\delta, T)$ is the {\sl free energy} of our model and
$(\{\tau_n\}_{n \in \mathbb{Z}^+}, {\ensuremath{\mathcal P}} _{\delta, T})$ is a basic renewal process,
introduced respectively in \S\ref{sec:free_energy} and \S\ref{sec:renewal} below.
In the case when $T = T_N \to \infty$, we can use the asymptotic expansion
\eqref{eq:phineg} for $\phi(\delta, T)$, which, combined with the bounds
in \eqref{eq:bound_renewal}, gives as $N,T \to \infty$
\begin{equation*}
Z^{T,\, \{S_N\in (2T)\mathbb{Z}\}}_{N,\delta} \,=\, \frac{O(1)}{N^{3/2}} \;
\max \bigg\{ 1, \bigg( \frac{\sqrt N}{T} \bigg)^3 \bigg\} \;
\exp \left(- \frac{\pi^2}{2} \frac{N}{T^2} + \frac{2 \pi^2}{e^{-\delta}-1}
\frac{N}{T^3}
+ o\left(\frac{N}{T^3}\right) \right).
\end{equation*}
Since $Z^{T,\,\{S_N\in (2T)\mathbb{Z}\}}_{N,\delta} = Z_{n,w}(a,a)$, we can rewrite
this relation using the notation of~\cite{cf:Owczarek}:
\begin{equation*}
Z_{n,w}(a,a) \;\approx\; \frac{(const.)}{n^{3/2}} \, f_{\text{phase}}
\left( \frac{\sqrt n}{w} \right) \, g \bigg( \frac{n^{1/3}}{w} \bigg) \,,
\quad \ \text{where} \ \ g(x) \;\approx\; e^{\frac{2\pi^2}{e^{-\delta}-1} x}
\ \ \text{as} \ x \to \infty\,.
\end{equation*}
We have therefore obtained a refinement of
equations \eqref{eq:them1}, \eqref{eq:them2}.
This is linked to the fact that we have gone beyond the first order in
the asymptotic expansion of the free energy $\phi(\delta, T)$, making
an additional term of the order $N/T_N^3$ appear.
We stress that this new term gives a non-negligible (in fact, exponentially diverging!)
contribution as soon as $T_N \ll N^{1/3}$ ($w \ll n^{1/3}$ in the notation
of~\cite{cf:Owczarek}).
This corresponds to the fact that, by Theorem~\ref{th:main},
the trajectories that touch the walls a number of times of the order $N/T_N^3$
are actually dominating the partition function when $T_N \ll N^{1/3}$.
Of course, a higher-order expansion of the free energy
(cf. Appendix~\ref{sec:fe_estimates}) may lead to
further correction terms.
\smallskip
\subsection{Outline of the paper}
Proving Theorem \ref{th:main} requires setting up some technical tools,
partially taken from~\cite{cf:CP},
that we present in Section \ref{sec:preliminary}.
More precisely,
in \S\ref{sec:free_energy} we introduce the free energy $\phi(\delta,T)$
of the polymer and we describe its asymptotic behavior
as $T\to \infty$ (for fixed $\delta<0$).
In \S\ref{sec:renewal} we point out a basic correspondence between the polymer
constrained to hit one of the interfaces at its right extremity and an
explicit renewal process.
In \S\ref{sec:asymp} we investigate this renewal process further,
providing estimates on the renewal function, which are of
crucial importance for the proof of Theorem~\ref{th:main}.
Sections \ref{sec:parti}, \ref{sec:partii},
\ref{sec:partiii} and \ref{sec:partiv} are dedicated respectively
to the proof of parts (\ref{part:1}), (\ref{part:2}), (\ref{part:3})
and (\ref{part:4}) of Theorem~\ref{th:main}. Finally,
some technical results are proven in the appendices.
We stress that the value of $\delta < 0$ is kept fixed throughout
the paper, so that the generic constants appearing in the proofs
may be $\delta$-dependent.
\medskip
\section{A renewal theory viewpoint}
\label{sec:preliminary}
In this section we recall some features of our model, including
a basic renewal theory representation, originally proven in \cite{cf:CP},
and we derive some new estimates.
\smallskip
\subsection{The free energy}
\label{sec:free_energy}
Considering for a moment our model when $T_N \equiv T \in 2\mathbb{N}$ is fixed,
i.e., it does not vary with $N$, we define the {\sl free energy}
$\phi(\delta, T)$ as the rate of exponential growth
of the partition function $Z_{N,\delta}^T$ as $N\to\infty$:
\begin{equation}\label{eq:fe}
\phi( \delta, T) \;:=\; \lim_{N\to\infty} \, \frac 1N
\, \log Z^{T}_{N,\delta} \;=\;
\lim_{N\to\infty} \, \frac 1N
\, \log \, E \left( e^{H_{N,\delta}^T} \right) \,.
\end{equation}
Generally speaking, the reason for looking at this function is
that the values of $\delta$ (if any) at which $\delta \mapsto \phi(\delta, T)$ is
not analytic correspond physically to the occurrence of a
{\sl phase transition} in the system.
As a matter of fact, in our case $\delta \mapsto \phi(\delta, T)$ is analytic
on the whole real line, for every $T \in 2\mathbb{N}$.
Nevertheless, the free energy $\phi(\delta, T)$ turns out to be a very useful
tool to obtain a path description of our model, even when
$T = T_N$ varies with $N$, as we explain in detail
in \S\ref{sec:renewal}. For this reason, we now recall some basic facts
on $\phi(\delta, T)$, that were proven in \cite{cf:CP}, and we derive its
asymptotic behavior as $T\to\infty$.
We introduce $\tau_1^T := \inf\{ n>0:\, S_n\in\{-T, 0, +T\} \}$,
that is the first epoch at which the polymer visits an
interface, and we denote by $Q_T(\lambda):= E \big( e^{-\lambda \tau_1^T} \big)$
its Laplace transform under the law of the simple random walk.
We point out that $Q_T(\lambda)$ is finite and
analytic on the interval $(\lambda_0^T, \infty)$,
where $\lambda_0^T
< 0$,
and $Q_T(\lambda) \to +\infty$ as $\lambda \downarrow \lambda_0^T$ (as a matter of fact,
one can give a closed explicit expression for $Q_T(\lambda)$, cf.
equations (A.4) and (A.5) in \cite{cf:CP}). A basic fact is that $Q_T(\cdot)$
is sharply linked to the free energy: more precisely, we have
\begin{equation}\label{eq:energie}
\phi(\delta, T) = (Q_T)^{-1}(e^{-\delta}),
\end{equation}
for every $\delta \in \mathbb{R}$
(see Theorem~1 in~\cite{cf:CP}). From this, it is easy to obtain
an asymptotic expansion of $\phi(\delta, T)$ as $T \to \infty$, for fixed
$\delta < 0$, which reads as
\begin{equation} \label{eq:phineg}
\phi(\delta,T) \;=\; - \frac{\pi^2}{2T^2} \bigg( 1 -
\frac{4}{e^{-\delta} - 1}\,\frac 1T + o\bigg( \frac 1T \bigg) \bigg)\,,
\end{equation}
as we prove in Appendix~\ref{sec:fe_estimates}.
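Relation \eqref{eq:energie} also lends itself to a simple numerical check, which we record only
as an illustration (it plays no role in the proofs): one can estimate $Q_T(\lambda)$ from
simulated copies of $\tau_1^T$ and solve $Q_T(\phi)=e^{-\delta}$ by bisection, comparing the
outcome with the expansion \eqref{eq:phineg}. The parameter values in the sketch below are
arbitrary.
\begin{verbatim}
import math, random

def sample_tau1(T, rng):
    """First epoch at which the simple random walk started at 0 visits T*Z."""
    S, n = 0, 0
    while True:
        S += rng.choice((-1, 1))
        n += 1
        if S % T == 0:
            return n

def free_energy_mc(delta=-0.7, T=10, n_samples=100000, seed=2):
    """Solve Q_T(phi) = exp(-delta) with an empirical Laplace transform Q_T."""
    rng = random.Random(seed)
    taus = [sample_tau1(T, rng) for _ in range(n_samples)]
    Q = lambda lam: sum(math.exp(-lam * t) for t in taus) / len(taus)
    target = math.exp(-delta)
    lo, hi = -(math.pi / T) ** 2, 0.0   # bracket: phi is negative and O(1/T^2)
    for _ in range(50):                 # bisection; Q is decreasing in lambda
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Q(mid) > target else (lo, mid)
    return 0.5 * (lo + hi)

delta, T = -0.7, 10
phi_as = -(math.pi**2) / (2 * T**2) * (1 - 4.0 / ((math.exp(-delta) - 1) * T))
print("Monte Carlo:", free_energy_mc(delta, T), " expansion:", phi_as)
\end{verbatim}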
\smallskip
\subsection{A renewal theory interpretation}
\label{sec:renewal}
We now recall a basic renewal theory description of our model,
that was proven in \S2.2 of~\cite{cf:CP}.
We have already introduced the first epoch $\tau_1^T$ at which
the polymer visits an interface. Let us extend this definition:
for $T\in 2\mathbb{N}\cup \{\infty\}$, we set $\tau^T_{0}=0$ and for $j\in \mathbb{N}$
\begin{equation}\label{jump}
\tau^T_{j} \;:=\; \inf\big\{n > \tau^T_{j-1}: \ S_n\in T \mathbb{Z} \big\}
\qquad \text{and} \qquad
\varepsilon^T_{j} \;:=\; \tfrac{S_{\tau^T_{j}}-S_{\tau^T_{j-1}}}{T}\,,
\end{equation}
where for $T = \infty$ we agree that $T\mathbb{Z} = \{0\}$. Plainly,
$\tau^T_j$ is the $j^{\text{th}}$ epoch at which $S$
visits an interface and $\varepsilon^T_j$
tells whether the $j^{\text{th}}$ visited interface
is the same as the $(j-1)^{\text{th}}$ ($\varepsilon^T_j=0$),
or the one above
($\varepsilon^T_j=1$) or below ($\varepsilon^T_j=-1$).
We denote by $q_T^j(n)$ the joint law of $(\tau^T_1, \gep^T_1)$
under the law of the simple random walk:
\begin{equation}\label{eq:defQ}
q^j_{T}(n) \;:=\; P\big( \tau^T_1=n\,,\, \varepsilon^T_1=j \big)\,.
\end{equation}
Of course, by symmetry we have that $q^1_T(n) = q^{-1}_T(n)$ for every $n$ and $T$.
We also set
\begin{equation} \label{eq:deftau}
q_T(n) \;:=\; P\big( \tau_1^T = n \big) \;=\; q_T^0(n) \,+\,
2 \, q_T^1(n) \,.
\end{equation}
Next we introduce a Markov chain $(\{(\tau_j, \gep_j)\}_{j \ge 0}, {\ensuremath{\mathcal P}} _{\delta, T})$
taking values in $(\mathbb{N}\cup\{0\}) \times \{-1, 0, 1\}$,
defined in the following way: $\tau_0 := \gep_0 := 0$ and under ${\ensuremath{\mathcal P}} _{\delta, T}$
the sequence of vectors $\{(\tau_j - \tau_{j-1}, \gep_j)\}_{j \ge 1}$ is i.i.d.
with marginal distribution
\begin{equation} \label{eq:defPdeltaT}
{\ensuremath{\mathcal P}} _{\delta, T}(\tau_1 = n,\, \gep_1 = j)
\;:=\; e^\delta \, q^j_T(n) \, e^{-\phi(\delta, T) \, n} \,.
\end{equation}
The fact that the r.h.s. of this equation indeed defines a probability
law follows from \eqref{eq:energie}, which implies that $Q_T(\phi(\delta,T)) = E(e^{-\phi(\delta,T) \tau_1^T}) = e^{-\delta}$.
Notice that the process $\{\tau_j\}_{j \ge 0}$ alone under ${\ensuremath{\mathcal P}} _{\delta, T}$
is a (undelayed) {\sl renewal process}, i.e. $\tau_0 = 0$ and
the variables $\{\tau_j - \tau_{j-1}\}_{j \ge 1}$ are i.i.d., with step law
\begin{equation} \label{eq:taudelta}
{\ensuremath{\mathcal P}} _{\delta, T}(\tau_1 = n ) \;=\; e^{\delta} \, q_T(n) \, e^{-\phi(\delta,T) n}
\;=\; e^{\delta} \, P(\tau_1^T = n) \, e^{-\phi(\delta,T) n} \,.
\end{equation}
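The law \eqref{eq:taudelta} is an exponential tilting of the law of $\tau_1^T$ and can be
tabulated numerically. The sketch below is illustrative only: it approximates $\phi(\delta,T)$
by the expansion \eqref{eq:phineg}, so the total mass it produces is close to, but not exactly,
one; its main point is that the tilted mean is much larger than the mean of $\tau_1^T$ under
$P$, a first manifestation of the component at scale $T^3$ studied in \S\ref{sec:asymp} below.
\begin{verbatim}
import math, random
from collections import Counter

def tilted_step_law(delta=-0.7, T=10, n_samples=100000, seed=3):
    """Tabulate P_{delta,T}(tau_1 = n) = e^delta q_T(n) e^{-phi n} from an empirical
    q_T(n) = P(tau_1^T = n); phi(delta,T) is replaced by its second-order expansion,
    so the total mass is only approximately one."""
    rng = random.Random(seed)
    taus = []
    for _ in range(n_samples):          # tau_1^T: first visit of S to T*Z
        S, n = 0, 0
        while True:
            S += rng.choice((-1, 1))
            n += 1
            if S % T == 0:
                taus.append(n)
                break
    phi = -(math.pi**2) / (2 * T**2) * (1 - 4.0 / ((math.exp(-delta) - 1) * T))
    q = Counter(taus)
    law = {n: math.exp(delta) * (c / n_samples) * math.exp(-phi * n)
           for n, c in q.items()}
    total = sum(law.values())                  # close to 1
    mean_tilted = sum(n * p for n, p in law.items())
    mean_srw = sum(taus) / n_samples           # mean of tau_1^T under P
    print("total mass:", round(total, 3),
          " tilted mean:", round(mean_tilted, 1),
          " untilted mean:", round(mean_srw, 1))

tilted_step_law()
\end{verbatim}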
Let us now make the link between the law ${\ensuremath{\mathcal P}} _{\delta, T}$ and our model
$\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^T$. We introduce two variables that count how many epochs
have taken place before $N$, in the processes $\tau^T$ and $\tau$ respectively:
\begin{equation} \label{eq:L}
L_{N,T} \;:=\; \sup \big\{ n \ge 0:\ \tau_n^T \le N \big\}\,,
\qquad
L_{N} \;:=\; \sup \big\{ n \ge 0:\ \tau_n \le N \big\} \,.
\end{equation}
We then have the following crucial result (cf. equation (2.13) in \cite{cf:CP}):
for all $N,T \in 2\mathbb{N}$
and for all $k\in\mathbb{N}$, $\{t_i\}_{1 \le i \le k} \in \mathbb{N}^k$,
$\{\sigma_i\}_{1 \le i \le k} \in \{-1, 0, +1\}^k$ we have
\begin{align} \label{eq:crucial}
\begin{split}
& \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T} \Big( L_{N,T} = k,\ (\tau_i^T , \gep_i^T) = (t_i,\sigma_i),\,
1 \le i \le k \,\Big|\, N \in \tau^T \Big)\\
& \qquad \quad \;=\; {\ensuremath{\mathcal P}} _{\delta,T} \Big( L_{N} = k,\ (\tau_i , \gep_i)
= (t_i,\sigma_i),\,
1 \le i \le k \,\Big|\, N \in \tau \Big)\,,
\end{split}
\end{align}
where $\{N \in \tau\} := \union_{k=0}^\infty \{\tau_k = N\}$
and analogously for $\{N \in \tau^T\}$.
In words, the process $\{(\tau_j^T, \gep_j^T)\}_{j}$ under
$\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^T(\,\cdot\,| N\in\tau^T)$ is distributed like
the Markov chain $\{(\tau_j, \gep_j)\}_j$
under ${\ensuremath{\mathcal P}} _{\delta, T}(\,\cdot\,|N\in\tau)$.
It is precisely this link with renewal theory that makes our model
amenable to precise estimates.
Note that the law ${\ensuremath{\mathcal P}} _{\delta, T}$ carries no explicit
dependence on~$N$.
Another basic relation we are going to use repeatedly is the following one:
\begin{equation} \label{eq:overandover}
E \left[ e^{H^{T}_{k, \delta}(S)} \, \boldsymbol{1}_{\{k \in \tau^T\}} \right]
\;=\; e^{\phi(\delta, T) k} \, {\ensuremath{\mathcal P}} _{\delta, T} \big(k \in \tau \big) \,,
\end{equation}
which is valid for all $k, T \in 2\mathbb{N}$ (cf. equation (2.11) in \cite{cf:CP}).
\smallskip
\subsection{Some asymptotic estimates}
\label{sec:asymp}
We now derive some estimates that will be used throughout the paper.
We start from the asymptotic behavior of $P(\tau_1^T = n)$
as $n\to\infty$. Let us set
\begin{equation} \label{eq:gT}
g(T) \;:=\; -\log \cos \left( \frac{\pi}{T} \right)
\;=\; \frac{\pi^2}{2 T^2} + O\left( \frac{1}{T^4} \right)\,,
\qquad (T \to \infty)\,.
\end{equation}
We then have the following
\medskip
\begin{lemma}\label{th:ineg2}
There exist positive constants $T_0, c_1, c_2, c_3, c_4$
such that when $T > T_0$ the following relations hold
for every $n \in 2\mathbb{N}$:
\begin{gather} \label{eq:boundq}
\frac{c_1}{\min\{T^3, n^{3/2}\}}\, e^{-g(T) n} \;\le\;
P(\tau_1^T = n) \;\le\; \frac{c_2}{\min\{T^3, n^{3/2}\}}
\, e^{-g(T) n} \,,\\
\label{eq:boundqbis}
\frac{c_3}{\min\{T, \sqrt{n}\}}\, e^{-g(T) n} \;\le\;
P(\tau_1^{T} > n) \;\le\; \frac{c_4}{\min\{T, \sqrt{n}\}}
\, e^{-g(T) n} \,.
\end{gather}
\end{lemma}
\medskip
The proof of Lemma~\ref{th:ineg2} is somewhat technical and is
deferred to Appendix~\ref{sec:lemmaineg2}.
Next we turn to the study of the
renewal process $\big( \{\tau_n\}_{n \ge 0}, {\ensuremath{\mathcal P}} _{\delta, T} \big)$.
It turns out that the law of $\tau_1$ under ${\ensuremath{\mathcal P}} _{\delta, T}$
is essentially split into two components:
the first one at $O(1)$, with mass $e^\delta$, and the second one at $O(T^3)$,
with mass $1-e^\delta$ (although we do not fully prove these results, it is useful
to keep them in mind). We start with the following estimates
on ${\ensuremath{\mathcal P}} _{\delta, T}(\tau_1 = n )$, which follow quite easily from Lemma~\ref{th:ineg2}.
\medskip
\begin{lemma}\label{th:good}
There exist positive constants $T_0, c_1, c_2, c_3, c_4$
such that when $T > T_0$ the following relations hold
for every $k \in 2\mathbb{N}$ and every $m, n \in 2\mathbb{N} \cup \{+\infty\}$ with $m < n$:
\begin{align} \label{eq:boundren}
\frac{c_1}{\min\{T^3, k^{3/2}\}}\, e^{-(g(T) + \phi(\delta,T)) k} & \;\le\;
{\ensuremath{\mathcal P}} _{\delta, T} (\tau_1 = k) \;\le\; \frac{c_2}{\min\{T^3, k^{3/2}\}}
\, e^{-(g(T) + \phi(\delta,T)) k} \\
\label{eq:boundrenbislb}
{\ensuremath{\mathcal P}} _{\delta, T}(m \le \tau_1 < n) & \;\ge\;
c_3 \, \left( e^{-(g(T) + \phi(\delta,T)) m} -
e^{-(g(T) + \phi(\delta,T)) n} \right) \\
\label{eq:boundrenbisub}
{\ensuremath{\mathcal P}} _{\delta, T}(\tau_1 \ge m)
& \;\le\; c_4 \, e^{-(g(T) + \phi(\delta,T)) m} \,.
\end{align}
\end{lemma}
\medskip
\begin{proof}
Equation \eqref{eq:boundren} is an immediate consequence of equations
\eqref{eq:taudelta} and \eqref{eq:boundq}. To prove \eqref{eq:boundrenbislb},
we sum the lower bound in \eqref{eq:boundren} over $k \in 2\mathbb{N}$ with $m \le k < n$,
observing that by \eqref{eq:phineg} and \eqref{eq:gT},
for every fixed $\delta < 0$, we have as $T\to\infty$
\begin{equation} \label{eq:gplusphi}
g(T) \,+\, \phi(\delta, T) \;=\; \frac{4 \pi^2}{2(e^{-\delta}-1)} \,
\frac{1}{T^3} \, \big( 1 + o(1) \big) \,.
\end{equation}
To get \eqref{eq:boundrenbisub}, we sum the upper bound in \eqref{eq:boundren}
over $k \in 2\mathbb{N}$ with $k \ge m$, and we are done.
\end{proof}
\medskip
Notice that equation \eqref{eq:boundren}, together with
\eqref{eq:gplusphi}, shows indeed that the law of $\tau_1$
has a component at $O(T^3)$, which is approximately geometrically distributed.
Other important asymptotic relations are the following ones:
\begin{align}\label{eq:asET}
{\ensuremath{\mathcal E}} _{\delta, T}(\tau_1) \;&=\; \frac{e^\delta (e^{-\delta}-1)^2}{2 \pi^2} \, T^3
\;+\; o(T^3)\,,\\
\label{eq:asET2}
{\ensuremath{\mathcal E}} _{\delta, T}(\tau_1^2) \;&=\; \frac{e^\delta (e^{-\delta}-1)^3}{2 \pi^4} \, T^6
\;+\; o(T^6)\,,
\end{align}
which are proven in Appendix~\ref{sec:further_estimates}. We stress
that these relations, together with equation \eqref{eq:asQ1bis},
imply that, under ${\ensuremath{\mathcal P}} _{\delta, T}$, the time $\hat \tau$ needed to hop
from an interface to a neighboring one is of order $T^3$, and this is
precisely the reason
why the asymptotic behavior of our model has a transition at $T_N \approx N^{1/3}$,
as discussed in the introduction.
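A small worked computation based on \eqref{eq:asET} (illustrative only) makes the location of
this transition explicit: the expected number of interface changes up to time $N$ is
approximately $N/{\ensuremath{\mathcal E}} _{\delta,T_N}(\tau_1)$, which diverges, stabilizes or
vanishes according to whether $T_N$ grows slower than, like, or faster than $N^{1/3}$.
\begin{verbatim}
import math

def expected_interface_changes(N, T, delta=-0.7):
    """Approximate number of interface changes up to time N, replacing
    E[tau_1] by its leading term exp(delta)(exp(-delta)-1)^2 T^3 / (2 pi^2)."""
    mean_tau1 = math.exp(delta) * (math.exp(-delta) - 1)**2 * T**3 / (2 * math.pi**2)
    return N / mean_tau1

# the number of changes grows with N for a < 1/3, stabilizes for a = 1/3
# and goes to zero for a > 1/3:
for N in (10**9, 10**12):
    print([round(expected_interface_changes(N, round(N**a)), 2)
           for a in (0.25, 1/3, 0.40)])
\end{verbatim}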
Finally, we state an estimate on the renewal function
${\ensuremath{\mathcal P}} _{\delta, T}(n \in \tau)$, which is proven in Appendix~\ref{sec:bound_renewal}.
\medskip
\begin{proposition}\label{th:bound_renewal}
There exist positive constants $T_0, c_1, c_2$
such that for $T > T_0$ and for all $n \in 2\mathbb{N}$ we have
\begin{gather} \label{eq:bound_renewal}
\frac{c_1}{\min\{n^{3/2}, T^3\}} \;\le\;
{\ensuremath{\mathcal P}} _{\delta,T} (n \in \tau) \;\le\;
\frac{c_2}{\min\{n^{3/2}, T^3\}}\,.
\end{gather}
\end{proposition}
\medskip
Note that the large $n$ behavior of \eqref{eq:bound_renewal}
is consistent with the classical renewal
theorem, because $1/{\ensuremath{\mathcal E}} _{\delta,T}(\tau_1) \approx T^{-3}$, by \eqref{eq:asET}.
One could hope to refine this estimate,
e.g., proving that for $n \gg T^3$ one has
${\ensuremath{\mathcal P}} _{\delta,T} (n \in \tau) = (1+o(1))/{\ensuremath{\mathcal E}} _{\delta,T}(\tau_1)$:
this would allow strengthening part~(\ref{part:1}) of Theorem~\ref{th:main}
to a full convergence in distribution
$S_N/(C_\delta \sqrt {N/T_N}) \Longrightarrow {\ensuremath{\mathcal N}} (0,1)$.
It is actually possible
to do this for $n \gg T^6$, using the ideas and techniques
of~\cite{cf:Ney}, thus strengthening Theorem~\ref{th:main}
in the restricted regime $T_N \ll N^{1/6}$ (we omit the details).
\medskip
\section{Proof of Theorem~\ref{th:main}: part (\ref{part:1})}
\label{sec:parti}
We are in the regime when $N/T_N^3\to \infty$ as $N\to \infty$.
The scheme of this proof is actually very
similar to the one of the proof of part (i) of Theorem 2 in~\cite{cf:CP}.
However, more technical difficulties
arise in this context, essentially because, in the depinning case ($\delta<0$), the density of contact between the polymer
and the interfaces vanishes as $N\to \infty$,
whereas it is strictly positive in the
pinning case ($\delta>0$). For this reason,
it is necessary to present this proof in detail.
Throughout the proof we set
$v_\delta=(1-e^\delta)/2$ and $k_N=\lfloor N/{\ensuremath{\mathcal E}} _{\delta,T_N}(\tau_1)\rfloor$.
Recalling \eqref{jump} and \eqref{eq:L}, we set $Y_0^{T_N}=0$ and
$Y_i^{T_N}=\gep_1^{T_N}+\dots+\gep_i^{T_N}$ for $i\in\{1,\dots,L_{N,T_N}\}$.
Plainly, we can write
\begin{equation}\label{eq:simpli}
S_N \;=\; Y^{T_N}_{L_{N,T_N}} \cdot T_N
\,+\, s_N \,, \qquad \text{with} \quad |s_N|\,<\,T_N \,.
\end{equation}
In view of equation \eqref{eq:asET}, this relation shows
that to prove \eqref{eq:infinite} we can equivalently
replace $S_N/(C_\delta \sqrt{N/T_N})$ with
$Y_{L_{N,T_N}}^{T_N}/\sqrt{v_\delta k_N}$.
\smallskip
\subsection{Step 1}
\label{sec:s1}
Recall \eqref{eq:defPdeltaT} and set $Y_n=\gep_1+\dots+\gep_n$ for all $n\geq 1$.
The first step consists in proving that for all $a<b$ in $\overline{\mathbb{R}}$
\begin{equation}\label{step1}
\lim_{N\to \infty}\; {\ensuremath{\mathcal P}} _{\delta,T_N}\Big(a<\frac{Y_{k_{N}}}{\sqrt{v_\delta k_{N}}}
\leq b\Big) \;=\; P(a<Z\leq b)\,,
\end{equation}
that is, under ${\ensuremath{\mathcal P}} _{\delta,T_N}$ and as $N\to \infty$ we have
$Y_{k_{N}}/\sqrt{v_\delta k_{N}} \Longrightarrow Z$, where ``$\Longrightarrow$''
denotes convergence in distribution.
The random variables $(\gep_1,\dots,\gep_N)$, defined under ${\ensuremath{\mathcal P}} _{\delta, T_N}$, are symmetric and i.i.d. Moreover, they take
their values in $\{-1,0,1\}$, which together with \eqref{eq:asQ1bis} entails
\begin{equation}\label{eq:nowecant}
{\ensuremath{\mathcal E}} _{\delta,T_N}(|\varepsilon_1|^3) \;=\;
{\ensuremath{\mathcal E}} _{\delta,T_N}((\varepsilon_1)^2) \;\longrightarrow\;
v_\delta \qquad \text{as} \ N\to \infty.
\end{equation}
Observe that $k_N\to \infty$ as $N\to \infty$ and
${\ensuremath{\mathcal E}} _{\delta,T_N}(\tau_1) = O(T_N^3)$, by \eqref{eq:asET}.
Thus, we can apply the Berry Esse\`en Theorem
that directly proves \eqref{step1} and completes this step.\qed
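The content of this step can be visualized with a short simulation. The sketch below is
illustrative only: the marginal law used for $\gep_1$ is simply a symmetric law on $\{-1,0,1\}$
with second moment $v_\delta$, consistent with \eqref{eq:nowecant}, and is not claimed to
coincide with the exact marginal \eqref{eq:defPdeltaT}.
\begin{verbatim}
import math, random

def clt_check(delta=-0.7, k_N=5000, n_rep=1000, seed=4):
    """Sample Y_{k_N} = eps_1 + ... + eps_{k_N} for i.i.d. symmetric steps in
    {-1, 0, 1} with E[eps^2] = v_delta = (1 - e^delta)/2, and check that
    Y_{k_N} / sqrt(v_delta k_N) is approximately standard Gaussian."""
    rng = random.Random(seed)
    v = (1.0 - math.exp(delta)) / 2.0
    p1 = v / 2.0                       # P(eps = +1) = P(eps = -1) = v/2
    vals = []
    for _ in range(n_rep):
        Y = 0
        for _ in range(k_N):
            u = rng.random()
            Y += 1 if u < p1 else (-1 if u < 2 * p1 else 0)
        vals.append(Y / math.sqrt(v * k_N))
    mean = sum(vals) / n_rep
    var = sum(x * x for x in vals) / n_rep - mean**2
    frac = sum(1 for x in vals if abs(x) <= 1.96) / n_rep
    print("mean:", round(mean, 3), " variance:", round(var, 3),
          " P(|Z|<=1.96):", round(frac, 3))   # expected: 0, 1, about 0.95

clt_check()
\end{verbatim}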
\smallskip
\subsection{Step 2}
\label{sec:s2}
Henceforth, we fix a sequence of integers $(V_N)_{N\geq 1}$ such that
$T_N^3 \ll V_N \ll N$.
In this step we prove that, for all $a<b \in \overline{\mathbb{R}}$, the
following convergence occurs, uniformly in $u\in\{0,\dots, 2V_N\}$:
\begin{equation}\label{step2}
\lim_{N\to \infty} \; {\ensuremath{\mathcal P}} _{\delta,T_N}
\Bigg(a<\frac{Y_{L_{N-u}}}{
\sqrt{v_\delta k_N}}\leq b \Bigg) \;=\; P(a<Z\leq b)\,.
\end{equation}
To obtain \eqref{step2}, it is sufficient to prove that,
as $N\to \infty$ and under the law ${\ensuremath{\mathcal P}} _{\delta, T_N}$,
\begin{equation}\label{eq:imp}
U_N:=\frac{Y_{k_N}}{\sqrt{v_\delta k_N}} \Longrightarrow Z \quad \quad
\text{and} \quad \quad
G_N:=\sup_{u\in\{0,\dots,2V_N\}}\bigg|\frac{Y_{L_{N-u}}-Y_{k_N}}{\sqrt{v_\delta k_N}}\bigg|\Longrightarrow 0 \,.
\end{equation}
Step 1 gives directly the first relation in \eqref{eq:imp}.
To deal with the second relation, we must show that
${\ensuremath{\mathcal P}} _{\delta,T_N}(G_N\geq \gep)\to 0$ as $N\to \infty$, for all $\gep>0$.
To this purpose, notice that
$\{G_N\geq \gep\}\subseteq A_\eta^N \cup B_{\eta,\gep}^N$, where for $\eta > 0$
we have set
\begin{align}
A_\eta^N:&=\big\{L_N-k_N\geq \eta k_N\big\}
\cup\big\{L_{N-2V_N}-k_N\leq -\eta k_N\big\}\\
B_{\eta,\gep}^N:&= \Bigg\{\sup
\bigg\{\bigg|\frac{Y_{k_N+i}-Y_{k_N}}{\sqrt{v_\delta k_N}}\bigg|\,,
\ i \in \{-\eta k_N,\dots,\eta k_N\} \bigg\} \geq \gep\Bigg\} \,.
\end{align}
Let us focus on ${\ensuremath{\mathcal P}} _{\delta,T_N}(A_\eta^N)$.
Introducing the centered variables
$\tilde{\tau_k} := \tau_k - k \cdot {\ensuremath{\mathcal E}} _{\delta,T_N}(\tau_1)$,
for $k \in \mathbb{N}$, by the Chebychev inequality we can write
(assuming that $(1-\eta)k_N \in \mathbb{N}$ for notational convenience)
\begin{align}\label{yeswecan}
\nonumber
{\ensuremath{\mathcal P}} _{\delta, T_N} & \big(L_{N-2V_N}-k_N<-\eta k_N\big)
\;=\; {\ensuremath{\mathcal P}} _{\delta, T_N} \big( \tau_{(1-\eta)k_N} > N-2V_N \big) \\
\nonumber
&\;=\; {\ensuremath{\mathcal P}} _{\delta, T_N} \big( \tilde\tau_{(1-\eta)k_N} > N- 2V_N
-(1-\eta) k_N {\ensuremath{\mathcal E}} _{\delta,T}(\tau_1) \,=\, \eta N- 2V_N \big) \\
& \;\le\; \frac{(1-\eta) k_N \var_{\delta, T_N}(\tau_1)}{(\eta N - 2V_N)^2} \;\le\;
\frac{N\; \var_{\delta,T_N}(\tau_1)}{(\eta N-2V_N)^2\, {\ensuremath{\mathcal E}} _{\delta,T_N}(\tau_1)}\,.
\end{align}
With the help of the estimates in \eqref{eq:asET}, \eqref{eq:asET2},
we can assert that
$\var_{\delta,T_N}(\tau_1)/{\ensuremath{\mathcal E}} _{\delta,T_N}(\tau_1) = O(T_N^3)$.
Since $N \gg V_N$ and $N \gg T_N^3$, the r.h.s. of~\eqref{yeswecan}
vanishes as $N \to \infty$. With a similar technique, we prove that
${\ensuremath{\mathcal P}} _{\delta, T_N} \big(L_N-k_N>\eta k_N\big)\to 0$
as well, and consequently ${\ensuremath{\mathcal P}} _{\delta,T_N}(A_\eta^N)\to 0$ as $N\to \infty$.
At this stage it remains to show that, for every fixed $\gep>0$,
the quantity ${\ensuremath{\mathcal P}} _{\delta, T_N} \big(B_{\eta,\gep}^N \big)$
vanishes as $\eta\to 0$, {\sl uniformly in $N$}. This holds true
because $\{Y_n\}_n$ under ${\ensuremath{\mathcal P}} _{\delta, T_N}$ is a symmetric random walk,
and therefore $\{(Y_{k_N + j} - Y_{k_N})^2\}_{j \ge 0}$
is a submartingale (and the same with $j \mapsto -j$).
Thus, the maximal inequality yields
\begin{equation}\label{eq:soutcha}
{\ensuremath{\mathcal P}} _{\delta, T_N} \big( B_{\eta,\gep}^N \big) \;\le\; \frac{2}{\gep} \,
\frac{{\ensuremath{\mathcal E}} _{\delta, T_N} \big( (Y_{k_N + \eta k_N} - Y_{k_N})^2 \big)}
{v_\delta k_N} \;\le\;
\frac{2\, \eta\, {\ensuremath{\mathcal E}} _{\delta, T_N}(\gep_1^2)}{\gep v_\delta}
\;\le\; \frac{2 \, \eta}{\gep \, v_\delta} \,.
\end{equation}
We can therefore assert that the r.h.s in \eqref{eq:soutcha} tends
to $0$ as $\eta\to 0$, uniformly in $N$. This completes the step.\qed
\smallskip
\subsection{Step 3}
\label{sec:s3}
Recall that $k_N=\lfloor N/{\ensuremath{\mathcal E}} _{\delta,T_N}(\tau_1)\rfloor$.
In this step we assume for simplicity that $N\in2\mathbb{N}$,
and we aim at switching from the free measure ${\ensuremath{\mathcal P}} _{\delta,T_N}$ to
${\ensuremath{\mathcal P}} _{\delta,T_N}\big(\cdot\,\big|\, N\in\tau \big)$.
More precisely, we want to prove that there exist two constants $0<c_1< c_2<\infty$
such that for all $a<b \in\overline{\mathbb{R}}$ there exists $N_0 > 0$
such that for $N\ge N_0$ and for all $u \in \{0,\dots,V_N\} \cap 2\mathbb{N}$
\begin{equation}\label{step3}
c_1 \, P(a<Z\leq b) \;\le\; {\ensuremath{\mathcal P}} _{\delta,T_N}
\bigg( a < \frac{Y_{L_{N-u}}}{
\sqrt{v_\delta k_N}} \leq b \,\bigg|\, N-u \in\tau \bigg)
\;\le\; c_2 \, P(a<Z\leq b) \,.
\end{equation}
A first observation is that we can safely replace
${L_{N-u}}$ with ${L_{N-u-T_N^3}}$ in \eqref{step3}.
To prove this, since $k_N \to \infty$, the following bound is sufficient:
for every $N, M \in 2\mathbb{N}$
\begin{equation} \label{eq:tec_toprove}
\sup_{u \in \{0, \ldots, V_N\} \cap 2\mathbb{N}} \;
{\ensuremath{\mathcal P}} _{\delta,T_N} \Big( \big| Y_{L_{N-u}} - Y_{L_{N-u-T_N^3}} \big| \ge M
\,\Big|\, N-u \in \tau \Big)
\;\le\; \frac{(const.)}{M} \,.
\end{equation}
Note that the l.h.s. is bounded above by
${\ensuremath{\mathcal P}} _{\delta,T_N} \big( \# \big\{ \tau \cap[N-u-T_N^3, N-u) \big\} \ge M
\,\big|\, N-u \in \tau \big)$. By time-inversion and the renewal property
we then rewrite this as
\begin{equation}\label{eq:barabao}
\begin{split}
& {\ensuremath{\mathcal P}} _{\delta,T_N} \big( \# \big\{ \tau \cap(0, T_N^3] \big\} \ge M
\,\big|\, N-u \in \tau \big) \;=\;
{\ensuremath{\mathcal P}} _{\delta,T_N} \big( \tau_M \le T_N^3 \,\big|\, N-u \in \tau \big) \\
& \qquad \;\le\; \sum_{n=1}^{T_N^3} {\ensuremath{\mathcal P}} _{\delta,T_N} \big( \tau_M = n \big)
\cdot \frac{{\ensuremath{\mathcal P}} _{\delta,T_N} \big( N-u-n \in \tau \big)}
{{\ensuremath{\mathcal P}} _{\delta,T_N} \big( N-u \in \tau \big)} \,.
\end{split}
\end{equation}
Recalling that $N \gg V_N \gg T_N^3$ and using the estimate
\eqref{eq:bound_renewal}, we see that the ratio in the r.h.s. of
\eqref{eq:barabao} is bounded above by some constant, uniformly
for $0 \le n \le T_N^3$ and $u \in \{0, \ldots, V_N\} \cap 2\mathbb{N}$.
We are therefore left with estimating ${\ensuremath{\mathcal P}} _{\delta,T_N} \big( \tau_M \le T_N^3 \big)$.
Recalling the definition $\tilde \tau_k := \tau_k - k \cdot {\ensuremath{\mathcal E}} _{\delta, T}(\tau_1)
\sim \tau_k - c k T^3$ as $T \to \infty$, where $c > 0$ by \eqref{eq:asET},
it follows that for large $N\in\mathbb{N}$ we have
\begin{equation*}
\begin{split}
& {\ensuremath{\mathcal P}} _{\delta, T_N}\big( \# \big\{ \tau \cap [0, T_N^3] \big\} \ge M \big) \;=\;
{\ensuremath{\mathcal P}} _{\delta, T_N}(\tau_M \le T_N^3) \\
& \quad \;\le\; {\ensuremath{\mathcal P}} _{\delta, T_N} \bigg( \tilde \tau_M \le
-\frac c2 \, M \,T_N^3 \bigg) \;\le\;
\frac{4M \, \var_{\delta,T_N}(\tau_1)}{c^2 \, M^2 \, T_N^6}
\;\le\; \frac{(const.)}{M}\,,
\end{split}
\end{equation*}
having applied the Chebychev inequality and \eqref{eq:asET2}. This proves
\eqref{eq:tec_toprove}.
Let us come back to \eqref{step3}. By summing over the last point in $\tau$
before $N-u-T_N^3$ (call it $N-u-T_N^3-t$) and the first point in $\tau$
after $N-u-T_N^3$ (call it $N-u-T_N^3+r$), using the Markov property
we obtain
\begin{align} \label{eq:long}
\begin{split}
& {\ensuremath{\mathcal P}} _{\delta,T_N} \Bigg(a< \frac{Y_{L_{N-u-T_N^3}}}{ \sqrt{v_\delta k_N}}
\leq b \,\Bigg |\, N-u\in\tau \Bigg)\\
& \ \;=\; \sum_{t=0}^{N-T_N^3-u} {\ensuremath{\mathcal P}} _{\delta, T_N}
\Bigg( a<\frac{Y_{L_{N-u-T_N^3-t}}}{\sqrt{v_\delta k_N}} \leq b \,,\,
N-u-T_N^3-t \in \tau \Bigg) \cdot {\ensuremath{\mathcal P}} _{\delta, T_N}\big( \tau_1 > t \big)
\cdot \Theta^u_{\delta,N}(t) \,,
\end{split}
\end{align}
where $\Theta^u_{\delta,N}$ is defined by
\begin{equation}\label{theta}
\Theta^u_{\delta,N}(t) \;:=\; \frac{\sum_{r=1}^{T_N^3}
{\ensuremath{\mathcal P}} _{\delta, T_N}\big(\tau_1 = t+r\big) \cdot {\ensuremath{\mathcal P}} _{\delta, T_N}
\big(T_N^3-r \in \tau \big)}
{{\ensuremath{\mathcal P}} _{\delta, T_N} \big( N-u\in\tau \big) \cdot
{\ensuremath{\mathcal P}} _{\delta, T_N}\big( \tau_1>t \big)}\,.
\end{equation}
Let us set ${\ensuremath{\mathcal I}} _N^u:=\{0,\dots,N-u-T_N^3\}$.
Notice that, if we replace $\Theta^u_{\delta,N}(t)$ by the constant $1$
in the r.h.s. of \eqref{eq:long}, the latter becomes equal to
\begin{equation}\label{eq:intew}
{\ensuremath{\mathcal P}} _{\delta,T_N}
\bigg(a<\frac{Y_{L_{N-u-T_N^3}}}{\sqrt{v_\delta k_N}}\leq b \bigg).
\end{equation}
Since $u + T_N^3 \le 2 V_N$ for large $N$ (because
$V_N \gg T_N^3$), equation \eqref{step2} implies that
\eqref{eq:intew} converges as $N\to\infty$ to $P(a < Z \le b)$,
uniformly for $u \in \{0, \ldots, V_N\} \cap 2\mathbb{N}$.
Therefore, equation \eqref{step3} will be proven (completing this step)
once we show that there exists $N_0$ such that
$\Theta^u_{\delta,N}(t)$ is bounded from above and below by two constants
$0<l_1<l_2<\infty$, for $N \ge N_0$ and for
all $u\in\{0,\dots,V_N\}$ and $t\in{\ensuremath{\mathcal I}} _N^u$.
Let us set $K_N(n) := {\ensuremath{\mathcal P}} _{\delta, T_N}(\tau_1 = n)$ and $u_N(n) :=
{\ensuremath{\mathcal P}} _{\delta, T_N}(n \in \tau)$.
The lower bound is obtained by restricting the sum in the numerator of \eqref{theta} to
$r\in\{1,\dots, T_N^3/2\}$. Recalling that $N\gg V_N \gg T_N^3$,
and applying the upper (resp. lower) bound in \eqref{eq:bound_renewal}
to $u_N(N-u)$ (resp. $u_N(T_N^3-r)$), we have that for large $N$,
uniformly in $u\in\{0,\dots,V_N\}$ and $t\in {\ensuremath{\mathcal I}} _N^u$,
\begin{equation}\label{eq:etec}
\Theta^u_{\delta,N}(t) \;\geq \;
\frac{\sum_{r=1}^{T_N^3/2}
K_N(t+r)\cdot u_N
\big(T_N^3-r\big)}
{u_N\big( N-u\big) \cdot
\sum_{j=1}^{\infty} K_N(t+j)}
\;\geq\; \frac{c_1}{c_2} \cdot \frac{\sum_{r=1}^{T_N^3/2}
K_N(t+r)}
{\sum_{j=1}^{\infty} K_N(t+j)}\,.
\end{equation}
Then, we use \eqref{eq:boundrenbislb} to bound from below the numerator
in the r.h.s. of \eqref{eq:etec} and we use \eqref{eq:boundrenbisub}
to bound from above its denominator. This allows us to write
\begin{equation}\label{eq:etec2}
\Theta^u_{\delta,N}(t) \;\geq \;
\frac{c_1\, c_3\, (1-e^{-(g(T_N)+\phi(\delta,T_N))\frac{T_N^3}{2}})}{c_2\, c_4}\,.
\end{equation}
Moreover, \eqref{eq:gplusphi} shows that there exists $m_\delta>0$ such
that $g(T_N)+\phi(\delta,T_N)\sim m_\delta/T_N^3$ as $N\to \infty$, which
proves that the r.h.s. of \eqref{eq:etec2} converges to a constant $c>0$
as $N$ tends to $\infty$. This completes the proof of the lower bound.
The upper bound is obtained by splitting the r.h.s. of \eqref{theta} into
\begin{equation}\label{thetap}
R_N+D_N\;:=\; \frac{\sum_{r=1}^{T_N^3/2}
K_N(t+r) \cdot u_N(T_N^3-r)}
{u_N\big( N-u\big) \cdot
\sum_{j=1}^{\infty} K_N(t+j)}\,+\,\frac{\sum_{r=1}^{T_N^3/2}
K_N(t+T_N^3-r) \cdot u_N(r)}
{u_N\big( N-u\big) \cdot
\sum_{j=1}^{\infty} K_N(t+j)}.
\end{equation}
The term $R_N$ can be bounded from above by a constant by simply applying
the upper bound in \eqref{eq:bound_renewal}
to $u_N(T_N^3-r)$ for all $r\in\{1,\dots,T_N^3/2\}$ and the lower bound to
$u_N(N-u)$.
To bound $D_N$ from above, we use the upper bound in \eqref{eq:boundren},
which, together with the fact that
$g(T_N)+\phi(\delta,T_N)\sim m_\delta/T_N^3$,
shows that there exists $c>0$ such that for $N$ large enough and
$r\in\{1,\dots,T_N^3/2\}$ we have
\begin{equation}\label{eq:theta2}
K_N(t+T_N^3-r)\leq \frac{c}{T_N^3}
\, e^{-(g(T_N) + \phi(\delta,T_N))\, t}.
\end{equation}
Notice also that by \eqref{eq:boundrenbislb} we can assert that
\begin{equation}\label{eq:etac}
\sum_{j=1}^{\infty} K_N(t+j)\geq c_3 e^{-(g(T_N) + \phi(\delta,T_N))\,t}.
\end{equation}
Finally, \eqref{eq:theta2}, \eqref{eq:etac} and the fact that $u_N(N-u)\geq c_1/T_N^3$ for all
$u\in\{0,\dots,V_N\}$ (by \eqref{eq:bound_renewal}) allow us to write
\begin{equation}\label{theta3}
D_N\;\leq\; \frac{c \sum_{r=1}^{T_N^3/2} u_N(r)}
{c_1 c_3}.
\end{equation}
By applying the upper bound in \eqref{eq:bound_renewal}, we can easily check that $\sum_{r=1}^{T_N^3/2} u_N(r)$ is bounded from above by a constant, uniformly in $N\geq 1$. This completes the proof of the step.\qed
\smallskip
\subsection{Step 4}
\label{sec:s4}
In this step we complete the proof of Theorem~\ref{th:main} (\ref{part:1}),
by proving equation \eqref{eq:infinite}, that we
rewrite for convenience: there exist $0<c_1< c_2<\infty$
such that for all $a<b \in\overline{\mathbb{R}}$ and for large $N\in2\mathbb{N}$
(for simplicity)
\begin{equation}\label{step4}
c_1 \, P(a<Z\leq b) \;\leq\; \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}
\Bigg( a<\frac{Y^{T_N}_{L_N}}{\sqrt{v_\delta k_N}}\leq b \Bigg)
\;\leq\; c_2 \, P(a<Z\leq b) \,.
\end{equation}
We recall \eqref{jump} and we start by summing over the location $\mu_N := \tau^{T_N}_{L_{N,T_N}}$ of
the last point in $\tau^{T_N}$ before $N$:
\begin{equation}\label{eq:trun}
\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \Bigg(
a<\frac{Y_{L_{N,T_N}}^{T_N}}{\sqrt{v_\delta k_N}}\leq b \Bigg)
\;=\; \sum_{\ell = 0}^N \; \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}
\Bigg(a<\frac{Y_{L_{N,T_N}}^{T_N}}{\sqrt{v_\delta k_N}}\leq b \,\bigg|\, \mu_N = N-\ell \Bigg)
\ \cdot \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}\big( \mu_N = N-\ell \big)\,.
\end{equation}
Of course, only the terms with $\ell$ even are non-zero.
We want to show that the sum in the r.h.s. of \eqref{eq:trun}
can be restricted to $\ell \in\{0,\dots,V_N\}$.
To that aim, we need to prove that
$\sum_{\ell =V_N }^N \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}\big( \mu_N = N-\ell \big)$ tends to $0$
as $N\to \infty$. We start by displaying a lower bound
on the partition function $Z_{N,\delta}^{T_N}$.
\smallskip
\begin{lemma}\label{le:boundzn}
There exists a constant $c>0$ such that for $N$ large enough
\begin{equation}\label{eq:lemest}
Z_{N, \delta}^{T_N} \;\geq\;
\frac{c}{T_N} \; e^{\phi(\delta,T_N) N} \,.
\end{equation}
\end{lemma}
\begin{proof}
Summing over the location of $\mu_N$ and
using the Markov property, together with \eqref{eq:overandover}, we have
\begin{align}\label{eq:egalit}
\nonumber Z_{N,\delta}^{T_N} & \;=\; E\Big[ e^{H^{T_N}_{N,\delta}(S)} \Big]
\;=\; \sum_{r=0}^N E\Big[ e^{H^{T_N}_{N,\delta}(S)} \,
\boldsymbol{1}_{\{\mu_N = r \}} \Big]\\
\nonumber & \;=\; \sum_{r=0}^N E\Big[ e^{H^{T_N}_{r,\delta}(S)} \,
\boldsymbol{1}_{\{r \in \tau^{T_N}\}} \Big] \, P(\tau_1^{T_N} > N-r)\\
& \;=\; \sum_{r=0}^N e^{\phi(\delta,T_N) r} \,
{\ensuremath{\mathcal P}} _{\delta, T_N} (r \in \tau)\, P(\tau_1^{T_N} > N-r) \,.
\end{align}
From \eqref{eq:egalit} and the lower bounds
in \eqref{eq:boundqbis} and \eqref{eq:bound_renewal}, we obtain for $N$ large enough
\begin{equation}\label{eq:egal2}
Z_{N,\delta}^{T_N} \;\ge\; (const.)\;
e^{\phi(\delta,T_N) N} \sum_{r=0}^N\;
\frac{e^{-[\phi(\delta,T_N)+g(T_N)](N-r)}}
{\min\{\sqrt{N-r+1}, T_N\}\,\min\{(r+1)^{3/2}, T_N^3 \}} \,.
\end{equation}
At this stage, we recall that
$\phi(\delta,T)+g(T) = m_\delta/T^3 + o(1/T^3)$ as $T\to\infty$,
with $m_\delta > 0$, by \eqref{eq:gplusphi}. Since $T_N^3 \ll N$,
we can restrict the sum in \eqref{eq:egal2}
to $r\in\{N-T_N^3,\dots,N-T_N^2\}$, for large $N$, obtaining
\begin{align}\label{eq:egal3}
Z_{N,\delta}^{T_N} \;\ge\; (const.)\,\frac{e^{\phi(\delta,T_N) N}}{T_N^4} \,
\sum_{r=N-T_N^3}^{N-T_N^2}\, e^{-\big( \frac{m_\delta}{T_N^3}+
o\big( \frac{1}{T_N^3} \big) \big)(N-r)}
\;\geq\; (const.') \,
\frac{e^{\phi(\delta,T_N) N}}{T_N} \,,
\end{align}
because the geometric sum gives a contribution of order $T_N^3$.
\end{proof}
We can now bound from above (using the Markov property and \eqref{eq:overandover})
\begin{align}\label{eq:calc}
\nonumber\sum_{\ell=0}^{N-V_N} \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}(\mu_N & =\ell)
\;=\;\sum_{\ell=0}^{N-V_N}
\frac{E \big( \exp\big( H_{\ell, \delta}^{T_N}(S) \big) \boldsymbol{1}_{\{\ell \in \tau^{T_N}\}}\big)
\cdot P \big( \tau_1^{T_N} > N-\ell \big)}{Z_{N,\delta}^{T_N}}\\
\nonumber &\;=\;\sum_{\ell=0}^{N-V_N}
\frac{{\ensuremath{\mathcal P}} _{\delta,T_N}(\ell\in\tau)\ e^{\phi(\delta, T_N)\ell}\ P \big( \tau_1^{T_N} > N-\ell \big)}{Z_{N,\delta}^{T_N}}\\
& \;\leq\; (const.)\, \sum_{\ell=0}^{N-V_N}
\frac{T_N}{\min\{(\ell+1)^{3/2}, T_N^3\}}\cdot \frac{e^{-[\phi(\delta, T_N)+g(T_N)] (N-\ell)}}{\min\{\sqrt{N-\ell},T_N\}} \,,
\end{align}
where we have used Lemma \ref{le:boundzn} and the upper bounds in \eqref{eq:boundqbis} and \eqref{eq:bound_renewal}.
For notational convenience we set $d(T_N)=\phi(\delta, T_N)+g(T_N)$.
Then, the estimate \eqref{eq:gplusphi} and the fact that $V_N\gg T_N^3$ imply that
\begin{align}\label{eq:ratic}
\begin{split}
\sum_{\ell=0}^{N-V_N} \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}(\mu_N=\ell)&\;\leq\;
(const.)\, e^{-d(T_N) V_N}
\sum_{\ell=0}^{N-V_N}
\frac{e^{-d(T_N) (N-V_N-\ell)}}{\min\{(\ell+1)^{3/2},T_N^3\}}\\
&\;\leq\; (const.')\, e^{-d(T_N) V_N} \Bigg(\sum_{\ell=0}^{\infty}
\frac{1}{(\ell+1)^{3/2}}+\sum_{\ell=0}^{\infty}
\frac{e^{-d(T_N)\, \ell}}{T_N^3}\Bigg) \,.
\end{split}
\end{align}
Since $d(T_N)\sim m_\delta/T_N^3$, with $m_\delta > 0$,
and $V_N\gg T_N^3$ we obtain that the l.h.s. of \eqref{eq:ratic}
tends to $0$ as $N\to \infty$.
Thus, we can write
\begin{equation}\label{eq:trun2}
\begin{split}
& \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \Bigg(a<
\frac{Y_{L_{N,T_N}}^{T_N}}{\sqrt{v_\delta k_N}}\leq b \Bigg) \\
& \quad \;=\; \sum_{\ell = 0}^{V_N} \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}
\Bigg(a<\frac{Y_{L_{N,T_N}}^{T_N}}{\sqrt{v_\delta k_N}}\leq b \,\Bigg|\,
\mu_N = N-\ell \Bigg)
\; \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}\big( \mu_N = N-\ell \big)
\; + \; \gep_N(a,b) \,,
\end{split}
\end{equation}
where $\gep_N(a,b)$ tends to $0$ as $N\to \infty$, uniformly
over $a,b \in\mathbb{R}$. At this stage,
by using the Markov property and \eqref{eq:crucial} we may write
\begin{align*
& \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}
\Bigg(a<\frac{Y_{L_{N,T_N}}^{T_N}}{\sqrt{v_\delta k_N}}\leq b
\,\Bigg|\, \mu_N = N-\ell \Bigg)
\;=\; \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}
\Bigg(a<\frac{Y_{L_{N-\ell,T_N}}^{T_N}}{\sqrt{v_\delta k_N}}\leq b
\,\Bigg|\, N-\ell \in \tau^T \Bigg) \\
& \qquad \;=\; {\ensuremath{\mathcal P}} _{\delta,T_N}
\Bigg(a<\frac{Y_{L_{N-\ell}}}{\sqrt{v_\delta k_N}}\leq b \,\Bigg|\,
N-\ell\in\tau \Bigg) \,.
\end{align*}
Plugging this into \eqref{eq:trun2},
recalling \eqref{step3} and the fact that
$\sum_{\ell = 0}^{V_N} \ensuremath{\boldsymbol{\mathrm{P}}}^{T_N}_{N,\delta}(\mu_N = N-\ell) \to 1$ (by \eqref{eq:ratic}),
it follows that equation \eqref{step4} is proven, and the proof is complete.
\qed
\medskip
\section{Proof of Theorem~\ref{th:main}: part (\ref{part:2})}
\label{sec:partii}
We assume that $T_N \sim (const.) N^{1/3}$ and we start by proving
the first relation in \eqref{eq:crit}, which we rewrite as follows:
for every $\gep > 0$ we can find $M>0$ such that for large $N$
\begin{equation*}
\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big( |S_N| > M \cdot T_N \big) \; \le \gep\,.
\end{equation*}
Recalling that $L_{N,T}$ is the number of times the polymer
has touched an interface up to epoch $N$, see \eqref{eq:L},
we have $|S_N| \le T_N \cdot (L_{N,T_N} + 1)$, hence it suffices to show that
\begin{equation} \label{eq:toproveii}
\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big( L_{N,T_N} > M \big) \; \le \gep\,.
\end{equation}
By using \eqref{eq:crucial} we have
\begin{align*}
\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big( & L_{N,T_N} > M \big)
\;=\; \frac{1}{Z_{N,\delta}^{T_N}} \, E\Big[ e^{H^{T_N}_{N,\delta}(S)}
\, \boldsymbol{1}_{\{L_{N,T_N} > M\}} \Big] \\
& \;=\; \frac{1}{Z_{N,\delta}^{T_N}} \,
\sum_{r=0}^N E\Big[ e^{H^{T_N}_{r,\delta}(S)} \,
\boldsymbol{1}_{\{L_{r,T_N} > M\}} \,
\boldsymbol{1}_{\{r \in \tau^{T_N}\}} \Big] \, P(\tau_1^{T_N} > N-r) \\
& \;=\; \frac{1}{Z_{N,\delta}^{T_N}} \,
\sum_{r=0}^N e^{\phi(\delta, T_N)r} \, {\ensuremath{\mathcal P}} _{\delta, T_N} \big( L_{r,T_N} > M ,
\, r \in \tau^{T_N} \big) \, P(\tau_1^{T_N} > N-r) \,.
\end{align*}
By \eqref{eq:boundqbis} and \eqref{eq:gT} it follows easily that
\begin{equation} \label{eq:plainlb}
Z^{T_N}_{N,\delta} \;\ge\; P(\tau_1^{T_N} > N) \;\ge\;
\frac{(const.)}{T_N}\, e^{-\frac{\pi^2}{2 T_N^2} N}
\end{equation}
(note that this bound holds true whenever we have $(const.) N^{1/4} \le T_N \le (const.')
\sqrt{N}$ for large $N$).
Using this lower bound on $Z^{T_N}_{N,\delta}$,
together with the upper bound in \eqref{eq:boundqbis}, the asymptotic developments
in \eqref{eq:gplusphi} and \eqref{eq:gT}, we obtain
\begin{align*}
\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big( L_{N,T_N} > M \big)
\;\le\; (const.) \, T_N \,
\sum_{r=0}^N {\ensuremath{\mathcal P}} _{\delta, T_N} \big( L_{r,T_N} > M ,
\, r \in \tau^{T_N} \big) \, \frac{1}{\min\{\sqrt{N-r+1}, T_N\}} \,.
\end{align*}
The contribution of the terms with $r > N-T_N^2$
is bounded with the upper bound \eqref{eq:bound_renewal}:
\begin{equation*}
T_N \sum_{r=N-T_N^2}^N \frac{1}{T_N^3} \, \frac{1}{\sqrt{N-r+1}}
\;\le\; \frac{(const.)}{T_N} \;\longrightarrow \;0 \qquad
(N\to\infty) \,,
\end{equation*}
while for the terms with $r \le N-T_N^2$ we get
\begin{equation*}
T_N \, \sum_{r=0}^N {\ensuremath{\mathcal P}} _{\delta, T_N} \big( L_{r,T_N} > M ,
\, r \in \tau^{T_N} \big) \, \frac{1}{T_N} \;=\;
{\ensuremath{\mathcal E}} _{\delta, T_N} \big( (L_{N,T_N} - M)
\boldsymbol{1}_{\{L_{N,T_N} > M\}} \big) \,.
\end{equation*}
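The last equality is simply obtained by exchanging summation and expectation: on the event $\{r\in\tau^{T_N}\}$ with $r>0$ one has $r=\tau^{T_N}_i$ for some $1\le i\le L_{N,T_N}$ and $L_{r,T_N}=i$, whence
\begin{equation*}
\sum_{r=0}^N \boldsymbol{1}_{\{L_{r,T_N} > M\}}\, \boldsymbol{1}_{\{r \in \tau^{T_N}\}}
\;=\; \#\big\{ 1\le i\le L_{N,T_N}\colon\, i>M \big\}
\;=\; (L_{N,T_N}-M)\, \boldsymbol{1}_{\{L_{N,T_N}>M\}}\,.
\end{equation*}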
Finally, we simply observe that $\{L_{N,T_N}=k\} \subseteq
\inter_{i=1}^k\{\tau_i-\tau_{i-1} \le N\}$, hence
\begin{equation*}
{\ensuremath{\mathcal P}} _{\delta, T_N}(L_{N,T_N}=k) \;\le\;
\big( {\ensuremath{\mathcal P}} _{\delta, T_N}(\tau_1 \le N) \big)^k \;\le\; c^k\,,
\end{equation*}
with $0 < c < 1$, as it follows from \eqref{eq:boundrenbislb} and
\eqref{eq:gplusphi} recalling that $N = O(T_N^3)$.
Putting together the preceding estimates, we have
\begin{align*}
\ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big( L_{N,T_N} > M \big) & \;\le\;
(const.) \, {\ensuremath{\mathcal E}} _{\delta, T_N} \big( (L_{N,T_N} - M)
\boldsymbol{1}_{\{L_{N,T_N} > M\}} \big) \\
& \;=\; (const.) \, \sum_{k=M+1}^\infty (k-M)
\, {\ensuremath{\mathcal P}} _{\delta, T_N}(L_{N,T_N} = k) \\
& \;\le\; (const.) \, \sum_{k=M+1}^\infty (k-M) \, c^k
\;\le\; (const.') \, c^M\,,
\end{align*}
and \eqref{eq:toproveii} is proven by choosing $M$ sufficiently large.
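For the reader's convenience, the last series can be computed explicitly: for $0<c<1$,
\begin{equation*}
\sum_{k=M+1}^\infty (k-M)\, c^k \;=\; c^{M}\sum_{j=1}^\infty j\, c^{j} \;=\; \frac{c^{M+1}}{(1-c)^2}\,,
\end{equation*}
which is indeed of the form $(const.')\, c^M$, with $(const.')$ depending only on $c$.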
\smallskip
Finally, we prove at the same time
the second relations in \eqref{eq:crit} and \eqref{eq:supercrit1}, by showing that
for every $\gep > 0$ there exists $\eta > 0$ such that
for large $N$
\begin{equation} \label{eq:secondpart}
\ensuremath{\boldsymbol{\mathrm{P}}}^{T_N}_{N,\delta} \big( |S_N| \le \eta \, T_N \big) \;\le\; \gep \,,
\end{equation}
whenever $T_N$ satisfies $(const.) N^{1/3} \le T_N \le (const.') \sqrt{N}$
for large $N$.
Letting $P_k$ denote the law of the simple random walk starting at $k \in \mathbb{N}$
and $\tau_1^\infty$ its first return to zero, it follows by Donsker's invariance
principle that there exists $c>0$ such that
$\inf_{0 \le k \le \eta T_N} P_k (\tau_1^\infty \le \eta^2 T_N^2\,,\
S_i < T_N \,\forall i \le \tau_1^\infty) \ge c$
for large $N$. Therefore we may write
\begin{equation*}
\begin{split}
& c \,\ensuremath{\boldsymbol{\mathrm{P}}}^{T_N}_{N,\delta} \big( |S_N| \le \eta \, T_N \big) \;=\;
\frac{c}{Z^{T_N}_{N,\delta}} \, \sum_{k= 0}^{\eta T_N}
E \Big[ e^{H^{T_N}_{N,\delta}(S)} \, \boldsymbol{1}_{\{|S_N| = k\}} \Big] \\
& \;\le\;
\frac{1}{Z^{T_N}_{N,\delta}} \, \sum_{k= 0}^{\eta T_N}
\, \sum_{u=0}^{\eta^2 T_N^2} \,
E \Big[ e^{H^{T_N}_{N,\delta}(S)} \, \boldsymbol{1}_{\{|S_N| = k\}} \Big]
\, P_k(\tau_1^\infty = u\,,\ S_i < T_N \,\forall i \le u) \\
& \;=\; \frac{1}{Z^{T_N}_{N,\delta}} \, \sum_{k= 0}^{\eta T_N}
\, \sum_{u=0}^{\eta^2 T_N^2} \,
E \Big[ e^{H^{T_N}_{N+u,\delta}(S)} \, \boldsymbol{1}_{\{|S_N| = k\}}
\, \boldsymbol{1}_{\{|S_{N+i}| < T_N \,\forall i \le u\}} \, \boldsymbol{1}_{\{S_{N+u}=0\}} \Big]\,.
\end{split}
\end{equation*}
Performing the sum over $k$, dropping the second indicator function and using
equations \eqref{eq:overandover}, \eqref{eq:bound_renewal} and \eqref{eq:phineg},
we obtain the estimate
\begin{equation*}
\begin{split}
& \ensuremath{\boldsymbol{\mathrm{P}}}^{T_N}_{N,\delta} \big( |S_N| \le \eta \, T_N \big) \;\le\;
\frac{1}{c\, Z^{T_N}_{N,\delta}} \, \sum_{u=0}^{\eta^2 T_N^2} \,
E \Big[ e^{H^{T_N}_{N+u,\delta}(S)} \, \boldsymbol{1}_{\{N+u \in \tau^{T_N}\}} \Big] \\
& \;\le\; \frac{1}{c\, Z^{T_N}_{N,\delta}} \, \sum_{u=0}^{\eta^2 T_N^2} \,
e^{\phi(\delta,T_N) (N+u)} \, {\ensuremath{\mathcal P}} _{\delta, T_N}(N+u \in \tau) \;\le\;
(const.) \, \frac{\eta^2 \,T_N^2}{Z^{T_N}_{N,\delta} \, T_N^3} \,
e^{-\frac{\pi^2}{2 T_N^2} N}\,.
\end{split}
\end{equation*}
Then \eqref{eq:plainlb} shows that equation \eqref{eq:secondpart}
holds true for $\eta$ small, and we are done.\qed
\medskip
\section{Proof of Theorem~\ref{th:main}: part (\ref{part:3})}
\label{sec:partiii}
We now give the proof of part (\ref{part:3}) of Theorem~\ref{th:main}.
More precisely, we prove the first relation in \eqref{eq:supercrit1},
because the second one has been proven at the end of Section~\ref{sec:partii}
(see \eqref{eq:secondpart} and the following lines).
We recall that we are in the regime when $N^{1/3} \ll T_N \le (const.)\sqrt{N}$,
so that in particular
\begin{equation} \label{eq:ass1}
C \;:=\; \inf_{N\in\mathbb{N}} \, \frac{N}{T_N^2} \;>\; 0 \,.
\end{equation}
We start stating an immediate corollary of Proposition~\ref{th:bound_renewal}.
\begin{corollary} \label{th:corcor}
For every $\gep > 0$ there exist $T_0 > 0$,
$M_\gep \in 2\mathbb{N}$, $d_\gep > 0$ such that for $T > T_0$
\begin{equation*} \label{eq:renconc}
\sum_{k=M_\gep}^{d_\gep T^3} {\ensuremath{\mathcal P}} _{\delta,T} \big( k\in\tau \big)
\;\le\; \gep \,.
\end{equation*}
\end{corollary}
Note that we can restate the first relation in \eqref{eq:supercrit1} as follows: for every $\gep>0$ there exists $L\in\mathbb{N}$ such that, for $N$ large,
$\ensuremath{\boldsymbol{\mathrm{P}}}^{T_N}_{N,\delta} \big( \tau^{T_N}_{L_{N,T_N}} \le L \big) \ge 1 - \gep$.
Let us define three intermediate quantities, by setting
for $l\in\mathbb{N}$
\begin{align}
\label{eq:B1}
B_1(l,N) & \;=\; \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}\big(\tau^{T_N}_{L_{N,T_N}}\leq l\big)\,
Z_{N,\delta}^{T_N} \,,\\
B_2(l,N) & \;=\; \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big(l<\tau^{T_N}_{L_{N,T_N}}\leq N-\eta T_N^2\big)
\, Z_{N,\delta}^{T_N} \,,\\
B_3(N) & \;=\; \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big(\tau^{T_N}_{L_{N,T_N}}> N-\eta T_N^2\big)
\, Z_{N,\delta}^{T_N} \,,
\end{align}
where we fix $\eta := C/2$, so that $\eta T_N^2 \le N/2$.
The first relation in \eqref{eq:supercrit1} will be proven
once we show that for all $\gep>0$, there exists
$l_\gep \in \mathbb{N}$ such that for large $N$ we have
\begin{align}\label{eq:2cond}
\frac{B_2(l_\gep,N)}{B_1(l_\gep,N)} \;\le\; \gep
\qquad \text{and} \qquad
\frac{B_3(N)}{B_1(l_\gep,N)} \;\le\; \gep \,.
\end{align}
We start giving a simple lower bound of $B_1$: since $\{\tau^{T_N}_{L_{N,T_N}}\leq l\}
\supseteq \{\tau_1^{T_N} > N\}$, we have
\begin{align}\label{eq:lbB1}
B_1(l,N) \;\ge\; E \Big[ e^{H^{T_N}_{N,\delta}(S)} \,
\boldsymbol{1}_{\{\tau_1^{T_N} > N\}} \Big]
\;=\; P \big( \tau_1^{T_N} > N \big)
\;\ge\; \frac{(const.)}{T_N} \,e^{-\frac{\pi^2}{2T_N^2} N} \,,
\end{align}
having applied the lower bound in \eqref{eq:boundqbis}.
Next we consider $B_2$.
Summing over the possible values of $\tau_{L_{N,T_N}}^{T_N}$ and using
\eqref{eq:overandover}, we have
\begin{equation}\label{eq:recob}
\begin{split}
B_2(l,N) & \;=\; \sum_{n=l+1}^{N-\eta T_N^2}
E \Big[ e^{H^{T_N}_{n,\delta}(S)} \, \boldsymbol{1}_{\{n \in \tau^{T_N}\}} \Big]
\cdot P\big( \tau_1^{T_N} > N - n \big)\\
& \;=\; \sum_{n=l+1}^{N-\eta T_N^2}
\mathcal{P}_{\delta, T_N}(n\in\tau) \; e^{\phi(\delta,T_N) n} \;
P\big(\tau_1^{T_N}>N-n\big) \\
& \;\le\; \frac{(const.)}{T_N} \, e^{-\frac{\pi^2}{2T_N^2} N} \,
\left( \sum_{n=l+1}^{N} \mathcal{P}_{\delta, T_N}(n\in\tau) \right) \,,
\end{split}
\end{equation}
where we have applied the upper bound in \eqref{eq:boundqbis} and the equalities \eqref{eq:phineg} and \eqref{eq:gT}
(we also assume that $\eta T_N^2 \in \mathbb{N}$ for simplicity).
Since $N \ll T_N^3$, by Corollary~\ref{th:corcor}
we can fix $l = l_\gep$ depending only on $\gep$
such that $B_2/B_1 \le \gep$ (recall \eqref{eq:lbB1}).
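More explicitly, dividing \eqref{eq:recob} by \eqref{eq:lbB1}, the factors $e^{-\frac{\pi^2}{2T_N^2}N}/T_N$ cancel and we obtain
\begin{equation*}
\frac{B_2(l,N)}{B_1(l,N)} \;\le\; (const.) \sum_{n=l+1}^{N} \mathcal{P}_{\delta, T_N}(n\in\tau)\,,
\end{equation*}
and since $N \le d_\gep T_N^3$ for $N$ large, Corollary~\ref{th:corcor} applies to the sum on the right-hand side as soon as $l+1 \ge M_\gep$.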
Finally we analyze $B_3(N)$: in analogy with \eqref{eq:recob} we write
\begin{align*}
B_3(N) & \;\le\; \sum_{n=N - \eta T_N^2 + 1}^{N}
\mathcal{P}_{\delta, T_N}(n\in\tau) \; e^{\phi(\delta,T_N) n} \;
P\big(\tau_1^{T_N}>N-n\big) \\
& \;\le\; e^{-\frac{\pi^2}{2T_N^2} N} \,
\frac{(const.)}{T_N^3} \, \sum_{n=N - \eta T_N^2 + 1}^{N}
\frac{(const.')}{\sqrt{N-n+1}} \;\le\;
(const.'') \, e^{-\frac{\pi^2}{2T_N^2} N} \, \frac{1}{T_N^2} \,,
\end{align*}
where we have applied the upper bounds in \eqref{eq:boundqbis} and
\eqref{eq:bound_renewal} (note that $n \ge (C/2) \, T_N^2$).
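In the last step we also used the elementary estimate
\begin{equation*}
\sum_{n=N-\eta T_N^2+1}^{N} \frac{1}{\sqrt{N-n+1}} \;=\; \sum_{j=1}^{\eta T_N^2} \frac{1}{\sqrt{j}} \;\le\; 2\sqrt{\eta}\; T_N\,,
\end{equation*}
which accounts for the factor $1/T_N^2$ in the final bound.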
Therefore $B_3/B_1 \le \gep$ for $N$ large, and
the first relation in \eqref{eq:supercrit1} is proven.
\medskip
\section{Proof of Theorem~\ref{th:main}: part (\ref{part:4})}
\label{sec:partiv}
We now assume that $T_N \gg \sqrt{N}$, that is
\begin{equation} \label{eq:ass1bis}
\lim_{N\to\infty} \, \frac{N}{T_N^2} \;=\; 0 \,.
\end{equation}
The proof is analogous to the proof of part~(\ref{part:3}),
given in Section~\ref{sec:partiii}. We set
for $l \in \mathbb{N}$
\begin{align}
\label{eq:B1new}
B_1(l,N) & \;=\; \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N}\big(\tau^{T_N}_{L_{N,T_N}} < l\big)\,
Z_{N,\delta}^{T_N} \,,\\
B_2(l,N) & \;=\; \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big(l\le \tau^{T_N}_{L_{N,T_N}}
\leq N/2 \big)
\, Z_{N,\delta}^{T_N} \,,\\
B_3(N) & \;=\; \ensuremath{\boldsymbol{\mathrm{P}}}_{N,\delta}^{T_N} \big( \tau^{T_N}_{L_{N,T_N}}> N/2 \big)
\, Z_{N,\delta}^{T_N} \,,
\end{align}
and we first show that for every $\gep > 0$ we can choose $l_\gep \in \mathbb{N}$
such that for large $N$
\begin{align} \label{eq:priimo}
\frac{B_2(l_\gep,N)}{B_1(l_\gep,N)} \;\le\; \gep
\qquad \text{and} \qquad
\frac{B_3(N)}{B_1(l_\gep,N)} \;\le\; \gep \,.
\end{align}
We start with a lower bound: since $\{\tau^{T_N}_{L_{N,T_N}}< l\}
\supseteq \{\tau_1^{T_N} > N\}$, by \eqref{eq:boundqbis} we have
\begin{align} \label{eq:lbB1new}
B_1(l,N) \;\ge\; E \Big[ e^{H^{T_N}_{N,\delta}(S)} \,
\boldsymbol{1}_{\{\tau_1^{T_N} > N\}} \Big]
\;=\; P \big( \tau_1^{T_N} > N \big)
\;\ge\; \frac{(const.)}{\sqrt{N}} \,.
\end{align}
Next consider $B_2$.
Summing over the possible values of $\tau_{L_{N,T_N}}^{T_N}$ and using
\eqref{eq:overandover}, we have
\begin{equation}
\begin{split}
B_2(l,N) & \;=\; \sum_{k=l}^{N/2}
E \Big[ e^{H^{T_N}_{k,\delta}(S)} \, \boldsymbol{1}_{\{k \in \tau^{T_N}\}} \Big]
\cdot P\big( \tau_1^{T_N} > N - k \big)\\
& \;=\; \sum_{k=l}^{N/2}
\mathcal{P}_{\delta, T_N}(k\in\tau) \; e^{\phi(\delta,T_N) k} \;
P\big(\tau_1^{T_N}>N-k\big)
\end{split}
\end{equation}
(we assume that $N/2 \in \mathbb{N}$ for notational convenience).
By the upper bound in \eqref{eq:boundqbis} we have
$P\big(\tau_1^{T_N}>N-k\big) \le (const.')/\sqrt{N-k}$.
Since $\phi(\delta, T_N) \le 0$, we obtain
\begin{equation*}
B_2(l,N) \;\le\; \frac{(const.)}{\sqrt N} \, \left(
\sum_{k=l}^{N} \mathcal{P}_{\delta, T_N}(k\in\tau) \right) \,,
\end{equation*}
which can be made arbitrarily small by fixing $l = l_\gep$,
thanks to Corollary~\ref{th:corcor}, hence we have proven
that $B_2/B_1 \le \gep$ for large $N$.
In a similar fashion, for $B_3$ we can write
\begin{align*}
& B_3 \;=\; \sum_{n= N/2 + 1}^N {\ensuremath{\mathcal P}} _{\delta, T_N}(n \in \tau)
\, e^{\phi(\delta, T_N) n} \, P\big( \tau_1^{T_N} > N-n \big) \\
& \ \;\le\; (const.) \,
\sum_{n= N/2 + 1}^N \frac{1}{n^{3/2}} \, \frac{1}{\sqrt{N-n+1}}
\;\le\; \frac{(const.)}{(N/2)^{3/2}} \,
\sum_{n= N/2 + 1}^N \frac{1}{\sqrt{N-n+1}} \;\le\; \frac{(const.')}{N} \,,
\end{align*}
where we have used the upper bounds in \eqref{eq:bound_renewal} and
\eqref{eq:boundqbis} as well as
the fact that $\phi(\delta, T_N) n = o(1)$ uniformly in $n \le N$,
by \eqref{eq:phineg}. Therefore for large~$N$ we have $B_3 / B_1 \le \gep$
and equation \eqref{eq:priimo} is proven. This implies that, for every $\gep > 0$,
there exists $l_\gep \in \mathbb{N}$ such that for large $N$
\begin{equation} \label{eq:ingredient}
\ensuremath{\boldsymbol{\mathrm{P}}}^{T_N}_{N,\delta}
\big( \tau^{T_N}_{L_{N,T_N}} < l_\gep \big) \ge 1-\gep\,.
\end{equation}
Next we turn to the proof of both relations in \eqref{eq:supercrit2}
at the same time.
In view of \eqref{eq:ingredient}, it suffices to show that, for every $\gep > 0$,
we can choose $M \in \mathbb{N}$ and $\eta > 0$ such that for large $N$
\begin{equation} \label{eq:qwerty}
\ensuremath{\boldsymbol{\mathrm{P}}}^{T_N}_{N,\delta} \bigg( \Big\{ \tau^{T_N}_{L_{N,T_N}} < l_\gep \Big\} \cap
\bigg( \bigg\{ \sup_{n \le N} |S_n| > M \sqrt{N} \bigg\} \cup
\Big\{ |S_N| \le \eta \sqrt{N} \Big\} \bigg) \bigg)
\;\le\; \gep \,.
\end{equation}
Summing over the values of $\tau^{T_N}_{L_{N,T_N}}$ and using \eqref{eq:overandover},
the l.h.s. of \eqref{eq:qwerty} is bounded from above by
\begin{equation*}
\sum_{u=0}^{l_\gep-1} {\ensuremath{\mathcal P}} _{\delta, T_N} ( u \in \tau ) \, e^{\phi(\delta, T_N) u}
\, A_{N,u}(M,\eta) \,,
\end{equation*}
where
\begin{equation*}
A_{N,u}(M,\eta) \,:=\,
\frac{P \big( \big\{\tau_1^{T_N} > N-u \big\} \cap
\big( \big\{ \sup_{n \le N-u} |S_n| > M \sqrt{N} \big\} \cup
\big\{ |S_{N-u}| \le \eta \sqrt{N} \big\} \big) \big)}
{Z^{T_N}_{N,\delta}} \,.
\end{equation*}
Therefore equation \eqref{eq:qwerty} will be proven once we show that
we can choose $M, \eta$ such that
$A_{N,u}(M,\eta) \le \gep/l_\gep$, for $N$ large.
For the partition function appearing in the denominator,
applying \eqref{eq:B1new} and \eqref{eq:lbB1new} we easily obtain
$Z^{T_N}_{N,\delta} \ge (const.) / \sqrt{N}$.
Setting $N_u := N-u$ for short,
the numerator in the definition of $A_{N,u}(M,\eta)$ can be bounded from above by
\begin{equation*}
P\big( |S_i| > 0\,,\, \forall i \le N_u \big) \cdot
P\bigg( \bigg\{ \sup_{n \le N_u} |S_n| > M \sqrt{N} \bigg\} \cup
\Big\{ |S_{N_u}| \le \eta \sqrt{N} \Big\}
\,\bigg|\, |S_i| > 0\,,\, \forall i \le N_u \bigg)\,.
\end{equation*}
It is well-known \cite{cf:Fel1}
that $P\big( |S_i| > 0\,,\, \forall i \le n \big) \le (const.) / \sqrt n$.
Recalling the weak convergence of the random walk conditioned to stay
positive toward the Brownian meander \cite{cf:Bol}, we conclude that for
every fixed $u \le l_\gep$ and for large $N$ we have the bound
\begin{equation}
A_{N,u}(M,\eta) \;\le\; (const.) \, P\bigg( \bigg\{ \sup_{0 \le t \le 1}
m_t > M \bigg\} \cup \big\{ m_1 \le \eta \big\} \bigg) \,.
\end{equation}
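For concreteness, we recall that the law of the Brownian meander is explicit enough to quantify the right-hand side: the endpoint $m_1$ has the Rayleigh density $x\,e^{-x^2/2}\,\boldsymbol{1}_{\{x>0\}}$, so that
\begin{equation*}
P\big( m_1 \le \eta \big) \;=\; \int_0^\eta x\, e^{-x^2/2}\, dx \;=\; 1-e^{-\eta^2/2}\;\le\; \frac{\eta^2}{2}\,,
\end{equation*}
while $P\big( \sup_{0\le t\le 1} m_t > M \big)$ vanishes as $M\to\infty$, since the supremum of the meander is almost surely finite.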
We can then choose $M$ large and $\eta$ small so as to satisfy the desired bound
$A_{N,u}(M,\eta) \le \gep/l_\gep$, and the proof is complete.\qed
\bigskip
|
1,314,259,995,565 | arxiv | \section{Introduction}
The interplay of strong spin-orbit coupling (SOC) with superconductivity has become a major focus of research in
recent years, as both are essential ingredients to stabilize Majorana bound states. The spin-orbit interaction affects the electronic states in a material in various ways and in particular can lead to non-trivial topologies of the band structure. In topological insulators SOC separates the conduction and valence bands, leading to an insulating state with an inverted band gap \cite{hasanKaneRMP,qiZhangRMP,hasanMooreAnnRev}. The latter leads directly to the presence of
Dirac surface states protected by time-reversal symmetry \cite{D.Hsieh13022009,XiaNatPhys09,seoYazdaniNat10}.
Another consequence of SOC is the Rashba effect \cite{rashba1960,astPRL07,crepaldiGrioniPRL12,bahramyNatComm12}, which in the absence of inversion symmetry lifts the spin degeneracy of the electronic bands, generating intricate spin textures in the electronic wave functions \cite{hsiehHasanNature09,IshizakaNatMat11,wangGedikPRL11}. Commonly observed at surfaces or interfaces, in noncentrosymmetric materials the Rashba-Dresselhaus effect leads to a lifting of spin-degeneracy of the {\slshape bulk} bands. Combined with superconductivity this can lead to mixing of spin-singlet and spin-triplet pairing components \cite{bauerSigristBook,bauer2004heavy} and, more interestingly, to a topologically nontrivial superconducting phase \cite{Schnyder2008gfback,beriPRB10,Andreas_flat,satoFujimotoPRB09}.
Noncentrosymmetric BiPd \cite{kheiker1953x,Zhuravlev1957,bhatt1980kristallstruktur,Bhatt1979P17,Ionov1989} becomes superconducting below $3.8$~K \cite{Alekseevskii1952,joshi2011superconductivity,mondal2012andreev,matano2013nmr,sunNatComm15,Peets2016} and
offers a unique opportunity to study the interplay between SOC and superconductivity. The large spin-orbit interaction of the heavy element Bi results in a sizeable spin splitting of the bulk bands of BiPd~\cite{sunNatComm15}. This in turn can lead to nontrivial wavefunction topologies and unconventional superconducting states~\cite{sasakiAndoPRL11,levyStroscioPRL13}.
Along with the half-Heusler compounds~\cite{Liu2011,Kim2016,Nakajima2016} and PbTaSe$_2$~\cite{Ali2014,Bian2016,Guan2016}, BiPd constitutes a rare example of a noncentrosymmetric superconductor which cleaves easily, enabling high-resolution surface-sensitive spectroscopy of its electronic states\cite{sunNatComm15,Neupane1505}.
In this paper we report the observation of Rashba spin-split Dirac surface states of noncentrosymmetric BiPd by
angle-resolved photoemission spectroscopy (ARPES) and low-temperature scanning tunneling microscopy and spectroscopy (STM/STS). Due to the lack of inversion symmetry, the (010) and (0\=10) surface states can appear at different energies and exhibit different dispersions and spin-polarizations.
By combining the experimental results with relativistic first-principles band structure calculations we identify the Dirac surface states of both the (010) and (0\=10) surfaces. This observation of distinct Dirac surface states originating from the opposing surface terminations represents a unique demonstration of the impact of the lack of inversion symmetry on the electronic states.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.8\textwidth]{newFig1.pdf}
\end{center}
\caption{\label{mFig1}(a) Crystal structure of BiPd, showing the preferred cleaving plane. (b) Schematic representation of the Brillouin zone of BiPd as well as the surface Brillouin zone of the (010) and (0\=10) surfaces. (c) Surface Brillouin zone with the cuts shown in panels (d)-(g) in blue.
(d) Experimental electronic band structure of a BiPd(010) surface along the S--$\Gamma$ direction ($\nu=21~\mathrm{eV}$). (e) Electronic structure measured with $\nu=29~\mathrm{eV}$. The photoemission intensities in (d) and (e) and in other photoemission maps in this paper are displayed using the colour scale shown in e.
(f) and (g) Calculated electronic structure of BiPd in slab geometry including the cuts shown in (d) and (e). The size of the circles is proportional to the spectral weight of Bi $6p$ states in the first (red) and second (blue) layer of the (010) and (0$\bar{\textrm{1}}$0) surfaces.
The surface states are labelled SS$n$+ and SS$n$-, where + and - denote whether they occur on the (010) or (0$\bar{\textrm{1}}$0) surface, while $n$ numbers the surface states sequentially with increasing binding energy.}
\end{figure*}
The crystal growth using a modified Bridgman--Stockbarger technique has been described in detail elsewhere~\cite{PeetsLT27}. The crystals were cooled slowly through the $\alpha-\beta$ phase transition to maximize the domain size of the low-temperature $\alpha$ phase, resulting in high-quality crystals~\cite{Peets2016}. At low temperature $\alpha$-BiPd (in the following referred to as ``BiPd'') forms in the noncentrosymmetric space group P2$_1$ \cite{Zhuravlev1957,bhatt1980kristallstruktur,Bhatt1979P17,Ionov1989}. The structure is characterized by two double layers stacked along the monoclinic $b$ axis, which are related by a 180$^{\circ}$ screw symmetry [see Fig.~\ref{mFig1}(a)]. Since the bonding between double layers is weaker than within them, the crystals readily cleave perpendicular to the monoclinic $b$ axis and, as previously demonstrated~\cite{sunNatComm15}, are twinned such that both (010) and (0\=10) surfaces can appear on the same side of the crystal (see ref.~\onlinecite{supp} for details on the cleaving procedure).
ARPES measurements were performed on freshly cleaved surfaces using (i) a Helium source ($\nu=21.2~\mathrm{eV}$ and 40.8~eV) with a hemispherical SPECS HSA3500 electron analyzer, and (ii) linearly-polarized synchrotron light from the UE112-PGM undulator beamline at BESSY II with a Scienta R8000 analyzer. The sample was held at temperatures lower than 100~K during cleaving and throughout the measurements.
STM experiments were performed in a home-built low-temperature STM operating at temperatures down to $1.5~\mathrm{K}$ in cryogenic vacuum~\cite{Singh2013}. Samples were prepared by {\it in-situ} cleaving at low temperatures. Tips were cut from a PtIr wire. Bias voltages were applied to the sample. Differential conductance spectra have been recorded through a lock-in amplifier ($f=408~\mathrm{Hz}$, $V_\mathrm{mod}=2~\mathrm{mV}$).
Figures~\ref{mFig1}(b) and (c) show schematically the bulk Brillouin zone and its surface projection. Figures~\ref{mFig1}(d) and (e) show the results of ARPES, measured along the $\Gamma$--S direction in the Brillouin zone at two different photon energies. The most prominent feature of the surface electronic structure when measured with a He-I lamp is the appearance of a strong state (labelled SS1+ in Fig.~\ref{mFig1}(d)) at the S-point at 0.7~eV binding energy. In addition, at higher photon energy (Fig.~\ref{mFig1}(e)), within the same directional band gap at the S-point a surface state SS1-- can be identified, albeit with much weaker intensity. These are identified as surface states through their lack of dispersion with varying the incident photon energy and hence $k_z$ \cite{supp}.
To understand the origin and topological nature of these surface states, we have employed fully relativistic linear muffin tin orbital calculations~\cite{andersenPRB75,book:AHY04,perlovYaresko} using a repeated slab system consisting of six BiPd double layers separated by two empty double layers which represent the vacuum. We find that around the Fermi energy $E_{\mathrm{F}}$, all the bands are mainly of Bi~$6p$ orbital character with subdominant but non-negligible contributions of Pd~$4d$ states.
The strong atomic SOC of Bi induces a spin splitting of the bands of the order of tens of meV and, moreover, results in a large energy shift of states that have predominant $p_{1/2}$ orbital character\cite{MPK80}. The latter leads to formation of a band gap at the $\Gamma$ point~\cite{sunNatComm15,MPK80}.
In Figs.~\ref{mFig1}(f) and (g), we show the calculated dispersions near $E_{\mathrm{F}}$ of the (010) and (0\=10) surfaces of BiPd, respectively, along high symmetry directions of the surface BZ [Fig.~\ref{mFig1}(b) and (c)]. The momentum-resolved surface densities of states at the (010) and (0\=10) sides are indicated by filled circles. Interestingly, Dirac surface states appear both at the $S$ and $\Gamma$ points of the surface BZ.
Thus by comparison with band structure calculations, the features SS1+ and SS1- seen in ARPES can be directly associated with the surface states of the BiPd surface.
The simultaneous observation of SS1+ and SS1- in the measurement is not reproduced in the calculations: the two states originate from opposite surface terminations, with the one at higher binding energy arising from the (010) termination and the one closer to the Fermi energy from the (0\=10) termination. Since these two terminations correspond to opposite surfaces of a single crystal, their simultaneous observation by ARPES indicates twin domains with opposite direction of the crystallographic $b$ axis within the beam spot. A structural transition around 200$^\circ$C~\cite{Bhatt1979P17,Ionov1989} is known to cause twinning, and this type of twin boundary has been previously observed by STM~\cite{sunNatComm15}.
\begin{figure}[htb]
\includegraphics[width=0.9\columnwidth]{fig2.pdf}
\caption{\label{mFig2}(a) and (b) Intensity maps of the energy and $k$-resolved surface band structure of BiPd measured along the $\text{S}-\Gamma^{\prime}$ direction with $\nu=21~\mathrm{eV}$ and $\nu=29~\mathrm{eV}$, respectively. (c) Schematic representation of the surface states. (d) Constant energy cuts obtained at 0.59, 0.7, and 0.83~eV, energies indicated as dashed horizontal lines in (a). Overlaid on the constant energy cuts is a schematic of the surface Brillouin zone.}
\end{figure}
In full agreement between experiment and theory, the spin-splitting of the surface state is substantially larger in the S-$\Gamma^\prime$ direction compared to the S-$\Gamma$ direction. Experimental data for the S-$\Gamma^\prime$ direction are shown in Figs.~\ref{mFig2}(a) and (b), taken at the same photon energies $\nu$ as Figs.~\ref{mFig1}(d) and (e), respectively. The two measurements show the states SS1+ and SS1-- with different intensities, but otherwise at the same energy and having the same dispersion, confirming that they are of two-dimensional character. The different intensities are likely due to final state effects. There are small differences in binding energies between experiment and calculation on the order of 100~meV. One likely source of this discrepancy is surface relaxation which is neglected in the calculation. Constant energy contours obtained at the energies around the Dirac point, shown in Fig.~\ref{mFig2}(d) for the energies labelled in Fig.~\ref{mFig2}(a), clearly reveal the two band maxima in the S-$\Gamma^\prime$ direction due to the strongly anisotropic Rashba splitting (see also Ref.~\onlinecite{supp}).
Data and calculations yield a further set of surface states at higher binding energies, which we label SS2+ and SS2-. As opposed to the hole-like SS1$\pm$ states, SS2$\pm$ have an electron-like dispersion. In the experiment, they are most clearly resolved with $\nu=21~\mathrm{eV}$ (Fig.~\ref{mFig2}(a)). They are located near the bottom of the directional band gap at the S-point and quickly develop into surface resonances when moving away from $S$.
Besides the surface states found at the $S$ points, the calculations reveal an additional pair of surface states at the $\Gamma$ point (labelled by SS0$\pm$ in Figs.~\ref{mFig1}(f) and (g)), which are in the unoccupied states and thus inaccessible to ARPES. For one termination, this state has been detected previously by STS~\cite{sunNatComm15}. While the Dirac-cone states at the $S$ point are present even if SOC is neglected, the Dirac state at the $\Gamma$ point appears within a gap opened up by SOC and arises as a consequence of an SOC-driven band inversion. This scenario is reminiscent of the topological insulator Bi$_2$Se$_3$~\cite{XiaNatPhys09}, indicating a possible topological origin. Here we show the signature of the surface state at $\Gamma$ for \emph{both} terminations from tunneling spectra, see Fig.~\ref{stmfig}. The terminations in the STM data have been identified from the surface corrugation (compare Fig.~\ref{stmfig}(a-c)). Spectra of the surface state (Fig.~\ref{stmfig}(d)) show only a very small shift of $\sim 6~\mathrm{meV}$ between the two terminations, with the surface state showing up at larger energies on the termination which we identify as the (0\=10) surface.
\begin{figure}[htb]
\includegraphics[width=0.9\columnwidth]{Fig-ARPES-mod-2-calibrated.pdf}
\caption{\label{stmfig}(a) and (b) Topographies of the (0\=10) and (010) terminations respectively, obtained with the same tip. Blue/red spheres represent Bi atoms and green/purple Pd atoms in the top surface layer, compare fig.~\ref{mFig1}(a). (c) Linecuts of the two terminations, showing the different corrugations. Linecuts shifted horizontally for clarity. (d) $dI/dV$ spectra obtained on (0\=10) and (010) terminations. The surface state on the (0\=10) face is at a slightly larger energy than on (010) ($V_\mathrm s=0.5~\mathrm V$, $I_\mathrm s=2~\mathrm{nA}$).}
\end{figure}
We note that the band crossings of the Dirac states both at the $\Gamma$ and $S$ points are protected by time-reversal symmetry due to Kramers' theorem. Consistently for all surface states in the occupied states (SS1$\pm$, SS2$\pm$) those on the (010) surface occur at an energy at least 100~meV higher than on (0\=10), whereas the shift is very small and in the opposite direction for the surface state in the unoccupied states (SS0$\pm$).
We have fitted the standard Rashba-Bychkov model~\cite{Bychkov1984} to cuts through the experimental band structure maps along the high-symmetry directions to extract the magnitude of spin splitting for the most prominent surface state, SS1+. The dispersion about the high-symmetry $S$ point is modeled as
\begin{equation}
E_{\pm}(k) = \frac{\hbar^2}{2 m^{\ast}} (\left |k\right|\pm k_\mathrm R)^2 + E_0,
\label{Rashbamodel}
\end{equation}
where $k$ denotes the momentum along the chosen direction in the surface BZ, $m^\ast$ is the effective mass, and $k_\mathrm R$ and $E_0$ denote the momentum offset and the energy of the band maxima, respectively.
We quantify the size of the Rashba splitting by the momentum offset $k_\mathrm R$ and the energy difference $E_\mathrm R = \hbar^2 k_\mathrm R^2 /( 2 m^{\ast})$ between the band maximum $E_0$ and the band crossing point.
The fits used to extract these parameters for SS1+ are shown in Figs.~\ref{fig3fits}(a) and (b) for the S--$\Gamma^\prime$ and S--$\Gamma$ directions, respectively. The Rashba momentum offset $k_\mathrm R$ and energy $E_\mathrm R$ along the S--$\Gamma^\prime$ direction in BiPd rank among the largest reported thus far, while both are significantly smaller in the S--$\Gamma$ direction. The results are summarized and compared with a selection of previously reported values in Table~\ref{tab:compare}. Despite the large momentum offset, the Rashba parameter $\alpha_\mathrm R=\hbar^2k_\mathrm R/m^\ast$ of BiPd is smaller than for the Bi/Ag(111) surface alloy due to the much larger effective mass of the surface states of BiPd.
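We note that, within the model of Eq.~\eqref{Rashbamodel}, the three quantities listed in Table~\ref{tab:compare} are not independent but obey
\begin{equation}
\alpha_\mathrm{R} \;=\; \frac{\hbar^2 k_\mathrm{R}}{m^{\ast}} \;=\; \frac{2 E_\mathrm{R}}{k_\mathrm{R}},
\end{equation}
which provides a simple consistency check of the fits: for SS1+ along S--$\Gamma^{\prime}$ one obtains $2E_\mathrm{R}/k_\mathrm{R} = 2\times 0.208~\mathrm{eV}/0.75$~\AA$^{-1}\approx 0.55$~eV$\cdot$\AA, and along S--$\Gamma$, $2\times 0.017~\mathrm{eV}/0.13$~\AA$^{-1}\approx 0.26$~eV$\cdot$\AA, consistent with the values quoted in the table.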
Large Rashba splittings, leading to well-separated spin-split bands, may prove useful for applications involving the transport of spin rather than charge. Interestingly, Fig.~\ref{fig3fits}(c) shows a three-dimensional representation of the dispersion of SS1+ near the S-point, highlighting the anisotropy in the Rashba spin-splitting.
\begin{figure}[t!]
\includegraphics[width=0.9\columnwidth]{fig4_new.pdf}
\caption{\label{fig3fits}(a) and (b) Cuts in the S--$\Gamma^\prime$ and S--$\Gamma$ directions, respectively, from which the band structure parameters of
the SS1+ state have been determined. Solid lines show a fit of the Rashba model [Eq.~\eqref{Rashbamodel}] to the data. (c) Band structure of the Rashba-spin split surface state at the S-point (SS1+) as determined by fitting the Rashba model to the ARPES data. The Rashba spin splitting is highly anisotropic.}
\end{figure}
\begin{table}[htb]
\caption{\label{tab:compare}Rashba momentum $k_\mathrm R$ in \AA$^{-1}$, Rashba energy $E_\mathrm R$ in meV, and Rashba parameter $\alpha_\mathrm R$ in eV$\cdot$\AA\ of materials with large Rashba-type band splitting.}
\begin{tabular}{lr@{.}lcr@{.}lc}\hline
Sample & \multicolumn{2}{c}{$k_\mathrm R$} & $E_\mathrm R$& \multicolumn{2}{c}{$\alpha_\mathrm R$} & Ref. \\ \hline\hline
Au(111) & 0&012 & 2.1 & 0&33 & \onlinecite{LaShell1996} \\
Bi(111) & 0&05 & 14 & 0&55 & \onlinecite{Koroteev2004} \\
Bi/Ag surface alloy & 0&13 & 200 & 3&05 & \onlinecite{astPRL07} \\
BiTeI & 0&052 & 100 & 3&8 & \onlinecite{IshizakaNatMat11} \\
BaNiS$_2$ & 0&2 & 150 & 0&26 & \onlinecite{Santos2016}\\
BiPd SS1+, S--$\Gamma$ & 0&13 & 17 & 0&25 & this work\\
BiPd SS1+, S--$\Gamma^{\prime}$ & 0&75 & 208 & 0 &55 & this work \\ \hline
\end{tabular}
\end{table}
Since BiPd is noncentrosymmetric and no symmetry element can transform a (010) surface into a (0\=10) surface, the shapes and energies of these surfaces' Dirac states can be quite different, and indeed this is what we observe.
Our data reveal a surprising richness of Dirac surface states on the (010)/(0\=10) surfaces of BiPd. Evidence for a surface state above $E_\mathrm F$ at $\Gamma$~\cite{sunNatComm15} and the observation of surface state SS1+ have been recently reported~\cite{Neupane1505} (although with a different assignment of the $S$ and $\Gamma$ points in the latter). From a detailed comparison of calculations, ARPES and STM data we can identify two distinct surface states below the Fermi level at the $S$ point and one at the $\Gamma$ point, on each surface. The data reveal signatures of surface states from opposite orientations of the crystallographic $b$-axis, which occur on opposite faces of an ideal crystal, implying twinning on the scale of the ARPES spot size. Macroscopic studies of the impact of the lack of inversion symmetry on the material properties may therefore need to detwin the material to yield information from a single domain. The overall consistency of our results with the previously published data confirms the high reproducibility of the properties of BiPd.
The Rashba splitting of the surface states at the S-point exhibits a strong anisotropy, suggesting strongly directionally-dependent SOC in the surface state. This strong directional dependence can be understood by comparison with the surface structure of BiPd: the $\Gamma$--S direction is along rows of Bi (or Pd) atoms, therefore electronic states propagating along this direction are only moderately exposed to the surface corrugation. Along the $\Gamma$--S$^\prime$ (or equivalently S--$\Gamma^\prime$) direction, rows of Bi and Pd atoms alternate, and electronic states with wavevectors along this direction are exposed much more strongly to the surface corrugation and hence to the surface electric fields which generate the spin splitting. The connection between surface corrugation and the spin-orbit splitting has been discussed previously in the context of the Bi/Ag(111) surface alloy \cite{gierz_structural_2010,bian_origin_2013}. In BiPd, the corrugation of the top-most layer is a direct consequence of the crystal structure of the bulk material, boosting the spin splitting of the surface states only in specific directions due to the anisotropy of the crystal structure.
In summary, through comparison of ARPES and STM experiments with band structure calculations, we have confirmed the presence of unconventional Dirac surface states in
noncentrosymmetric BiPd. The extremely large and anisotropic Rashba splitting in this system makes it an excellent candidate for future studies on the intricate spin texture of spin-split bands. Our results suggest a new way to engineer anisotropic spin textures and Rashba splittings of surface states by exploiting the low symmetry of the surface termination. The findings provide independent confirmation of the existence of twin boundaries in the material~\cite{sunNatComm15}, which may prove crucial to understanding its superconducting properties~\cite{Peets2016,Yan2016}.
\emph{Acknowledgments.---}
The authors thank Ed~Yelland for useful discussions. Funding from the MPG-UBC center and the Engineering and Physical Sciences Research Council (EP/I031014/1 and EP/L505079/1) is acknowledged.
This work was supported by the DFG within projects STA315/8-1 and BE5190/1-1. We also thank the staff at Bessy II of the Helmholtz-Zentrum Berlin for their assistance.
|
1,314,259,995,566 | arxiv | \section{Introduction}
In their works \cite{MR978829,MR1025956}, Keller--Rubinstein--Sternberg created a general gradient flow theory for the description of fast reaction and slow diffusion, and established its relations to the mean curvature flow (MCF) and the harmonic heat flows into manifolds. These works involve some formal statements associated with multiple component phase transitions with higher dimensional wells. Such statements are also referred to as the {\it Keller--Rubinstein--Sternberg problem}. More precisely, they investigated the vectorial Allen--Cahn equation (also called Ginzburg--Landau equation)
\[\partial_t \mathbf{u}_\e=\Delta \mathbf{u}_\e-\e^{-2} \partial F(\mathbf{u}_\e),\label{GL introduction}\]
where $\mathbf{u}_\e(x,t):\O\times (0,T) \mapsto \mathbb{R}^n$ is a mapping depending on a small parameter $\e>0$ and $\O\subset \mathbb{R}^k$ is a bounded domain with $C^1$ boundary. Here $F(\mathbf{u})$ is a double equal-well potential with ground state being the disjoint union of two hypersurfaces $\mathfrak{m}_\pm \subset \mathbb{R}^n$, and $\partial F(\mathbf{u})$ is the differential of $F$ at $\mathbf{u}$. The {\it Keller--Rubinstein--Sternberg problem} is concerned with the limiting behavior of $\mathbf{u}_\e$ as $\e$ tends to zero.
Lin--Pan--Wang \cite{Lin2012a} set up an analytic program to
rigorously justify the formal asymptotic analysis given in the aforementioned works of Keller--Rubinstein--Sternberg, and gave a complete resolution of the static problem. To be more precise,
they considered the minimizers of the Ginzburg--Landau functional
\[\int_\O\( \frac{\e }2 |\nabla \mathbf{u}_\e |^2+\frac 1\e F(\mathbf{u}_\e )\)\, dx\label{GL energy}\]
that satisfy well-prepared boundary conditions on $\partial\O$. They established the co-dimensional one interface limit of \eqref{GL energy}, which essentially generalized the $\Gamma$-convergence of Modica--Mortola \cite{MR0445362}
to the vectorial cases (though they did not state their main theorems in such an abstract manner). More importantly, they showed that the limit of $\mathbf{u}_\e$ in the bulk region corresponds to the harmonic maps into $\mathfrak{m}_\pm$, and they derived the so-called {\it minimal pair boundary condition}, which serves as a constraint on the limiting harmonic maps when they are restricted to the interface. Such a non-standard boundary condition is a new feature that arises due to minimization of surface tension in vectorial cases. Note that such a condition holds trivially in the case of the scalar Allen--Cahn equation because the limiting mappings get stuck at the two distinct points in $\mathbb{R}^1$.
The aim of this work is to solve the (dynamical) {\it Keller--Rubinstein--Sternberg problem} when $\mathfrak{m}_\pm$ are two disjoint hypersurfaces in the target space $\mathbb{R}^n$.
More precisely, we prove the following statements: Firstly, for well-prepared initial data, as $\e$ tends to $0$, the solution gradients of \eqref{GL introduction} will undergo phase transitions
across a moving interface $\Sigma_t$ that propagates as a (two-phase) mean curvature flow. Secondly, in the two bulk regions $\O_t^\pm$ segregated by the interface $\Sigma_t$, the solutions will converge to harmonic heat flows mapping into $\mathfrak{m}_\pm$ respectively. Finally, the one-sided traces of the limiting harmonic heat flows on $\Sigma_t$ must satisfy the {\it minimal pair boundary condition}.
Our first result is a vectorial analogy of the co-dimensional one scaling limit of scalar parabolic Allen--Cahn equation to the (two-phase) mean curvature flow, i.e. the special case of \eqref{GL introduction} when $\mathfrak{m}_\pm$ are two distinct points $a^\pm\in \mathbb{R}^1$. There have been major progresses in the scalar case over the last 30 years, made under different frameworks. Here we mention two classes of results and will discuss some other classes in the sequel. One is the convergence to a Brakke's flow by Ilmanen \cite{MR1237490} using a version of Huisken's monotonicity formula together with tools from geometric measure theory. See also \cite{MR1425577,MR1803974,MR2253464,MR3495430,MR2440879} and the references therein for further renovations. Despite of its energetic nature, a major difficulty of such an approach is the control of the so called {\it discrepancy measure}, and in every existing literature in this direction the solution relies crucially on a version of Modica's maximum principle \cite{MR803255}.
There have been attempts to generalize such a method to the vectorial cases. However, it is not clear whether Modica's maximum principle holds for elliptic systems, cf. \cite{MR3624937}.
Another approach, which relies more on the parabolic comparison principle, is the convergence to the viscosity solution of mean curvature flow. These are weak solutions to the mean curvature flow built independently by Chen--Giga-Goto \cite{MR1100211} and Evans--Spruck \cite{MR1100206}. Concerning the global in time convergence of scalar Allen--Cahn equation to such solutions, we refer the readers to the work of Evans--Soner--Souganidis \cite{MR1177477}, the work of Soner \cite{MR1674799} and the references therein. These two approaches both give global in time (weak) convergences to weakly defined solutions of MCFs up to their life spans. However, as their technics involve parabolic maximum principle and the comparison principle in one way or another, it is not clear how to use them to attack the vectorial cases in general. It is worth mentioning that for radially symmetric initial data and when $\mathfrak{m}_\pm$ are two concentric circles, Bronsard--Stoth \cite{MR1443865} obtain global in time convergences to MCF of planar circles.
To the best of our knowledge, there are mainly two approaches to rigorously justify the convergences of the vectorial Allen--Cahn equations, both assuming the limiting interface propagation problem has a (local in time) classical solution. Compared with the aforementioned methods which lead to global in time (weak) convergences, they have quite different natures.
One of these methods is the asymptotical expansion technics developed by De Mottoni--Schatzman \cite{MR1672406} and Alikakos--Bates--Chen \cite{MR1308851}, which has been used recently in \cite{fei2021matrix,MR4059996} for matrix-valued cases of \eqref{GL introduction}.
In particular, Fei--Lin--Wang--Zhang \cite{fei2021matrix} studied the case when $\mathfrak{m}_\pm=O^\pm(n)$, the $n$-dimensional orthogonal group, and they derived the {\it minimal pair boundary condition} in a constructive way.
By inner-outer expansions together with a gluing procedure, such an approach reduces the convergence problem to a linear stability problem given that the limiting system (not merely the limiting interface motion) is strongly well-posed. We refer to Lin--Wang \cite{MR4002307} for a general theory of the strong well-posedness of the limiting system.
The major challenge of this approach is the analysis of the spectrum of the linearized operator at the so called `optimal profile'. Indeed, the spectrum estimate in \cite{fei2021matrix} makes delicate use of decompositions involving the geometry of $O(n)$ and is much trickier than the scalar case obtained previously in \cite{MR1672406,MR1284813}.
Another approach, which also assumes a regular solution of the limiting interface motion but not the limiting harmonic heat flows, is the relative entropy method developed by Fischer--Laux--Simon \cite{fischer2020convergence}, motivated by Jerrard--Smets \cite{MR4072686} and Fischer--Hensel \cite{MR3353807}. A generalization to matrix--valued case has been done by Laux-Liu \cite{MR4284534} to study the isotropic--nematic transition in Landau--De Gennes model of liquid crystals, which essentially corresponds to the case when $\mathfrak{m}_+=\mathbb{RP}^2$ and $\mathfrak{m}_-=0$. More recently, in \cite{liu2021sharp} the author used these methods, together with those developed recently by Lin--Wang \cite{lin2020isotropic}, to attack the convergence problem of an anisotropic 2D Ginzburg--Landau model. In particular, he derived some delicate convergence results of the level sets of the solutions, which are crucial to obtain anchoring boundary conditions of the limiting solutions on the moving interface.
Now we introduce a minimal amount of terminology necessary for stating the main result of this work.
Let
\[ \mathfrak{m}_\pm\text{ be two disjoint smooth, closed, connected hypersurfaces in } \mathbb{R}^n.\label{mm assumption}\]
For technical purposes we assume $0\in\mathfrak{m}_-$.
We assume that $F:\mathbb{R}^n\to [0,\infty)$ is a smooth function with $\mathfrak{m}_\pm$ being its double equal-wells:
\[\operatorname{Arg~min} F= \mathfrak{m}:= \mathfrak{m}_+\sqcup \mathfrak{m}_-. \label{limit manifold}\]
We assume that $F(\mathbf{u})$ only depends on the distance from $\mathbf{u}$ to $\mathfrak{m}$. That is,
\[F(\mathbf{u})=f(\mathrm{d}_\mathfrak{m}^2(\mathbf{u})),\label{bulk potential}\]
where $\mathrm{d}_\mathfrak{m}(\mathbf{u})$ is the signed-distance (see \eqref{dN global} below for the full definition), and $f$ satisfies
\begin{align} \label{bulk2}
\begin{cases}
f(s) \in C^2(\mathbb{R}^+, \mathbb{R}^+), &\\
c_1s\leq f(s) \leq c_2 s& \text{ if } 0\leq s\leq \delta_0^2,\\
f(s)= c_3 & \text{ if } s\geq \delta_0^2,\\
\lim_{s\downarrow 0} \tfrac{f(s)}s=c_4,
\end{cases}
\end{align}
for fixed positive constants $c_1,c_2,c_3,c_4,\delta_0\in \mathbb{R}^+$. Here $\delta_0$ is a small number so that the nearest-point projection $P_\mathfrak{m}$ from $B_{2\delta_0}(\mathfrak{m})$, the $2\delta_0$-tubular neighborhood of $\mathfrak{m}$, to $\mathfrak{m}$ is smooth.
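For instance (this particular choice plays no role in the sequel and is recorded only to show that the class of admissible potentials is non-empty), one may take
\begin{align*}
f(s) \;=\;
\begin{cases}
c_3\Big(1-\big(1-s/\delta_0^2\big)^3\Big) & \text{ if } 0\le s\le \delta_0^2,\\
c_3 & \text{ if } s\geq \delta_0^2,
\end{cases}
\end{align*}
which belongs to $C^2(\mathbb{R}^+,\mathbb{R}^+)$ and satisfies \eqref{bulk2} with $c_1=c_3/\delta_0^2$ and $c_2=c_4=3c_3/\delta_0^2$.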
We consider the following initial boundary value problems on a bounded domain $\O\subset \mathbb{R}^d$ with $C^1$ boundary:
\begin{subequations}\label{Ginzburg-Landau sys}
\begin{align}
\partial_{t} \mathbf{u}_\e &= \Delta \mathbf{u}_\e - \e ^{-2}\partial F(\mathbf{u}_\e )&&~\text{in}~ \Omega\times (0,T),\label{Ginzburg-Landau}\\
\mathbf{u}_\e &=\mathbf{u}_{\e}^{in}, &&~\text{in}~\Omega\times \{0\},\\
\mathbf{u}_\e &=\mathbf{g}, &&~ \text{on}~\partial\O\times (0,T).\label{bc of omega}
\end{align}
\end{subequations}
Here $\partial F(\mathbf{u})$ is the gradient of $F(\mathbf{u})$, and $\mathbf{g}:\overline{\O}\mapsto \mathfrak{m}_-$ is a smooth mapping.
Our main result is concerned with the asymptotic behavior of solutions to \eqref{Ginzburg-Landau sys} for well-prepared initial data. To give an analytic characterization of such initial data, we need to set up the geometry of the interface motion. To this end, we
assume
\begin{equation}\label{interface}
\Sigma=\bigcup_{t\in [0,T]}\Sigma_t \times \{t\}~\text{is a smoothly evolving closed hypersurface in}~\O,
\end{equation}
starting from a closed smooth surface $ \Sigma_0\subset \O$. We denote by $\O^\pm_t$ the domain enclosed by $\Sigma_t$, and by
\begin{align}
\mathrm{d}_\Sigma(x,t) \text{ the signed-distance from } x \text{ to the set } \Sigma_t \text{ taking positive values in }\O^+_t
\end{align}
and negative values in $\O^-_t=\O\backslash \overline{\O^+_t}$. In other words,
\begin{equation}\label{def:omegapm}
\Omega^{\pm}_t:= \{x\in\Omega\mid \mathrm{d}_\Sigma(x,t)\gtrless0\}.
\end{equation}
To avoid contact angle problems, we assume that $\Sigma$ stays at distance at least $\delta_0$ away from $\partial\O$.
Following \cite{MR3353807,MR4072686,fischer2020convergence}, we define the modulated energy (also called the relative entropy energy) by
\begin{align}
\label{entropy}
E_\e [\mathbf{u}_\e | \Sigma](t) &:= \int_\O \(\frac{\e}{2}\left|\nabla \mathbf{u}_\e (\cdot,t)\right|^2+\frac{1}{\e} {F (\mathbf{u}_\e (\cdot,t))}- \boldsymbol{\xi} \cdot\nabla \psi_\e (\cdot,t) \)\, dx.
\end{align}
Here $\boldsymbol{\xi}$ is an appropriate extension of the unit normal vector field of $\Sigma$ (see \eqref{def:xi} below), and $\psi_\e$ is the
scalar function
\begin{align}
\psi_\e (x,t):= \mathrm{d}_F \circ \mathbf{u}_\e (x,t) \label{psi}
\end{align}
with $\mathrm{d}_F$ being defined by \eqref{quasidistance} below. As we shall see later on, the integrand of \eqref{entropy} is non-negative, and enjoys several coercivity estimates including controls of the {\it discrepancy} and the deviations of the normal vectors.
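Let us already indicate the mechanism behind the first of these facts. If, as in \cite{fischer2020convergence,MR4284534}, the function $\mathrm{d}_F$ in \eqref{quasidistance} is chosen so that $|\partial \mathrm{d}_F(\mathbf{u})|\le \sqrt{2F(\mathbf{u})}$, and if $|\boldsymbol{\xi}|\le 1$, then the chain rule and Young's inequality give
\begin{equation*}
\big|\boldsymbol{\xi}\cdot\nabla \psi_\e\big| \;\le\; |\nabla \psi_\e| \;\le\; \sqrt{2F(\mathbf{u}_\e)}\,|\nabla \mathbf{u}_\e| \;\le\; \frac{\e}{2}\,|\nabla \mathbf{u}_\e|^2+\frac{1}{\e}\,F(\mathbf{u}_\e)\,,
\end{equation*}
so that the integrand of \eqref{entropy} is pointwise non-negative.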
We also need the surface tension coefficient
\[c_F :=2 \int_{0}^{\frac 12\mathrm{dist}_{\mathfrak{m}}} \sqrt{2f\left(\lambda^2 \right)} d \lambda,\label{linpanwang cf equ}\]
where
$\mathrm{dist}_{\mathfrak{m}}$
is the Euclidean distance between $\mathfrak{m}_+$ and $\mathfrak{m}_-$,
and another modulated energy controlling the bulk errors:
\[ B[\mathbf{u}_\e | \Sigma](t):= \int_\O \Big(c_F\chi-c_F+ 2(\psi_\e-c_F)^- \Big)\eta\circ \mathrm{d}_\Sigma \, dx+\int_\O \( \psi_\e-c_F\)^+|\eta\circ\mathrm{d}_\Sigma| \, dx.\label{gronwall2new}\]
In \eqref{gronwall2new} $\chi(\cdot,t)=\mathbf{1}_{\O_t^+}-\mathbf{1}_{\O_t^-}$ and $g^\pm$ denotes the positive/negative parts of a function $g$ respectively, and $\eta$ is a $\delta_0$-truncation of the identity function (cf. \eqref{truncation eta}). In particular, we have $\eta\circ \mathrm{d}_\Sigma\geq 0$ in $\O$ due to our convention on the signed-distance function, and thus the integrands in \eqref{gronwall2new} are all non-negative. See the proof of Theorem \ref{thm volume convergence} below for more details.
The main result of this work is the following:
\begin{theorem}\label{main thm}
Assume that the surface $\Sigma_t$ \eqref{interface} evolves by mean curvature flow during $[0,T]$. If the initial datum of \eqref{Ginzburg-Landau sys} is well-prepared in the sense that
\begin{equation}\label{initial}
\e\|\mathbf{u}_\e(\cdot,0)\|_{L^\infty}+B[\mathbf{u}_\e | \Sigma](0)+E_\e [\mathbf{u}_\e | \Sigma](0)\leq C_1\e
\end{equation}
for some constant $C_1$ that is independent of $\e $, then there exists $C_2$ independent of $\e$ so that
\begin{align}
&\sup_{t\in [0,T]} E_\e [\mathbf{u}_\e | \Sigma](t)\leq C_2\e,\label{intro cali}\\
&\sup_{t\in [0,T]}\int_\O| \psi_\e-c_F \mathbf{1}_{\O_t^+} | \, dx\leq C_2\e^{1/2},\label{volume convergencethm}\\
&\lim_{\e\to 0}\int_\O\left(\frac {\e}2\left|\nabla \mathbf{u}_\e \right|^2+ \frac{F\left(\mathbf{u}_\e \right)}{\e}\right) \, d x=c_F\mathcal{H}^{d-1}(\Sigma_t),\quad \forall t\in [0,T].\label{intro energy conv}
\end{align}
Moreover, for some subsequence $\e _k\downarrow 0$ there holds
\begin{equation}\label{strong global of Q}
\mathbf{u}_{\e _k}\xrightarrow{k\to\infty } \mathbf{u}^\pm ~\text{weakly in}~ L^2(0,T;H^1_{loc}(\O^\pm_t)),
\end{equation}
where $\mathbf{u}^\pm$ are weak solutions to the harmonic heat flows into $\mathfrak{m}_\pm $ respectively and
\begin{equation}\label{reg limit}
\mathbf{u}^\pm \in L^\infty\(0,T;H^1( \Omega^\pm_t;\mathfrak{m}_\pm)\),\quad \partial_t \mathbf{u}^\pm \in L^2\(0,T; L^2_{loc}(\O^\pm_t)\).
\end{equation}
Furthermore, for almost every $t\in (0,T)$, $(\mathbf{u}^+,\mathbf{u}^-)|_{\Sigma_t}$ is a minimal pair, i.e.
\[|\mathbf{u}^+-\mathbf{u}^-|_{\mathbb{R}^n}(x,t)=\mathrm{dist}_\mathfrak{m} \qquad \mathcal{H}^{d-1}- a.e. ~ x \in \Sigma_t.\label{thm minimal pair}\]
\end{theorem}
A few comments are in order. Firstly, in \eqref{initial} the $L^\infty$ bound of the initial datum is used (together with \eqref{bulk2}) to obtain a uniform in space-time $L^\infty$-bound of $\mathbf{u}_\e $, i.e.
\[\|\mathbf{u}_\e \|_{L^\infty(\Omega\times(0,T))}\leq c_0\label{L infinity bound1}\]
for some fixed constant $c_0$. Such an estimate, derived by applying the maximum principle to \eqref{Ginzburg-Landau}, enables us to avoid several technical complications in the passage to the limit $\e\downarrow 0$. Indeed, even in the case when $d=2$, severe difficulties arise in the anisotropic model considered in \cite{liu2021sharp} where an estimate like \eqref{L infinity bound1} is not available.
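Let us briefly sketch this standard argument. From \eqref{Ginzburg-Landau} one computes
\begin{equation*}
(\partial_t-\Delta)\,|\mathbf{u}_\e|^2 \;=\; -2\,|\nabla \mathbf{u}_\e|^2-\frac{2}{\e^2}\,\mathbf{u}_\e\cdot \partial F(\mathbf{u}_\e)\,,
\end{equation*}
and by \eqref{bulk potential}--\eqref{bulk2} we have $\partial F(\mathbf{u})=0$ whenever $\mathrm{dist}(\mathbf{u},\mathfrak{m})\geq \delta_0$. Hence $|\mathbf{u}_\e|^2$ is a subsolution of the heat equation on the region $\{|\mathbf{u}_\e|\geq \delta_0+\max_{\mathbf{p}\in\mathfrak{m}}|\mathbf{p}|\}$, and the parabolic maximum principle yields \eqref{L infinity bound1} with a constant $c_0$ depending only on $\mathfrak{m}$, $\delta_0$, $\|\mathbf{g}\|_{L^\infty}$ and the constant $C_1$ in \eqref{initial}.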
Secondly,
we also get a bound like $\sup_{t\in [0,T]}B[\mathbf{u}_\e | \Sigma](t)\leq C\e$. However, we merely use such an estimate to derive \eqref{volume convergencethm}, which is crucial to obtain the minimal pair condition \eqref{thm minimal pair}. Now we turn to the discussion of the limiting maps $\mathbf{u}^\pm$. Let us denote the second fundamental forms of $\mathfrak{m}_\pm$ at points $\mathbf{p}^\pm$ by $A^\pm(\mathbf{p}^\pm)(\cdot,\cdot)$, respectively.
Then the theorem above claims that the pair of mappings
\[\mathbf{u}^\pm(\cdot, t): \Omega^\pm_t \mapsto \mathfrak{m}_\pm\subset \mathbb{R}^n\label{upm mapping}\]
satisfies the following system in the weak sense:
\begin{subequations}\label{twophaselimit}
\begin{align}
\partial_{t} \mathbf{u}^\pm -\Delta \mathbf{u}^\pm&= A^\pm(\mathbf{u}^\pm) (\nabla \mathbf{u}^\pm,\nabla\mathbf{u}^\pm) &&~\text{in}~ \cup_{t\in [0,T]}\Omega^\pm_t\times \{t\},\label{twophaselimitharmonicflow}\\
|\mathbf{u}^+-\mathbf{u}^-|_{\mathbb{R}^n} &= \mathrm{dist}_\mathfrak{m} &&~\mathcal{H}^{d-1}\text{- a.e on }~\Sigma_t,\label{twophaselimitminimal}\\
\mathbf{u}^- &=\mathbf{g}, &&~ \text{on}~\partial\O. \label{twophaselimitbc}
\end{align}
\end{subequations}
Equation \eqref{twophaselimitharmonicflow} says that $\mathbf{u}^\pm$ are harmonic map heat flows from the moving domains $\Omega^\pm_t$ to the target manifolds $\mathfrak{m}_\pm$, respectively. Equation \eqref{twophaselimitminimal} is referred to as the minimal pair boundary condition by \cite{Lin2012a,MR4002307}.
In \eqref{twophaselimitbc}, $\mathbf{g}:\overline{\O}\mapsto \mathfrak{m}_-$ is a smooth mapping.
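We recall that, since $A^\pm(\mathbf{u}^\pm)(\nabla\mathbf{u}^\pm,\nabla\mathbf{u}^\pm)$ takes values in the normal space of $\mathfrak{m}_\pm$ at $\mathbf{u}^\pm$, equation \eqref{twophaselimitharmonicflow} can equivalently be phrased, for maps with values in $\mathfrak{m}_\pm$, as the orthogonality condition
\begin{equation*}
\partial_t \mathbf{u}^{\pm}-\Delta \mathbf{u}^{\pm} \;\perp\; T_{\mathbf{u}^{\pm}}\mathfrak{m}_\pm\,,
\end{equation*}
i.e. $\partial_t\mathbf{u}^\pm$ agrees with the tangential part of $\Delta\mathbf{u}^\pm$ along $\mathfrak{m}_\pm$.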
To make the main theorem applicable, we shall show that the class of initial data fulfilling the condition \eqref{initial} is geometrically rich. This is stated in the following result.
\begin{theorem}\label{thm init}
For any $\delta\in (0,\delta_0)$ and
any pair of mappings $\mathbf{u}^{in}_\pm\in H^1(\O_0^\pm,\mathfrak{m}_\pm)$ with
\[|\mathbf{u}^{in}_+(p)-\mathbf{u}^{in}_-(p)|_{\mathbb{R}^n} = \mathrm{dist}_\mathfrak{m}, \quad \forall p\in \Sigma_0,\label{MC initial data}\]
there exist $\mathbf{u}^{in}_\e\in H^1(\O,\mathbb{R}^n)\cap L^\infty(\O)$ and a constant $C=C(\delta, \mathbf{u}^{in}_\pm)$ so that
\begin{align}
\mathbf{u}_\e^{in} &=\mathbf{u}^{in}_{\pm}~\text{ inside }~\O_0^\pm\backslash B_{2\delta}(\Sigma_0),\label{u coincide}\\
E_\e [\mathbf{u}_\e^{in} | \Sigma_0] & \leq C \e,\label{u cali}\\
B[\mathbf{u}_\e^{in} | \Sigma_0] &\leq C\e.\label{u bulk}
\end{align}
\end{theorem}
The rest of the work will be organized as follows: in Section \ref{sec pre}, we shall recall results that will be employed throughout the work. These include the compactness and closure of special function with bounded variation (cf. \cite[Chapter 4]{MR1857292}), the theory of minimal connection developed by Sternberg \cite{MR930124} and Lin--Pan--Wang \cite{Lin2012a}, the elements of differential geometry used in the description of interface motion, and finally the relative entropy method by Fischer--Laux--Simon \cite{fischer2020convergence}. In particular,
in Subsection \ref{sec entropy}, we shall adapt this latter method to system \eqref{Ginzburg-Landau sys}, and then derive a differential inequality, i.e. Proposition \ref{gronwallprop}. Such an inequality was first derived by Laux--Liu \cite{MR4284534} for a matrix-valued system. This Proposition, when combined with Chen--Struwe \cite{MR990191} along with other results in Section \ref{sec level}, leads to the convergences to harmonic heat flows locally away from $\Sigma_t$. Another important consequence of Proposition \ref{gronwallprop} is a sharp $L^1$-convergence rate estimate of $\psi_\e$, obtained in Theorem \ref{thm volume convergence}. This theorem will be used
to derive fine estimates of the level sets of $\psi_\e$ in Lemma \ref{area control}, as well as convergences of some corrections of $\mathbf{u}_\e$ up to the free boundary $\Sigma_t$. All of these will be done in Section \ref{sec level}, and we shall use them to derive the minimal pair boundary condition \eqref{thm minimal pair} in Section \ref{sec mp}, and thus finish the proof of Theorem \ref{main thm}. Finally we prove Theorem \ref{thm init} in Section \ref{sec initial data}.
We end the introductory part by introducing the notations and conventions that will be employed throughout this work.
Unless specified otherwise $C>0$ is a generic constant whose value might change from line to line, and will depend only on the geometry of the interface \eqref{interface} but not on $\e$ or $t\in [0,T]$. In order to simplify the presentation, we shall sometimes abbreviate the estimates like $X\leq CY$ by $X\lesssim Y$ for some non-negative quantities $X,Y$.
We provide a list of symbols for the convenience of the readers:
\begin{itemize}
\item $A:B$ is the Frobenius inner product of two square matrices $A,B$, defined by $ \operatorname{tr} A^{\mathsf{T}} B$.
\item $\partial_i=\partial_{x_i} ~(0\leq i\leq d)$ with the convention that $\partial_t=\partial_0$.
\item $\nabla f$ is the (distributional) gradient of a function $f$ with variables $x=(x_1,\cdots,x_d)$.
\item $\partial W=(\partial_{u_1} W,\cdots,\partial_{u_n} W)$ is the gradient of a smooth function $W=W(\mathbf{u})$.
\item $\partial \mathrm{d}_F(\mathbf{u})$: the generalized gradient of $\mathrm{d}_F$ (cf. \eqref{def of linear map in chain}).
\item $\partial U$: measure-theoretic boundary of a set $U$ of finite perimeter with measure-theoretic outer normal vector $\nu$.
\item $\mathrm{dist}(\mathbf{u},A)$: distance from $\mathbf{u} $ to $A\subset \mathbb{R}^n$.
\item $\mathrm{dist}_{\mathfrak{m}}$: the distance between $\mathfrak{m}_\pm$ in $\mathbb{R}^n$, i.e. $\mathrm{dist}_{\mathfrak{m}}:=\inf_{\mathbf{p}^{\pm} \in \mathfrak{m}_\pm}\left|\mathbf{p}^{+}-\mathbf{p}^{-}\right|_{\mathbb{R}^n} $.
\item $\mathrm{d}_\mathfrak{m}(\mathbf{u})$: signed-distance from $\mathbf{u}\in\mathbb{R}^n$ to $\mathfrak{m}= \mathfrak{m}_+\sqcup \mathfrak{m}_-$ (cf. \eqref{dN global}).
\item $U_\pm$ are the domains enclosed by $\mathfrak{m}_\pm$ respectively, and $U_0=\mathbb{R}^n-\overline{U_+\cup U_-}$ (cf. \eqref{def U3}).
\item $B_\delta(U)$: the $\delta$-(tubular) neighborhood of a set $U$ in the corresponding Euclidean space. In particular, $B_\delta(x)$ is the ball centered at $x$.
\end{itemize}
\section{Preliminaries}\label{sec pre}
\subsection{Special function of bounded variation}
\begin{definition}
We say that $\mathbf{u} \in BV(\Omega,\mathbb{R}^n)$ is a special function of bounded variation, and we write $\mathbf{u} \in SBV(\Omega)$, if the Cantor part $\nabla^c \mathbf{u}$ of its distributional derivative vanishes, i.e.
\[
\nabla \mathbf{u}=\nabla^a \mathbf{u} \, \mathcal{L}^d+\left(\mathbf{u}^+-\mathbf{u}^-\right) \otimes \nu_\mathbf{u} ~\mathcal{H}^{d-1} \mres J_\mathbf{u} \quad \forall \mathbf{u} \in SBV(\Omega)
\]
where $\nabla^a$ denotes the absolutely continuous part of the distributional derivative (with respect to Lebesgue measure $\mathcal{L}^d$) and $J_\mathbf{u}$ is the jump set of $\mathbf{u}$ with measure theoretical outer normal vector $\nu_\mathbf{u}$.
\end{definition}
The following two results will be used to obtain convergences up to the free boundary. We refer the monograph of Ambrosio--Fusco--Pallara \cite{MR1857292} for proofs.
\begin{prop}\label{thmsbv}(Closure of $SBV$) Let $\varphi:[0, \infty) \rightarrow[0, \infty]$ be a lower semicontinuous, increasing function such that
$\lim _{t \rightarrow \infty} \frac{\varphi(t)}{t}=\infty$.
Let $\Omega \subset \mathbb{R}^d$ be open and bounded, and let $\{\mathbf{u}_k\} \subset SBV(\Omega)$ be such that
\[\sup _k \int_{\Omega} \varphi\left(\left|\nabla^a \mathbf{u}_k\right|\right) d x+\sup _k \int_{J_{\mathbf{u}_k}} \left|\mathbf{u}_k^{+}-\mathbf{u}_k^{-}\right| \, d \mathcal{H}^{d-1}<\infty.\label{sbvthm4.4}\]
If $\{\mathbf{u}_k\}$ converges weakly-star in $BV(\Omega)$ to $\mathbf{u}$, then $\mathbf{u} \in SBV(\Omega)$, the absolutely continuous parts of the gradients $\nabla^a \mathbf{u}_k$ converge weakly to $\nabla^a \mathbf{u}$ in $L^{1}(\Omega)$, and the jump parts of the gradients $\nabla^{j} \mathbf{u}_k$ converge weakly-star to $\nabla^j \mathbf{u}$ in $\Omega$. Moreover,
\begin{align}
\int_{\Omega} \varphi(|\nabla^a \mathbf{u}|) d x \leq \liminf _{k \rightarrow \infty} \int_{\Omega} \varphi\left(\left|\nabla^a \mathbf{u}_k\right|\right) d x \quad \text { if } \varphi \text { is convex }.
\end{align}\label{LSC sbv}
\end{prop}
\begin{prop}\label{AFP2}
(Compactness of $SBV$) Let $\varphi, \Omega$ be as in Proposition \ref{thmsbv}. Let $\{\mathbf{u}_k\}\subset SBV(\Omega)$ satisfy \eqref{sbvthm4.4} and assume, in addition, that $\left\|\mathbf{u}_k\right\|_{L^{\infty}}$ is uniformly bounded in $k$. Then, there exists a subsequence
\[\mathbf{u}_k\xrightarrow{k\to\infty}\mathbf{u} \in SBV(\Omega) \text{ weakly star in $BV(\O)$}. \]
\end{prop}
\subsection{Minimal connections}
We now briefly describe some basic properties of minimal connecting orbits.
For any $\mathbf{p}^\pm\in \mathfrak{m}_\pm$, we define their minimal connection by
\begin{align}\label{der mc3}
\mathcal{C}_F(\mathbf{p}^+,\mathbf{p}^-)&:= \inf\left\{\int_{\mathbb{R}} \frac 12 |\boldsymbol{\gamma } '(t)|^2+ F (\boldsymbol{\gamma } (t))\, dt \,\Big| \, \boldsymbol{\gamma } \in H^1(\mathbb{R},\mathbb{R}^n ),\boldsymbol{\gamma } (\pm \infty)=\mathbf{p}^\pm\in \mathfrak{m}_\pm\right\}.
\end{align}
\begin{lemma}\label{lemma mc1}
The function $ \mathcal{C}_F(\cdot,\cdot): \mathfrak{m}_+\times \mathfrak{m}_-\to \mathbb{R}^+$ is Lipschitz continuous. Moreover, if $c_F$ denotes the number defined by \eqref{linpanwang cf equ}, then
\[c_F=\inf_{\mathbf{p}^\pm\in \mathfrak{m}_\pm} \mathcal{C}_F(\mathbf{p}^+,\mathbf{p}^-).\label{def cf}\]
\end{lemma}
To proceed, we define the centralized potential as the even function
\[\tilde{F}(\lambda):= \begin{cases}f\left(\left(\tfrac {\mathrm{dist}_{\mathfrak{m}}}2 +\lambda\right)^2 \right) & \text { if } \lambda \leq 0, \\ f\left(\left(\tfrac {\mathrm{dist}_{\mathfrak{m}}}2 -\lambda\right)^2 \right) & \text { if } \lambda \geq 0,\end{cases}\label{centralized potential}\]
and the associated scalar-valued minimal connection problem
\[c_{\tilde{F}}:=\min \left\{\int_{\mathbb{R}}\left(\frac 12 \left|\boldsymbol{\gamma }'(s)\right|^2 +\tilde{F}(\boldsymbol{\gamma }(s))\right) d s~\Big|~ \boldsymbol{\gamma } \in H^{1}(\mathbb{R}), \boldsymbol{\gamma }(\pm \infty)=\pm \tfrac {\mathrm{dist}_{\mathfrak{m}}}2 \right\}.\label{optimial mini}\]
\begin{lemma}
It holds that
\[c_{\tilde{F}}=2 \int_{0}^{\tfrac{\mathrm{dist}_{\mathfrak{m}}}2} \sqrt{2\widetilde{F}(\lambda)} d \lambda \quad =2 \int_{0}^{\tfrac{\mathrm{dist}_{\mathfrak{m}}}2} \sqrt{2f\left(\lambda^2 \right)} d \lambda.\label{linpanwang 2.2}\]
There exists a minimizer $\alpha(s)$
of \eqref{optimial mini} that also satisfies
\begin{subequations}\label{optimal profile alpha}
\begin{align}
&\alpha(s) \in C^{\infty}\left(\mathbb{R},\left(-\tfrac{\mathrm{dist}_{\mathfrak{m}}}2, \tfrac{\mathrm{dist}_{\mathfrak{m}}}2\right)\right) \text{ is odd and strictly increasing in }\mathbb{R},\label{odd increase}\\
&-\alpha^{\prime \prime}(s)+ \tilde{F}'(\alpha(s))=0, \quad s\in \mathbb{R}; \quad \alpha(\pm \infty)=\pm \tfrac {\mathrm{dist}_{\mathfrak{m}}}2 .\label{travelling wave}\\
&\alpha'(s)=\sqrt{2\tilde{F}(\alpha(s))} \quad \forall s \in \mathbb{R},\label{alpha'=}\\
&\left|\alpha'(s)\right|+\left|\alpha(s)\mp\tfrac {\mathrm{dist}_{\mathfrak{m}}}2 \right| \leq C e^{-C |s|} \quad \text { as } s \rightarrow\pm\infty.\label{exp alpha}
\end{align}
\end{subequations}
\end{lemma}
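For the reader's convenience, we sketch the standard Modica--Mortola type argument behind \eqref{linpanwang 2.2} and \eqref{alpha'=}. For any admissible $\boldsymbol{\gamma}$ in \eqref{optimial mini}, Young's inequality and a change of variables give
\[
\int_{\mathbb{R}}\Big(\frac 12 |\boldsymbol{\gamma}'(s)|^2+\tilde{F}(\boldsymbol{\gamma}(s))\Big)\, ds \geq \int_{\mathbb{R}} \sqrt{2\tilde{F}(\boldsymbol{\gamma}(s))}\,|\boldsymbol{\gamma}'(s)|\, ds \geq \int_{-\frac{\mathrm{dist}_{\mathfrak{m}}}2}^{\frac{\mathrm{dist}_{\mathfrak{m}}}2}\sqrt{2\tilde{F}(\lambda)}\, d\lambda,
\]
with equality precisely when $\boldsymbol{\gamma}$ is non-decreasing and $\boldsymbol{\gamma}'=\sqrt{2\tilde{F}(\boldsymbol{\gamma})}$, which is \eqref{alpha'=}. Since $\tilde{F}$ is even and $\tilde{F}(\lambda)=f\big((\tfrac{\mathrm{dist}_{\mathfrak{m}}}2-\lambda)^2\big)$ for $\lambda\geq 0$, the substitution $\mu=\tfrac{\mathrm{dist}_{\mathfrak{m}}}2-\lambda$ turns the right-hand side into $2\int_0^{\mathrm{dist}_{\mathfrak{m}}/2}\sqrt{2 f(\mu^2)}\, d\mu$, which is \eqref{linpanwang 2.2}.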
We need a condition equivalent to the minimal pair condition stated in \eqref{thm minimal pair}.
To this end, we introduce
\begin{align}\label{def Mpm}
&M^{+}:=\left\{\mathbf{p}^{+} \in \mathfrak{m}_+: \exists \,\mathbf{p}^{-} \in \mathfrak{m}_- \text { s.t. }\left|\mathbf{p}^{+}-\mathbf{p}^{-}\right|=\mathrm{dist}_{\mathfrak{m}}\right\}, \\
&M^{-}:=\left\{\mathbf{p}^{-} \in \mathfrak{m}_-: \exists \, \mathbf{p}^{+} \in \mathfrak{m}_+ \text { s.t. }\left|\mathbf{p}^{+}-\mathbf{p}^{-}\right|=\mathrm{dist}_{\mathfrak{m}}\right\}
\end{align}
\begin{lemma}{\cite[Theorem 2.1]{Lin2012a}}\label{lemma mc}
The function $\mathcal{C}_F(\cdot,\cdot):\mathfrak{m}_+\times\mathfrak{m}_-\mapsto [0,\infty)$ is Lipschitz continuous, and
$\mathcal{C}_F(\mathbf{p}^+,\mathbf{p}^-)\geq c_F$ for any $\mathbf{p}^\pm\in \mathfrak{m}_\pm$. Moreover, we have the equivalence
\begin{align}
\mathbf{p}^\pm\in \mathfrak{m}_\pm,\quad\mathcal{C}_F(\mathbf{p}^+,\mathbf{p}^-)=c_F \Longleftrightarrow \mathbf{p}^\pm\in M^\pm,\quad |\mathbf{p}^+-\mathbf{p}^-|=\mathrm{dist}_{\mathfrak{m}}.\label{rigidity cf}
\end{align}
Furthermore, assuming that the left-hand side of \eqref{rigidity cf} holds,
the corresponding minimal connecting orbit $\boldsymbol{\gamma } \in H^{1}(\mathbb{R}, \mathbb{R}^{n})$ attaining $c_{F}$ (cf. \eqref{def cf}) is the line segment
\[
\boldsymbol{\gamma }(t)=\tfrac{\mathbf{p}^++\mathbf{p}^-}{2}+\alpha(t) \tfrac{\mathbf{p}^+-\mathbf{p}^-}{\mathrm{dist}_{\mathfrak{m}}}, \quad t \in \mathbb{R},\label{straightline}
\]
where $\alpha \in H^{1}(\mathbb{R})$ is a minimizer of \eqref{optimial mini}, or equivalently a solution of \eqref{travelling wave}.
\end{lemma}
Though seemingly more complicated, the condition on the left-hand side of \eqref{rigidity cf} is more compatible with the variational structure of the functional \eqref{GL energy} than the one on the right-hand side.
\subsection{Quasi-distance function}
To proceed, we introduce the quasi-distance function. To define it, we denote by
\[
\left\{
\begin{split}
&U_\pm \text{ the bounded open domain enclosed by }\mathfrak{m}_\pm, \\
&U_+\cap U_-=\emptyset, \qquad U_0=\mathbb{R}^n-\overline{U_+\cup U_-}.
\end{split}
\right.\label{def U3}\]
We denote the signed-distance function from $\mathbf{u}\in\mathbb{R}^n$ to the set $\mathfrak{m}_\pm$ by $\mathrm{d}_{\mathfrak{m}_\pm}(\mathbf{u})$, and make the following sign convention so that $\mathrm{d}_{\mathfrak{m}_\pm}$ are smooth in $B_{\delta_0}(\mathfrak{m}_\pm)$:
\[\mathrm{d}_{\mathfrak{m}_\pm}(\mathbf{u}) >0 \text{ for } \mathbf{u}\in U_\pm. \]
Then we define the signed-distance from $\mathbf{u}$ to $\mathfrak{m}$ according to its distance to its two components:
\[\mathrm{d}_{\mathfrak{m}}(\mathbf{u})=\left\{
\begin{split}
\mathrm{d}_{\mathfrak{m}_+}(\mathbf{u})&\qquad \text{ when } \mathrm{dist} (\mathbf{u},\mathfrak{m}_+)\leq \tfrac {\mathrm{dist}_{\mathfrak{m}}}2 ,\\
\mathrm{d}_{\mathfrak{m}_-}(\mathbf{u})&\qquad \text{ when } \mathrm{dist} (\mathbf{u},\mathfrak{m}_-)\leq \tfrac {\mathrm{dist}_{\mathfrak{m}}}2.
\end{split}
\right.\label{dN global}\]
With these notations, we define
\[\label{quasidistance}
\mathrm{d}_F( \mathbf{u}):=
\left\{
\begin{split}
\tfrac 12 c_F\quad &\text{ if } \mathrm{dist} (\mathbf{u},\mathfrak{m}_-)> \tfrac{\mathrm{dist}_\mathfrak{m}}2,\mathbf{u}\in U_-,\\
\int_0^{|\mathrm{d}_\mathfrak{m}(\mathbf{u})|}\sqrt{2f(\lambda^2)}\, d\lambda\quad &\text{ if } \mathrm{dist}(\mathbf{u},\mathfrak{m}_-)\leq \tfrac{\mathrm{dist}_\mathfrak{m}}2,\\
\tfrac 12 c_F \quad &\text{ if } \mathrm{dist}(\mathbf{u},\mathfrak{m})> \tfrac{\mathrm{dist}_\mathfrak{m}}2, \mathbf{u}\in U_0,\\
c_F-\int_0^{\mathrm{d}_\mathfrak{m}(\mathbf{u})}\sqrt{2f(\lambda^2)}\, d\lambda\quad & \text{ if } \mathrm{dist} (\mathbf{u},\mathfrak{m}_+)\leq \tfrac{\mathrm{dist}_\mathfrak{m}}2, \mathbf{u}\in U_0;\text{ or }\mathbf{u}\in U_+
\end{split}
\right.
\]
where $c_F$ is the surface tension coefficient \eqref{linpanwang cf equ}.
The function \eqref{quasidistance} is a modification of the one used in \cite{MR930124,MR985992}. Important properties of \eqref{quasidistance} are summarized in the following lemma.
\begin{lemma}\label{lemma quasidis}
The function $\mathrm{d}_F(\mathbf{u})$ (defined by \eqref{quasidistance}) is $C^1$ in $B_{\delta_0}(\mathfrak{m})$ and $\frac{\partial \mathrm{d}_F(\mathbf{u})}{| \partial \mathrm{d}_F(\mathbf{u})|}$ is a continuous unit vector field in $B_{\delta_0}(\mathfrak{m})$ so that
\[\frac{\partial \mathrm{d}_F(\mathbf{u})}{| \partial \mathrm{d}_F(\mathbf{u})|}= \partial \mathrm{d}_\mathfrak{m}(\mathbf{u})\qquad \forall \mathbf{u} \in B_{\delta_0}(\mathfrak{m}).\label{normalize pdf}\]
Moreover, it is Lipschitz continuous in $\mathbb{R}^n$, and
\[\label{eq:2.7}|\partial \mathrm{d}_F( \mathbf{u})|\leq \sqrt{2F (\mathbf{u})}\quad a.e.~\mathbf{u}\in \mathbb{R}^n, \]
and is
related to $c_F$ \eqref{optimial mini} by
\begin{align}
\label{eq:1.6}
\mathrm{d}_F(\mathbf{u})&=\left\{
\begin{array}{rl}
0\qquad\text{if and only if}&~\mathbf{u}\in \mathfrak{m}_-,\\
c_F\qquad \text{if and only if}&~\mathbf{u} \in \mathfrak{m}_+.
\end{array}
\right.\end{align}
\end{lemma}
\begin{proof}
The continuity of $\mathrm{d}_F$ follows from \eqref{linpanwang cf equ}. To show $\mathrm{d}_F\in C^1(B_{\delta_0}(\mathfrak{m}))$, it suffices to look into the second and the fourth cases in its definition \eqref{quasidistance}. In $B_{\delta_0}(\mathfrak{m})$, the possible singularities of its derivative are points in $\mathfrak{m}_-$, which are removable. Indeed, using the last condition in \eqref{bulk2}, one can verify that
\[h(s)=\int_0^{s^{1/2}}\sqrt{2f(\lambda^2)}\, d\lambda\in C^1[0,\infty).\]
As a result, we have from \eqref{quasidistance} that $\mathrm{d}_F(\mathbf{u})=h( \mathrm{d}_\mathfrak{m}^2)$ in $B_{\delta_0}(\mathfrak{m}_-)$.
This also implies \eqref{normalize pdf} by taking one-sided limits approaching $\mathfrak{m}$. We note that the signed distance function $\mathrm{d}_\mathfrak{m}$ is smooth in $B_{\delta_0}(\mathfrak{m})$.
It is obvious that $\mathrm{d}_F$ is Lipschitz in each subdomain where it is defined. So it suffices to check the Lipschitz condition across adjacent regions. For instance, if $\mathrm{dist}(\mathbf{u}_1,\mathfrak{m}_-)> \tfrac{\mathrm{dist}_\mathfrak{m}}2,\mathbf{u}_1\in U_-$ and $\mathrm{dist} (\mathbf{u}_2,\mathfrak{m}_-)\leq \tfrac{\mathrm{dist}_\mathfrak{m}}2 $, then by \eqref{linpanwang cf equ} and the first two cases in \eqref{quasidistance},
\begin{align*}
0&\leq \mathrm{d}_F(\mathbf{u}_1)-\mathrm{d}_F(\mathbf{u}_2)=\int_{\mathrm{dist}(\mathbf{u}_2,\mathfrak{m}_-)}^{\frac 12\mathrm{dist}_\mathfrak{m}}\sqrt{2f(\lambda^2)}\, d\lambda\\
&\leq \int_{\mathrm{dist}(\mathbf{u}_2,\mathfrak{m}_-)}^{\mathrm{dist}(\mathbf{u}_1,\mathfrak{m}_-)}\sqrt{2f(\lambda^2)}\, d\lambda\leq C \(\mathrm{dist}(\mathbf{u}_2,\mathfrak{m}_-)-\mathrm{dist}(\mathbf{u}_1,\mathfrak{m}_-)\)\leq C|\mathbf{u}_2-\mathbf{u}_1|.
\end{align*}
Other cases can be treated in a similar way. The inequality \eqref{eq:2.7} and the formula \eqref{eq:1.6} follow directly from the definition \eqref{quasidistance}.
\end{proof}
\subsection{Geometry of interfaces}\label{subsection geo}
Under a local parametrization $\boldsymbol{\varphi}_t(s):U\to \Sigma_t$, the MCF reads
\[\partial_t \boldsymbol{\varphi}_t(s)\cdot \mathbf{n}(s,t)=\mathbf{H}(\boldsymbol{\varphi}_t(s),t)\cdot \mathbf{n}(s,t) \label{csf}\]
where $\mathbf{H}$ is the (mean) curvature vector pointing to the inner normal $\mathbf{n}$.
For $\delta>0$, the $\delta$-neighborhood of $\Sigma_t$ is the open set \begin{equation}
B_\delta(\Sigma_t):= \{x\in\Omega: | \mathrm{d}_\Sigma(x,t)|<\delta\}.
\end{equation}
We shall choose $\delta_0$ (which first appeared in \eqref{bulk2}) small enough so that the nearest point projection $$P_{\Sigma}(\cdot,t): B_{4\delta_0}(\Sigma_t) \mapsto \Sigma_t$$ is smooth for any $t\in [0,T]$, and the interface \eqref{interface} stays at least $\delta_0$ away from the boundary of the domain $\partial\O$.
Analytically we have $$P_\Sigma(x,t) =x-\nabla \mathrm{d}_\Sigma (x,t) \mathrm{d}_\Sigma (x,t).$$ So for each fixed $t\in [0,T]$, any point $x\in B_{4\delta_0}(\Sigma_t)$ corresponds to a unique pair $(r,s)$ with $r=\mathrm{d}_\Sigma (x,t)$ and $s\in \mathbb{T}^1$, and thus the identity
$$\mathrm{d}_\Sigma (\boldsymbol{\varphi}_t(s)+r\mathbf{n}(s,t), t)\equiv r$$ holds with independent variables $(r,s,t)$.
Differentiating this identity with respect to $r$ and $t$ leads to the following identities:
\[\nabla \mathrm{d}_\Sigma (x,t)= \mathbf{n}(s,t),\qquad -\partial_t \mathrm{d}_\Sigma (x,t)=\partial_t \boldsymbol{\varphi}_t(s)\cdot\mathbf{n}(s,t)=: V(s,t).\label{velocity}\]
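For instance, the second identity in \eqref{velocity} follows from a quick computation: differentiating the identity above in $t$ at fixed $(r,s)$ gives
\[
0=\frac{d}{dt}\,\mathrm{d}_\Sigma \big(\boldsymbol{\varphi}_t(s)+r\mathbf{n}(s,t), t\big)=\partial_t \mathrm{d}_\Sigma +\nabla \mathrm{d}_\Sigma \cdot\big(\partial_t \boldsymbol{\varphi}_t(s)+r\,\partial_t \mathbf{n}(s,t)\big)=\partial_t \mathrm{d}_\Sigma +\mathbf{n}(s,t)\cdot\partial_t \boldsymbol{\varphi}_t(s),
\]
where we used the first identity in \eqref{velocity} and $\mathbf{n}\cdot\partial_t \mathbf{n}=\frac 12 \partial_t |\mathbf{n}|^2=0$.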
This extends the normal vector and the normal velocity of $\Sigma_t$ to a neighborhood of it.
Now we come to the definition of the vector field $\boldsymbol{\xi}$ appearing in the modulated energy $E_\e [\mathbf{u}_\e | \Sigma](t)$ \eqref{entropy}. This is done by extending (and truncating) the inner normal vector field $\mathbf{n}$ through
\[\boldsymbol{\xi} (x,t)=\phi \( \frac{\mathrm{d}_\Sigma(x,t)}{\delta_0}\)\nabla \mathrm{d}_\Sigma(x,t).\label{def:xi}\]
In \eqref{def:xi} $\phi(x)\geq 0$ is an even, smooth function on $\mathbb{R}$ that decreases for $x\in [0,1]$, and satisfies
\begin{equation}\begin{cases}
\phi(x)>0~&\text{ for }~|x|< 1, \\
\phi(x)=0~&\text{ for }~|x|\geq 1, \\
1-4 x^2\leq \phi(x)\leq 1-\frac 12 x^2~&\text{ for }~|x|\leq 1/2.
\end{cases}\label{phi func control}
\end{equation}
\picdis{\begin{tikzpicture}[scale = 0.8]
\begin{axis}[axis equal,axis lines = left,
]
\addplot[domain=0: 0.9999,color=red,samples=100]{exp(x^2/(x^2-1))} node[above] {$\phi(x)$};
\addplot[domain=0:0.5,color=black]{1-4*x^2} node[above] {$1-4x^2$};
\addplot[domain=0:1,color=black]{1-0.5*x^2} node[below] {$1-\frac{x^2}2$};
\end{axis}
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale = 1]
\begin{axis}[axis equal,
axis lines=none,
xtick=\empty,
ytick=\empty,
]
\draw (30, 100) node {$\O_t^-$};
\addplot[samples=8, domain=1.8:pi-0.3,
variable=\t,
quiver={
u={-cos(deg(t))},
v={-sin(deg(t))},
scale arrows=0.09},
->,black]
({cos(deg(t))}, {sin(deg(t))});
\addplot[samples=10, domain=1.8:pi-0.3,
variable=\t,
quiver={
u={-1.5*cos(deg(t))},
v={-1.5*sin(deg(t))},
scale arrows=0.07},
->,black]
({1.5*cos(deg(t))}, {1.5*sin(deg(t))});
\addplot[samples=10, domain=1.8:pi-0.3,
variable=\t,
quiver={
u={-1.25*cos(deg(t))},
v={-1.25*sin(deg(t))},
scale arrows=0.13},
->,black]
({1.25*cos(deg(t))}, {1.25*sin(deg(t))}) ;
\addplot[samples=100, domain=1.5:pi-0.2,dashed]
({1.5*cos(deg(x))}, {1.5*sin(deg(x))});
\addplot[samples=100, domain=1.5:pi-0.2, dashed,black]
({cos(deg(x))}, {sin(deg(x))}) ;
\addplot[samples=100, domain=1.5:pi-0.2, very thick,red]
({1.25*cos(deg(x))}, {1.25*sin(deg(x))}) node[right]{$\Sigma_t$};
\draw (100, 45) node {$\boldsymbol{\xi} $};
\draw (100, 10) node {$\O_t^+$};
\end{axis}
\end{tikzpicture}
}
To fulfill these requirements, we can simply choose
\[\phi(x)=e^{\frac 1{x^2-1}+1}~\text{for}~|x|< 1~\text{and}~\phi(x)=0~\text{for}~|x|\geq 1.\label{phi func}\]
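One can quickly verify that this choice fulfills the last requirement in \eqref{phi func control}: writing $t=\frac{x^2}{1-x^2}\in[0,\tfrac 13]$ for $|x|\leq \tfrac 12$, we have $\frac 1{x^2-1}+1=-t$, hence $\phi(x)=e^{-t}$, and the elementary bounds $1-t\leq e^{-t}\leq 1-t+\frac{t^2}2$ (valid for $t\geq 0$) together with $x^2\leq t\leq \frac 43 x^2$ give
\[
\phi(x)\geq 1-t\geq 1-\tfrac 43 x^2\geq 1-4x^2,\qquad \phi(x)\leq 1-t\big(1-\tfrac t2\big)\leq 1-\tfrac 56 x^2\leq 1-\tfrac 12 x^2 .
\]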
We also need to extend the curvature of \eqref{interface}. To proceed, choose a cut-off function
\[\eta_0\in C_c^\infty(B_{2\delta_0}(\Sigma_t))~\text{ with }\eta_0=1\text{ in }B_{\delta_0}(\Sigma_t).\label{cut-off eta delta}\]
We extend the curvature vector $\mathbf{H}$ by
\[ \mathbf{H}(x,t)=\kappa \nabla \mathrm{d}_\Sigma (x,t) \quad\text{with}\quad \kappa(x,t)=-\Delta \mathrm{d}_\Sigma (P_\Sigma(x,t))\eta_0(x,t).\label{def:H}\]
By \eqref{def:H} and \eqref{cut-off eta delta}, within $B_{\delta_0}(\Sigma_t)$ the extension $\mathbf{H}$ is constant in the normal direction. So we have
\begin{align}
(\mathbf{n}\cdot\nabla )\mathbf{H}&=0\text{ and } (\boldsymbol{\xi} \cdot\nabla )\mathbf{H}=0\quad \forall t\in [0,T],~ x\in B_{\delta_0}(\Sigma_t).\label{normal H}
\end{align}
Moreover, by \eqref{def:xi} we have
\begin{equation}\label{bc n and H}
\boldsymbol{\xi} =0 ~\text{and}~\mathbf{H}=0\quad \forall t\in [0,T],~ x\in \partial\O.
\end{equation}
\subsection{Modulated energy method}\label{sec entropy}
As the gradient flow of the Ginzburg--Landau energy \eqref{GL energy}, the system
\eqref{Ginzburg-Landau} has the following energy dissipation law
\begin{equation}\label{dissipation}
A_\e (\mathbf{u}_\e (\cdot,T))+ \int_0^T \int_\O \e |\partial_t \mathbf{u}_\e |^2 \,d x \,d t=A_\e (\mathbf{u}_\e (\cdot,0)),~\text{for all}~ T> 0.
\end{equation}
For initial data undergoing a transition near the interface $\Sigma_t$, due to the concentration of $\nabla \mathbf{u}_\e $ near $\Sigma_t$, the dissipation law \eqref{dissipation} alone is not sufficient to derive quantitative convergences of $\mathbf{u}_\e$, even away from $\Sigma_t$. Following a recent work of Fischer et al. \cite{fischer2020convergence}, we shall develop in this section a calibrated inequality which modulates this concentration and yields compactness of $\{\mathbf{u}_\e\}$ in Sobolev spaces.
Recalling the notions in Subsection \ref{subsection geo}, we claim the following identities which will be employed to prove the calibrated inequality:
\begin{subequations}\label{xi der}
\begin{align}
\nabla\cdot \boldsymbol{\xi} +\mathbf{H} \cdot \boldsymbol{\xi} &= O(\mathrm{d}_\Sigma ),\label{div xi H}\\
\partial_t \mathrm{d}_\Sigma (x,t) +(\mathbf{H}( x,t)\cdot\nabla) \mathrm{d}_\Sigma (x,t) &=0\quad \text{in}~B_{\delta_0}(\Sigma_t).\label{mcf}\\
\partial_t \boldsymbol{\xi} +\left(\mathbf{H} \cdot \nabla\right) \boldsymbol{\xi} +\left(\nabla \mathbf{H}\right)^{\mathsf{T}} \boldsymbol{\xi} &=0\quad \text{in}~B_{\delta_0}(\Sigma_t),\label{xi der1} \\
\partial_t |\boldsymbol{\xi} |^2 +\left(\mathbf{H} \cdot \nabla\right)|\boldsymbol{\xi} |^2 &=0\quad \text{in}~B_{\delta_0}(\Sigma_t),\label{xi der2}
\end{align}
\end{subequations}
where $\nabla \mathbf{H}:=\{\partial_j H_i\}_{1\leq i, j\leq d}$ is a matrix with $i$ being the row index.
\begin{proof}[Proof of \eqref{xi der}]
Recalling \eqref{def:xi}, $\phi_0(\tau):=\phi (\frac \tau{\delta_0})$ is an even function. So it follows from $\phi_0'(0)=0$ and Taylor's expansion in $\mathrm{d}_\Sigma$ that
\begin{align*}
\nabla\cdot \boldsymbol{\xi} &=|\nabla \mathrm{d}_\Sigma|^2 \phi_0'(\mathrm{d}_\Sigma)+\phi_0(\mathrm{d}_\Sigma)\Delta \mathrm{d}_\Sigma(x,t)
\\&=O(\mathrm{d}_\Sigma) +\phi_0 (\mathrm{d}_\Sigma)\Delta \mathrm{d}_\Sigma(P_\Sigma(x,t),t),
\end{align*}
and this together with \eqref{def:H} leads to \eqref{div xi H}.
Using \eqref{velocity} and \eqref{def:H}, we can write \eqref{csf} as the transport equation \eqref{mcf}.
By \eqref{mcf} we have the following identities in $B_{\delta_0}(\Sigma_t)$:
\begin{align*}
\partial_t \nabla \mathrm{d}_\Sigma+(\mathbf{H} \cdot \nabla) \nabla \mathrm{d}_\Sigma +(\nabla \mathbf{H} )^{\mathsf{T}} \nabla \mathrm{d}_\Sigma=0,\\
\partial_t \phi_0(\mathrm{d}_\Sigma )+ (\mathbf{H} \cdot\nabla) \phi_0(\mathrm{d}_\Sigma)=0.
\end{align*}
These two equations together imply \eqref{xi der1}. Finally \eqref{xi der2} is a consequence of \eqref{xi der1} and \eqref{normal H}.
\end{proof}
Now we discuss the differentiability of $\psi_\e$ (cf. \eqref{psi}).
It follows from Lemma \ref{lemma quasidis} that $\mathrm{d}_F(\cdot)$ is a Lipschitz function in $\mathbb{R}^n$ with $\mathrm{d}_F(0)=0$ under the assumption that $0\in \mathfrak{m}_-$. Following Laux--Simon \cite{MR3847750}, for every $ (x,t)\in \O\times [0,T]$, we consider the restriction of $\mathrm{d}_F$ to the affine space
\begin{align*}
&T^{\mathbf{u}_\e}_{x,t}:=\mathbf{u}_\e(x,t)+ {\rm span}\{\partial_0 \mathbf{u}_\e(x,t) ,\cdots, \partial_d\mathbf{u}_\e(x,t)\},
\end{align*}
denoted by $\mathrm{d}_F|_{T^{\mathbf{u}_\e}_{x,t}}$. By the generalized chain rule \cite{MR969514}, $\mathrm{d}_F|_{T^{\mathbf{u}_\e}_{x,t}}$
is differentiable at $\mathbf{u}_\e(x,t)$. Now we denote the orthogonal projection from $\mathbb{R}^n$ to the subspace $T^{\mathbf{u}_\e}_{x,t}-\mathbf{u}_\e(x,t)$ by $\Pi_{x,t}^{\mathbf{u}_\e}$, and define the generalized differential by
\[\partial \mathrm{d}_F(\mathbf{u}_\e):=\partial \(\mathrm{d}_F|_{T^{\mathbf{u}_\e}_{x,t}}\)\Big|_{\mathbf{u}_\e(x,t)}\circ \Pi^{\mathbf{u}_\e}_{x,t}.\label{def of linear map in chain}\]
Then we have for $0\leq i\leq d$ that
\[\partial \mathrm{d}_F(\mathbf{u}_\e)\cdot \partial_i \mathbf{u}_\e=\partial \(\mathrm{d}_F|_{T^{\mathbf{u}_\e}_{x,t}}\)\Big|_{\mathbf{u}_\e(x,t)}\cdot \partial_i \mathbf{u}_\e=\partial_i (\mathrm{d}_F\circ \mathbf{u}_\e)\]
where the second equality holds because the right-hand side is precisely the directional derivative of $\mathrm{d}_F$ at $\mathbf{u}_\e(x,t)$ in the direction $\partial_i \mathbf{u}_\e(x,t)$.
This proves the generalized chain rule
\begin{align}
\label{ADM chain rule}
\partial_i\psi_\e (x,t) = \partial_i \mathbf{u}_\e (x,t)\cdot\partial\mathrm{d}_F\(\mathbf{u}_\e (x,t)\). \end{align}
Moreover, we generalize the a.e. point-wise differential inequality \eqref{eq:2.7} to
\[\label{eq:2.7global}|\partial \mathrm{d}_F( \mathbf{u}_\e)|\leq \sqrt{2F (\mathbf{u}_\e)}. \]
To proceed, we define the phase-field analogues of the normal vector and the mean curvature vector respectively by
\begin{subequations}
\begin{align}
\mathbf{n}_\e (x,t)&:=\begin{cases}
\frac{\nabla \psi_\e }{|\nabla \psi_\e|}(x,t)&\text{ if } \nabla \psi_\e (x,t)\neq 0,\\
0& \text{otherwise}.
\end{cases}
\label{normal diff}\\
\mathbf{H}_\e (x,t)&:=\begin{cases}
-\left(\e \Delta \mathbf{u}_\e -\frac{1}{\e }\partial F(\mathbf{u}_\e ) \right)\cdot\frac{\nabla \mathbf{u}_\e }{\left|\nabla \mathbf{u}_\e \right|} &\text{ if } \nabla \mathbf{u}_\e\neq 0,\\
0&\text{otherwise}.
\end{cases}
\label{mean curvature app}
\end{align}
\end{subequations}
Note that in \eqref{mean curvature app}, the inner product is made with the column vectors of $\nabla \mathbf{u}_\e=(\partial_1 \mathbf{u}_\e,\cdots,\partial_d \mathbf{u}_\e)$.
We also define the orthogonal projection:
\[ \label{projection1}
\Pi_{\mathbf{u}_\e } \partial_i \mathbf{u}_\e :=
\left\{
\begin{split}
\big(\partial_i \mathbf{u}_\e \cdot \partial \mathrm{d}_\mathfrak{m}(\mathbf{u}_\e)\big) \partial \mathrm{d}_\mathfrak{m}(\mathbf{u}_\e)&~\text{ if }~ \mathbf{u}_\e\in B_{\delta_0}(\mathfrak{m}),\\
\(\partial_i \mathbf{u}_\e \cdot\frac{\partial \mathrm{d}_F (\mathbf{u}_\e ) } {|\partial \mathrm{d}_F (\mathbf{u}_\e )|}\) \frac{\partial\mathrm{d}_F(\mathbf{u}_\e)}{|\partial \mathrm{d}_F (\mathbf{u}_\e )|} &~\text{ if }~ \mathbf{u}_\e\notin B_{\delta_0} \text{ and } \partial\mathrm{d}_F(\mathbf{u}_\e) \neq 0,\\
0 &~ \text{ if }~ \mathbf{u}_\e\notin B_{\delta_0} \text{ and } \partial\mathrm{d}_F(\mathbf{u}_\e) = 0
\end{split}
\right.
\]
where $\partial \mathrm{d}_F$ is interpreted as the generalized differential \eqref{def of linear map in chain} in case $\mathrm{d}_F$ is not classically differentiable at $\mathbf{u}_\e$.
\begin{lemma}
The following two identities hold:
\begin{align}
\label{projectionnorm}
|\nabla \psi_\e | &= |\Pi_{\mathbf{u}_\e } \nabla \mathbf{u}_\e | |\partial \mathrm{d}_F (\mathbf{u}_\e )| &\text{ for any } (x,t),\\
\label{projection}
\Pi_{\mathbf{u}_\e } \nabla \mathbf{u}_\e &=\frac{|\nabla\psi_\e |} {|\partial \mathrm{d}_F (\mathbf{u}_\e )|^2}\partial \mathrm{d}_F (\mathbf{u}_\e )\otimes \mathbf{n}_\e &\text{ if } \partial \mathrm{d}_F (\mathbf{u}_\e )\neq 0,
\end{align}
where in \eqref{projection} the projection applies to each component of $\nabla \mathbf{u}_\e$.
\end{lemma}
\begin{proof}
Concerning \eqref{projectionnorm}, if $\mathbf{u}_\e\in B_{\delta_0}(\mathfrak{m})$, then according to Lemma \ref{lemma quasidis}, $\mathrm{d}_F$ is $C^1$, the generalized differential \eqref{def of linear map in chain} coincides with the classical one, and $\frac{\partial\mathrm{d}_F\(\mathbf{u}_\e \)}{|\partial\mathrm{d}_F\(\mathbf{u}_\e \)|}$ is well-defined. Thus
\begin{align}
\partial_i\psi_\e \overset{\eqref{ADM chain rule}}= \partial_i \mathbf{u}_\e \cdot \frac{\partial\mathrm{d}_F\(\mathbf{u}_\e \)}{|\partial\mathrm{d}_F\(\mathbf{u}_\e \)|}|\partial\mathrm{d}_F\(\mathbf{u}_\e\)| \overset{\eqref{normalize pdf}}= \partial_i \mathbf{u}_\e \cdot \partial \mathrm{d}_\mathfrak{m}(\mathbf{u}_\e)|\partial\mathrm{d}_F\(\mathbf{u}_\e\)|
\end{align}
This together with $|\partial \mathrm{d}_\mathfrak{m}(\mathbf{u}_\e)|=1$ and the first case in \eqref{projection1} leads to \eqref{projectionnorm}.
If $\mathbf{u}_\e\notin B_{\delta_0}$ and $\partial\mathrm{d}_F(\mathbf{u}_\e) \neq 0$, then the second case in \eqref{projection1} leads to \eqref{projectionnorm}. If $\mathbf{u}_\e\notin B_{\delta_0}$ and $\partial\mathrm{d}_F(\mathbf{u}_\e) = 0$, then by \eqref{def of linear map in chain}, we have $\nabla \psi_\e=0$ too. This finishes the proof of \eqref{projectionnorm}.
The statement \eqref{projection} holds when $\nabla\psi_\e=0$ because \eqref{projectionnorm} then implies $\Pi_{\mathbf{u}_\e } \nabla \mathbf{u}_\e=0$. When $\nabla\psi_\e\neq 0$, then
\begin{align}
\frac{|\nabla\psi_\e |} {|\partial \mathrm{d}_F (\mathbf{u}_\e )|^2}\partial \mathrm{d}_F (\mathbf{u}_\e )\otimes \mathbf{n}_\e\overset{
\eqref{normal diff}}=\frac{\partial \mathrm{d}_F (\mathbf{u}_\e ) } {|\partial \mathrm{d}_F (\mathbf{u}_\e )|^2}\otimes \nabla\psi_\e \nonumber\\\overset{\eqref{ADM chain rule}}= \frac{\partial \mathrm{d}_F (\mathbf{u}_\e ) } {|\partial \mathrm{d}_F (\mathbf{u}_\e )|^2}\otimes \(\nabla \mathbf{u}_\e\cdot\partial\mathrm{d}_F(\mathbf{u}_\e)\).
\end{align}
When $\mathbf{u}_\e\notin B_{\delta_0}$, this last term is exactly the second case defining $\Pi_{\mathbf{u}_\e } \nabla \mathbf{u}_\e$. When $\mathbf{u}_\e\in B_{\delta_0}$, by \eqref{normalize pdf} it simplifies to
\[ \big(\nabla \mathbf{u}_\e \cdot \partial \mathrm{d}_\mathfrak{m}(\mathbf{u}_\e)\big) \partial \mathrm{d}_\mathfrak{m}(\mathbf{u}_\e) \overset{\eqref{projection1}}=\Pi_{\mathbf{u}_\e } \nabla \mathbf{u}_\e. \]
This finishes the proof of \eqref{projection}.
\end{proof}
As we shall not integrate the time variable $t$ throughout this section, we shall abbreviate the spatial integration $\int_\O$ by $\int$ and sometimes we omit the $\,dx$.
The following lemma gives various coercivity estimates of $E_\e [\mathbf{u}_\e | \Sigma]$ \eqref{entropy}. It is due to \cite{MR4284534}, which generalizes the scalar result of \cite{fischer2020convergence} to the vectorial case. We present the proof for the convenience of the reader.
\begin{lemma}\label{lemma:energy bound}
There exists a universal constant $C>0$ which is independent of $t\in [0,T)$ and $\e $ such that the following estimates hold for every $t\in (0,T)$:
\begin{subequations} \label{energy bound}
\begin{align}
\int \(\frac{\e }{2} \left|\nabla \mathbf{u}_\e \right|^2+\frac{1}{\e } F (\mathbf{u}_\e )-|\nabla \psi_\e | \) \, d x & \leq E_\e [ \mathbf{u}_\e | \Sigma ] , \label{energy bound-1}\\
\e \int \( \left|\nabla \mathbf{u}_\e -\Pi_{\mathbf{u}_\e }\nabla \mathbf{u}_\e \right|^2 \)\, d x & \leq 2 E_\e [ \mathbf{u}_\e | \Sigma ] ,\label{energy bound0}\\
\int\left(\sqrt{\e }\left|\Pi_{\mathbf{u}_\e }\nabla \mathbf{u}_\e \right|-\frac1{\sqrt{\e }} \left|\partial \mathrm{d}_F (\mathbf{u}_\e )\right| \right)^{2}\, d x & \leq 2 E_\e [ \mathbf{u}_\e | \Sigma ] ,\label{energy bound2}\\
\int\( {\frac{\e }{2}}\left| \nabla \mathbf{u}_\e \right|^{2} +\frac{1}{\e } F (\mathbf{u}_\e )+\left|\nabla \psi_\e\right|\)\left(1-\boldsymbol{\xi} \cdot\mathbf{n}_\e\right) \, d x & \leq C E_\e [ \mathbf{u}_\e | \Sigma ] ,\label{energy bound1}
\\
\int \(\frac{\e }2 \left|\nabla \mathbf{u}_\e \right| ^{2} +\frac{1}{\e } F (\mathbf{u}_\e )+|\nabla\psi_\e |\) \min\(\mathrm{d}_\Sigma^2,1\)\, d x & \leq C E_\e [ \mathbf{u}_\e | \Sigma ].
\label{energy bound3}
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
Using \eqref{normal diff}, we obtain $\nabla\psi_\e=|\nabla\psi_\e|\mathbf{n}_\e$. Note also that \eqref{projection1} implies
\[\left|\nabla \mathbf{u}_\e -\Pi_{\mathbf{u}_\e }\nabla \mathbf{u}_\e \right|^2+\left| \Pi_{\mathbf{u}_\e }\nabla \mathbf{u}_\e \right|^2=\left|\nabla \mathbf{u}_\e \right|^2.\label{gougudingli}\]
Altogether, we can write
\begin{align}
E_\e [ \mathbf{u}_\e | \Sigma ] = & \int \frac \e 2\left| \nabla \mathbf{u}_\e \right|^2 +\frac{1}{\e } F (\mathbf{u}_\e )-|\nabla \psi_\e | +\int |\nabla\psi_\e | (1-\boldsymbol{\xi} \cdot\mathbf{n}_\e )\nonumber\\
= & \frac{\e }2 \int \left|\nabla \mathbf{u}_\e -\Pi_{\mathbf{u}_\e }\nabla \mathbf{u}_\e \right|^2 \nonumber \\
&+ \int \frac \e 2\left| \Pi_{\mathbf{u}_\e }\nabla \mathbf{u}_\e \right|^2 +\frac{1}{\e } F (\mathbf{u}_\e )-|\nabla \psi_\e |\nonumber \\
& +\int |\nabla\psi_\e | (1-\boldsymbol{\xi} \cdot\mathbf{n}_\e ).\label{E decom1}
\end{align}
Moreover, the last integral in \eqref{E decom1} is non-negative since $|\boldsymbol{\xi} |\leq 1$. Hence all three terms on the right-hand side of \eqref{E decom1} are non-negative, and \eqref{energy bound-1}, \eqref{energy bound0} and \eqref{energy bound2} follow.
Combining \eqref{energy bound-1} with
$E_\e [ \mathbf{u}_\e | \Sigma ]\geq \int\left(1-\boldsymbol{\xi} \cdot\mathbf{n}_\e\right)\left|\nabla \psi_\e\right|$
and $1-\boldsymbol{\xi} \cdot\mathbf{n}_\e\leq 2$ yields \eqref{energy bound1}.
Finally, by \eqref{phi func control} and $\delta_0\in (0,1)$ we have
\[1-\boldsymbol{\xi} \cdot\mathbf{n}_\e \geq 1-\phi\(\frac {\mathrm{d}_\Sigma}{\delta_0}\) \geq \min \(\frac {\mathrm{d}_\Sigma^2}{2\delta_0^2}, 1-\phi(\tfrac 1 2)\)\geq C_{\phi,\delta_0} \min(\mathrm{d}_\Sigma^2,1).\label{lowerbdcali}\]
This together with \eqref{energy bound1} implies \eqref{energy bound3}.
\end{proof}
The following result was first proved in \cite{fischer2020convergence} for the scalar case, and was generalized to a matrix-valued one in \cite{MR4284534}.
We present the proof in Appendix \ref{appendix} for the convenience of the reader.
\begin{prop}\label{gronwallprop}
There exists a constant $C=C(\Sigma_t)$ depending on the interface $\Sigma_t$ such that
\begin{align}
\frac{d}{d t} E_\e [ \mathbf{u}_\e | \Sigma] &+\frac 1{2\e }\int \(\e ^2 | \partial_t \mathbf{u}_\e |^2-|\mathbf{H}_\e |^2\)\,dx+\frac 1{2\e }\int \Big| \mathbf{H}_\e -\e |\nabla \mathbf{u}_\e |\mathbf{H} \Big|^2\,dx \nonumber \\
&+\frac 1{2\e }\int \Big| \e \partial_t \mathbf{u}_\e -(\nabla\cdot \boldsymbol{\xi} )\partial \mathrm{d}_F (\mathbf{u}_\e ) \Big|^2\,dx \leq CE_\e [ \mathbf{u}_\e | \Sigma]. \label{gronwall}
\end{align}
\end{prop}
\section{Estimates of level sets}\label{sec level}
The main task of this section is to derive the convergence rate estimate \eqref{volume convergencethm} and use it to obtain fine estimates of the level sets of $\psi_\e$. We start with a corollary of Proposition \ref{gronwallprop}.
\begin{lemma}\label{lemma level}
There exists a constant $C=C( \Sigma_0)$ such that
\begin{subequations}
\begin{align}
&\sup_{t\in [0,T]} \int_\O \( \frac{\e}2 \left|\nabla \mathbf{u}_\e \right| ^2 +\frac1 {\e}{F (\mathbf{u}_\e )}-\boldsymbol{\xi}\cdot \nabla \psi_\e \)\, d x\leq C\e,\label{calibration est2}\\
\label{energy bound4}
&\sup_{t\in [0,T]} \int_\O \( \left|\nabla \mathbf{u}_\e -\Pi_{\mathbf{u}_\e }\nabla \mathbf{u}_\e \right|^2 \)\, dx+ \int_0^T\int_\O \( \left|\partial_t \mathbf{u}_\e -\Pi_{\mathbf{u}_\e }\partial_t \mathbf{u}_\e \right|^2 \)\, dxdt\leq C,\\
&\sup_{t\in [0,T]} A_\e(\mathbf{u}_\e(\cdot,t)) + \sup_{t\in [0,T]} \|\nabla\psi_\e(\cdot, t)\|_{L^1(\O) } \leq C,\label{nablapsiest}\\
&\sup_{t\in [0,T]} \int_\O |\nabla \psi_\e|-\boldsymbol{\xi}\cdot \nabla \psi_\e \, d x\leq C\e.\label{calibration est1}
\end{align}
\end{subequations}
Moreover, for any fixed $\delta\in (0, \delta_I)$, there holds
\begin{subequations}\label{est away}
\begin{align}\label{space der bound local}
\sup_{t\in [0,T]}\int_{\O^\pm_t\backslash B_\delta(\Sigma_t)}\(|\nabla \mathbf{u}_\e |^2+\frac{{F(\mathbf{u}_\e )}}{\e ^2}\)\, dx \leq \delta^{-2}C,\\
\int_0^T\int_{\O^\pm_t\backslash B_\delta(\Sigma_t)} |\partial_t \mathbf{u}_\e |^2\, dx dt\leq \delta^{-2}C.\label{time der bound local}
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
We first record the following consequence of \eqref{gronwall} and the assumption \eqref{initial}:
\begin{align}\label{energy1}
& \sup_{t\in [0,T]} \frac{1}\e E_\e [ \mathbf{u}_\e | \Sigma] (t)+\frac 1{\e ^2}\int_0^T\int_\O \Big| \e \partial_t \mathbf{u}_\e -\partial \mathrm{d}_F (\mathbf{u}_\e ) (\nabla\cdot\boldsymbol{\xi}) \Big|^2\, dxdt \nonumber \\
& +\frac 1{\e ^2}\int_0^T\int_\O \(\e ^2 | \partial_t \mathbf{u}_\e |^2-|\mathbf{H}_\e |^2+ \Big| \mathbf{H}_\e -\e \mathbf{H} |\nabla \mathbf{u}_\e |\Big|^2\)\, dxdt\nonumber\\
&\qquad \leq \frac 1\e e^{(1+T)C( \Sigma_0)} E_\e [\mathbf{u}_\e | \Sigma](0) \leq e^{(1+T)C( \Sigma_0)}. \end{align}
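Let us briefly indicate how \eqref{energy1} follows. By \eqref{Ginzburg-Landau}, \eqref{mean curvature app} and the Cauchy--Schwarz inequality we have $|\mathbf{H}_\e |\leq \e |\partial_t \mathbf{u}_\e |$, so all three dissipation terms on the left-hand side of \eqref{gronwall} are non-negative. Hence \eqref{gronwall} first gives $\frac{d}{dt}E_\e [ \mathbf{u}_\e | \Sigma]\leq C E_\e [ \mathbf{u}_\e | \Sigma]$, and Gr\"{o}nwall's lemma together with \eqref{initial} yields
\[\sup_{t\in [0,T]}E_\e [ \mathbf{u}_\e | \Sigma](t)\leq e^{CT} E_\e [ \mathbf{u}_\e | \Sigma](0)\leq C\e. \]
Integrating \eqref{gronwall} in time then bounds the time-integrated dissipation terms by $E_\e [ \mathbf{u}_\e | \Sigma](0)+C\int_0^T E_\e [ \mathbf{u}_\e | \Sigma]\, dt\leq C\e$, which gives \eqref{energy1}.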
Now we show that the integrand in the third term on the left-hand side of \eqref{energy1} is non-negative.
By \eqref{Ginzburg-Landau} and \eqref{mean curvature app} we have $\mathbf{H}_\e =-\e \partial_t \mathbf{u}_\e \cdot\frac{\nabla \mathbf{u}_\e }{|\nabla \mathbf{u}_\e |}$ whenever $\nabla \mathbf{u}_\e\neq 0$. Using this, we can expand this integrand and apply the Cauchy--Schwarz inequality to obtain
\begin{align*}
&~\e ^2 | \partial_t \mathbf{u}_\e |^2-|\mathbf{H}_\e |^2+ \Big| \mathbf{H}_\e -\e \mathbf{H} |\nabla \mathbf{u}_\e |\Big|^2\\
=&~\e ^2 | \partial_t \mathbf{u}_\e |^2+\e ^2 |\mathbf{H}|^2 |\nabla \mathbf{u}_\e |^2+2\e ^2 (\mathbf{H}\cdot \nabla) \mathbf{u}_\e \cdot\partial_t \mathbf{u}_\e \\
\geq &~\e ^2|\partial_t \mathbf{u}_\e +(\mathbf{H} \cdot\nabla) \mathbf{u}_\e |^2.
\end{align*}
In particular, since $E_\e [ \mathbf{u}_\e | \Sigma ]$ coincides with the integral in \eqref{calibration est2} (cf. \eqref{E decom1}), the first term in \eqref{energy1} yields \eqref{calibration est2}. Moreover, the above pointwise lower bound together with \eqref{energy1} implies
\begin{equation}\label{time est1}
\int_0^T\int_\Omega |\partial_t \mathbf{u}_\e +(\mathbf{H} \cdot\nabla) \mathbf{u}_\e |^2\, dx dt\leq e^{(1+T)C( \Sigma_0)}.
\end{equation}
On the other hand, using the orthogonal projection \eqref{projection1}, we obtain
\begin{align*}
\Big| \e \partial_t \mathbf{u}_\e -\partial \mathrm{d}_F (\mathbf{u}_\e ) (\nabla\cdot\boldsymbol{\xi}) \Big|^2=\Big| \e \partial_t \mathbf{u}_\e -\e \Pi_{\mathbf{u}_\e } \partial_t \mathbf{u}_\e \Big|^2+\Big| \e \Pi_{\mathbf{u}_\e } \partial_t \mathbf{u}_\e -\partial \mathrm{d}_F (\mathbf{u}_\e ) (\nabla\cdot\boldsymbol{\xi}) \Big|^2.
\end{align*}
This together with \eqref{energy1} yields
\begin{align}\label{energy2}
\frac 1{\e ^2}\int_0^T\int_\O \Big| \e \partial_t \mathbf{u}_\e -\e \Pi_{\mathbf{u}_\e } \partial_t \mathbf{u}_\e \Big|^2 \leq e^{(1+T)C( \Sigma_0)}. \end{align}
The estimate \eqref{energy2} together with \eqref{energy bound0} and \eqref{energy1} implies \eqref{energy bound4}.
Concerning \eqref{nablapsiest}, the dissipation law \eqref{dissipation} gives $A_\e(\mathbf{u}_\e(\cdot,t))\leq A_\e (\mathbf{u}_\e (\cdot,0))$ for all $t\in [0,T]$, and
\[A_\e(\mathbf{u}_\e(\cdot,t))\overset{\eqref{projection1},\eqref{eq:2.7global}}\geq \int_\O \(\frac \e 2|\Pi_{\mathbf{u}_\e} \nabla \mathbf{u}_\e|^2+\frac 1{2\e}|\partial \mathrm{d}_F(\mathbf{u}_\e)|^2\)\, dx\geq \int_\O |\nabla\psi_\e|\, dx.\]
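Here the last inequality is a pointwise application of Young's inequality combined with \eqref{projectionnorm}:
\[\frac \e 2|\Pi_{\mathbf{u}_\e} \nabla \mathbf{u}_\e|^2+\frac 1{2\e}|\partial \mathrm{d}_F(\mathbf{u}_\e)|^2\geq |\Pi_{\mathbf{u}_\e} \nabla \mathbf{u}_\e|\,|\partial \mathrm{d}_F(\mathbf{u}_\e)|\overset{\eqref{projectionnorm}}=|\nabla \psi_\e|.\]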
Hence \eqref{nablapsiest} follows from the uniform boundedness of the initial energy $A_\e(\mathbf{u}_\e(\cdot,0))$.
The estimate \eqref{calibration est1} follows from \eqref{E decom1} and \eqref{calibration est2}, since the first two integrals on the right-hand side of \eqref{E decom1} are non-negative and $|\nabla\psi_\e |(1-\boldsymbol{\xi} \cdot\mathbf{n}_\e )=|\nabla \psi_\e|-\boldsymbol{\xi}\cdot \nabla \psi_\e$.
Concerning \eqref{space der bound local}, it follows from \eqref{energy bound3} and \eqref{calibration est2}, since $\min(\mathrm{d}_\Sigma^2,1)\geq \min(\delta^2,1)$ on $\O^\pm_t\backslash B_\delta(\Sigma_t)$. Finally, combining \eqref{space der bound local} with \eqref{time est1} and the boundedness of $\mathbf{H}$ leads us to \eqref{time der bound local}.
\end{proof}
We shall use \eqref{est away} together with the method of Chen--Struwe \cite{MR990191} to show that the weak limit of $\mathbf{u}_\e$ are harmonic heat flows from the bulk regions $\O_t^\pm$ to $\mathfrak{m}_\pm$ respectively. However, the bulk potential \eqref{bulk potential} depends on the relative distances to these two manifolds, and we must find a quantitative way to distinguish them. This is done in the following:
\begin{theorem}\label{thm volume convergence}
Under the assumptions of Theorem \ref{main thm}, there exists $C>0$ independent of $\e$ and $t\in [0,T]$ so that
\[\int_\O| \psi_\e-c_F \mathbf{1}_{\O_t^+} | \, dx\leq C\e^{1/2}.\label{volume convergence}\]
\end{theorem}
\begin{proof}
The proof will be done in two steps.
{\it Step 1: derivation of differential inequalities.}
Let $\chi(x,t)=\mathbf{1}_{\O_t^+}-\mathbf{1}_{\O_t^-}$ and let $\eta(\cdot)$ be the truncation of the identity map
\[\eta(x)=\left\{
\begin{array}{rl}
x\qquad \text{when } &x \in [-\delta_0, \delta_0],\\
\delta_0\qquad \text{when } &x\geq \delta_0,\\
-\delta_0\qquad \text{when } &x\leq -\delta_0,
\end{array}
\right.\label{truncation eta}\]
and we denote $\zeta=|\eta|$.
It follows from \eqref{projection1} and the generalized chain rule \eqref{ADM chain rule} that
\begin{align}\label{volume evo1}
\partial_t \psi_\e
=&\(\partial_t \mathbf{u}_\e+(\mathbf{H}\cdot\nabla)\mathbf{u}_\e\)\cdot \partial \mathrm{d}_F(\mathbf{u}_\e) -\mathbf{H}\cdot\nabla\psi_\e
\end{align}
In the sequel, $(h)^\pm$ indicates the positive/negative part of a function $h(x)$. By \cite[pp. 153]{MR3409135},
\[\partial_{x_i} (h(x))^+= (\partial_{x_i} h(x)) \mathbf{1}_{\{x:h(x)>0\}}(x)\quad \text{for } a.e.~~x.\label{der positive part}\]
So we can write, using the formula $f=f^+-f^-$, that
\[2\psi_\e-c_F=2(\psi_\e-c_F)^++ c_F -2(\psi_\e-c_F)^-.\label{psi deco}\]
We shall establish differential inequalities for the two quantities which sum up to \eqref{gronwall2new}:
\begin{subequations}\label{gronwall0}
\begin{align}
g_\e(t):=&\int \( \psi_\e-c_F\)^+\zeta\circ\mathrm{d}_\Sigma \, dx\label{gronwall1}\\
h_\e(t):=&\int \Big(c_F\chi-c_F+ 2(\psi_\e-c_F)^- \Big)\eta\circ\mathrm{d}_\Sigma \, dx\label{gronwall2}
\end{align}
\end{subequations}
Since $\psi_\e\geq 0$, we have $(\psi_\e-c_F)^-\in [0,c_F]$ and thus $c_F -2(\psi_\e-c_F)^-$ ranges in $ [-c_F,c_F]$.
In view of \eqref{truncation eta}, we have $\eta \chi\geq 0$, so the integrands of these two energies are both non-negative.
By \eqref{initial},
\[g_\e(0)+h_\e(0)\lesssim \e.\]
Now we proceed with the derivation of Gr\"{o}nwall-type inequalities for $g_\e$ and $h_\e$.
We compute
\begin{align*}
g_\e'(t)
\overset{\eqref{volume evo1}}=&\int_{\{ \psi_\e > c_F\}} (\partial_t \mathbf{u}_\e+(\mathbf{H}\cdot\nabla)\mathbf{u}_\e)\cdot \partial \mathrm{d}_F(\mathbf{u}_\e) \zeta(\mathrm{d}_\Sigma) \\
&-\int_{\{ \psi_\e > c_F\}} \mathbf{H}\cdot \nabla\psi_\e \zeta(\mathrm{d}_\Sigma) +\int \(\psi_\e-c_F\)^+ \partial_t \zeta(\mathrm{d}_\Sigma) \\
\overset{\eqref{der positive part} }{=}&\int_{\{ \psi_\e > c_F\}} (\partial_t \mathbf{u}_\e+(\mathbf{H}\cdot\nabla)\mathbf{u}_\e)\cdot\partial \mathrm{d}_F(\mathbf{u}_\e)\zeta(\mathrm{d}_\Sigma) \\
&-\int \mathbf{H}\cdot \nabla \(\psi_\e-c_F\)^+ \zeta(\mathrm{d}_\Sigma) -\int \(\psi_\e-c_F\)^+ \mathbf{H}\cdot\nabla \zeta(\mathrm{d}_\Sigma) \\
&+\int \(\partial_t \zeta(\mathrm{d}_\Sigma)+\mathbf{H}\cdot\nabla \zeta(\mathrm{d}_\Sigma)\) \(\psi_\e-c_F\)^+\end{align*}
Finally, integrating by parts, we can combine the second and the third integrals and obtain
\begin{align*}
g_\e'(t)
\overset{\eqref{eq:2.7global}}\leq &\int_{\{ \psi_\e > c_F\}} \left|(\partial_t \mathbf{u}_\e+(\mathbf{H}\cdot\nabla)\mathbf{u}_\e)\cdot\frac{\partial \mathrm{d}_F(\mathbf{u}_\e)}{|\partial \mathrm{d}_F(\mathbf{u}_\e)|} \sqrt{2F(\mathbf{u}_\e)}\right| \zeta(\mathrm{d}_\Sigma) \\
&+\int (\div \mathbf{H}) \(\psi_\e-c_F\)^+ \zeta(\mathrm{d}_\Sigma) +\int \(\partial_t \zeta(\mathrm{d}_\Sigma)+\mathbf{H}\cdot\nabla \zeta(\mathrm{d}_\Sigma)\) \(\psi_\e-c_F\)^+\nonumber\\
&\overset{\eqref{mcf}}\leq \int \e \Big|\partial_t \mathbf{u}_\e+(\mathbf{H}\cdot\nabla)\mathbf{u}_\e\Big|^2+ \int \frac 1{\e}{F(\mathbf{u}_\e)}\zeta^2(\mathrm{d}_\Sigma) +Cg_\e(t)\\
&\overset{ \eqref{energy bound3}}\leq \int \e \Big|\partial_t \mathbf{u}_\e+(\mathbf{H}\cdot\nabla)\mathbf{u}_\e\Big|^2+CE_\e[\mathbf{u}_\e |\Sigma] +Cg_\e(t). \end{align*}
In view of \eqref{time est1}, we can apply Gr\"{o}nwall's lemma and obtain $g_\e(t)\leq C \e$.
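More precisely, the above chain of inequalities has the form $g_\e'(t)\leq a_\e(t)+C g_\e(t)$ with $a_\e(t):= \int \e \big|\partial_t \mathbf{u}_\e+(\mathbf{H}\cdot\nabla)\mathbf{u}_\e\big|^2\,dx+C E_\e[\mathbf{u}_\e |\Sigma](t)$, and $\int_0^T a_\e(t)\,dt\leq C\e$ by \eqref{time est1} and \eqref{energy1}. Hence
\[
g_\e(t)\leq e^{Ct}\Big(g_\e(0)+\int_0^t a_\e(s)\, ds\Big)\leq C\e \qquad \text{for all } t\in [0,T].
\]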
A similar calculation applies to $h_\e$. For brevity we denote $$w_\e:=c_F\chi-c_F+ 2(\psi_\e-c_F)^-.$$ Using $\partial_i \chi\,\eta (\mathrm{d}_\Sigma)\equiv 0$ (in the sense of distributions), we find
\[\partial_i w_\e \eta(\mathrm{d}_\Sigma)=2\partial_i \psi_\e \mathbf{1}_{\{\psi_\e< c_F\}}\eta(\mathrm{d}_\Sigma)\quad \text{for } a.e.~~x.\] So by the same calculation for $g_\e$ we obtain
\begin{align*}
h_\e'(t)
\leq &\int_{\{ \psi_\e < c_F\}} 2\left|(\partial_t \mathbf{u}_\e+(\mathbf{H}\cdot\nabla)\mathbf{u}_\e)\cdot\frac{\partial \mathrm{d}_F(\mathbf{u}_\e)}{|\partial \mathrm{d}_F(\mathbf{u}_\e)|} \sqrt{2F(\mathbf{u}_\e)}\right| \zeta(\mathrm{d}_\Sigma) \\
&+\int (\div \mathbf{H})w_\e\eta(\mathrm{d}_\Sigma) +\int \Big(\partial_t \eta(\mathrm{d}_\Sigma)+\mathbf{H}\cdot\nabla \eta(\mathrm{d}_\Sigma)\Big) w_\e\\
\leq & \int \e \Big|\partial_t \mathbf{u}_\e+(\mathbf{H}\cdot\nabla)\mathbf{u}_\e\Big|^2+C E_\e[\mathbf{u}_\e |\Sigma] +Ch_\e(t),\end{align*}
and $h_\e(t)\leq C\e$ follows from the Gr\"{o}nwall lemma.
Finally,
\begin{align}
&\int |2\psi_\e-c_F-c_F\chi|\zeta(\mathrm{d}_\Sigma)\nonumber\\
& \overset{\eqref{psi deco}}\leq \int 2(\psi_\e-c_F)^+\zeta(\mathrm{d}_\Sigma) + \int \Big|c_F -2(\psi_\e-c_F)^--c_F\chi\Big| \zeta(\mathrm{d}_\Sigma)\nonumber\\
& = 2g_\e+h_\e\leq C\e.\label{gronwall3}
\end{align}
{\it Step 2: pass to the unweighted inequality.}
We first note that \eqref{gronwall3} implies \eqref{volume convergence} with $\O$ replaced by $\O\backslash B_{\delta_0}(\Sigma_t)$, since $\zeta(\mathrm{d}_\Sigma)\equiv\delta_0$ there. So we shall focus on the estimate in $B_{\delta_0}(\Sigma_t)$.
We shall use the following elementary estimate
\[
\left(\int_{0}^{\delta_0}|f(r)| \, dr\right)^2 \leq 2\|f\|_{\infty} \int_{0}^{\delta_0}|f(r)| r \, dr\quad \forall f \in L^{\infty}\left(0, \delta_0\right).
\]
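For completeness, this inequality can be verified by symmetrizing the square of the integral:
\[
\left(\int_{0}^{\delta_0}|f(r)| \, dr\right)^2=2\int_0^{\delta_0}\!\!\int_0^{r}|f(r)|\,|f(s)| \, ds\, dr\leq 2\|f\|_{\infty} \int_{0}^{\delta_0}|f(r)|\, r \, dr .
\]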
Let $\chi_\e:=\frac{2\psi_\e-c_F}{c_F}$, so that $c_F|\chi-\chi_\e|=|2\psi_\e-c_F-c_F\chi|=2|\psi_\e-c_F \mathbf{1}_{\O_t^+}|$ a.e. in $\O$; moreover $\|\chi-\chi_\e\|_{L^\infty}\lesssim 1$ by \eqref{L infinity bound1} and the Lipschitz continuity of $\mathrm{d}_F$.
For each fixed $p\in \Sigma_t$, applying the above inequality to $f(y)=\left|\chi-\chi_\e\right|(p+y\mathbf{n},t)$ and recalling $\mathrm{d}_\Sigma(p+y\mathbf{n},t)=y$, we obtain
\begin{align*}
&\(\int_{0}^{\delta_0}\left|\chi (p+ y \mathbf{n}, t )-\chi_\e (p+ y \mathbf{n}, t )\right| \, dy\)^2\\
\lesssim &\int_{0}^{\delta_0}\left|\chi (p+y \mathbf{n}, t )-\chi_\e (p+y \mathbf{n}, t )\right| \mathrm{d}_\Sigma\left(p+y \mathbf{n}, t \right) \, dy.
\end{align*}
Integrating over $p\in\Sigma_t$ and using the Cauchy--Schwarz inequality together with the smoothness of the tubular-neighborhood coordinates, we can estimate, in the $\delta_0$-neighborhood of $\Sigma_t$,
\begin{align*}
& \(\int_{B_{\delta_0}(\Sigma_t) }\left|\chi(x,t)-\chi_\e(x, t)\right| \, dx\)^2 \\
\lesssim& \(\sum_{\pm} \int_{\Sigma_t } \int_{0}^{\delta_0}\left| \chi\left(p\pm y \mathbf{n}, t\right)-\chi_\e (p\pm y \mathbf{n}, t )\right| \, dy\, d S(p)\)^2 \\
\lesssim& \int_{\Sigma_t }\int_{-\delta_0}^{\delta_0}\left|\chi\left(p+y \mathbf{n}, t\right)-\chi_\e\left(p+y \mathbf{n}, t\right)\right| \left|\mathrm{d}_{\Sigma}\left(p+y \mathbf{n}, t \right)\right| \, dy\, d S(p) \\
\lesssim& \int_{B_{\delta_0}(\Sigma_t) }\left|\chi(x, t)-\chi_\e(x, t)\right| \left|\mathrm{d}_\Sigma(x, t )\right| \, dx .
\end{align*}
Since $\zeta(\mathrm{d}_\Sigma)=|\mathrm{d}_\Sigma|$ in $B_{\delta_0}(\Sigma_t)$, the right-hand side is bounded by $C\e$ thanks to \eqref{gronwall3}. Hence $\int_{B_{\delta_0}(\Sigma_t)}|\chi-\chi_\e|\,dx\lesssim \e^{1/2}$, which, in view of $c_F|\chi-\chi_\e|=2|\psi_\e-c_F \mathbf{1}_{\O_t^+}|$, yields \eqref{volume convergence} in $B_{\delta_0}(\Sigma_t)$ and completes the proof.
\end{proof}
\begin{corollary}\label{global control prop}
There exists a sequence of $\e_k\downarrow 0$ and $ \mathbf{u}^\pm(x,t)$
with
\begin{align}\label{u equal 0 region}
\mathbf{u}^\pm\in L^\infty(0,T; L^\infty(\O)\cap H^1_{loc}(\O^\pm_t;\mathfrak{m}_\pm))
,\quad \partial_t \mathbf{u}^\pm\in L^2(0,T; L^2_{loc}(\O^\pm_t;\mathfrak{m}_\pm))
\end{align}
such that $\mathbf{u}_k:=\mathbf{u}_{\e_k}$ satisfies
\begin{subequations}\label{weak strong convergence}
\begin{align}
\partial_t \mathbf{u}_k\xrightarrow{ k\to\infty } \partial_t \mathbf{u}^\pm &~\text{weakly in}~ L^2(0,T;L^2_{loc}(\O^\pm_t)),\label{deri con2}\\
\nabla \mathbf{u}_k\xrightarrow{k\to\infty } \nabla \mathbf{u}^\pm &~\text{weakly in}~ L^\infty(0,T;L^2_{loc}(\O^\pm_t)),\label{deri con}\\
\mathbf{u}_k\xrightarrow{k\to\infty } \mathbf{u}^\pm & ~\text{strongly in}~ C([0,T];L^2_{loc}(\O^\pm_t)).\label{deri con1}
\end{align}
\end{subequations}
\end{corollary}
\begin{proof}
It follows from \eqref{L infinity bound1}, \eqref{space der bound local} and \eqref{time der bound local} that, for any $\delta\in (0,\delta_0)$, there exists a subsequence $\e_k=\e_k(\delta)>0$ such that
\begin{subequations}\label{udelta conv}
\begin{align}
\mathbf{u}_{\e_k}\xrightarrow{k\to\infty } \mathbf{u}^\pm&~\text{weakly-star in}~ L^\infty(0,T ;L^\infty(\O))\label{convergence L4},\\
\partial_t \mathbf{u}_{\e_k}\xrightarrow{ k\to\infty } \partial_t \bar{\mathbf{u}}_\delta^\pm&~\text{weakly in}~ L^2(0,T;L^2(\O^\pm_t\backslash B_\delta(\Sigma_t))),\label{convergence weak time der}\\
\nabla \mathbf{u}_{\e_k}\xrightarrow{k\to\infty } \nabla \bar{\mathbf{u}}_\delta^\pm&~\text{weakly-star in}~ L^\infty(0,T;L^2(\O^\pm_t\backslash B_\delta(\Sigma_t))),\label{convergence weak gradient}
\end{align}
\end{subequations}
and $\mathbf{u}^\pm=\bar{\mathbf{u}}_\delta^\pm$ a.e. in $U_\pm(\delta):=\cup_{t\in [0,T]} \(\O_t^\pm\backslash B_\delta(\Sigma_t)\)\times \{t\}$. This combined with \eqref{convergence weak time der} and \eqref{convergence weak gradient} leads to
\begin{equation}\label{regular limit}
\mathbf{u}^\pm\in L^\infty(0,T;H^1_{loc}(\O^\pm_t)) ~\text{with}~\partial_t \mathbf{u}^\pm\in L^2(0,T;L^2_{loc}(\O^\pm_t)).
\end{equation}
It remains to show that $\mathbf{u}^\pm$ are mappings into $\mathfrak{m}_\pm$.
By a diagonal argument we obtain \eqref{weak strong convergence}. Using \eqref{deri con1}, \eqref{space der bound local}, and Fatou's lemma, we deduce that $F(\mathbf{u}^\pm)=0$ a.e. in $\O^\pm_t$ for a.e. $t\in [0,T]$. In view of \eqref{limit manifold} we deduce that the images of $\mathbf{u}^\pm$ lie in $\mathfrak{m}$. By \eqref{deri con1} and \eqref{L infinity bound1}
\[\psi_{\e_k}\overset{\eqref{psi}}=\mathrm{d}_F\circ \mathbf{u}_k \xrightarrow{k\to\infty} \mathrm{d}_F\circ \mathbf{u}^\pm \text{ strongly in }C([0,T];L^2_{loc}(\O^\pm_t)).\]
This together with \eqref{volume convergence} and \eqref{eq:1.6} yields that $\mathbf{u}^\pm$ maps into $\mathfrak{m}_\pm$ respectively. Combining this with \eqref{regular limit} yields \eqref{u equal 0 region}.
\end{proof}
\begin{lemma}\label{area control}
For any $\delta\in (0,\delta_0)$, there exist two numbers $b^\pm_\delta \in [ \delta,2\delta]$ such that the sets
\[\{x: \psi_\e > c_F-b^+_\delta \}\text{ and }\{x:\psi_\e <b^-_\delta\}\] have finite perimeters and
\begin{align}
& \left|\mathcal{H}^{d-1}\(\{x:\psi_\e =c_F-b^+_\delta\}\)-\mathcal{H}^{d-1} (\Sigma_t)\right|\nonumber\\&+\left|\mathcal{H}^{d-1}\(\{x:\psi_\e =b^-_\delta\}\)-\mathcal{H}^{d-1} (\Sigma_t)\right| \leq C \e^{1/2}\delta^{-1}.\label{area compare}
\end{align}
\end{lemma}
\begin{proof}
For any $\delta<\delta_0\ll c_F$, we denote \footnote{
Note that this definition only applies within the proof of the lemma.}
\[\O_t^{\e,\delta}=\{x\in \O:c_F-2\delta< \psi_\e(x,t) < c_F-\delta\}\]
Recall the co-area formula for BV functions \cite[section 5.5]{MR3409135}, which implies in particular that the superlevel sets $\{x:\psi_\e(x,t)>s\}$ have finite perimeter for almost every $s$.
We shall denote the corresponding (measure-theoretic) outer normal vector by $\nu$. Using \eqref{calibration est1}, the co-area formula and the Gauss--Green theorem, we deduce for almost every $\delta \in (0,\delta_0)$ that
\begin{align*}
C\e \overset{\eqref{calibration est1}}\geq & \int_{ \O_t^{\e,\delta}} \(|\nabla\psi_\e|- \boldsymbol{\xi}\cdot \nabla\psi_\e\)\, dx \qquad (\geq 0)\\
=&\int_{c_F -2\delta}^{c_F-\delta} \mathcal{H}^{d-1}\(\{x:\psi_\e =s\}\)\, ds- \int_{ \partial \O_t^{\e,\delta}} \boldsymbol{\xi}\cdot \nu \psi_\e \, d\mathcal{H}^{d-1} +\int_{ \O_t^{\e,\delta}} (\div \boldsymbol{\xi}) \psi_\e \, dx
\end{align*}
where $\nu$ is the outward unit normal of the set under integration, defined on its (measure-theoretic) boundaries.
So we obtain
\begin{align}
\Big|\int_{c_F -2\delta}^{c_F-\delta} \mathcal{H}^{d-1}\(\{x:\psi_\e =s\}\)\, ds - \int_{ \partial \O_t^{\e,\delta}} \boldsymbol{\xi}\cdot \nu \psi_\e \, d\mathcal{H}^{d-1}\Big| \lesssim \e + |\O_t^{\e,\delta} |\label{volume est1}
\end{align}
Applying the divergence theorem and adding zero, the second integral on the LHS can be written as
\begin{align}\label{volume est2}
& \int_{ \partial \O_t^{\e,\delta}} \boldsymbol{\xi}\cdot \nu \psi_\e \, d\mathcal{H}^{d-1}\nonumber\\
=& \(c_F-2\delta \) \int_{ \{\psi_\e> c_F-2\delta\}} \div \boldsymbol{\xi} \, dx+ (c_F-\delta) \int_{ \{\psi_\e< c_F- \delta\}} \div \boldsymbol{\xi} \, dx\nonumber\\
&\underbrace{-\(c_F-2\delta \)\int_{\O_t^+}\div \boldsymbol{\xi}\, dx-(c_F-\delta) \int_{\O_t^-}\div \boldsymbol{\xi}\, dx+ \delta \mathcal{H}^{d-1} (\Sigma_t)}_{=0}
\end{align}
Note that, by the orientation of $\nu$ and $\boldsymbol{\xi}$, the three terms in the last line of \eqref{volume est2} sum up to zero. So, merging the terms with the same prefactors on the RHS of \eqref{volume est2} and then substituting \eqref{volume est2} into \eqref{volume est1} yields
\begin{align}\label{volume est3}
&\left| \int_{c_F -2\delta}^{c_F-\delta} \mathcal{H}^{d-1}\(\{x:\psi_\e =s\}\)\, ds-\delta \mathcal{H}^{d-1} (\Sigma_t)\right|\nonumber\\
&\lesssim \e + |\O_t^{\e,\delta}|+ \Big| \O_t^+\triangle {\{x: \psi_\e> c_F-2\delta\}} \Big| + \Big| \O_t^- \triangle {\{x: \psi_\e < c_F -\delta\}} \Big|
\end{align}
where $A\triangle B=(A-B)\cup (B-A)$ is the symmetric difference of two sets $A,B$.
We rewrite the last two terms as
\begin{align*}
r_\e^+:= &~\Big| \O_t^+\triangle {\{x: \psi_\e> c_F-2\delta\}} \Big|\\
=&~\Big| {\{x\in \O_t^+: \psi_\e\leq c_F-2\delta\}} \Big|+ \Big| {\{x\in \O_t^-: \psi_\e >c_F-2\delta\}} \Big|,\\
r_\e^-:= &~\Big| \O_t^-\triangle {\{x: \psi_\e < c_F- \delta\}} \Big|\\
=&~\Big| {\{x\in \O_t^-: \psi_\e\geq c_F- \delta\}} \Big|+ \Big| {\{x\in \O_t^+: \psi_\e < c_F-\delta\}} \Big|.\end{align*}
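For instance, on the set $\{x\in \O_t^+: \psi_\e\leq c_F-2\delta\}$ we have $|\psi_\e-c_F\mathbf{1}_{\O_t^+}|=c_F-\psi_\e\geq 2\delta$, so Chebyshev's inequality and \eqref{volume convergence} give
\[\Big|\{x\in \O_t^+: \psi_\e\leq c_F-2\delta\}\Big|\leq \frac 1{2\delta}\int_\O |\psi_\e-c_F\mathbf{1}_{\O_t^+}|\, dx\lesssim \e^{1/2}\delta^{-1}.\]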
The remaining three terms are estimated in the same way, so that $r_\e^-+r_\e^+\lesssim \e^{1/2}\delta^{-1}$; substituting this estimate into \eqref{volume est3} leads to
\begin{align}\label{volume est4}
&\left| \frac 1 \delta \int_{c_F-2 \delta}^{c_F-\delta} \mathcal{H}^{d-1}\(\{x:\psi_\e =s\}\)\, ds- \mathcal{H}^{d-1} (\Sigma_t)\right|\leq C\e^{1/2}\delta^{-1}.
\end{align}
So the existence of $b^+_\delta\in [\delta,2\delta]$ satisfying \eqref{area compare} follows by Fubini's theorem. The other case can be proved in the same way and we omit the proofs here.
\end{proof}
\begin{prop}\label{prop estimates}
There exists a subsequence $\e_k\downarrow 0$ and $b^\pm_k \in [1/k,2/k]$ so that the sets
\begin{subequations}\label{omegasets}
\begin{align}
\O_t^{k,+}&=\{x\in \O: \psi_k(x,t) > c_F-b^+_k\},\\
\O_t^{k,-}&=\{x\in \O: \psi_k(x,t)< b^-_k\}
\end{align}
\end{subequations}
have uniformly bounded perimeters,
where $\psi_k:=\psi_{\e_k}$, and
\begin{align}\label{area compare1}
&\left|\mathcal{H}^{d-1}(\partial \O_t^{k,\pm})-\mathcal{H}^{d-1} (\Sigma_t)\right| \leq C \e_k^{1/4},\\
& \mathbf{1}_{\O_t^{k,\pm}}\xrightarrow{k\to \infty} \mathbf{1}_{\O_t^\pm}\text{ weakly-star in }BV(\O).\label{area converge}
\end{align}
Moreover there exists some $K_1>0$ so that for any $k>K_1$, the solution $\mathbf{u}_k:=\mathbf{u}_{\e_k}$ satisfies
\begin{align}
&\mathbf{u}_k(\O_t^{k,\pm})\subset B_{\delta_0}(\mathfrak{m}_\pm)\label{ukin ball},\\
&\sup_{t\in [0,T]} \int_{\O_t^{k,\pm}} \Big| \nabla P_{\mathfrak{m}} (\mathbf{u}_k) \Big|^2\, dx+ \int_0^T\int_{\O_t^{k,\pm}} \Big| \partial_t P_{\mathfrak{m}} (\mathbf{u}_k) \Big|^2\, dxdt\leq e^{(1+T)C( \Sigma_0)}\label{local energy}
\end{align}
\end{prop}
\begin{proof}
For each $k\in\mathbb{N}$ larger than the integer part of $2/\delta_0$, we choose $\delta=1/k$ in Lemma \ref{area control}. This yields $b_k^\pm \in [1/k,2/k]$ so that
\[ \left|\mathcal{H}^{d-1}\(\{x:\psi_\e =c_F-b^+_k\}\)-\mathcal{H}^{d-1} (\Sigma_t)\right|\leq \e^{1/2}k\]
Choosing $\e_k \in (0,k^{-4})$ leads to the `plus' cases of \eqref{area compare1}. By \eqref{volume convergence} we have for each fixed $k$ that
\[\mathbf{1}_{\{x:\psi_\e > c_F-b^+_k\}}\xrightarrow{\e\to 0}\mathbf{1}_{\O_t^+} \text{ strongly in } L^1(\O).\]
So by a diagonal argument,
we find $\e_k \in (0,k^{-4})$ so that
\[\mathbf{1}_{\O_t^{k,+}}\xrightarrow{k\to \infty} \mathbf{1}_{\O_t^+}\text{ strongly in } L^1(\O).\]
This combined with \eqref{area compare1} implies the `plus' case of \eqref{area converge}.
The `minus' cases can be done in the same way.
Concerning \eqref{local energy}, by \eqref{quasidistance} and $b_k^\pm\to 0$, there exists $K_1>0$ so that for any $k\geq K_1$ there holds
\begin{align*}
\mathrm{d}_F\circ \mathbf{u}_k>c_F-b_k^+\text{ implies } \mathbf{u}_k\in B_{\delta_0}(\mathfrak{m}_+),\\
\mathrm{d}_F\circ \mathbf{u}_k< b_k^-\text{ implies } \mathbf{u}_k\in B_{\delta_0}(\mathfrak{m}_-).
\end{align*}
So for $k\geq K_1$, we have \eqref{ukin ball}, and
the nearest point projection $P_{\mathfrak{m}}$ onto $\mathfrak{m}$ is smooth in $B_{\delta_0}(\mathfrak{m})$. So we have
\[\mathbf{u}_k=P_{\mathfrak{m}}(\mathbf{u}_k)+\mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k) \partial \mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k), \]
where $\mathrm{d}_\mathfrak{m}$ is the signed-distance function, which is defined by \eqref{dN global} and is smooth in $B_{\delta_0}(\mathfrak{m})$.
Differentiating the above equation gives
\[\partial_{x_i} \mathbf{u}_k=\partial_{x_i} \(P_{\mathfrak{m}} (\mathbf{u}_k)\)+\partial_{x_i} \mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k) \partial \mathrm{d}_{\mathfrak{m}}( \mathbf{u}_k )+\mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k) \partial_{x_i} \partial \mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k).\label{decom near m}\]
Note that the last term is orthogonal to $\partial \mathrm{d}_{\mathfrak{m}}( \mathbf{u}_k )$ by $|\partial\mathrm{d}_{\mathfrak{m}}|=1$. So we have
\begin{align}
|\partial_{x_i} \mathbf{u}_k|^2& =|\partial_{x_i} P_{\mathfrak{m}} (\mathbf{u}_k)|^2+|\partial_{x_i} \mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k)|^2+\mathrm{d}_{\mathfrak{m}}^2(\mathbf{u}_k) |\partial_{x_i} \partial \mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k)|^2\nonumber\\
&\quad +2 \mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k) \partial_{x_i} \(P_{\mathfrak{m}} (\mathbf{u}_k)\)\cdot \partial_{x_i} \partial \mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k)\nonumber\\
&\geq (1-C\delta_0)|\partial_{x_i} P_{\mathfrak{m}} (\mathbf{u}_k)|^2+|\partial_{x_i} \mathbf{u}_k\cdot \partial \mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k)|^2+\mathrm{d}_{\mathfrak{m}}^2(\mathbf{u}_k) |\partial_{x_i} \partial \mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k)|^2.
\label{nonlinear decom}
\end{align}
On the other hand, by \eqref{projection1}
\[|\Pi_{\mathbf{u}_k} \partial_{x_i} \mathbf{u}_k|^2=|\partial_{x_i} \mathbf{u}_k\cdot\partial \mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k) |^2\label{linear decom}\]
Subtracting \eqref{linear decom} from \eqref{nonlinear decom}, and using the orthogonality of the projection \eqref{projection1}, we obtain
\begin{align*}
&~|\partial_{x_i} \mathbf{u}_k -\Pi_{\mathbf{u}_k} \partial_{x_i} \mathbf{u}_k|^2 \\
=&~ |\partial_{x_i} \mathbf{u}_k|^2-|\Pi_{\mathbf{u}_k} \partial_{x_i} \mathbf{u}_k|^2 \\
\geq &~ (1-C\delta_0)|\partial_{x_i} P_{\mathfrak{m}} (\mathbf{u}_k)|^2 +\mathrm{d}_{\mathfrak{m}}^2(\mathbf{u}_k) |\partial_{x_i} \partial \mathrm{d}_{\mathfrak{m}}(\mathbf{u}_k)|^2.
\end{align*}
This together with \eqref{energy bound4} implies \eqref{local energy}.
\end{proof}
\begin{theorem}\label{thm usesbv}
The limits $\mathbf{u}^\pm$ in \eqref{u equal 0 region} are weak solutions of harmonic heat flows from $\cup_{t\in [0,T]} \O_t^\pm$ to $\mathfrak{m}_\pm$ respectively and satisfy, in addition to \eqref{u equal 0 region}, that
\begin{align}
\mathbf{u}^\pm\in L^2(0,T; H^1(\O_t^\pm,\mathfrak{m}_\pm))\label{upm regu}
\end{align}
Moreover, with the notations in Proposition \ref{prop estimates}, for a.e. $t\in [0,T]$ there holds
\begin{align}
\mathbf{v}_k(\cdot,t):=\sum_\pm P_{\mathfrak{m}_\pm} \circ \mathbf{u}_k(\cdot,t) ~\mathbf{1}_{\O_t^{k,\pm} } &\xrightarrow{k\to\infty} \mathbf{u}=\sum_{\pm}\mathbf{u}^\pm(\cdot,t) ~\mathbf{1}_{\O_t^\pm } \text{ weakly-star in } BV(\O)\label{strong convergence of u1},\\
\nabla^{a} \mathbf{v}_k & \xrightarrow{k\to\infty} \sum_\pm\mathbf{1}_{\O_t^\pm} \nabla \mathbf{u}^\pm \text{ weakly in } L^1(\O),\\
\sum_\pm \int_{\O_t^\pm} |\nabla \mathbf{u}^\pm(\cdot,t)|^2\, dx&\leq \liminf_{k\to \infty}\sum_\pm \int_{\O_t^{k,\pm}} \left|\nabla^a \mathbf{v}_k(\cdot,t)\right|^2\,dx,\label{lower semi sbv}
\end{align}
where $\nabla^{a} \mathbf{v}_k$ denotes the absolutely continuous part of the distributional gradient of $\mathbf{v}_k$.
\end{theorem}
\begin{proof}
The maps $\mathbf{v}_k(\cdot,t)$ are bounded in $L^\infty(\O)$, and by \eqref{local energy} we deduce that their distributional derivatives have no Cantor parts. Moreover, the absolutely continuous parts and the jump sets enjoy the estimates \eqref{local energy} and \eqref{area compare1} respectively.
So it follows from Proposition \ref{AFP2} that $\{\mathbf{v}_k(\cdot,t)\}$ is compact in $SBV(\O)$: there exists $\mathbf{v}\in SBV(\O)$ so that $\mathbf{v}_k\to\mathbf{v}$ weakly-star in $BV(\O)$ as $k\to \infty$,
and the absolutely continuous part of the gradient
\[\nabla^{a} \mathbf{v}_k= \sum_\pm \nabla P_{\mathfrak{m}_\pm} (\mathbf{u}_k) ~\mathbf{1}_{\O_t^{k,\pm} } \xrightarrow{k\to\infty} \nabla^{a} \mathbf{v} \text{ weakly in } L^1(\O).\]
To identify $\mathbf{v}$, we combine \eqref{area converge} with \eqref{deri con1} and deduce that $\mathbf{v}=\sum_\pm \mathbf{1}_{\O_t^\pm}\mathbf{u}^\pm$ a.e., and thus \eqref{strong convergence of u1} is proved. By the lower semicontinuity of SBV functions (cf. \eqref{LSC sbv}), we get \eqref{lower semi sbv}, and thus we can improve the spatial regularity in \eqref{u equal 0 region} to \eqref{upm regu}.
Finally, combining \eqref{est away} and \eqref{weak strong convergence} with the argument of Chen--Struwe \cite{MR990191} implies that $\mathbf{u}^\pm: \cup_{t\in [0,T]} \O_t^\pm\to \mathfrak{m}_\pm$ are weak solutions to harmonic heat flows, respectively.
\end{proof}
\section{Proof of Theorem \ref{main thm}}\label{sec mp}
We first recall that the estimate \eqref{intro cali} is proved in Lemma \ref{lemma level} (cf. \eqref{calibration est2}). The estimate \eqref{volume convergencethm} is obtained in Theorem
\ref{thm volume convergence}. Now we prove \eqref{intro energy conv}: for every $t\in [0,T]$,
\begin{align}\label{calibration est3}
&\lim_{\e\to 0}\int_\O\left(\frac {\e}2\left|\nabla \mathbf{u}_\e \right|^2+ \frac{F(\mathbf{u}_\e)}\e \right) d x\nonumber \\\nonumber&\overset{\eqref{calibration est2}}=\lim_{\e\to 0}\int_\O \boldsymbol{\xi}\cdot\nabla \psi_\e\, dx\\\nonumber&\overset{\eqref{bc n and H}}=\lim_{\e\to 0}-\int_\O \div \boldsymbol{\xi} \, \psi_\e\, dx\nonumber\\
&\overset{\eqref{volume convergence}}= c_F\int_{\O_t^+} \div \boldsymbol{\xi}
\overset{\eqref{def:xi}}=c_F\mathcal{H}^{d-1}(\Sigma_t),
\end{align}
where we used the divergence theorem in the last step.
The convergence \eqref{strong global of Q} is obtained (along with others) in Corollary \ref{global control prop}. The fact that the limits $\mathbf{u}^\pm$ are weak solutions of harmonic heat flows with regularity \eqref{reg limit} has been established in Theorem \ref{thm usesbv}.
It remains to prove the minimal pair boundary condition \eqref{thm minimal pair}.
We shall argue for a.e. $t\in [0,T]$.
By \eqref{upm regu} and the trace theorem, for any $\tau\in [0,\delta_0)$ we have $\mathbf{u}^\pm(p\pm \tau \mathbf{n}_{\Sigma}(p))\in \mathfrak{m}_\pm$; recall that $\mathcal{C}_F(\cdot,\cdot)$ is well-defined and Lipschitz continuous on $\mathfrak{m}_+\times\mathfrak{m}_-$.
By \eqref{upm regu} and Sobolev's trace estimate, we have for a.e. $s\in (0,\delta_0)$ that
\[ \mathbf{u}^\pm \left(p\pm s \mathbf{n}_{\Sigma}(p)\right)\xrightarrow{s\to 0} \mathbf{u}^\pm (p ) \text{ strongly in } L^2(\Sigma_t).\]
This combined with Lipschitz continuity of $\mathcal{C}_F$ yields for a.e. $s\in (0,\delta_0)$,
\[ \mathcal{C}_F\(\mathbf{u}^+(p+s \mathbf{n}_{\Sigma}(p)),\mathbf{u}^-(p-s \mathbf{n}_{\Sigma}(p))\)\xrightarrow{s\to 0} \mathcal{C}_F(\mathbf{u}^+ (p ) , \mathbf{u}^-(p))\text{ strongly in } L^2(\Sigma_t).\label{uniform convergence slice2}\]
On the other hand, \eqref{deri con1} implies the strong convergence of $\mathbf{u}_k=\mathbf{u}_{\e_k}$ on almost every slice. More precisely, for a.e. $s\in (0,\delta_0)$, there holds
\[\mathbf{u}_k \left(p\pm s \mathbf{n}_{\Sigma}(p)\right)\xrightarrow{k\to\infty} \mathbf{u}^\pm(p\pm s \mathbf{n}_{\Sigma}(p))\text{ strongly in } L^2(\Sigma_t).\label{uniform convergence slice4}\]
Now we shall derive a lower bound of the Ginzburg--Landau energy on the $s$-neighborhood of $\Sigma_t$. By a change of variables $z=\e_k y$ we have
\begin{align}\label{der mp1}
&\int_\O\left(\frac {\e_k}2\left|\nabla \mathbf{u}_k \right|^2+\e_k^{-1} F (\mathbf{u}_k )\right) d x\nonumber\\
\geq & \int_{\Sigma_t} \int_{-s}^{s}\left(\frac {\e_k}2|\partial_s \mathbf{u}_k (p+ z\mathbf{n}_\Sigma(p)) ) |^2 +\e_k^{-1} F\left(\mathbf{u}_k (p+ z\mathbf{n}_\Sigma(p))\right)\right) d z \, d\mathcal{H}^{d-1}(p)\nonumber\\
=& \int_{\Sigma_t} \int_{ -\frac{s}{\e_k}} ^{\frac{s}{\e_k}}\left(\frac 12|\partial_y \mathbf{u}_k (p+ \e_k y \mathbf{n}_\Sigma(p)) |^2+F \(\mathbf{u}_k (p+ \e_k y \mathbf{n}_\Sigma(p)) \)\right) d y \, d\mathcal{H}^{d-1}(p)\nonumber\\
\geq & \int_{\Sigma_t} \alpha_k(p,s) \, d\mathcal{H}^{d-1}(p),
\end{align}
where $\alpha_k(p,s)$ (with $p \in \Sigma_t, $ and a.e. $s\in (-\delta_0,0)\cup (0,\delta_0)$) is defined by
$$
\begin{aligned}
\alpha_k(p,s):=\inf &\left\{\int_{-\frac{s}{\e_k}}^{\frac{s}{\e_k}}\left(\frac 12|\boldsymbol{\gamma }'(y)|^2+F(\boldsymbol{\gamma }(y))\right) d y:\right.\\
&\left.\boldsymbol{\gamma } \in H^{1}\left(\left(-\frac{s}{\e_k}, \frac{s}{\e_k}\right), \mathbb{R}^n\right), \boldsymbol{\gamma }\left(\pm \frac{s}{\e_k}\right)=\mathbf{u}_k \left(p \pm s \mathbf{n}_{\Sigma}(p)\right)\right\}.
\end{aligned}
$$
By \eqref{uniform convergence slice4} and \eqref{der mc3}, for a.e. $s\in (0,\delta_0)$ we have
\[\alpha_k(p,s)\xrightarrow{k\to\infty} \mathcal{C}_F(\mathbf{u}^+(p+s \mathbf{n}_{\Sigma}(p)),\mathbf{u}^-(p-s \mathbf{n}_{\Sigma}(p))) \text{ for }\mathcal{H}^{d-1}- a.e. \quad p\in \Sigma_t.\]
Combining this with \eqref{uniform convergence slice2} and a diagonal process, we can pass to a sequence $s_k\downarrow 0$ such that
\[\alpha_k(p,s_k)\xrightarrow{k\to\infty} \mathcal{C}_F(\mathbf{u}^+(p),\mathbf{u}^-(p)) \text{ for }\mathcal{H}^{d-1}- a.e. \quad p\in \Sigma_t.\label{alphaconv1}\]
Now we turn to the estimate of the Ginzburg--Landau energy:
combining \eqref{calibration est3} with \eqref{der mp1}
yields
\begin{align}\label{der mp2}
c_F\mathcal{H}^{d-1}(\Sigma_t) &\overset{\eqref{calibration est3}}=\liminf_{k\to\infty}\int_\O\left(\frac {\e_k}2\left|\nabla \mathbf{u}_k \right|^2+ \e_k^{-1} F(\mathbf{u}_k)\right) d x\nonumber\\
&\overset{\eqref{der mp1}}\geq \liminf_{k\to\infty} \int_{\Sigma_t} \alpha_k(p,s_k) \, d\mathcal{H}^{d-1}(p) \nonumber\\
&\overset{\eqref{alphaconv1}}\geq \int_{\Sigma_t} \mathcal{C}_F(\mathbf{u}^+(p),\mathbf{u}^-(p)) \, d\mathcal{H}^{d-1}(p).
\end{align}
Note that in the last step we used \eqref{alphaconv1} and Fatou's lemma.
By Lemma \ref{lemma mc}, we have $\mathcal{C}_F(\mathbf{u}^+(p),\mathbf{u}^-(p))\geq c_F$ for $\mathcal{H}^{d-1}$-a.e. $p\in \Sigma_t$, and by \eqref{der mp2} we actually obtain equality here.
So the minimal pair boundary condition \eqref{thm minimal pair} follows from \eqref{rigidity cf}.
\section{Construction of initial data: Proof of Theorem \ref{thm init}}\label{sec initial data}
We shall first modify and extend $\mathbf{u}^{in}_\pm$ in the transitional region $B_\delta(\Sigma_0)$ so that we can glue them into a new mapping that fulfills the desired properties in Theorem \ref{thm init}.
Let $\Psi_\delta: \O_0^\pm\backslash B_\delta(\Sigma_0)\mapsto \O_0^\pm$ be a global diffeomorphism up to the boundary such that
\begin{align}
\Psi_\delta(p\pm \delta\mathbf{n}_{\Sigma_0}(p))&=p,\quad \forall p\in \Sigma_0,\\
\Psi_\delta(x)&=x,\quad \forall x\in \O_0^\pm\backslash B_{2\delta}(\Sigma_0).\label{psi fix}
\end{align}
We extend $\mathbf{u}_{in}^\pm$ to some $\mathbf{u}_0^{\pm}\in H^1(\O_0^\pm\cup \overline{B_{\delta}(\Sigma_0)},\mathfrak{m}_\pm)$ defined by
\[\mathbf{u}_0^{\pm}=\left\{\begin{split}
\mathbf{u}_{in}^\pm\circ \Psi_\delta &\text{ in } &\O_0^\pm\backslash B_\delta(\Sigma_0),\\
\mathbf{u}_{in}^\pm\circ P_{\Sigma_0} &\text{ in } & \overline{B_\delta(\Sigma_0)}.
\end{split}\label{u in extension}
\right.\]
Note that the maps $\mathbf{u}_0^\pm$ in \eqref{u in extension} are modifications of $\mathbf{u}_{in}^\pm$ which are constant in the normal direction within $B_\delta(\Sigma_0)$. This combined with \eqref{MC initial data} yields
\begin{align}
(\mathbf{u}_0^+(x),\mathbf{u}_0^-(x))|_{ B_\delta(\Sigma_0)}\text{ being minimal pairs}.\label{initial mp}
\end{align}
We shall construct $\mathbf{u}_\e^{in}$ by gluing $\mathbf{u}_0^\pm$. To this end, we define a cut-off function
\[\eta_\delta\in C_c^\infty(B_\delta(\Sigma_0);[0,1])~\text{ with }\eta_\delta=1\text{ in }B_{\delta/2}(\Sigma_0).\label{cut-off eta delta2}\]
Recall the optimal profile $\alpha$ \eqref{optimal profile alpha} and the cut-off function \eqref{cut-off eta delta}.
We define
\[S_\e(x)=\eta_\delta(x,0) \alpha\(\tfrac{\mathrm{d}_\Sigma(x,0)}\e\)+(1-\eta_\delta(x,0))\tfrac {\mathrm{dist}_\mathfrak{m}}2 \(\mathbf{1}_{\O^+_0}(x)-\mathbf{1}_{\O^-_0}(x)\).\label{se def}\]
Note that the discontinuity caused by $\mathbf{1}_{\O^\pm_0}$ is cut off by $\eta_\delta$. So $S_\e$ is a smooth function, and we extract its leading order by
\[S_\e(x)= \alpha\(\tfrac {\mathrm{d}_\Sigma(x,0)}\e\)-\hat{S}_\e(x),\]
where $\hat{S}_\e$ is the tail term
\[\hat{S}_\e(x)=(1-\eta_\delta(x,0))\(\alpha\(\tfrac {\mathrm{d}_\Sigma(x,0)}\e\)-\tfrac {\mathrm{dist}_\mathfrak{m}}2 \(\mathbf{1}_{\O^+_0}(x)-\mathbf{1}_{\O^-_0}(x)\)\).\label{hat s def}\]
We note that
$\mathrm{d}_\Sigma(x,0)$ is Lipschitz continuous in $\O$, and by Rademacher's theorem $|\nabla \mathrm{d}_\Sigma(x,0)|\leq 1$ a.e. in $\O$. This combined with \eqref{exp alpha}
yields
\[\|\hat{S}_\e\|_{L^{\infty}(\O)}+\|\nabla \hat{S}_\e\|_{L^{\infty}(\O)}\leq C e^{-C/\e},\label{exp hatS}\]
and thus
\begin{subequations}
\begin{align}
S_\e(x)&= \alpha\(\tfrac {\mathrm{d}_\Sigma(x,0)}\e\)+O(e^{-C/\e})\text{ in } \O,\label{def Se1}\\
\nabla S_\e(x)&= \tfrac {\nabla \mathrm{d}_\Sigma(x,0)}\e\,\alpha'\(\tfrac {\mathrm{d}_\Sigma(x,0)}\e\)+O(e^{-C/\e})\quad \text{a.e. in } \O,\label{def Se2}\\
S_\e(x)&\xrightarrow{\e\to 0} \tfrac {\mathrm{dist}_\mathfrak{m}}2 \(\mathbf{1}_{\O^+_0}(x)-\mathbf{1}_{\O^-_0}(x)\)\text{ a.e. in }\O.
\end{align}
\end{subequations}
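For orientation, the construction of $S_\e$ can be visualized in a one-dimensional toy situation. In the sketch below we assume that the interface is $\{x=0\}$ with $\mathrm{d}_\Sigma(x,0)=x$, and we take $\alpha(s)=\tfrac{\mathrm{dist}_\mathfrak{m}}2\tanh(s)$ as a stand-in for the optimal profile \eqref{optimal profile alpha} (this explicit profile corresponds to a quartic potential and is only an assumption of the illustration, as is the particular cut-off). The output confirms numerically that the tail term $\hat{S}_\e$ is exponentially small, in accordance with \eqref{exp hatS}.
\begin{verbatim}
# 1-D illustration of S_eps and of the smallness of the tail \hat S_eps.
# Assumptions (illustrative only): interface {x=0}, d_Sigma(x,0)=x,
# alpha(s) = (dist_m/2)*tanh(s), and a generic cut-off eta_delta.
import numpy as np

dist_m, delta = 2.0, 0.2
alpha = lambda s: 0.5 * dist_m * np.tanh(s)

def eta(x):                      # cut-off: 1 on |x|<delta/2, 0 on |x|>delta
    t = np.clip((np.abs(x) - delta / 2) / (delta / 2), 0.0, 1.0 - 1e-12)
    return np.where(np.abs(x) >= delta, 0.0,
           np.where(np.abs(x) <= delta / 2, 1.0,
                    np.exp(1.0 - 1.0 / (1.0 - t**2))))

def S(x, eps):                   # the glued profile of (se def)
    bulk = 0.5 * dist_m * np.sign(x)        # (dist_m/2)(1_{O+} - 1_{O-})
    return eta(x) * alpha(x / eps) + (1.0 - eta(x)) * bulk

x = np.linspace(-0.5, 0.5, 4001)
for eps in (0.05, 0.02, 0.01, 0.005):
    tail = np.max(np.abs(S(x, eps) - alpha(x / eps)))   # sup norm of \hat S_eps
    print(eps, tail)             # decays exponentially in 1/eps, cf. (exp hatS)
\end{verbatim}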
Using \eqref{u in extension} and \eqref{se def}, we define $\mathbf{u}_\e^{in}$ by
\begin{align}\label{uu initial}
\mathbf{u}_\e^{in}(x)=\tfrac {\mathbf{u}_0^+(x)+\mathbf{u}_0^-(x)}2 +S_\e(x)\tfrac{\mathbf{u}_0^+(x)-\mathbf{u}_0^-(x)}{\mathrm{dist}_\mathfrak{m}}.
\end{align}
We claim \eqref{u coincide} holds. Indeed in the domains $\O_0^\pm\backslash B_{2\delta}(\Sigma_0)$, we have $\eta_\delta=0$ and thus
\[\mathbf{u}_\e^{in}\overset{\eqref{se def}}=\tfrac {\mathbf{u}_0^+ +\mathbf{u}_0^-}2+\tfrac {\mathrm{dist}_\mathfrak{m}}2 (\mathbf{1}_{\O^+_0} -\mathbf{1}_{\O^-_0})\tfrac{\mathbf{u}_0^+(x)-\mathbf{u}_0^-(x)}{\mathrm{dist}_\mathfrak{m}}=\sum_\pm \mathbf{u}_0^\pm\mathbf{1}_{\O^\pm_0}. \]
This combined with \eqref{u in extension} yields
\[\mathbf{u}_\e^{in}(x) \overset{\eqref{u in extension}}=\mathbf{u}_{in}^\pm\circ \Psi_\delta(x)\overset{\eqref{psi fix}}=\mathbf{u}_{in}^\pm(x),\quad \forall x\in \O_0^\pm\backslash B_{2\delta}(\Sigma_0).\]
Now we turn to the verification of \eqref{u cali}.
Substituting \eqref{def Se1} and \eqref{def Se2} into \eqref{uu initial}, we obtain
\begin{align}\label{grad uuin}
\nabla\mathbf{u}_\e^{in}&=\tfrac {\nabla\mathbf{u}_0^++\nabla\mathbf{u}_0^-}2 +\tfrac{\nabla\mathrm{d}_\Sigma}\e\alpha'(\tfrac{\mathrm{d}_\Sigma}{\e}) \tfrac {\mathbf{u}_0^+-\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\nonumber\\
&+\alpha(\tfrac{\mathrm{d}_\Sigma}{\e}) \tfrac {\nabla\mathbf{u}_0^+ -\nabla\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}+ O(e^{-C/\e})\(|\nabla\mathbf{u}_0^+|+|\nabla\mathbf{u}_0^-|\)\qquad \text{a.e. in }\O.
\end{align}
\subsection{Proof of \eqref{u cali}: Estimate near $\Sigma_0$}
By \eqref{initial mp} and the second part of Lemma \ref{lemma mc}, we have
\[(\mathbf{u}_0^+(x)-\mathbf{u}_0^-(x))\perp_{\mathbb{R}^n} T_{\mathbf{u}_0^\pm(x)}\mathfrak{m}_\pm,\quad \forall x\in B_\delta(\Sigma_0).\] Since $\mathbf{u}_0^\pm$ map into $\mathfrak{m}_\pm$ respectively, we have $\nabla \mathbf{u}_0^\pm(x)\in T_{\mathbf{u}_0^\pm(x)}\mathfrak{m}_\pm$. So we have for $1\leq i\leq d$ that
\[(\mathbf{u}_0^+(x)-\mathbf{u}_0^-(x))\cdot \partial_{x_i} \mathbf{u}_0^\pm(x)=0,\quad \forall x\in B_\delta(\Sigma_0).\label{mc orth gradi}\]
Using this equation we can compute the square of \eqref{grad uuin}. First, the square of the second term on the right-hand side of \eqref{grad uuin} is
\[\left|\tfrac{\nabla\mathrm{d}_\Sigma}\e\alpha'(\tfrac{\mathrm{d}_\Sigma}{\e}) \tfrac {\mathbf{u}_0^+-\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\right|^2\overset{\eqref{initial mp}}=\e^{-2} \(\alpha'(\tfrac{\mathrm{d}_\Sigma}{\e})\)^2\text{ in }B_{\delta}(\Sigma_0).\]
This together with \eqref{mc orth gradi} enables us to compute the square of \eqref{grad uuin}:
\begin{align}\label{grad uuin1}
|\nabla\mathbf{u}_\e^{in}|^2=& \e^{-2} \(\alpha'(\tfrac{\mathrm{d}_\Sigma}{\e})\)^2 +\left|\tfrac{\nabla\mathbf{u}_0^++\nabla\mathbf{u}_0^-}2+\alpha(\tfrac{\mathrm{d}_\Sigma}{\e}) \tfrac{\nabla\mathbf{u}_0^+ -\nabla\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\right|^2\nonumber\\
&+ O(e^{-C/\e})\(|\nabla\mathbf{u}_0^+|+|\nabla\mathbf{u}_0^-|\)^2\qquad \text{ in }B_{\delta}(\Sigma_0).
\end{align}
By \eqref{def Se1} and \eqref{uu initial}, we have
\[F(\mathbf{u}_\e^{in})=F\(\tfrac {\mathbf{u}_0^++\mathbf{u}_0^-}2 +\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\tfrac {\mathbf{u}_0^+-\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\)+O(e^{-C/\e})\qquad \text{ in } \O.\label{F init expand1}\]
To be more explicit on the RHS of \eqref{F init expand1}, we first deduce from \eqref{optimal profile alpha} that the curve
\[\boldsymbol{\gamma}:\mathbb{R}\to\mathbb{R}^n,\qquad \boldsymbol{\gamma}(s)= \tfrac {\mathbf{u}_0^++\mathbf{u}_0^-}2 +\alpha\(s\)\tfrac {\mathbf{u}_0^+-\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}},\]
parametrizes the line segment $\overline{\mathbf{u}_0^-\mathbf{u}_0^+}$, and $\boldsymbol{\gamma}(0)=\tfrac {\mathbf{u}_0^++\mathbf{u}_0^-}2$ is its midpoint.
For $x\in B_\delta(\Sigma_0)\cap \O_0^+$, we have $\alpha(\tfrac{\mathrm{d}_\Sigma(x)}{\e})>0$ and, by \eqref{odd increase}, the distance from $\boldsymbol{\gamma }(\frac{\mathrm{d}_\Sigma(x)}\e)$ to $\mathfrak{m}$ equals the distance between $\boldsymbol{\gamma }(\frac{\mathrm{d}_\Sigma(x)}\e)$ and $\mathbf{u}_0^+\in\mathfrak{m}_+$. So
\[\left|\mathrm{d}_\mathfrak{m}\(\tfrac {\mathbf{u}_0^++\mathbf{u}_0^-}2 +\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\tfrac {\mathbf{u}_0^+-\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\)\right|=\left|\tfrac{\mathrm{dist}_\mathfrak{m}}2-\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\right| \left| \tfrac {\mathbf{u}_0^+-\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\right|
\overset{\eqref{initial mp}}=\left|\tfrac{\mathrm{dist}_\mathfrak{m}}2-\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\right| \label{dist repre}\]
on $ B_\delta(\Sigma_0)\cap \O_0^+$, and thus
\begin{align}\label{dist repre1}
&F\(\tfrac {\mathbf{u}_0^++\mathbf{u}_0^-}2 +\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\tfrac {\mathbf{u}_0^+-\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\)\nonumber\\
\overset{\eqref{bulk potential},\eqref{dist repre}}=&f\(\left|\tfrac{\mathrm{dist}_\mathfrak{m}}2-\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\right|^2\)
\overset{\eqref{centralized potential}}=\tilde{F}\(\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\).
\end{align}
A similar calculation covers the case $x\in B_\delta(\Sigma_0)\cap \O_0^-$; to summarize, we obtain from \eqref{F init expand1} and \eqref{dist repre1} that
\[F(\mathbf{u}_\e^{in})=\tilde{F}\(\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\)+O(e^{-C/\e})\qquad \text{ in } B_\delta(\Sigma_0).\label{F init expand2}\]
Now we compute $\boldsymbol{\xi}\cdot\nabla( \mathrm{d}_F \circ \mathbf{u}_\e^{in})$:
\begin{align}\label{F init expand3}
-\int_\O \eta_\delta \boldsymbol{\xi}\cdot\nabla(\mathrm{d}_F \circ \mathbf{u}_\e^{in}) &\overset{\eqref{cut-off eta delta2}}= \int_\O \div (\eta_\delta \boldsymbol{\xi} ) \mathrm{d}_F \circ \mathbf{u}_\e^{in} \nonumber\\&
\overset{\eqref{uu initial}}= \int_\O \div (\eta_\delta\boldsymbol{\xi} ) \mathrm{d}_F \(\tfrac{\mathbf{u}_0^+ +\mathbf{u}_0^- }2 +S_\e \tfrac{\mathbf{u}_0^+ -\mathbf{u}_0^- }{\mathrm{dist}_\mathfrak{m}}\)
\end{align}
Using \eqref{def Se1} and the Lipschitz property of $\mathrm{d}_F$, we can write
\begin{align}\label{F init expand4}
-\int_\O \eta_\delta\boldsymbol{\xi}\cdot\nabla(\mathrm{d}_F \circ \mathbf{u}_\e^{in}) =& \int_\O \div( \eta_\delta \boldsymbol{\xi} ) \mathrm{d}_F \(\tfrac{\mathbf{u}_0^+ +\mathbf{u}_0^- }2 +\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\tfrac{\mathbf{u}_0^+ -\mathbf{u}_0^- }{\mathrm{dist}_\mathfrak{m}}\) +O(e^{-C/\e}).
\end{align}
Thanks to \eqref{initial mp}, we can find the value of $\mathrm{d}_F$ on $B_\delta(\Sigma_0)$: there are four cases in the definition \eqref{quasidistance} of $\mathrm{d}_F$, but along a line segment inside a minimal connection only the second and the fourth cases occur. To be more precise, for any $ x\in B_\delta(\Sigma_0)\cap \O_0^+$, since $\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})>0$ and $(\mathbf{u}_0^+, \mathbf{u}_0^-)$ is a minimal pair (cf. \eqref{initial mp}), we have
\begin{align*}
&\mathrm{d}_F \(\tfrac{\mathbf{u}_0^+ +\mathbf{u}_0^-}2 +\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\tfrac{\mathbf{u}_0^+ -\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\)\\
& \overset{\eqref{quasidistance}}=c_F-\int_0^{\mathrm{d}_{\mathfrak{m}_+}\big(\tfrac{\mathbf{u}_0^+ +\mathbf{u}_0^-}2 +\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\tfrac{\mathbf{u}_0^+ -\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\big)}\sqrt{2f(\lambda^2)}\, d\lambda\\
& =c_F-\int_0^{\frac{\mathrm{dist}_\mathfrak{m}}2-\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})}\sqrt{2f(\lambda^2)}\, d\lambda\\
& \overset{\eqref{centralized potential}}=c_F-\int_{\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})}^{\frac{\mathrm{dist}_\mathfrak{m}}2}\sqrt{2\tilde{F}(\lambda)}\, d\lambda\qquad \forall x\in B_\delta(\Sigma_0)\cap \O_0^+.
\end{align*}
A similar calculation applies to the case $ x\in B_\delta(\Sigma_0)\cap \O_0^-$, where $\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})<0$:
\begin{align*}
&\mathrm{d}_F \(\tfrac{\mathbf{u}_0^+ +\mathbf{u}_0^-}2 +\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\tfrac{\mathbf{u}_0^+ -\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\)\\
& \overset{\eqref{quasidistance}}= \int_0^{\mathrm{d}_{\mathfrak{m}_-}\(\tfrac{\mathbf{u}_0^+ +\mathbf{u}_0^-}2 +\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\tfrac{\mathbf{u}_0^+ -\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\)}\sqrt{2f(\lambda^2)}\, d\lambda\\
& \overset{\eqref{linpanwang cf equ}}=\int_0^{\frac{\mathrm{dist}_\mathfrak{m}}2+\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})}\sqrt{2f(\lambda^2)}\, d\lambda\\
& \overset{\eqref{centralized potential}}=\int_{-\frac{\mathrm{dist}_\mathfrak{m}}2}^{\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})}\sqrt{2\tilde{F}(\lambda)}\, d\lambda\qquad \forall x\in B_\delta(\Sigma_0)\cap \O_0^-.
\end{align*}
To summarize, we have
\[\label{longcase1}
\mathrm{d}_F \(\tfrac{\mathbf{u}_0^+ +\mathbf{u}_0^-}2 +\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\tfrac{\mathbf{u}_0^+ -\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\)=\left\{
\begin{split}
c_F- \int_{\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})}^{\frac{\mathrm{dist}_\mathfrak{m}}2}\sqrt{2\tilde{F}(\lambda)}\, d\lambda\qquad \forall x\in B_\delta(\Sigma_0)\cap \O_0^+,\\
\int_{-\frac{\mathrm{dist}_\mathfrak{m}}2}^{\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})}\sqrt{2\tilde{F}(\lambda)}\, d\lambda
\qquad \forall x\in B_\delta(\Sigma_0)\cap \O_0^-.
\end{split}
\right.
\]
Recall from \eqref{cut-off eta delta2} that $\eta_\delta$ vanishes outside $B_\delta(\Sigma_0)$. So substituting \eqref{longcase1} into \eqref{F init expand4} and integrating by parts
yield
\begin{align}\label{F init expand5}
\int_\O \eta_\delta\boldsymbol{\xi}\cdot\nabla(\mathrm{d}_F \circ \mathbf{u}_\e^{in}) =& \int_\O \e^{-1} \eta_\delta\boldsymbol{\xi} \cdot \nabla\mathrm{d}_\Sigma \alpha'(\tfrac{\mathrm{d}_\Sigma}{\e}) \sqrt{2\tilde{F}\(\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\)}\, dx +O(e^{-C/\e}).
\end{align}
This combined with \eqref{F init expand2} and \eqref{grad uuin1} yields
\begin{align}\label{grad uuin2}
&\int_\O \(\frac 12 |\nabla\mathbf{u}_\e^{in}|^2+\frac{F(\mathbf{u}_\e^{in})}{\e^2}- \frac 1 \e \boldsymbol{\xi}\cdot\nabla(\mathrm{d}_F \circ \mathbf{u}_\e^{in})\)\eta_\delta\nonumber\\
&= \int_\O \(\e^{-2} \frac 12 \(\alpha'(\tfrac{\mathrm{d}_\Sigma}{\e})\)^2 +\e^{-2}\tilde{F}\(\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\) -\e^{-2} \boldsymbol{\xi} \cdot \nabla\mathrm{d}_\Sigma \alpha'(\tfrac{\mathrm{d}_\Sigma}{\e}) \sqrt{2\tilde{F}\(\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\)} \)\eta_\delta\nonumber\\
& +\int_\O \frac 12 \eta_\delta \left|\tfrac{\nabla\mathbf{u}_0^++\nabla\mathbf{u}_0^-}2+\alpha(\tfrac{\mathrm{d}_\Sigma}{\e}) \tfrac{\nabla\mathbf{u}_0^+ -\nabla\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\right|^2\nonumber\\
&+O(e^{-C/\e})\int_\O \(1+ |\nabla\mathbf{u}_0^+|^2+|\nabla\mathbf{u}_0^-|^2\)\eta_\delta.\end{align}
By \eqref{alpha'=} the first term on the right hand side above simplifies to
\begin{align}
\int_\O \eta_\delta\e^{-2} (1- \boldsymbol{\xi} \cdot \nabla\mathrm{d}_\Sigma) \(\alpha'(\tfrac{\mathrm{d}_\Sigma}{\e})\)^2\overset{\eqref{def:xi}}=\int_\O O(1)\eta_\delta \tfrac{\mathrm{d}_\Sigma^2}{\e^2} \(\alpha'(\tfrac{\mathrm{d}_\Sigma}{\e})\)^2\overset{\eqref{exp alpha}}=O(\e).
\end{align}
The above two equations together imply
\begin{align}\label{grad uuin3}
&\int_\O \(\frac 12 |\nabla \mathbf{u}_\e^{in}|^2+\frac{F(\mathbf{u}_\e^{in})}{\e^2}- \frac 1 \e \boldsymbol{\xi}\cdot\nabla(\mathrm{d}_F \circ \mathbf{u}_\e^{in})\)\eta_\delta\nonumber\\
&=\int_\O\frac 12 \eta_\delta \left|\tfrac{\nabla\mathbf{u}_0^++\nabla\mathbf{u}_0^-}2+\alpha(\tfrac{\mathrm{d}_\Sigma}{\e}) \tfrac{\nabla\mathbf{u}_0^+ -\nabla\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\right|^2+O(\e)\int_\O \(1+ |\nabla\mathbf{u}_0^+|^2+|\nabla\mathbf{u}_0^-|^2\)\eta_\delta.\end{align}
\subsection{Proof of \eqref{u cali}: Estimates away from $\Sigma_0$.}
Using \eqref{exp alpha}, we have
\[|\alpha'(\tfrac{\mathrm{d}_\Sigma}{\e})|+\left|\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})- \tfrac{\mathrm{dist}_\mathfrak{m}}2 (\mathbf{1}_{\O_0^+}-\mathbf{1}_{\O_0^-}) \right|\leq Ce^{-C/\e}\text{ in }\O\backslash B_{\delta/2}(\Sigma_0).\label{exp alpha1}\] Applying the above estimates to \eqref{grad uuin} yields
\begin{align}\label{grad uuin4}
\nabla\mathbf{u}_\e^{in} &= \tfrac {\nabla\mathbf{u}_0^++\nabla\mathbf{u}_0^-}2 +\tfrac{\mathrm{dist}_\mathfrak{m}}2 (\mathbf{1}_{\O_0^+}-\mathbf{1}_{\O_0^-})\tfrac {\nabla\mathbf{u}_0^+ -\nabla\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\nonumber\\
&\qquad + O(e^{-C/\e})\(1+|\nabla\mathbf{u}_0^+|\mathbf{1}_{\O_0^+}+|\nabla\mathbf{u}_0^-|\mathbf{1}_{\O_0^-}\)\nonumber\\
&=\nabla\mathbf{u}_0^+ \mathbf{1}_{\O_0^+}+\nabla\mathbf{u}_0^- \mathbf{1}_{\O_0^-}\nonumber\\
&\qquad + O(e^{-C/\e})\(1+|\nabla\mathbf{u}_0^+|\mathbf{1}_{\O_0^+}+|\nabla\mathbf{u}_0^-|\mathbf{1}_{\O_0^-}\)
\quad \text{a.e. in }\O\backslash B_{\delta/2}(\Sigma_0).
\end{align}
By \eqref{cut-off eta delta2}
the function $(1-\eta_\delta)$ vanishes on $B_{\delta/2}(\Sigma_0).$ So multiplying \eqref{grad uuin4} by this function yields
\begin{align}\label{grad uuin5}
(1-\eta_\delta)|\nabla\mathbf{u}_\e^{in}|^2 &=(1-\eta_\delta) \sum_\pm |\nabla\mathbf{u}_0^\pm |^2\mathbf{1}_{\O_0^\pm} \nonumber\\
&\quad + O(e^{-C/\e})\(1+\sum_\pm |\nabla\mathbf{u}_0^\pm |^2\mathbf{1}_{\O_0^\pm}\)\qquad \text{a.e. in }\O.\end{align}
Similar but easier calculation of \eqref{F init expand1} yields
\begin{align}\label{grad uuin6}
(1-\eta_\delta ) F(\mathbf{u}_\e^{in}) &= O(e^{-C/\e}) \qquad \text{ in }\O.\end{align}
By the Lipschitz continuity of $\mathrm{d}_F$, \eqref{uu initial} and \eqref{exp alpha1}, we have on $\O\backslash B_{\delta/2}(\Sigma_0)$ that
\begin{align*}
\mathrm{d}_F(\mathbf{u}_\e^{in})&\overset{\eqref{uu initial}}=\mathrm{d}_F\(\tfrac {\mathbf{u}_0^+ +\mathbf{u}_0^- }2 +S_\e \tfrac{\mathbf{u}_0^+ -\mathbf{u}_0^- }{\mathrm{dist}_\mathfrak{m}}\)\nonumber\\
&\overset{\eqref{def Se1}}=\mathrm{d}_F\(\tfrac {\mathbf{u}_0^+ +\mathbf{u}_0^- }2 +\alpha \tfrac{\mathbf{u}_0^+ -\mathbf{u}_0^- }{\mathrm{dist}_\mathfrak{m}}\)+O(e^{-C/\e})\nonumber\\
&\overset{\eqref{exp alpha1}}=\mathrm{d}_F\(\tfrac {\mathbf{u}_0^+ +\mathbf{u}_0^- }2 +\tfrac{\mathrm{dist}_\mathfrak{m}}2 (\mathbf{1}_{\O_0^+}-\mathbf{1}_{\O_0^-}) \tfrac{\mathbf{u}_0^+ -\mathbf{u}_0^- }{\mathrm{dist}_\mathfrak{m}}\)+O(e^{-C/\e})\nonumber\\
&\, ~=\mathrm{d}_F\(\mathbf{u}_0^+ \mathbf{1}_{\O_0^+}+ \mathbf{u}_0^- \mathbf{1}_{\O_0^-}\)+O(e^{-C/\e})
\end{align*}
As $\mathbf{u}_0^\pm$ map into $\mathfrak{m}_\pm$ (cf. \eqref{u in extension}), we have
\begin{align}\label{grad uuin7}
\mathrm{d}_F(\mathbf{u}_\e^{in})&\overset{\eqref{quasidistance}}=c_F \mathbf{1}_{\O_0^+}+O(e^{-C/\e})\quad \text{ in }\O\backslash B_{\delta/2}(\Sigma_0).\end{align}
As $(1-\eta_\delta)$ vanishes on $B_{\delta/2}(\Sigma_0)$,
\begin{align}\label{grad uuin8}
\div\((1-\eta_\delta)\boldsymbol{\xi}\)\mathrm{d}_F\circ \mathbf{u}_\e^{in}
\overset{\eqref{quasidistance}}=\div\((1-\eta_\delta)\boldsymbol{\xi}\) c_F \mathbf{1}_{\O_0^+} +O(e^{-C/\e}),
\end{align}
where $c_F$ is the constant \eqref{def cf}.
So we have
\begin{align}\label{grad uuin9}
-&\int_\O (1-\eta_\delta)\boldsymbol{\xi}\cdot\nabla \(\mathrm{d}_F\circ \mathbf{u}_\e^{in}\) \nonumber\\
&\overset{\eqref{bc n and H}}=\int_\O \div\((1-\eta_\delta)\boldsymbol{\xi}\)\mathrm{d}_F(\mathbf{u}_\e^{in}) \nonumber\\
&\overset{\eqref{grad uuin8}}= \int_{\O_0^+}\div\((1-\eta_\delta)\boldsymbol{\xi}\) c_F +O(e^{-C/\e}) \nonumber\\
&\overset{\eqref{grad uuin8}}= c_F\int_{\Sigma_0} (1-\eta_\delta)\boldsymbol{\xi}\cdot\nu \,d\mathcal{H}^{d-1} +O(e^{-C/\e})=O(e^{-C/\e}).
\end{align}
Putting \eqref{grad uuin5}, \eqref{grad uuin6} and \eqref{grad uuin9} together, we obtain
\begin{align}\label{grad uuin10}
&\int_\O \(\frac 12 |\nabla\mathbf{u}_\e^{in}|^2+\frac{F(\mathbf{u}_\e^{in})}{\e^2}- \frac 1 \e \boldsymbol{\xi}\cdot\nabla(\mathrm{d}_F \circ \mathbf{u}_\e^{in})\)(1-\eta_\delta)\nonumber\\
&=\sum_\pm\int_{\O_0^\pm} \frac{1-\eta_\delta}2 |\nabla\mathbf{u}_0^\pm|^2 +O(e^{-C/\e})+O(e^{-C/\e})\sum_\pm\int_{\O_0^\pm} |\nabla\mathbf{u}_0^\pm|^2 .\end{align}
Combining this with \eqref{grad uuin3} leads to
\begin{align}\label{grad uuin11}
&\int_\O \(\frac 12 |\nabla\mathbf{u}_\e^{in}|^2+\frac{F(\mathbf{u}_\e^{in})}{\e^2}- \frac 1 \e \boldsymbol{\xi}\cdot\nabla(\mathrm{d}_F \circ \mathbf{u}_\e^{in})\) \nonumber\\
&=\int_\O\frac 12 \eta_\delta \left|\tfrac{\nabla\mathbf{u}_0^++\nabla\mathbf{u}_0^-}2+\alpha(\tfrac{\mathrm{d}_\Sigma}{\e}) \tfrac{\nabla\mathbf{u}_0^+ -\nabla\mathbf{u}_0^-}{\mathrm{dist}_\mathfrak{m}}\right|^2+\sum_\pm\int_{\O_0^\pm} \frac{1-\eta_\delta}2 |\nabla\mathbf{u}_0^\pm|^2+O(\e) .\end{align}
This leads to \eqref{u cali}.
As a byproduct, we compute the limit of \eqref{grad uuin11}.
As $\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})\xrightarrow{\e\to 0} \tfrac{\mathrm{dist}_\mathfrak{m}}2 (\mathbf{1}_{\O_0^+}-\mathbf{1}_{\O_0^-})$ a.e. in $\O$, we can apply the dominated convergence theorem to the first integral on the RHS of \eqref{grad uuin11} and get
\begin{align}\label{grad uuin12}
\lim_{\e\to 0}\int_\O \(\frac 12 |\nabla\mathbf{u}_\e^{in}|^2+\frac{F(\mathbf{u}_\e^{in})}{\e^2}- \frac 1 \e \boldsymbol{\xi}\cdot\nabla(\mathrm{d}_F \circ \mathbf{u}_\e^{in})\)
= \sum_\pm \int_{\O_0^\pm} \frac 12 |\nabla\mathbf{u}_0^\pm|^2 .\end{align}
\subsection{Proof of \eqref{u bulk}.} Recall from \eqref{gronwall2new} that
\begin{align}
B[\mathbf{u}_\e^{in} | \Sigma_0] := &\int_\O \Big(c_F\chi-c_F+ 2\(\mathrm{d}_F \circ \mathbf{u}_\e^{in}-c_F\)^- \Big)\eta\circ \mathrm{d}_\Sigma \, dx\nonumber\\
&+\int_\O \( \mathrm{d}_F \circ \mathbf{u}_\e^{in}-c_F\)^+|\eta\circ\mathrm{d}_\Sigma| \, dx,\label{gronwall2newint}
\end{align}
where $\chi =\mathbf{1}_{\O_0^+}-\mathbf{1}_{\O_0^-}$, and $\eta$ is defined by \eqref{truncation eta}.
We also recall from \eqref{linpanwang cf equ} and \eqref{linpanwang 2.2} that
\[c_F =2 \int_{0}^{\tfrac{\mathrm{dist}_{\mathfrak{m}}}2} \sqrt{2\widetilde{F}(\lambda)} d \lambda.\]
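As a quick sanity check of this formula, the following sketch evaluates $c_F$ by quadrature for the illustrative quartic potential $\tilde{F}(\lambda)=\tfrac12\big((\tfrac{\mathrm{dist}_\mathfrak{m}}2)^2-\lambda^2\big)^2$ (an assumption made only for this example, not the general potential of the paper); in this case the integral is elementary and equals $\tfrac43(\tfrac{\mathrm{dist}_\mathfrak{m}}2)^3$.
\begin{verbatim}
# Quadrature check of  c_F = 2 * int_0^{dist_m/2} sqrt(2*tilde_F(l)) dl
# for the illustrative quartic potential tilde_F(l) = ((dist_m/2)^2 - l^2)^2 / 2.
import numpy as np
from scipy.integrate import quad

dist_m = 2.0
a = dist_m / 2.0
tilde_F = lambda l: 0.5 * (a**2 - l**2) ** 2

val, _ = quad(lambda l: np.sqrt(2.0 * tilde_F(l)), 0.0, a)
print(2.0 * val, 4.0 * a**3 / 3.0)   # both equal 4/3 for dist_m = 2
\end{verbatim}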
Using \eqref{def Se1} and the Lipschitz property of $\mathrm{d}_F$,
\begin{align}
\mathrm{d}_F \circ \mathbf{u}_\e^{in} \overset{\eqref{uu initial}}=\mathrm{d}_F \(\tfrac{\mathbf{u}_0^+ +\mathbf{u}_0^- }2 +S_\e \tfrac{\mathbf{u}_0^+ -\mathbf{u}_0^- }{\mathrm{dist}_\mathfrak{m}}\) \overset{\eqref{def Se1}}=\mathrm{d}_F \(\tfrac{\mathbf{u}_0^+ +\mathbf{u}_0^- }2 +\alpha(\tfrac{\mathrm{d}_\Sigma}\e) \tfrac{\mathbf{u}_0^+ -\mathbf{u}_0^- }{\mathrm{dist}_\mathfrak{m}}\)+O(e^{-C/\e})
\end{align}
This combined with \eqref{grad uuin7} and \eqref{longcase1} implies that the second integral of \eqref{gronwall2newint} is of order $O(e^{-C/\e})$. Concerning the first one,
we first deduce from \eqref{grad uuin7} that its integrand is of order $O(e^{-C/\e})$ on $\O\backslash B_{\delta/2}(\Sigma_0)$. So it remains to estimate the integral over the transitional region $B_{\delta}(\Sigma_0)$:
\begin{align}
&\int_{\O_0^+\cap B_{\delta}(\Sigma_0)} \Big(c_F\chi-c_F+ 2\(\mathrm{d}_F \circ \mathbf{u}_\e^{in}-c_F\)^- \Big)\eta\circ \mathrm{d}_\Sigma \, dx\nonumber\\
\overset{\eqref{longcase1}}=& ~2\e \int_{\O_0^+\cap B_{\delta}(\Sigma_0)} \( \int_{\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})}^{\frac{\mathrm{dist}_\mathfrak{m}}2}\sqrt{2\tilde{F}(\lambda)}\, d\lambda\)\frac{\eta\circ \mathrm{d}_\Sigma}\e \, dx+O(e^{-C/\e})\leq C\e
\end{align}
by a change of variable. In a similar way
\begin{align}
&\int_{\O_0^-\cap B_{\delta}(\Sigma_0)} \Big(c_F\chi-c_F+ 2\(\mathrm{d}_F \circ \mathbf{u}_\e^{in}-c_F\)^- \Big)\eta\circ \mathrm{d}_\Sigma \, dx\nonumber\\
\overset{\eqref{longcase1}}=&~ -2\e\int_{\O_0^-\cap B_{\delta}(\Sigma_0)} \( \int^{\alpha(\tfrac{\mathrm{d}_\Sigma}{\e})}_{\frac{\mathrm{dist}_\mathfrak{m}}2}\sqrt{2\tilde{F}(\lambda)}\, d\lambda\)\frac{\eta\circ \mathrm{d}_\Sigma}\e \, dx\overset{\eqref{truncation eta}}\leq C\e.
\end{align}
Altogether, this finishes the proof of \eqref{u bulk}.
\ \
\noindent{\it Acknowledgements}.
Y. Liu is partially supported by NSF of China under Grant 11971314.
\section{Introduction}
Coherent spin manipulation in quantum
dots (QDs) \cite{spin resonance,control-geometric phase1,control-geometric phase2,optical-control1,optical-control2,optical-control3,Nowack,Petta,Giroday}
is the key element in the state-of-the-art spintronics and solid-state quantum information \cite{Hanson,Dyakonov}.
Accurate spin manipulation can be achieved by several techniques.
One of them is the conventional electron spin resonance
induced by a magnetic field oscillating at the Zeeman transition frequency \cite{spin resonance}.
A more robust technique is the spin manipulation with geometric Berry phases during adiabatic motion
\cite{control-geometric phase2, control-geometric phase1}. Nowadays,
there is also a growing interest in the electric control of
spin using spin-orbit (SO) coupling \cite{Rashba}. It has been applied to high-fidelity
spin manipulation on the $100$ ns time scale \cite{Nowack,Petta,Giroday}.
This highly efficient all-electrical method has several advantages.
For example, it is easy to generate time-dependent electric fields on the nanoscale by
adding local electrodes and produce spin manipulation by making them
Zeeman-resonant \cite{Nowack}. As a result, Rabi spin oscillations appear at a frequency
much smaller than the Zeeman frequency making the flip relatively slow and prone to decoherence.
We shall propose here another all-electrical technique to flip spin
with high fidelity via ``shortcuts to adiabaticity'',
in a time that can be much shorter than any decoherence time.
Recently, several shortcuts to adiabaticity have been put forward to
speed up the adiabatic passage of quantum systems,
and achieve a robust and fast adiabatic-like control
\cite{Rice,Berry09,Chen10b,Oliver,bec,Chen,Nice,Nice2,3d,ChenPRA,Masuda,Adol,Onofrio,transport}.
The transitionless or counter-diabatic control algorithms proposed by Demirplak and Rice \cite{Rice}, and by Berry \cite{Berry09},
are designed to add supplementary time-dependent interactions that cancel the diabatic couplings of a reference process.
The system then follows exactly the adiabatic trajectory of the original unperturbed process, in principle in an
arbitrarily short time. Transitionless quantum drivings
have been implemented in two-level systems: spins in a magnetic field \cite{Berry09}, atoms \cite{Chen10b},
and Bose-Einstein condensates in optical lattices \cite{Oliver}. A different shortcut is provided by inverse
engineering the transient Hamiltonian \cite{bec,Chen} using Lewis-Riesenfeld invariants \cite{LR}.
This method has been used for time-dependent traps
\cite{bec,Chen,Nice,Nice2,3d}, atomic transport \cite{transport},
and other applications \cite{Adol,Onofrio}. Although these two methods are potentially equivalent \cite{ChenPRA},
their implementations and results can be quite different. Here we choose the invariant-based inverse
engineering approach, since it is better suited than the transitionless driving to be produced by the desired
all-electrical means.
\textit{Model.-}
We consider the electric control of electron spin in a QD formed in the $x$-$y$ plane of a two-dimensional electron gas
confined in the $z$-direction by the coordinate-dependent material composition, under
a weak magnetic field $\mathbf{B}_0 \parallel \mathbf{z}$, as shown in Fig. \ref{model}.
\begin{figure}[]
\scalebox{0.50}[0.50]{
\includegraphics{fig1.eps}}
\caption{(Color online) Schematic diagram of spin dynamics of electron in a QD in the presence of electric fields
$\mathcal{E}_{i}(t)$, and perpendicular magnetic field $\mathbf{B}_0\parallel{z}-$axis.}
\label{model}
\end{figure}
Here the total Hamiltonian $H$ of the electron interacting with the external electric field
${\bf \mathcal{E}} (t) = -\partial \textbf{A}/c\partial t$ is
$
H=H_0+ H_{\rm so} + H_{\rm int},
$
with \cite{Rashba}
\beqa
\label{H_0}
H_0 &=& \frac{p^2_x+p^2_y}{2m} + U(x,y) + \frac{\Delta_z}{2} \sigma_z ,
\\
H_{\rm so} &=& (-\alpha \sigma_y + \beta \sigma_z) p_x + \alpha\sigma_x p_y,
\\
H_{\rm int} &=& - \frac{e}{c} \textbf{A}(t) \cdot \textbf{v},
\eeqa
where $m$ is the electron effective mass and $\sigma_i$ ($i=x,y,z$) are the Pauli matrices.
$H_0$ represents the kinetic energy, the
potential $U(x,y)$, and the Zeeman splitting $\Delta_z = g \mu_B B_0$, where
$\mu_B$ is the Bohr magneton, and $g$ is the Land\'{e} factor. The eigenfunctions of $H_0$ are $\psi_{j}(x,y)\left|\sigma\right>$,
where $|\sigma\ra=|\pm1\ra$ is the eigenstate of $\sigma_{z}$, and the spectrum is given by $E_{j}\pm\Delta_{z}/2$, where
$E_{j}$ are the orbital eigenenergies in the confinement potential.
The SO coupling is the sum of structure-related Rashba ($\alpha$) and bulk-originated
Dresselhaus ($\beta$) terms for $[1 1 0]$ growth axis \cite{Golovach}.
The vector potential $\textbf{A}(t)$ is in the $(x,y)-$plane and corresponding
spin-dependent velocity operators are
\beqa
v_x &=& \frac{i}{\hbar} \left[H_0+H_{\rm so}, x\right] = p_x/m + \beta \sigma_z - \alpha \sigma_y,
\\
v_y &=& \frac{i}{\hbar} \left[H_0+H_{\rm so}, y\right] = p_y/m + \alpha \sigma_x.
\eeqa
We focus on the doublet $\Psi_{1}=\psi_1\left|1\right>$,$\Psi_{2}=\psi_1\left|-1\right>$,
include the higher orbitals by L\"{o}wdin partition \cite{lowdin-partition,Winkler}, and
reduce the full Hamiltonian into an effective $2 \times 2$ one \cite{suppl material},
\beqa
\label{H'}
H^{\textrm{eff}} = \frac{g \mu_B}{2}
\left(
\begin{array}{cc}
Z & X + i Y
\\
X - i Y & -Z
\end{array}
\right),
\eeqa
where $X = B_2(1+\xi_y) $, $Y = (\alpha/\beta)(1+\xi_x) B_1 $, $Z = B_0 + (1+\xi_x) B_1 $,
with the effects of higher states characterized by $\xi_x$ and $\xi_y$, and
the components of the designed electric field are renormalized by the factors of $1/(1+\xi_{i})$.
The effective magnetic fields are expressed with the SO coupling
parameters as $B_1= - 2 e \beta A_x/c g \mu_B$, and $B_2=- 2 e \alpha A_y/c g \mu_B$.
The resulting electric fields are:
\beqa
\label{electric field}
\mathcal{E}_x (t)= \frac{g \mu_B }{2 e \beta } \frac{\partial B_1}{\partial t},~~
\mathcal{E}_y (t)= \frac{g \mu_B }{2 e \alpha } \frac{\partial B_2}{\partial t}.
\eeqa
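The algebraic structure of Eq. (\ref{H'}) is easily checked numerically; the short sketch below (with placeholder values of $X$, $Y$ and $Z$) assembles $H^{\textrm{eff}}$ and verifies that its instantaneous eigenvalues are $\pm\frac{|g|\mu_B}{2}\sqrt{X^2+Y^2+Z^2}$.
\begin{verbatim}
# Assemble the effective 2x2 Hamiltonian of Eq. (H') for given X, Y, Z (placeholder
# values in Tesla) and verify its eigenvalues +/- (|g| mu_B/2) sqrt(X^2+Y^2+Z^2).
import numpy as np

mu_B, g = 9.274e-24, -0.44            # J/T, GaAs Lande factor

def H_eff(X, Y, Z):
    return 0.5 * g * mu_B * np.array([[Z, X + 1j * Y],
                                      [X - 1j * Y, -Z]])

X, Y, Z = 0.03, -0.02, 0.15           # illustrative values
ev = np.linalg.eigvalsh(H_eff(X, Y, Z))
expected = 0.5 * abs(g) * mu_B * np.sqrt(X**2 + Y**2 + Z**2)
print(np.allclose(np.sort(ev), [-expected, expected]))   # True
\end{verbatim}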
In practice, some slowly varying electric fields can be applied to drive the state from $\Psi_1$ to $\Psi_2$
adiabatically along an instantaneous eigenstate of
the Hamiltonian in Eq. (\ref{H'}). To accelerate the driving using the transitionless algorithm,
counter-diabatic fields should be provided \cite{ChenPRA}.
However, the common dependence of $Y$ and $Z$ on $B_1$ precludes the
implementation of the fast driving terms only by electric fields.
In contrast, invariant-based inverse engineering naturally leads to
an all-electrical driving.
\textit{Dynamical invariant and spin-flip example.-}
We shall design the time dependence of the external electric fields to guarantee
the state transfer in some fixed time $t_f$ by using the dynamical $2\times 2$ invariant $I(t)$
satisfying the condition $dI(t)/dt \equiv \partial I(t)/\partial t - [H^{\textrm{eff}}(t), I(t)]/ i \hbar=0$.
Parametrizing the Bloch sphere (Fig. \ref{model}),
by the angles $\theta$ and $\varphi$, we construct
yet unknown orthogonal eigenstates $|\chi_{\pm} (t) \ra$ of $I(t)$ as
\beqa
|\chi_{+}(t) \ra
=
\left(\begin{array}{c}
\cos\displaystyle{\frac{\theta}{2}} e^{i \varphi}
\\
\sin\displaystyle{\frac{\theta}{2}}
\end{array}
\right),
|\chi_{-}(t) \ra
=
\left(\begin{array}{c}
\sin \displaystyle{\frac{\theta}{2}}
\\
- \cos \displaystyle{\frac{\theta}{2}} e^{-i \varphi}
\end{array}
\right).~
\eeqa
They satisfy $I(t) |\chi_\pm (t)\ra = \lambda_\pm |\chi_\pm (t)\ra$.
Introducing $\lambda_{\pm}=\pm g \mu_B B_{c}/2$,
we construct the invariant
as \cite{ChenPRA}
\beqa
\label{invariant}
I (t) = \frac{g \mu_B }{2} B_{c}
\left(
\begin{array}{cc}
\cos{\theta} & \sin{\theta} e^{i \varphi}
\\
\sin{\theta} e^{-i \varphi} & -\cos{\theta}
\end{array}
\right),
\eeqa
where
$B_{c}$ is an arbitrary constant magnetic field to keep $I(t)$ with units of energy. According to the
Lewis-Riesenfeld theory,
the solution of the Schr\"{o}dinger equation, $i \hbar \partial_t \Psi=H^{\textrm{eff}}(t) \Psi$, is a superposition
of orthonormal ``dynamical modes", $\Psi (t) = \sum_n C_n e^{i \alpha_{n}(t)} |\chi_{n} (t) \ra$ \cite{LR}, where
$C_n$ are time-independent amplitudes and
$\alpha_{n}(t)$ is the Lewis-Riesenfeld phase,
\beq
\alpha_n (t) =\frac{1}{\hbar} \int^t_0 \langle \chi_n(t') | i\hbar \frac{\partial }{\partial t'} - H^{\textrm{eff}}(t')| \chi_n(t') \rangle dt'.
\eeq
From the invariant condition, $dI(t)/dt=0$,
the angles $\theta$ and $\varphi$ are related to $X$, $Y$ and $Z$ by
auxiliary equations
\beqa
\label{auxiliary equations}
\dot{\theta} &=& \eta (X \sin\varphi - Y \cos\varphi),
\\
\label{auxiliary equations2}
\dot{\varphi} &=& \eta (X \cos\varphi \cot\theta + Y \sin\varphi \cot\theta - Z),
\eeqa
where $\eta =g \mu_B/\hbar$. Since $X$ is a function of $B_2$,
while $Y$ and $Z$ are functions of $B_1$, once $\theta$ and $\varphi$ are fixed,
Eqs. (\ref{auxiliary equations}) and
(\ref{auxiliary equations2}) give the effective magnetic fields
\beqa
\label{B_1}
B_1 &=& \frac{ -\beta \dot{\theta} \cot\theta \cos\varphi + \beta (\dot{\varphi} + \eta B_0) \sin\varphi}{ \eta (1+\xi_x)(\alpha \cot\theta -\beta\sin\varphi)},
\\
\label{B_2}
B_2 &=& \frac{\alpha \dot{\theta} \cot\theta \sin\varphi + \alpha (\dot{\varphi} + \eta B_0) \cos\varphi - \beta \dot{\theta}}{\eta (1+\xi_y) (\alpha \cot\theta -\beta\sin\varphi)},
\eeqa
from which the electric fields are calculated using Eq. (\ref{electric field}).
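Once $\theta(t)$ and $\varphi(t)$ are fixed, Eqs. (\ref{B_1}), (\ref{B_2}) and (\ref{electric field}) can be evaluated numerically; a minimal sketch of this step is given below. The material constants are placeholders, and the removable singularities discussed next are not treated here.
\begin{verbatim}
# Sketch: effective fields B_1(t), B_2(t) and the driving electric fields obtained
# by a central time difference.  theta, dtheta, phi, dphi are user-supplied callables;
# eta = g*mu_B/hbar; alpha, beta are the SO couplings (placeholders).
import numpy as np

def B12(t, theta, dtheta, phi, dphi, eta, B0, alpha, beta, xi_x=0.0, xi_y=0.0):
    th, dth, ph, dph = theta(t), dtheta(t), phi(t), dphi(t)
    cot = np.cos(th) / np.sin(th)
    den = eta * (alpha * cot - beta * np.sin(ph))
    B1 = (-beta * dth * cot * np.cos(ph)
          + beta * (dph + eta * B0) * np.sin(ph)) / ((1.0 + xi_x) * den)
    B2 = (alpha * dth * cot * np.sin(ph)
          + alpha * (dph + eta * B0) * np.cos(ph)
          - beta * dth) / ((1.0 + xi_y) * den)
    return B1, B2

def E_fields(t, dt, theta, dtheta, phi, dphi, eta, B0, alpha, beta, g_muB, e):
    B1p, B2p = B12(t + dt, theta, dtheta, phi, dphi, eta, B0, alpha, beta)
    B1m, B2m = B12(t - dt, theta, dtheta, phi, dphi, eta, B0, alpha, beta)
    Ex = g_muB / (2.0 * e * beta) * (B1p - B1m) / (2.0 * dt)
    Ey = g_muB / (2.0 * e * alpha) * (B2p - B2m) / (2.0 * dt)
    return Ex, Ey
\end{verbatim}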
During the spin-flip process, there exist some time instants $t=t_s$ which satisfy
\beq
\label{denominator}
\alpha \cot \theta(t_s) = \beta \sin \varphi(t_s),
\eeq
and make the denominators of $B_1$ and $B_2$ zero. To get rid of such singularities we impose the conditions
\beqa
\label{numerator B_1}
\beta \sin \varphi(t_s)\!\left[\dot{\varphi}(t_s) + \eta B_0 -(\beta/\alpha) \dot{\theta} (t_s) \cos\varphi (t_s)\right]\!=\! 0, ~~~
\\
\label{numerator B_2}
\alpha\cos \varphi(t_s)\!\left[\dot{\varphi}(t_s) + \eta B_0 - (\beta/\alpha) \dot{\theta}(t_s)\cos\varphi(t_s)\right]\!=\! 0,~~~
\eeqa
which make the numerators of $B_1$ and $B_2$ zero simultaneously. In the following example, we will show how this works.
\begin{figure}[]
\scalebox{0.60}[0.60]{\includegraphics{fig2new.eps}}
\caption{(Color online) Dependence of the maximum of applied magnetic
field $B^{\max}_0$ (solid blue) on the time $t_f$ for a third order polynomial ansatz for $\theta$ and $\varphi$,
with the parameters:
$\hbar \alpha=2 \times 10^{-6}$ meV$\cdot$cm, $\beta = \alpha/2$, $g=-0.44$ for GaAs. $B_0=0.075$ T
(dashed red) corresponds to $\Delta_z= 23$ mK.}
\label{Bmax}
\end{figure}
In general, the eigenstates of the invariant are not the same as the instantaneous eigenstates of the Hamiltonian,
since $I(t)$ and $H^{\rm eff}(t)$ do not commute.
If we impose for $\theta$ at $t=0$ and $t_f$ that
\beqa
\label{boundary1}
\theta (0) = 0, ~ \theta (t_f) = \pi, ~
\dot{\theta}(0) = 0,~ \dot{\theta}(t_f) = 0,
\eeqa
then $[H^{\textrm{eff}}(0), I(0)]=0$ and $[H^{\textrm{eff}}(t_f), I(t_f)]=0$,
which guarantees
common eigenstates at initial and final times.
Moreover the state obeying Eq. (\ref{boundary1}) will flip from $|\Psi_{1}\ra$ at $t=0$ to $|\Psi_{2}\ra$ at $t=t_f$, up to phase factors, along the eigenstate $|\chi_{+}(t) \ra$.
To design the trajectory at intermediate times
we assume the polynomial ansatz $\theta=\sum_{j=0}^3 a_j t^j$, where the $a_j$ can be fixed by solving the system implied by Eq. (\ref{boundary1}).
This leads to $\theta(t_f/2)=\pi/2$, so $\cot \theta$ covers the whole $(-\infty,\infty)$ range
passing through zero at $t=t_f/2$. This may lead to one or several times satisfying Eq. (\ref{denominator}), as we will see below in more detail.
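The coefficients of this cubic ansatz follow from Eq. (\ref{boundary1}) by solving a small linear system. The sketch below works in the dimensionless time $s=t/t_f$, so that the physical coefficients are $a_j=c_j/t_f^{\,j}$.
\begin{verbatim}
# Fix theta = sum_j c_j s^j (s = t/tf) from theta(0)=0, theta'(0)=0,
# theta(1)=pi, theta'(1)=0, i.e. the boundary conditions of Eq. (boundary1).
import numpy as np

M = np.array([[1.0, 0.0, 0.0, 0.0],     # theta(0)  = 0
              [0.0, 1.0, 0.0, 0.0],     # theta'(0) = 0
              [1.0, 1.0, 1.0, 1.0],     # theta(1)  = pi
              [0.0, 1.0, 2.0, 3.0]])    # theta'(1) = 0
c = np.linalg.solve(M, np.array([0.0, 0.0, np.pi, 0.0]))
print(c)   # (0, 0, 3*pi, -2*pi): theta = pi*(3 s^2 - 2 s^3), so theta(1/2) = pi/2
\end{verbatim}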
To determine $|\chi_{+} (t)\ra$ fully, we also need the trajectory for $\varphi$.
As the initial and final states
are the poles of the Bloch sphere, the phase $\varphi$ is not well defined there.
We may nevertheless specify how the trajectory approaches them, and impose
limits from the right at $t=0$, and from the left at $t=t_f$, for example,
\beqa
\label{boundary2}
\varphi (0^+) = \pi/2, ~~~~ \varphi (t^{-}_f) = \pi/2.
\eeqa
These conditions are not sufficient though, since we still have to deal with the singularities and their cancellation.
As $\cot \theta(t_f/2) =0$, we may satisfy Eq. (\ref{denominator})
and impose zeros of the denominators for $B_{1,2}$ at $t_s=t_f/2$, if
$\sin \varphi(t_f/2) =0$. Imposing the two conditions
\beqa
\label{boundary3}
\varphi(t_f/2) = 0, ~~~~~~~~~~
\\
\label{boundary4}
\dot{\varphi}(t_f/2) = (\beta/\alpha) \dot{\theta}(t_f/2)- \eta B_0,
\eeqa
to satisfy {Eqs. (\ref{numerator B_1}) and (\ref{numerator B_2})}
at $t_s=t_f/2$, we
cancel the singularity there.
With the conditions in Eqs. (\ref{boundary2})-(\ref{boundary4}), we solve the third order polynomial
ansatz $\varphi=\sum_{j=0}^3 b_j t^j$ to determine $\varphi(t)$.
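The companion cubic for $\varphi$ follows in the same way from Eqs. (\ref{boundary2})-(\ref{boundary4}). In the dimensionless time $s=t/t_f$ the condition (\ref{boundary4}) reads $d\varphi/ds(1/2)=(\beta/\alpha)(3\pi/2)-\eta B_0 t_f$, since $d\theta/ds(1/2)=3\pi/2$ for the cubic $\theta$ above; the numerical values in the sketch are illustrative.
\begin{verbatim}
# Fix phi = sum_j b_j s^j (s = t/tf) from Eqs. (boundary2)-(boundary4).
import numpy as np

hbar, mu_B, g = 1.0546e-34, 9.274e-24, -0.44
eta = g * mu_B / hbar                       # rad s^-1 T^-1
B0, tf, beta_over_alpha = 0.15, 1.0e-9, 0.5 # illustrative parameters

M = np.array([[1.0, 0.0, 0.0,  0.0  ],      # phi(0)    = pi/2
              [1.0, 1.0, 1.0,  1.0  ],      # phi(1)    = pi/2
              [1.0, 0.5, 0.25, 0.125],      # phi(1/2)  = 0
              [0.0, 1.0, 1.0,  0.75 ]])     # phi'(1/2) = rhs below
rhs = beta_over_alpha * 1.5 * np.pi - eta * B0 * tf
b = np.linalg.solve(M, np.array([np.pi / 2, np.pi / 2, 0.0, rhs]))
print(b)   # physical coefficients: b_j / tf**j
\end{verbatim}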
Explicit calculations demonstrate that for the third-order polynomial ansatz
and boundary conditions imposed here,
there is only one (removable) singularity at $t_s=t_f/2$ when the field $B_0$ is smaller than certain upper limit $B_0^{\max}$, shown in
Fig. \ref{Bmax} as a function of $t_f$.
For $B_0>B_0^{\max}$, more solutions of (\ref{denominator})
appear [$B_0$ and $\sin\varphi(t)$ are coupled by
Eq. (\ref{boundary4})],
which cannot be canceled with the third order polynomial. To satisfy
Eqs. (\ref{numerator B_1}) and (\ref{numerator B_2}) at more than one zero of the denominators of $B_{1,2}$, one may set
higher order polynomials for $\varphi$ and further conditions. This increases the bound $B_0^{\max}$, but also complicates the driving fields.
\begin{figure}[]
\scalebox{0.60}[0.60]{\includegraphics{fig3a.eps}}
\scalebox{0.60}[0.60]{\includegraphics{fig3b.eps}}
\scalebox{0.60}[0.60]{\includegraphics{fig3c.eps}}
\caption{(Color online) For $t_f=1$ ns: (a) Polynomial ansatzes of auxiliary angles $\theta=\sum_{j=0}^3 a_j t^j$ (solid blue),
$\varphi=\sum_{j=0}^3 b_j t^j$ with $B_0=0.15$ T (dashed red) and $B_0=1.05$ T (dotted black).
The designed electric fields ${\mathcal E}_x$ (solid blue) and ${\mathcal E}_y$ (dashed red) by which spin flip
can be realized for $B_0=0.15$ T (b) and $B_0=1.05$ T (c).
Other parameters are the same as those in Fig. \ref{Bmax}.
For simplicity, we put here $\xi_{x}=\xi_{y}=0$ to skip the trivial dependence on these factors.}
\label{example}
\end{figure}
In the present examples, we just apply the third order polynomial ansatz with the boundary conditions
in Eqs. (\ref{boundary1})-(\ref{boundary4}), so that the applied magnetic field $B_0$ should not
go beyond the upper limit in Fig. \ref{Bmax}. As the upper limit field grows for smaller times $t_f$,
this is not a problem in practice. Figure \ref{example} shows examples of spin flip for different values of $B_0$.
With the functions $\theta$ and $\varphi$ fixed [see Fig. \ref{example} (a)], the designed electric
fields, $\mathcal{E}_x(t)$ and $\mathcal{E}_y(t)$, corresponding to $B_0=0.15$ T and $B_0=1.05$ T
(close to the upper limit), are depicted in Fig. \ref{example} (b) and (c). The populations
(not shown) of the two spin states, given by $P_{1}= \cos^2{(\theta/2)}$ and $P_{-1}= \sin^2{(\theta/2)}$,
cross each other smoothly as $\theta$ goes from $0$ to $\pi$. The choice of $B_0$ determines the trajectory on the Bloch sphere for a given $t_f$.
When $B_0$ approaches the upper limit, the electric fields exhibit sharp peaks, see Fig. \ref{example} (c).
The smooth time-dependence in Fig. \ref{example} (b) is well suited for the applications,
while the complicated dependence in Fig. \ref{example} (c) should be avoided.
Undesirable excitation of the orbital modes does not occur here
since the spin-flip time is $t_{f}\sim 1$ ns, while the energy splitting
of the orbital states in typical QDs exceeds $0.1$ meV. Therefore, regarding the orbital
motion, our perturbation is strongly adiabatic, and no orbital excitation occurs.
\begin{figure}[]
\scalebox{0.60}[0.60]{\includegraphics{fig4a.eps}}
\scalebox{0.60}[0.60]{\includegraphics{fig4b.eps}}
\caption{(Color online) Time evolution of $\cos \theta$ (a) and $\sin \varphi$ (b) for the same Hamiltonian with the designed electric fields and $B_0=0.15$ T, where $\epsilon=0$, $\varphi_0=\pi/2$ (solid blue); $\epsilon=0.01$, $\varphi_0=\pi/2$ (dashed red);
$\epsilon=0.01$, $\varphi_0=\pi/4$ (dot-dashed black). Other parameters are the same as in Fig. \ref{example}.}
\label{angles}
\end{figure}
To check the stability with respect to initialization errors, we assume
now the initial state as $(\sqrt{1-\epsilon} e^{i \varphi_0}, \sqrt{\epsilon})^T$
with an arbitrary phase $\varphi_0$ and find $\theta$ and $\varphi$ from
Eqs. (\ref{auxiliary equations}) and (\ref{auxiliary equations2}) for
the same designed electric fields (Fig. \ref{angles}).
The final value
$\theta(t_f)$ depends on the error $\epsilon$, but is insensitive to the
initial phase $\varphi_0$,
while $\varphi(t_f)$ is sensitive to both initial conditions. Since our goal
is to realize the spin flip, the final $\varphi$ is irrelevant,
and the experimental effort should focus on achieving a small error $\epsilon$.
\textit{Decoherence and noise effects.-}
To show feasibility of our approach, we study the effects of noise
and decoherence on the spin-flip fidelity. We begin with a generic approach for coupling
to the incoherent environment, based on the conventional Lindblad formalism as can arise, e.g., from
interaction with the conduction electron bath.
The master equation reads \cite{Sipe}
\begin{equation}
\label{density matrix}
\dot{\rho} = -\frac{i}{\hbar} [H^{\textrm{eff}},\rho]- \frac{\gamma}{2}\sum_{i}[\sigma_i,[\sigma_i,\rho]],
\end{equation}
where $\gamma$ is the dephasing rate.
We introduce the Bloch vector with components $u=\rho_{1-1}+\rho_{-11}$, $v= -i(\rho_{1-1}-\rho_{-11})$, and $w=\rho_{11}-\rho_{-1-1}$,
and obtain from Eq. (\ref{density matrix})
\beqa
\label{bloch equation}
\left(\begin{array}{ccc}
\dot{u}
\\
\dot{v}
\\
\dot{w}
\end{array}\right)
=
\left(\begin{array}{ccc}
-4 \gamma & \eta Z & - \eta Y
\\
-\eta Z &-4 \gamma & \eta X
\\
\eta Y & - \eta X & -4 \gamma
\end{array}\right)
\left(\begin{array}{ccc}
u
\\
v
\\
w
\end{array}\right).
\eeqa
We solve Eq.~(\ref{bloch equation}) numerically and calculate the fidelity $F = |\la -1 | \Psi (t_f)\ra|$, see Fig. \ref{fidelity}.
For $\gamma t_{f}\ll1$ the time-dependent perturbation theory \cite{3d} yields the bound
$
F \gtrsim 1 - 2 \gamma t_f
$.
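A schematic implementation of this step is sketched below; the drivings $X(t)$, $Y(t)$, $Z(t)$ are placeholders that would be built from $B_0$ and the inverse-engineered $B_1$, $B_2$.
\begin{verbatim}
# Integrate the Bloch-vector equation for given drivings X(t), Y(t), Z(t) and dephasing
# rate gamma, starting from spin up, (u,v,w)=(0,0,1), and return the flip fidelity
# F = sqrt(rho_{-1-1}(tf)) = sqrt((1 - w(tf))/2).
import numpy as np
from scipy.integrate import solve_ivp

def flip_fidelity(X, Y, Z, eta, gamma, tf):
    def rhs(t, s):
        u, v, w = s
        return [-4.0*gamma*u + eta*Z(t)*v - eta*Y(t)*w,
                -eta*Z(t)*u - 4.0*gamma*v + eta*X(t)*w,
                 eta*Y(t)*u - eta*X(t)*v - 4.0*gamma*w]
    sol = solve_ivp(rhs, (0.0, tf), [0.0, 0.0, 1.0], rtol=1e-9, atol=1e-12)
    return np.sqrt(max(0.0, 0.5 * (1.0 - sol.y[2, -1])))

# Example call (placeholder constant drivings):
# flip_fidelity(lambda t: 0.05, lambda t: 0.0, lambda t: 0.15,
#               eta=-3.9e10, gamma=0.0, tf=1e-9)
\end{verbatim}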
Since the induced flip occurs very fast, it can overcome the main danger for
the low-temperature spin manipulation in QDs coming from the hyperfine coupling to
the nuclear spins, where the decoherence times exceed $100$ ns \cite{Nowack}.
\begin{figure}[]
\scalebox{0.50}[0.50]{\includegraphics{fig5.eps}}
\caption{(Color online) Fidelity as a function of $\gamma$ for different times $t_f=0.1$ ns (solid red)
and $t_f=1 $ ns (dotted black). The fidelity estimated from perturbation theory is also compared,
for $t_f=0.1$ ns (dashed blue, undistinguished) and $t_f=1 $ ns (dot-dashed orange).
$B_0=0.15$ T and other parameters are the same as in Fig. \ref{Bmax}.}
\label{fidelity}
\end{figure}
Another source of decoherence is the device-dependent noise in the electric
field acting on the spin. This can be important when the relatively weak electric fields are applied. We analyze in
detail the effect of this noise and find that our method is robust to this randomness in \cite{suppl material-2}.
\textit{Conclusions and outlook.-}
We have proposed a fast and robust method to flip electron spin
in a QD with SO coupling and weak perpendicular magnetic field.
The spin-flip process, designed by Lewis-Riesenfeld invariants, is faster than the decoherence
for any known low-temperature dephasing mechanism.
This method can be further complemented by optimal control theory for time- and energy-minimization subjected to
different physical constraints \cite{Boscain}. Implementation of this technique will allow for
high-fidelity spin-manipulation for quantum information processing.
{\it Note added:} We have corrected errors in the published version.
We acknowledge funding by the Basque Government (Grants No.
IT472-10 and BFI-2010-255), Ministerio de Ciencia e Innovacion (Grant No.
FIS2009-12773-C02-01), UPV/EHU program (UFI 11/55),
and National Natural Science Foundation of China (Grant No. 61176118) and
Shanghai Rising-Star Program (Grant No. 12QH1400800).
\def\chapter{\clearpage
\global\@topnum\z@
\@afterindenttrue
\secdef\@chapter\@schapter}
\begin{document}
\begin{titlepage}
\begin{picture}(350,130)(0,10)
\put(330,135){KEK-preprint-94-162}
\put(330,120){NWU-HEP 94-07}
\put(330,105){DPNU-94-59}
\put(330,90){TIT-HPE-94-013}
\put(330,75){TUAT-HEP 94-07}
\put(330,60){OCU-HEP 94-07}
\put(330,45){PU-94-692}
\put(330,30){INS-REP 1077}
\put(330,15){KOBE-HEP 94-06}
\put(-10,30){\epsfysize=3.5cm\epsfbox{kekmark.ps}}
\end{picture}
\begin{center}
\begin{Large}
Measurement of inclusive particle spectra and test of MLLA prediction
in $e^+e^-$ annihilation at $\sqrt{s}$=58GeV
\end{Large}
\vskip 0.5cm
(TOPAZ collaboration)
\vskip 0.3cm
R.Itoh$^a$, M.Yamauchi$^a$, A.Yamaguchi$^b$
K.Abe$^c$, T.Abe$^c$, I.Adachi$^a$,
K.Adachi$^b$, M.Aoki$^c$, M.Aoki$^d$, S.Awa$^b$,
K.Emi$^e$, R.Enomoto$^a$, H.Fujii$^a$, K.Fujii$^a$,T.Fujii$^f$,
J.Fujimoto$^a$,
K.Fujita$^g$, N.Fujiwara$^b$, H.Hayashii$^b$,
B.Howell$^h$, N.Iida$^b$, Y.Inoue$^g$, H.Iwasaki$^a$, M.Iwasaki$^b$,
K.Kaneyuki$^d$, R.Kajikawa$^c$,
S.Kato$^i$, S.Kawabata$^a$, H.Kichimi$^a$, M.Kobayashi$^a$,
D.Koltick$^h$, I.Levine$^h$, S.Minami$^d$,
K.Miyabayashi$^c$, A.Miyamoto$^a$, K.Muramatsu$^b$, K.Nagai$^j$,
K.Nakabayashi$^c$, E.Nakano$^c$, O.Nitoh$^e$, S.Noguchi$^b$, A.Ochi$^d$,
F.Ochiai$^k$,
N.Ohishi$^c$, Y.Ohnishi$^c$, Y.Ohshima$^d$,
H.Okuno$^i$, T.Okusawa$^g$,T.Shinohara$^e$, A.Sugiyama$^c$,
S.Suzuki$^c$, S.Suzuki$^d$, K.Takahashi$^e$, T.Takahashi$^g$,
T.Tanimori$^d$, T.Tauchi$^a$, Y.Teramoto$^g$, N.Toomi$^b$,
T.Tsukamoto$^a$, O.Tsumura$^e$, S.Uno$^a$, T.Watanabe$^d$,
Y.Watanabe$^d$ and A.Yamamoto$^a$ \\
\end{center}
{\small \it
\leftline{(a)
KEK, National Laboratory for High Energy Physics, Ibaraki-ken 305,
Japan }
\leftline{(b)
Department of Physics, Nara Women's University, Nara 630, Japan }
\leftline{(c)
Department of Physics, Nagoya University, Nagoya 464, Japan}
\leftline{(d)
Department of Physics, Tokyo Institute of Technology, Tokyo 152,
Japan}
\leftline{(e)
Department of Applied Physics, Tokyo Univ. of Agriculture and
Technology, Tokyo 184, Japan}
\leftline{(f)
Department of Physics, University of Tokyo, Tokyo 113, Japan}
\leftline{(g)
Department of Physics, Osaka City University, Osaka 558, Japan }
\leftline{(h)
Department of Physics, Purdue University, West Lafayette, IN
47907, USA }
\leftline{(i)
Institute for Nuclear Study, University of Tokyo, Tanashi,
Tokyo 188, Japan }
\leftline{(j)
The Graduate School of Science and Technology, Kobe University,
Kobe 657,
Japan }
\leftline{(k)
Faculty of Liberal Arts, Tezukayama University, Nara 631, Japan }
}
\begin{center}
{\it Submitted to Physics Letters B}
\end{center}
\end{titlepage}
\newpage
\begin{titlepage}
\begin{abstract}
Inclusive momentum spectra are measured for all charged particles and for
each of $\pi^{\pm}$, $K^{\pm}$, $K^0/\overline{K^0}$, and
$p/\overline{p}$
in hadronic events produced via $e^+e^-$ annihilation at
$\sqrt{s}$=58GeV . The measured
spectra are compared with QCD predictions based on the
modified leading log approximation(MLLA).
The MLLA model reproduces the measured spectra
well. The energy dependence of the peak positions of the
spectra is studied by
comparing the measurements with those at other energies.
The energy dependence is also well described by the MLLA model.
\end{abstract}
\end{titlepage}
\newpage
\section{Introduction}
The LLA (leading log approximation) parton shower model
well reproduces the various
distributions of observables for hadronic final states of $e^+e^-$
collisions, when the ``coherence'' effect of soft gluons
is taken into account.
Several Monte Carlo programs were
written based on this scheme (JETSET63\cite{JETSET63}, for example)
and were used in various experiments to
study QCD. However, since the coherence effect is the consequence
of higher order corrections, the
effect could only be inserted by hand in the LLA scheme in these Monte
Carlo programs.
On the other hand, even in the low $Q^2$ region,
the momentum spectrum of gluons in the parton shower process can be
analytically calculated
using the modified leading log approximation(MLLA)\cite{MLLA}. The
coherence effect is taken into account in MLLA by consistently
importing a part of next-to-leading order corrections.
The distribution of the particle spectra is expressed as a function of
two parameters,
$Y$=log($\frac{E\Theta}{Q_0}$) and $\lambda$=log($\frac{Q_0}{\Lambda}$):
\begin{eqnarray}
x_p{\overline D^g_q}(x,Y,\lambda) &=& \frac{4C_F(Y+\lambda)}{bB(B+1)}
\int_{\epsilon +{\rm i}\infty}^{\epsilon -{\rm
i}\infty}\frac{d\omega}{2\pi{\rm i}}x_p^{-\omega}\Phi
(-A+B+1,B+2,-\omega (Y+\lambda)) \nonumber\\
&\times& \frac{\Gamma (A)}{\Gamma (B)}
(\omega\lambda)^B\Psi (A,B+1,\omega\lambda) \label{eq:MLLA}
\end{eqnarray}
Here $x_p$ is the momentum of a particle normalized by the beam energy
$E=\sqrt{s}/2$, $\Theta$ is the opening angle of the jet cone,
and $C_F$=$\frac{4}{3}$, $b=\frac{23}{3}$, $A$=$\frac{12}{b\omega}$ and
$B=\frac{307}{27b}$ for five quark flavors, respectively.
The two functions, $\Phi$ and $\Psi$, are two solutions of the
confluent hyper-geometric equation.
The quantities $\Lambda$ and $Q_0$ are the QCD scale parameter and the
energy cut-off of the parton evolution, respectively.
This calculation predicts depletion of soft partons as a consequence of
the destructive interference of soft gluons.
This depletion shows up clearly in eq.~\ref{eq:MLLA} when written as a
function of $\xi = {\rm ln}(1/x_p)$.
This function has a maximum at a certain $\xi$ value
and decreases in the larger $\xi$ region. The coherence effect reduces the
available phase space in this region and therefore the effect can be
studied by comparing the measured inclusive cross section with this
calculation.
However, since this expression is calculated for partons, it cannot be
compared directly with the measurement unless the distribution at the
level of
final state hadrons is ensured to be similar to that of partons.
The concept of Local Parton Hadron Duality (LPHD) states that the
distribution of the final state hadrons is closely related to that of
partons\cite{LPHD}.
The conversion of partons into hadrons,
which occurs at a low virtuality scale, involves only small momentum
transfers, and hence leaves the distribution essentially unchanged.
In this article we compare the calculated parton spectrum (eq.~\ref{eq:MLLA})
directly with the measured hadron spectrum assuming LPHD.
The parameter $Q_0$, which was primarily introduced to regularize the collinear singularity,
gives a cut-off on parton energies.
Therefore, the value of $Q_0$ should be close to the mass of
the particle being considered\cite{mh}, while $\Lambda$ should be
common to all the particle species.
These dependences can be experimentally tested by measuring the values
of $Q_0$ and $\Lambda$ for various particle species.
The momentum spectrum might still be distorted by fragmentation and
decays in spite of LPHD. This complication
can be partially avoided when the energy evolution of the distribution
is considered.
The peak position of the distribution (eq.~\ref{eq:MLLA}) is given
using the limiting spectra calculation in which $\Lambda$ is
assumed to be equal to $Q_0$\cite{MLLA}:
\begin{equation}
\xi_{max} = \frac{1}{2}Y
+B\sqrt{\frac{b}{16N_c}Y} - \frac{bB^2}{16N_c}
\label{eq:Peak}
\end{equation}
where $N_c$ is the number of colors ($= 3$).
This should be compared with the
variation as $\xi_{max}\sim Y$
expected from phase space consideration alone.
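For orientation, we note that eq.~\ref{eq:Peak} can be evaluated numerically at the energy of the present measurement. Taking for this rough illustration $E\Theta\simeq E=\sqrt{s}/2=29$~GeV and $Q_0=\Lambda=200$~MeV, so that $Y\simeq 5.0$, and using $b=23/3$, $B=307/(27b)\simeq 1.48$ and $N_c=3$,
\[
\xi_{max} \simeq \frac{Y}{2} + B\sqrt{\frac{bY}{16N_c}} - \frac{bB^2}{16N_c}
\simeq 2.5 + 1.3 - 0.35 \simeq 3.5 ,
\]
i.e.\ the peak of the $\xi$ distribution is expected near $\xi\simeq 3.5$ at $\sqrt{s}=58$~GeV (cf.\ the limiting spectra value listed later in Table~\ref{Peak_position}).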
\section{Event Selection}
The data used in this analysis are accumulated by the TOPAZ detector
at the TRISTAN
$e^+e^-$ collider at center-of-mass energies between 52.0 and 61.4~GeV.
The average energy is 58.0~GeV, and the total integrated luminosity is
113.7~pb$^{-1}$.
Details of the TOPAZ detector are described elsewhere\cite{TOPAZ}.
In this analysis,
the data from the Time Projection Chamber (TPC) are mainly used
for tracking and dE/dx measurements of charged particles\cite{TPC}.
The trigger conditions relevant to this analysis are the track trigger, which
requires two or more charged tracks in the fiducial volume of the TPC, and the
energy trigger, which requires 4~GeV or more energy deposit in the barrel lead
glass calorimeter.
The event trigger is generated by a logical OR of these two, and the trigger
efficiency for multihadronic events is practically 100\%.
Out of the triggered events, multihadronic events are selected
by the following
conditions: (a) five or more charged tracks with transverse momenta with
respect to the beam axis larger than 0.15~GeV/$c$
originate from the
interaction point at angles larger than 37$^\circ$ with respect
to the beam axis,
(b) the total visible energy is larger than 1/2 of the
center-of-mass energy, and (c) the momentum imbalance along the beam direction is
smaller than 0.4.
With these conditions, the event selection efficiency is estimated to be
67.1\% with a background contamination of less than 2.0\%.
Two-photon processes and $\tau^+\tau^-$ productions are the
main sources of the background.
In addition to these cuts, the jet axis is required
to have an angle larger than 40$^\circ$ with
respect to the beam axis to ensure that the event is
well contained in the detector acceptance.
After applying these criteria, 11247 events remain to be
used in the analysis.
\section{Particle Identification and Measurement of Cross section}
The particle species $\pi^{\pm}$, $K^{\pm}$ and
$p/\overline{p}$ are identified
by measuring the dE/dx of tracks detected in the TPC. The details of the
particle identification technique are described in ref.~\cite{Itoh-D}.
The typical resolution of the dE/dx measurement for minimum ionizing
pions is 4.6\%. Fig.~\ref{fig:dEdX} shows the dE/dx distribution
as a function of the track momentum for a part of the
event sample.
\begin{figure}
{\centerline{\epsfysize=15cm\epsfbox{fig1.ps}}}
\caption{dE/dx distribution as a function of track momentum measured
by TOPAZ-TPC.\label{fig:dEdX}}
\end{figure}
The cross section is calculated from the
number of tracks in each momentum slice.
In the low momentum region, the number of tracks for each particle species is
directly obtained by counting the number of tracks in each dE/dx band.
The dE/dx bands are, however, not well separated in the higher momentum
region. To extract the number of
each particle species in this region,
the dE/dx distribution in each momentum slice is fitted
with a superposition of four Gaussians
corresponding to $e^\pm$, $\pi^\pm$, $K^\pm$ and
$p/\overline{p}$.
The widths and the centers of the Gaussians are determined from the
measurement for Bhabha events and cosmic ray $\mu$'s.
Only the normalizations of the Gaussians are the free parameters of
the fit. A typical fit to the dE/dx distribution is shown in
Fig.~\ref{fig:dEdXfit}.
Because of a hardware calibration problem of the TPC, the dE/dx
resolution is time dependent. Therefore the
data sample is divided into two groups and the procedure to count the
number of tracks of each particle species is done separately for these two
groups.
\begin{figure}[t]
{\centerline{\epsfysize=10cm\epsfbox{fig2.ps}}}
\caption{A typical fit to dE/dx distribution in a momentum bin. The
distribution is fitted by four Gaussians corresponding to $e^{\pm}$,
$\pi^{\pm}$, $K^{\pm}$ and $p/\overline{p}$. \label{fig:dEdXfit}}
\end{figure}
$K_s$'s are identified by searching for their daughter charged pion pairs
in the TPC. A pair of tracks must satisfy the following conditions to be
identified as a $K_s$: a) the distance of the closest
approach of the two tracks is less than 0.8~cm; b) the distance from the
interaction point to the decay vertex is longer than 2.0~cm, where the decay
vertex is defined as the center of the closest approach of the two tracks;
c) the angle formed by the vector from the interaction point to the
decay vertex and the momentum sum of
the two tracks at
the decay vertex is less than 8 degrees if the decay length
is longer than 6.0~cm; or c') the closest distance from the vector sum
to the interaction point is less than 0.6~cm if the decay length is
shorter than 6.0~cm; d) the closest approach of one of the tracks to the
interaction point is more than 0.5~cm if the momentum sum of
the tracks is less than 2~GeV/$c$; e) both tracks are identified as
$\pi^\pm$ by the dE/dx measurement in the TPC; and f) the tracks are not
from a gamma conversion.
For each track pair remaining after these selections, the invariant mass is
calculated assuming the tracks are charged pions.
Fig.~\ref{fig:Ks-mass} shows the mass distribution for the reconstructed
$K_s$'s. The distribution is fitted by a sum of a Gaussian and a background
function (exponential + polynomial) and the number of reconstructed $K_s$'s
is obtained to be 771 $\pm$ 61. In the same way, the number of
$K_s$'s is counted in each momentum
slice.
\begin{figure}[t]
{\centerline{\epsfysize=12cm\epsfbox{fig3.ps}}}
\vspace{0.5cm}
\caption{Reconstructed mass distributions of $K_s$ in two different
momentum bins: (a) 1.8$<\xi<$2.1, (b) 3.3$<\xi<$3.6 \label{fig:Ks-mass}}
\end{figure}
The inclusive cross section, $1/\sigma_{had}\;d\sigma/dx_p$, is then
calculated from the number of particles observed in each momentum slice
using the following formula:
\begin{equation}
\frac{1}{\sigma_{had}} \frac{d\sigma}{dx_i} =
A(x_i)\frac{1}{\Delta x_i} \frac{N_{obs}(x_i)}{N_{had}},
\end{equation}
where $x_i$ is the i'th slice of the momentum fraction,
$\sigma_{had}$ is the total hadronic cross section,
$\Delta x_i$ is the width of the slice,
$N_{had}$ is the number of hadronic events in the event sample, and
$N_{obs}(x_i)$ is the number of particles counted in the slice.
$A(x_i)$ is the factor to correct the counted number for the effects of
detector acceptance and initial state radiation. $A(x_i)$ is
calculated for each momentum slice separately with the Monte Carlo
simulation as
\begin{equation}
A(x_i) = \frac{N_{gen}(x_i)}{N_{gen}^{total}} /
\frac{N_{sim}(x_i)}{N_{sim}^{total}}
\end{equation}
where $N_{gen}(x_i)$ is the generated number of particles in the
slice $x_i$
without including the effects of the detector
acceptance and the initial state radiation. $N_{gen}^{total}$ is the
corresponding
total number of generated events. $N_{sim}(x_i)$ is the number of particles detected
in the slice $x_i$ in the simulation including the effects
of detector acceptance and initial state radiation, and
$N_{sim}^{total}$ is the corresponding total number of generated events.
The JETSET6.3\cite{JETSET63} and 7.3\cite{JETSET73} Monte Carlo programs are
used to obtain these numbers combined with the
TOPAZ detector simulation program. As for $K_s$, to convert the counted
number to the cross section of $K^0/\overline{K^0}$, $A(x_i)$ is
multiplied by 2.
The cross sections are measured as a function of $\xi$.
The measured cross section for all charged particles is
shown in Table~\ref{Table:Xsec0}. Table~\ref{Table:Xsec1} shows the
cross sections measured for each of
$\pi^{\pm}$, $K^{\pm}$, $p/\overline{p}$,
while that for $K^0/\overline{K^0}$ is shown in
Table ~\ref{Table:Xsec2}.
\begin{table}
\begin{center}
\begin{tabular}{||c|c||}
\hline
$\xi$ = ln(1/$x_p$) & $1/\sigma_{tot}\;d\sigma/d\xi$ \\ \hline
0.7 & 0.287$\pm$0.015 \\
0.9 & 0.656$\pm$0.018 \\
1.1 & 0.942$\pm$0.024 \\
1.3 & 1.313$\pm$0.031 \\
1.5 & 1.783$\pm$0.041 \\
1.7 & 2.455$\pm$0.055 \\
1.9 & 3.116$\pm$0.067 \\
2.1 & 3.547$\pm$0.076 \\
2.3 & 4.182$\pm$0.089 \\
2.5 & 4.380$\pm$0.093 \\
2.7 & 4.877$\pm$0.103 \\
2.9 & 5.089$\pm$0.107 \\
3.1 & 5.674$\pm$0.119 \\
3.3 & 5.705$\pm$0.120 \\
3.5 & 5.673$\pm$0.119 \\
3.7 & 5.406$\pm$0.114 \\
3.9 & 5.135$\pm$0.109 \\
4.1 & 4.564$\pm$0.098 \\
4.3 & 4.037$\pm$0.087 \\
4.5 & 3.265$\pm$0.071 \\
4.7 & 2.498$\pm$0.055 \\
4.9 & 1.637$\pm$0.046 \\
\hline
\end{tabular}
\caption{Measured cross section for all charged particles as a
function of $\xi$. Errors include both of statistical and systematic
errors. \label{Table:Xsec0}}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{||c|c|c|c||}
\hline
& \multicolumn{3}{|c|}
{$1/\sigma_{tot}\;d\sigma/d\xi$} \\ \hline
$\xi$ = ln(1/$x_p$) & $\pi^{\pm}$ & $K^{\pm}$ & $p/\overline{p}$ \\
\hline
1.32 & 1.07$\pm$0.12 & 0.41$\pm$0.08 & 0.17$\pm$0.04 \\
1.62 & 1.50$\pm$0.14 & 0.61$\pm$0.10 & 0.25$\pm$0.05 \\
1.92 & 2.24$\pm$0.21 & 0.63$\pm$0.10 & 0.23$\pm$0.04 \\
2.22 & 2.75$\pm$0.23 & - & - \\
2.47 & 3.25$\pm$0.27 & - & - \\
2.62 & 3.75$\pm$0.42 & 0.75$\pm$0.19 & 0.30$\pm$0.21 \\
2.77 & 3.95$\pm$0.64 & 0.69$\pm$0.16 & 0.33$\pm$0.22 \\
2.98 & 3.84$\pm$0.80 & - & - \\
3.22 & - & - & - \\
3.41 & 4.75$\pm$0.50 & 0.51$\pm$0.08 & 0.22$\pm$0.04 \\
3.51 & 4.86$\pm$0.46 & 0.39$\pm$0.06 & 0.20$\pm$0.03 \\
3.61 & 4.79$\pm$0.37 & 0.44$\pm$0.06 & 0.21$\pm$0.03 \\
3.77 & 4.53$\pm$0.37 & 0.48$\pm$0.06 & 0.13$\pm$0.02 \\
3.96 & 4.18$\pm$0.34 & 0.29$\pm$0.04 & - \\
4.14 & 3.82$\pm$0.36 & 0.24$\pm$0.03 & - \\
4.42 & 3.27$\pm$0.28 & 0.12$\pm$0.02 & - \\
4.71 & 2.27$\pm$0.18 & - & - \\
4.83 & 1.94$\pm$0.17 & - & - \\
\hline
\end{tabular}
\caption{Measured cross sections for $\pi^{\pm}$, $K^{\pm}$ and
$p/\overline{p}$ as functions of
$\xi$. Special non-uniform binning of $\xi$ is used to optimize the particle
identification by the dE/dx measurement.
Errors include both of statistical and systematic errors.
\label{Table:Xsec1}}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{||c|c||}
\hline
$\xi$ = ln(1/$x_p$) & $1/\sigma_{tot}\;d\sigma(K_s)/d\xi$ \\ \hline
1.80 & 0.58$\pm$0.10 \\
2.25 & 0.65$\pm$0.12 \\
2.55 & 0.68$\pm$0.10 \\
2.85 & 0.61$\pm$0.09 \\
3.15 & 0.62$\pm$0.09 \\
3.45 & 0.49$\pm$0.09 \\
3.80 & - \\
4.05 & 0.23$\pm$0.04 \\
4.25 & - \\
\hline
\end{tabular}
\caption{Measured cross section for $K^0/\overline{K^0}$ as a
function of $\xi$. Errors are statistical only.\label{Table:Xsec2}}
\end{center}
\end{table}
The systematic error in the measurement is estimated by considering
the following sources. One is the systematic ambiguity in the acceptance
correction. This is studied by varying the fragmentation parameters in
the Monte Carlo programs and by turning on and off the energy loss and
the nuclear interaction effects in the detector simulation program.
The ambiguity is estimated to be 2\% for all charged particles and 3\% for
$\pi^{\pm}$, $K^{\pm}$ and $p/\overline{p}$.
For $\pi^{\pm}$, $K^{\pm}$ and $p/\overline{p}$,
the ambiguity in the dE/dx curve as a function of
momentum, which is used to extract the numbers of each particle species,
is also a source of the systematic error.
This is estimated
to be 5\% for $\pi^{\pm}$ and 10\% for $K^{\pm}$ and $p/\overline{p}$.
\section{Comparison with MLLA prediction}
The measured particle spectra are then compared directly with
the MLLA predictions by assuming LPHD.
Since the $\Lambda$ and $Q_0$ in the MLLA calculation
are unknown, their values are determined by fitting the MLLA
formula for the cross section to the data.
Fig.~\ref{Charged_Xsec} shows the measured cross section for all
charged particles as a function of $\xi$ together with the result of
the best fit
shown as the solid line. As seen from Fig.~\ref{Charged_Xsec},
the fitted MLLA calculation
reproduces the measured cross section well.
\begin{figure}[t]
{\centerline{\epsfysize=10cm\epsfbox{fig4.ps}}}
\caption{The cross section measured for all charged particles as a function of
$\xi=ln(1/x_p)$. Solid curve shows the fitted MLLA
calculation.\label{Charged_Xsec}}
\end{figure}
Fig.~\ref{pikp_Xsec} shows the cross sections measured for
$\pi^{\pm}$, $K^{\pm}$, $K^0/\overline{K^0}$ and $p/\overline{p}$
with the
fitted MLLA calculations. Also shown are the measurements by
PEP4/TPC\cite{PEP4} at $\sqrt{s} = 29 {\rm GeV}$ for $\pi^{\pm}$,
$K^{\pm}$, and $p/\overline{p}$. These measurements
are also fitted by the MLLA calculations. The fit is performed in the
range where the numerical calculation of the MLLA function is
reliable. The range is typically 1.0$<\xi<$4.0.
The measured cross sections
are well reproduced by the MLLA. The figure demonstrates the
energy evolution of the peak position of the spectra:
the peak position in $\xi$ is larger in the TOPAZ
measurements at $\sqrt{s}=58~{\rm GeV}$ than in those of PEP4/TPC at
$\sqrt{s}=29~{\rm GeV}$, regardless of particle species. This is also
predicted by the MLLA calculation. The details of the study of the evolution
of the peak position are described in the next section.
\begin{figure}
{\centerline{\epsfysize=15cm\epsfbox{fig5.ps}}}
\caption{The cross sections measured for (a) $\pi^{\pm}$, (b) $K^{\pm}$,
(c) $p/\overline{p}$ and (d) $K^0/\overline{K^0}$.
Both of TOPAZ and PEP4/TPC measurements
are plotted with MLLA fits (Solid curve:TOPAZ, Dashed curve:PEP4).
\label{pikp_Xsec}}
\end{figure}
Tables~\ref{table-MLLA} and \ref{table-MLLA-2}
summarize the obtained values of $\Lambda$
and $Q_0$. Two different fits are performed: one determines both
$\Lambda$ and $Q_0$ from the fit, and the other determines
$Q_0$ from the fit with $\Lambda$ fixed at 200 MeV.
In both cases, the value of $Q_0$ becomes larger for heavier particles.
Meanwhile, in the first case, the determined value of $\Lambda$
stays at relatively low values (100-300 MeV). This result gives
strong support to the MLLA + LPHD conjecture.
\begin{table}
\begin{center}
\begin{tabular}{||c|c|c|c|c||}
\hline
& \multicolumn{2}{|c|}{TOPAZ(58GeV)} &
\multicolumn{2}{|c|}{PEP4(29GeV)} \\ \hline
Free fit& $\Lambda$ (MeV) & $ Q_0$(MeV)
& $\Lambda$ (MeV) & $ Q_0$(MeV) \\ \hline
all charged & 291$\pm$10 & 375$\pm$8 & - & - \\
$\pi^{\pm}$ & 281$\pm$20 & 339$\pm$25 & 270$\pm$7 & 250$\pm$7 \\
$K^{\pm}$ & 118$\pm$68 & 575$\pm$80 & 243$\pm$74 & 531$\pm$80 \\
$K^{0}/\overline{K^0}$ & 185$\pm$19 & 649$\pm$76 & - & - \\
$p/\overline{p}$ & 143$\pm$81 & 657$\pm$90 & 356$\pm$105 & 531$\pm$100 \\
\hline
\end{tabular}
\caption{$\Lambda$ and $Q_0$ for each of particle species determined
from the fits with both $\Lambda$ and $Q_0$ as free parameters.
\label{table-MLLA}}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{||c|c|c||}
\hline
& TOPAZ(58GeV) & PEP4(29GeV) \\ \hline
$\Lambda=200$MeV & $Q_0$ (MeV) & $Q_0$ (MeV) \\ \hline
all charged & 297$\pm$12 & - \\
$\pi^{\pm}$ & 275$\pm$12 & 217$\pm$6 \\
$K^{\pm}$ & 588$\pm$44 & 514$\pm$45 \\
$K^{0}/\overline{K^0}$ & 663$\pm$110 & - \\
$p/\overline{p}$ & 627$\pm$92 & 390$\pm$ 50 \\ \hline
\end{tabular}
\caption{$Q_0$ for each of particle species determined by the fits with
$\Lambda$ fixed at 200MeV.
\label{table-MLLA-2}}
\end{center}
\end{table}
\section{The Energy Evolution of $\xi$ Distribution}
The energy evolution of the peak positions in $\xi$ is studied by comparing
the measurements at $\sqrt{s}$=29~GeV (PEP4/TPC), 58~GeV (TOPAZ)
and 91~GeV (ALEPH)\cite{ALEPH} as a part of the PTA project\cite{PTA}.
Table~\ref{Peak_position} shows the results. The peak positions and
their errors are estimated using the fitted MLLA functions for the PEP4
and TOPAZ measurements.
The MLLA predictions
using the limiting spectra calculation (eq.~\ref{eq:Peak})
with $\Lambda=200$~MeV are also shown.
As seen, the peak positions for light particles
(all charged particles and
$\pi^{\pm}$) agree with the limiting spectra calculation.
The peak positions become smaller for heavier
particles regardless of the energy scale.
\begin{table}
\begin{center}
\begin{tabular}{||c|c|c|c||}
\hline
& PEP4/TPC(29GeV) & TOPAZ(58GeV) & ALEPH(91GeV) \\ \hline
all charged & - & 3.30$\pm$0.05 & 3.61$\pm$0.01 \\
$\pi^{\pm}$ & 3.10$\pm$0.12 & 3.48$\pm$ 0.18 & 3.81$\pm$0.02 \\
$K^{\pm}$ & 2.30$\pm$0.20 & 2.56$\pm$ 0.29 & 2.63$\pm$0.04 \\
$K^0/\overline{K^0}$ & - & 2.90$\pm$0.24 & 2.88$\pm$0.03 \\
$p/\overline{p}$ & 2.30$\pm$0.20 & 2.50$\pm$0.29 & 3.00$\pm$0.09 \\ \hline
MLLA($\Lambda=200$MeV) & 3.02 & 3.46 & 3.74 \\ \hline
\end{tabular}
\caption{The energy evolution of the peak positions
in $\xi$. Also shown are predictions obtained using the limiting
spectra calculation of MLLA.\label{Peak_position}}
\end{center}
\end{table}
{}From eq.~\ref{eq:Peak},
the MLLA prediction of the peak position can be approximated
by the linear form
$a\,{\rm ln}E + b$, with $a$ close to 0.5.
If the gluon coherence effect is not taken into account (phase
space only), $a$ should become 1.0. To obtain the value of $a$ from the
measured peak positions, a linear fit
is performed to the peak positions as a function of ${\rm ln}E$
for all charged particles measured by
TASSO\cite{TASSO}, TOPAZ, and ALEPH as shown in
Fig.~\ref{Energy_Evolution}.
{}From the fit, the value of
$a$ is obtained to be $0.54 \pm 0.05$. A similar fit is also done to the peak
positions for charged pions (TASSO, PEP4/TPC, TOPAZ and ALEPH) and $a$ is
measured to be $0.58 \pm 0.04$. These values are consistent with the
MLLA predictions.
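We note for completeness that this behavior follows directly from eq.~\ref{eq:Peak}: since $Y$ differs from ${\rm ln}E$ only by a constant for fixed $\Theta$ and $Q_0$, differentiating eq.~\ref{eq:Peak} gives
\[
\frac{d\xi_{max}}{d\,{\rm ln}E} = \frac{1}{2} + \frac{B}{2}\sqrt{\frac{b}{16N_cY}} ,
\]
so that the expected slope approaches 0.5 from above as the energy increases, whereas phase space alone would give a slope of 1.0.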
\begin{figure}[t]
{\centerline{\epsfysize=10cm\epsfbox{fig6.ps}}}
\caption{The peak position in $\xi$ as a function of the square of the
center-of-mass energy. Solid line shows the linear fit to the peak
positions of charged particles while dashed line to those of charged
pions.\label{Energy_Evolution}}
\end{figure}
\section{Summary}
The inclusive cross sections are measured as a function of
$\xi=\ln(1/x_p)$ for all charged particles and for each of $\pi^{\pm}$,
$K^{\pm}$, $p/\overline{p}$ and $K^0/\overline{K^0}$ in the hadronic
events taken at
$\sqrt{s}$=58~GeV. The cross sections are
compared with the MLLA calculation assuming LPHD.
The MLLA formula describes the observed distributions very well for
all particle species over a wide momentum range.
By this comparison, $Q_0$ and $\Lambda$ in the MLLA expression are
determined for each particle species.
The determined $Q_0$ for each of the particle species is close to
the mass of the particle, while the $\Lambda$ is almost constant for these
particles. The $Q_0$ values are also determined with
$\Lambda$ fixed at
200~MeV. The obtained $Q_0$ also coincides with the mass of
each particle. The same analysis is carried out for
the measurements at $\sqrt{s}=29$~GeV by PEP4/TPC and a similar tendency is
observed.
These results show that
the momentum distribution of particles is identical to that
of partons at the end-point of the parton shower evolution independently
of the energy scale.
This supports the MLLA + LPHD conjecture.
The energy evolution of the peak position in $d\sigma/d\xi$ is
studied by comparing our data with the measurements by PEP4/TPC and ALEPH.
The measured peak positions of light particles are well reproduced by the
limiting spectra calculation.
The measured peak positions are fitted to a linear function of
the logarithm of the beam energy together with the measurements
at other energies. The obtained slope of
the linear function is consistent with the MLLA prediction, while it
excludes models without the gluon coherence effect.
\newpage
\bibliographystyle{ieee}
\section{Introduction} \label{sec:intro}
The measurement of stellar radial velocity (RV) is one of the most effective techniques employed in the search for exoplanets.
At the Bohyunsan Optical Astronomy Observatory (BOAO), we have been conducting an exoplanet search program around
late-type giant stars since 2004. This program has made contributions to both exoplanet and asteroseismic studies around
K giant stars \citep{2008JKAS...41...59H,2010A&A...509A..24H,2011A&A...529A.134L,2012A&A...546A...5L,2012A&A...548A.118L,2014A&A...566A..67L} and exoplanet detection around G giant stars \citep{2009PASJ...61..825O, 2012PASJ...64...34O, 2013PASJ...65...85S}.
In 2010, we began a new program, the Search for Exoplanet around Northern circumpolar Stars (SENS; \citealt{2015A&A...584A..79L}).
The main goal of SENS is to observe stars that are accessible year-round in order to have better sampling for our targets and thus increase the planet detection efficiency.
The stars of SENS were selected
from the \textit{HIPPARCOS} catalog with visual magnitudes of 5.0 $<$ $m_{v}$ $<$ 7.0 and
color indices of 0.6 $<$ $\textit{B -- V}$ $<$ 1.6.
The original SENS sample consists of 224 stars -- 5\% dwarfs, 40\% giant stars, and 55\% unclassified stars.
From the SENS survey, we detected periodic RV variations around roughly twenty G, K, and M giant stars.
Among them, \mbox{HD 104985} \citep{2003ApJ...597L.157S}, 11 Ursae Minoris \citep{2009A&A...505.1311D}, \mbox{HD 208527} \& \mbox{HD 220074} \citep{2013A&A...549A...2L}, \mbox{HD 11755}, \mbox{HD 12648}, \mbox{HD 24064} and 8 Ursae Minoris \citep{2015A&A...584A..79L}, \mbox{HD 36384}, \mbox{HD 52030}, and \mbox{HD 208742} \citep{2017ApJ...844...36L} were later
shown to host planetary companions.
In this paper, the observational strategy and data reduction are summarized in Section 2.
Section 3 describes the stellar properties and the analysis of each host star.
In Section 4, orbital solutions are described in detail.
Some possible causes of the RV variations are examined in Section 5.
The discussion about the results is presented in Section 6.
\section{Observation} \label{sec:obs}
Observations were made with the high-resolution fiber-fed Bohyunsan Observatory Echelle Spectrograph (BOES; \citealt{2007PASP..119.1052K}) of the 1.8 m telescope at BOAO.
BOES is equipped with an iodine (I$_{2}$) absorption cell for precise RV measurements.
BOES has three fibers with diameters of 80 $\micron$, 200 $\micron$, and 300 $\micron$, which give resolutions of $R$ = 90,000, 45,000, and 27,000, respectively.
For our program, we used the 200 $\micron$ fiber. A typical exposure time of 20 minutes
yielded a signal-to-noise ratio (S/N) of about 150.
Since the start of the program in January 2010, we have collected about 30 spectra each
for \mbox{HD 44385}, \mbox{HD 97619}, \mbox{HD 106574}, \mbox{HD 118904}, \mbox{HD 164428}, and \mbox{HD 202432}.
In order to check the instrumental stability, we have monitored the RV standard star $\tau$ Ceti since 2003.
The long-term RV accuracy of BOES is $\sim$7.6 m s$^{-1}$.
The data reduction was performed with the IRAF package for bias subtraction, flat fielding, order extraction, etc.
Once we extracted the 1-D spectra from the raw data, precise RVs were measured
using the program RVI2CELL \citep{2007PKAS...22...75H}.
Tables~\ref{tab:rv1}--\ref{tab:rv6} list the measured RVs of each star.
\section{Stellar characteristics} \label{sec:ste}
We obtained the basic stellar parameters (the visual magnitude V, parallax $\pi$, spectral type, B-V, luminosity, and RV) of the stars from \citet{2012AstL...38..331A} based on the \textit{HIPPARCOS} catalog \citep{1997yCat.1239....0E}. We also adopted more precise parallaxes from \citet{2016A&A...595A...1G}.
The effective temperature, log \textit{g}, metallicity ([Fe/H]), and microturbulent velocity ($v_{\rm{micro}}$) were estimated from TGVIT \citep{2005PASJ...57...27T}.
For comparison, we also obtained the effective temperature from \citet{2006ApJ...638.1004A} and \citet{2012MNRAS.427..343M,2017MNRAS.471..770M}.
We have estimated the projected rotational velocities using a line broadening technique \citep{2008PASJ...60..781T}.
The stellar radii, masses, and ages were calculated using the online tool PARAM 1.3\footnote{\url{http://stev.oapd.inaf.it/cgi-bin/param_1.3/}} \citep{2006A&A...458..609D}, which is based on a library of theoretical stellar isochrones \citep{2000A&AS..141..371G,2012MNRAS.427..127B} and Bayesian inference \citep{2005A&A...436..127J}.
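As a simple internal consistency check of the adopted parameters (illustrative only, not an independent determination), the Stefan--Boltzmann law applied to, e.g., \mbox{HD 44385} with the luminosity and effective temperature of Table~\ref{tab:ste} gives
\[
\frac{R_{\star}}{R_{\odot}}=\left(\frac{L_{\star}}{L_{\odot}}\right)^{1/2}\left(\frac{T_{\rm eff}}{T_{\rm eff,\odot}}\right)^{-2}\simeq\sqrt{116}\times\left(\frac{4440\ {\rm K}}{5772\ {\rm K}}\right)^{-2}\simeq 18,
\]
in agreement with the radius listed in Table~\ref{tab:ste}.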
\begin{table*}[h!]
\renewcommand{\thetable}{\arabic{table}}
\centering
\caption{Stellar parameters for the stars.} \label{tab:ste}
\begin{tabular}{cccccccc}
\toprule[1.5pt]
Parameter & HD 44385 & HD 97619 & HD 106574 & HD 118904 & HD 164428 & HD 202432 & Ref.\\
\midrule
Spectral type & K0 & K0 & K2 III & K2 III & K5 & K2 & 1\\
$\textit{$m_{v}$}$ (mag) & 6.771 $\pm$ 0.001 & 7.044 $\pm$ 0.001 & 5.883 $\pm$ 0.001 & 5.665 $\pm$ 0.001 & 6.388 $\pm$ 0.001 & 7.052 $\pm$ 0.001 & 1\\
$\textit{B -- V}$ (mag) & 1.266 $\pm$ 0.007 & 1.315 $\pm$ 0.008 & 1.179 $\pm$ 0.005 & 1.219 $\pm$ 0.005 & 1.452 $\pm$ 0.008 & 1.206 $\pm$ 0.009 & 1\\
RV (km s$^{-1}$) & $-$ 14.87 $\pm$ 0.20 & $-$ 23.84 $\pm$ 0.17 & $-$ 16.48 $\pm$ 0.29 & 13.72 $\pm$ 0.20 & $-$ 8.16 $\pm$ 0.20 & $-$ 2.23 $\pm$ 0.20 & 1\\
$\pi$ (mas) & 3.92 $\pm$ 0.41 & 4.93 $\pm$ 0.42 & 7.00 $\pm$ 0.28 & 7.93 $\pm$ 0.24 & 3.82 $\pm$ 0.29 & 6.40 $\pm$ 0.40 & 1\\
& 4.68 $\pm$ 0.29 & 4.69 $\pm$ 0.33 & -- & 9.02 $\pm$ 0.76 & 3.65 $\pm$ 0.27 & 6.20 $\pm$ 0.23 & 4\\
$T_{\rm{eff}}$ (K) & 4499 & 4245 & -- & 4511 & 4222 & 4569 & 2\\
& 4326 & 4334 & 4482 & 4424 & 4257 & 4465 & 3\\
& 4433 $\pm$ 125 & 4237 $\pm$ 125 & 4414 $\pm$ 125 & 4407 $\pm$ 125 & 4109 $\pm$ 125 & 4458 $\pm$ 125 & 5\\
& 4440 $\pm$ 28 & 4355 $\pm$ 20 & 4501 $\pm$ 33 & 4469 $\pm$ 23 & 4119 $\pm$ 40 & 4549 $\pm$ 30 & 6\\
$\rm{[Fe/H]}$ & 0.10 $\pm$ 0.07 & $-$ 0.07 $\pm$ 0.07 & $-$ 0.43 $\pm$ 0.04 & $-$ 0.11 $\pm$ 0.09 & $-$ 0.07 $\pm$ 0.10 & 0.16 $\pm$ 0.10 & 6\\
log $\it g$ (cgs) & 1.80 & 1.83 & 1.78 & 1.91 & 1.21 & 2.24 & 5\\
& 2.00 $\pm$ 0.12 & 2.33 $\pm$ 0.09 & 2.18 $\pm$ 0.18 & 2.13 $\pm$ 0.10 & 1.62 $\pm$ 0.17 & 2.42 $\pm$ 0.12 & 6\\
$v_{\rm{micro}}$ (km s$^{-1}$) & 1.56 $\pm$ 0.13 & 1.54 $\pm$ 0.13 & 1.59 $\pm$ 0.06 & 1.39 $\pm$ 0.15 & 1.62 $\pm$ 0.16 & 1.32 $\pm$ 0.16 & 6\\
Age (Gyr) & 1.8 $\pm$ 0.4 & 4.9 $\pm$ 1.7 & 4.6 $\pm$ 1.3 & 3.7 $\pm$ 1.6 & 2.7 $\pm$ 1.2 & 6.1 $\pm$ 2.6 & 6\\
$\textit{$R_{\star}$}$ ($R_{\odot}$) & 19.5 & 19.5 & 20.4 & 17.7 & 39.4 & 12.3 & 5\\
& 18.2 $\pm$ 1.2 & 16.7 $\pm$ 1.3 & 17.1 $\pm$ 0.9 & 14.8 $\pm$ 1.3 & 35.4 $\pm$ 2.9 & 11.1 $\pm$ 0.3 & 6\\
$\textit{$M_{\star}$}$ ($M_{\odot}$) & 1.8 $\pm$ 0.2 & 1.3 $\pm$ 0.1 & 1.2 $\pm$ 0.1 & 1.4 $\pm$ 0.2 & 1.5 $\pm$ 0.2 & 1.2 $\pm$ 0.2 & 6\\
$\textit{$L_{\star}$}$ ($L_{\odot}$) & 233.2 & 122.8 & 152.2 & 152.4 & 469.3 & 63.9 & 1 \\
& 132.6 $\pm$ 11.1 & 109.8 $\pm$ 10.1 & 142.5 $\pm$ 9.8 & 105.6 $\pm$ 10.7 & 397.9 $\pm$ 37.8 & 54.0 $\pm$ 3.6 & 5\\
& 116.0 $\pm$ 15.6 & 90.4 $\pm$ 14.2 & 108.1 $\pm$ 11.8 & 78.7 $\pm$ 13.9 & 325.0 $\pm$ 54.7 & 47.5 $\pm$ 2.9 & 6\\
$v_{\rm{rot}}$ sin $i$ (km s$^{-1}$) & 2.6 $\pm$ 0.5 & 1.9 $\pm$ 0.5 & 1.7 $\pm$ 0.5 & 1.2 $\pm$ 0.5 & 2.8 $\pm$ 0.5 & 2.1 $\pm$ 0.5 & 6\\
$P_{\rm{rot}}$ / sin $i$ (days) & 354.1 $\pm$ 72.0 & 444.7 $\pm$ 122.0 & 508.9 $\pm$ 152.1 & 624.0 $\pm$ 265.7 & 629.6 $\pm$ 125.7 & 267.4 $\pm$ 64.1 & 6\\
\bottomrule[1.5pt]
\end{tabular}
\textbf{References.}--- (1) Anderson \& Francis (2012); (2) McDonald et al (2012); (3) Ammons et al (2006); \\ (4) Gaia Collaboration et al (2016); (5) McDonald et al (2017); (6) This work. \\
\end{table*}
The stellar parameters of the observed stars are summarized in Table~\ref{tab:ste}.
\section{Orbital solutions} \label{sec:orb}
\begin{table*}[h!]
\renewcommand{\thetable}{\arabic{table}}
\centering
\caption{Preliminary orbital solutions.} \label{tab:orb}
\begin{tabular}{lcccccc}
\toprule[1.5pt]
Parameter & HD 44385 b & HD 97619 b & HD 106574 b & HD 118904 b & HD 164428 b & HD 202432 b\\
\midrule
P (days) & 473.5 $\pm$ 4.9 & 665.9 $\pm$ 9.5 & 1065.7 $\pm$ 14.6 & 676.7 $\pm$ 19.1 & 599.6 $\pm$ 8.7 & 418.8 $\pm$ 2.9 \\
K (m s$^{-1}$) & 104 $\pm$ 10 & 68 $\pm$ 6 & 149 $\pm$ 8 & 61 $\pm$ 8 & 109 $\pm$ 15 & 43 $\pm$ 3 \\
$T_{periastron}$ (JD) & 2455121 $\pm$ 37 & 2455238 $\pm$ 55 & 2455585 $\pm$ 360 & 2455478 $\pm$ 82 & 2455068 $\pm$ 46 & 2454908 $\pm$ 26\\
$e$ & 0.20 $\pm$ 0.20 & 0.23 $\pm$ 0.17 & 0.03 $\pm$ 0.03 & 0.31 $\pm$ 0.30 & 0.29 $\pm$ 0.22 & 0.21 $\pm$ 0.16 \\
$\omega$ (deg) & 292.18 $\pm$ 28.11 & 293.53 $\pm$ 23.97 & 44.66 $\pm$ 122.15 & 29.16 $\pm$ 31.02 & 203.11 $\pm$ 26.59 & 61.24 $\pm$ 21.76 \\
$m$ sin $i$ ($M_{J}$) & 5.9 $\pm$ 1.1 & 3.5 $\pm$ 1.3 & 8.5 $\pm$ 1.1 & 3.1 $\pm$ 1.2 & 5.7 $\pm$ 1.3 & 1.9 $\pm$ 0.4\\
$a$ (AU) & 1.4 $\pm$ 0.1 & 1.6 $\pm$ 0.1 & 2.2 $\pm$ 0.1 & 1.7 $\pm$ 0.1 & 1.6 $\pm$ 0.1 & 1.2 $\pm$ 0.1\\
Slope (m s$^{-1} \rm{yr}^{-1}$) & -6.9 & 11.5 & 2.4 & 1.0 & 1.2 & 4.4 \\
$N_{obs}$ & 35 & 35 & 31 & 38 & 30 & 29\\
rms (m s$^{-1}$) & 41.5 & 26.2 & 44.0 & 33.2 & 51.6 & 12.4\\
\bottomrule[1.5pt]
\end{tabular}
\end{table*}
Here we present the derived orbital parameters assuming that periodic RV variations are caused by Keplerian motion.
An initial value of the period was determined using the Lomb-Scargle periodogram
(L-S) analysis, which is appropriate for unevenly sampled data and also gives an estimate of the false alarm probability (FAP) of the signal \citep{1976Ap&SS..39..447L,1982ApJ...263..835S}.
Using this initial period, we estimated all the orbital elements by an iterated non-linear least-squares method.
If a linear RV trend is present, its slope is taken as an additional free parameter.
Table~\ref{tab:orb} lists the orbital parameters for all stars.
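For orientation, the minimum companion mass and semi-major axis follow from the fitted elements and the adopted stellar mass through the standard two-body relations; for example, for \mbox{HD 44385} ($K$ = 104 m s$^{-1}$, $P$ = 473.5 days, $e$ = 0.20, $M_{\star}$ = 1.8 $M_{\odot}$, and $m$ sin $i \ll M_{\star}$),
\[
m \sin i \simeq K\sqrt{1-e^{2}}\left(\frac{P}{2\pi G}\right)^{1/3}M_{\star}^{2/3}\simeq 5.8\ M_{J},
\qquad
a\simeq\left(\frac{GM_{\star}P^{2}}{4\pi^{2}}\right)^{1/3}\simeq 1.4\ {\rm AU},
\]
consistent within the uncertainties with the values listed in Table~\ref{tab:orb}.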
\subsection{HD 44385} \label{subsec:44385}
\begin{figure} [h!]
\plotone{f1.eps}
\caption{\textit{Upper panel}: RVs of HD 44385 (blue dots) and a fitted Keplerian orbit (solid line). \textit{Lower panel}: The residual RVs after subtracting the Keplerian fit.}
\label{fig:rv44385}
\end{figure}
The RV data of HD 44385 and the Keplerian motion are plotted in Fig.~\ref{fig:rv44385}.
The RVs cover more than five cycles and show a slight linear trend of $-$6.9 m s$^{-1} \rm{yr}^{-1}$; such a linear trend is also seen in the other five systems of this study.
These linear RV variations may be caused by an unseen distant companion or by some long-term and unknown intrinsic stellar variations.
The orbit has a period $P$ = \mbox{473.5 $\pm$ 4.9 days}, an eccentricity {$e$ = 0.20 $\pm$ 0.20}, and a semi-amplitude $K$ = 104 $\pm$ 10 m s$^{-1}$.
The RV residuals have an rms of 41.5 m s$^{-1}$, which is larger than the typical intrinsic RV variations of K giant stars.
The scaling relationship of \citet{1995A&A...293...87K} yields an amplitude of 15 m s$^{-1}$ for stellar oscillations, which may contribute to the rms of the RV residuals.
However, the excess residual RV scatter is still too large.
We discuss this large residual RV scatter in Section~\ref{sec:dis}.
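For reference, this estimate comes from the velocity scaling relation of \citet{1995A&A...293...87K}, spelled out here only to make the numbers explicit: with the luminosity and mass adopted for \mbox{HD 44385} in Table~\ref{tab:ste},
\[
v_{\rm osc}\simeq\frac{L_{\star}/L_{\odot}}{M_{\star}/M_{\odot}}\times 23.4\ {\rm cm\ s^{-1}}\simeq\frac{116}{1.8}\times 0.234\ {\rm m\ s^{-1}}\simeq 15\ {\rm m\ s^{-1}} .
\]
The corresponding estimates quoted for the other targets below are obtained in the same way.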
\subsection{HD 97619} \label{subsec:97619}
\begin{figure} [h!]
\plotone{f2.eps}
\caption{\textit{Upper panel}: RVs of HD 97619 (blue dots) and a fitted Keplerian orbit (solid line). \textit{Lower panel}: The residual RVs after subtracting the Keplerian fit.}
\label{fig:rv97619}
\end{figure}
The RV data of HD 97619 and the Keplerian orbit are plotted in Fig.~\ref{fig:rv97619}.
The RVs cover more than three cycles and also show a slight linear trend of 11.5 m s$^{-1} \rm{yr}^{-1}$, again possibly due to a third body in the system.
The best Keplerian fit yields the orbital elements $P$ = \mbox{665.9 $\pm$ 9.5 days}, {$e$ = 0.23 $\pm$ 0.17}, and $K$ = 68 $\pm$ 6 m s$^{-1}$.
The host star has a mass just a little higher than that of the Sun, as does the other host star HD 202432.
It hosts a planet candidate with one of the lowest masses among the companions in the present study.
The rms of the RV residuals is 26.2 m s$^{-1}$, consistent with the expected RV scatter (so-called ``jitter'').
\subsection{HD 106574} \label{subsec:106574}
\begin{figure} [h!]
\plotone{f3.eps}
\caption{\textit{Upper panel}: RVs of HD 106574 (blue dots) and a fitted Keplerian orbit (solid line). \textit{Lower panel}: The residual RVs after subtracting the Keplerian fit.}
\label{fig:rv106574}
\end{figure}
The RV data of HD 106574 and the best Keplerian orbital fit are plotted in Fig.~\ref{fig:rv106574}.
The RVs cover about two cycles.
The orbit has a period of \mbox{1065.7 $\pm$ 14.6 days}, an eccentricity of 0.03 $\pm$ 0.03, and a semi-amplitude of 149 $\pm$ 8 m s$^{-1}$.
Unlike the other five systems, this system has a nearly circular orbit.
The planet candidate is the most massive among the six candidates in this study.
The residuals have an rms of 44.0 m s$^{-1}$. This is considerably larger than the value of 21 m s$^{-1}$ predicted by the \citet{1995A&A...293...87K} relationship.
This possibly indicates an additional source of stellar variability, as in HD 44385.
\subsection{HD 118904} \label{subsec:118904}
\begin{figure} [h!]
\plotone{f4.eps}
\caption{\textit{Upper panel}: RVs of HD 118904 (blue dots) and a fitted Keplerian orbit (solid line). \textit{Lower panel}: The residual RVs after subtracting the Keplerian fit.}
\label{fig:rv118904}
\end{figure}
The RV data of HD 118904 and the best Keplerian orbital fit are plotted in Fig.~\ref{fig:rv118904}.
The RVs cover more than three cycles.
The orbital fit yields a period of \mbox{676.7 $\pm$ 19.1 days}, an eccentricity of 0.31 $\pm$ 0.30, and a semi-amplitude of 61 $\pm$ 8 m s$^{-1}$.
The rms of the RV residuals is 33.2 m s$^{-1}$, again a typical level for a K giant star.
\subsection{HD 164428} \label{subsec:164428}
\begin{figure} [h!]
\plotone{f5.eps}
\caption{\textit{Upper panel}: RVs of HD 164428 (blue dots) and a fitted Keplerian orbit (solid line). \textit{Lower panel}: The residual RVs after subtracting the Keplerian fit.}
\label{fig:rv164428}
\end{figure}
The RV data of HD 164428 and the Keplerian orbital fit are plotted in Fig.~\ref{fig:rv164428}.
The RVs cover more than four cycles.
The orbit has a period of \mbox{599.6 $\pm$ 8.7 days}, an eccentricity of 0.29 $\pm$ 0.22, and a semi-amplitude of 109 $\pm$ 15 m s$^{-1}$.
The planet candidate has a mass similar to that of HD 44385 b.
The rms scatter about the orbit is 51.6 m s$^{-1}$, which is the largest jitter seen in our six systems, and is consistent with the predicted value of $\sim$50 m s$^{-1}$ for the stellar oscillations according to scaling relationships.
\subsection{HD 202432} \label{subsec:202432}
\begin{figure} [h!]
\plotone{f6.eps}
\caption{\textit{Upper panel}: RVs of HD 202432 (blue dots) and a fitted Keplerian orbit (solid line). \textit{Lower panel}: The residual RVs after subtracting the Keplerian fit.}
\label{fig:rv202432}
\end{figure}
The RV data of HD 202432 and the best Keplerian orbital fit are plotted in Fig.~\ref{fig:rv202432}.
The RVs cover six cycles and show a slight linear trend of 4.4 m s$^{-1} \rm{yr}^{-1}$.
The Keplerian fit yields a period of \mbox{418.8 $\pm$ 2.9 days}, an eccentricity of 0.21 $\pm$ 0.16, and a semi-amplitude of 43 $\pm$ 3 m s$^{-1}$.
The rms of the RV residuals is 12.4 m s$^{-1}$, a value consistent with the expected velocity amplitude of 10 m s$^{-1}$ for stellar oscillations.
\\
\begin{figure*} [h]
\plotone{f7.eps}
\caption{The phase-folded RV curves of all six stars.}
\label{fig:pha}
\end{figure*}
The RV measurements phased with the best-fit orbital periods for all six stars are shown in Figure~\ref{fig:pha}.
All planet candidates except HD 106574 b have a relatively high eccentricity.
However, we have a relatively small number of observations and these stars show significant intrinsic variability.
The coarse sampling combined with the large scatter may result in an artificially high eccentricity. More data are needed to resolve this.
\section{The cause of the RV variations} \label{sec:cau}
The periodic RV variations may also be intrinsic to the star, caused for example by rotational modulation of surface features or by stellar pulsations.
To identify the nature of the RV variations, we investigated the Ca II H lines, photometric data, and spectral line profile variations.
\subsection{Surface activity} \label{subsec:sur}
\begin{figure} [h!]
\plotone{f8.eps}
\caption{Spectra in the region of the Ca II H line. The vertical dotted line marks the core of the Ca II H profile (\mbox{3968.5 $\AA$}).}
\label{fig:ca}
\end{figure}
A rotating star with surface features such as spots or plages caused by magnetic activity will exhibit periodic RV variations, which can be misinterpreted as a planetary signal \citep{2001A&A...379..279Q}.
The Ca II H line (3968.5 {\AA}) has often been used as a good indicator of stellar activity \citep{1913ApJ....38..292E}.
If there is chromospheric activity, an emission feature may appear at the center of the Ca II H absorption line profile, or the core of the line may be partially filled in.
The Ca II H lines of each host star are shown in Fig.~\ref{fig:ca}.
There appears to be no noteworthy emission in the core of the Ca II H line for any of the target stars.
\subsection{Photometric variations} \label{subsec:pho}
\begin{figure} [h]
\plotone{f9.eps}
\caption{\textit{Top to bottom}: L-S periodogram of the RV measurements, residual RVs, \textit{HIPPARCOS} photometry, and the line bisectors of the span and curvature for HD 44385. The vertical dashed line represents 473 d period.}
\label{fig:ls44385}
\end{figure}
\begin{figure} [h]
\plotone{f10.eps}
\caption{\textit{Top to bottom}: L-S periodogram of the RV measurements, residual RVs, \textit{HIPPARCOS} photometry, and the line bisectors of the span and curvature for HD 97619. The vertical dashed line represents 667 d period.}
\label{fig:ls97619}
\end{figure}
\begin{figure} [h]
\plotone{f11.eps}
\caption{\textit{Top to bottom}: L-S periodogram of the RV measurements, residual RVs, \textit{HIPPARCOS} photometry, and the line bisectors of the span and curvature for HD 106574. The vertical dashed line represents 1071 d period.}
\label{fig:ls106574}
\end{figure}
\begin{figure} [h]
\plotone{f12.eps}
\caption{\textit{Top to bottom}: L-S periodogram of the RV measurements, residual RVs, \textit{HIPPARCOS} photometry, and the line bisectors of the span and curvature for HD 118904. The vertical dashed line represents 675 d period.}
\label{fig:ls118904}
\end{figure}
\begin{figure} [h]
\plotone{f13.eps}
\caption{\textit{Top to bottom}: L-S periodogram of the RV measurements, residual RVs, \textit{HIPPARCOS} photometry, and the line bisectors of the span and curvature for HD 164428. The vertical dashed line represents 594 d period.}
\label{fig:ls164428}
\end{figure}
\begin{figure} [h]
\plotone{f14.eps}
\caption{\textit{Top to bottom}: L-S periodogram of the RV measurements, residual RVs, \textit{HIPPARCOS} photometry, and the line bisectors of the span and curvature for HD 202432. The vertical dashed line represents 422 d period.}
\label{fig:ls202432}
\end{figure}
To check for any photometric variations of the stars, we used \textit{HIPPARCOS} archival data.
\textit{HIPPARCOS} data cover the period between November 1989 and March 1993.
Although the measurements are not contemporaneous with our observations, it is still useful to check whether the \textit{HIPPARCOS} data show any periodic photometric variability.
The \textit{HIPPARCOS} catalogue provides a total of 160, 148, 143, 119, 122, and 114 observations
of \mbox{HD 44385}, \mbox{HD 97619}, \mbox{HD 106574}, \mbox{HD 118904}, \mbox{HD 164428}, and \mbox{HD 202432}, respectively.
The rms scatter of the data is 0.011, 0.011, 0.007, 0.011, 0.009, and 0.010 mag for these stars, respectively.
This scatter is comparable to the photometric uncertainties of the \textit{HIPPARCOS} data,
which range from 0.007 to 0.011 mag.
We do not see any significant photometric variability of the stars.
We also checked the periodicity of the \textit{HIPPARCOS} data with the L-S periodogram, and could not find any significant periodicity corresponding to that of the RV variations (middle panels in Figs.~\ref{fig:ls44385}--\ref{fig:ls202432}).
\subsection{Line bisector variations} \label{subsec:lin}
The examination of the spectral line shape is an important tool to help identify origins of RV variations other than Keplerian motion \citep{1998ASPC..154..311H}.
Orbital motion by a companion should cause a Doppler shift without a change of the line shape, whereas RV variations caused by rotational modulation from stellar surface structure should be accompanied by line shape variations.
We thus investigated the change of the spectral line shapes using two indicators:
the bisector velocity curvature (BVC) and the bisector velocity span (BVS).
To measure the line bisectors, we selected the lines \mbox{Ca I} 6122.2, 6439.1, 6462.6, 6717.7; \mbox{Cr I} 6330.1; \mbox{Fe I} 6141.7, 6393.6, 6411.7, 6677.9, 6750.2; \mbox{Fe II} 6151.6; \mbox{Ni I} 6643.6, 6767.8; \mbox{Ti I} 6085.2, 6742.6; \mbox{V I} 6039.7, 6081.4.
These lines are free from I$_{2}$ and telluric absorption lines and are relatively deep.
The L-S periodograms of the bisectors do not show any significant periodicity associated with that of the RV variations (bottom panels in Figs.~\ref{fig:ls44385}--\ref{fig:ls202432}).
\section{Discussion} \label{sec:dis}
We have found long-period RV variations of six K-giant stars.
The stars show no variations in the line shapes as measured by the spectral line bisectors.
HD~44385 does show a weak signal in the bisector velocity span (BVS) measurements, but this does not seem to be significant (FAP $\ge$ 0.1).
We note that a lack of bisector variations is not proof of the planetary nature.
High quality bisector measurements are difficult to make as these require high spectral resolution and high S/N data.
In general these are of much lower quality than the RV measurements.
If we had found bisector variations with the RV period, then the planet hypothesis would be refuted.
On the other hand, a lack of bisector variations is not sufficient proof of the existence of the planet.
It could well be that a phenomenon produces measurable RV variations but only small bisector variations that are difficult to measure.
The stars also seem to show a lack of variations in the \textit{HIPPARCOS} photometry.
Only one star, HD~106574, shows a weak peak in the L-S periodogram at the RV period.
Again, this peak seems to be of low significance.
However, the lack of photometric variations is only suggestive as the \textit{HIPPARCOS} measurements were not contemporaneous with our data.
HD~44385, HD~106574, and HD~118904 have larger RV scatters than those predicted by the \citet{1995A&A...293...87K} relationship.
Stars of later spectral type, such as HD~106574 and HD~118904, show larger RV scatters than those given in \citet{2005PASJ...57...97S,2006A&A...454..943H}.
\citet{2012A&A...548A.118L} have also detected an exoplanet with similar orbital parameters and a similar rms of the RV residuals.
Several more exoplanets discovered using BOES have shown large RV scatters \citep{2013A&A...549A...2L,2014A&A...566A..67L,2014JKAS...47...69L,2015A&A...584A..79L,2015A&A...580A..31H,2018AJ....155..120H}.
RV scatter in giant stars may have its origin not only in stellar pulsations but also in stellar activity.
\citet{1993ApJ...413..339H} found RV variations in $\alpha$ Boo with a period of 233 d and an amplitude of $\sim$200 m s$^{-1}$.
This RV period was the same period found in the He I 10830 variations in this star by \citet{1987ApJS...65..255L}.
He I 10830 is a chromospheric activity indicator.
In this case, large RV variations are clearly due to the stellar activity.
Activity in giant stars is poorly understood and it may be that it is often not accompanied by variations in the ``classic'' indicators of stellar activity.
We do not know the timescales or RV amplitudes of such activity jitter for these stars, which may have contributed to the RV scatter seen in some of our stars.
The exact cause of the large RV scatter seen in some of our stars remains to be understood through further study.
Rotating stars with surface features can also exhibit periodic RV variations which can mimic a ``planet-like'' signal.
Our stars appear to be relatively inactive as shown by an absence of emission in the core of Ca II H lines.
Another check on rotational modulation can come from estimates of the rotational period of the star.
From $v_{\rm{rot}}$ sin $i$ and $\textit{$R_{\star}$}$ (Table~\ref{tab:ste}) we estimate upper limits
on the rotational period of 354.1 $\pm$ 72.0 days for \mbox{HD 44385}, 444.7 $\pm$ 122.0 days for \mbox{HD 97619}, 508.9 $\pm$ 152.1 days for \mbox{HD 106574}, 624.0 $\pm$ 265.7 days for \mbox{HD 118904}, 629.6 $\pm$ 125.7 days for \mbox{HD 164428}, and 267.4 $\pm$ 64.1 days for \mbox{HD 202432}.
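These values simply follow from the projected rotational velocities and radii of Table~\ref{tab:ste}; for example, for \mbox{HD 44385},
\[
\frac{P_{\rm rot}}{\sin i}=\frac{2\pi R_{\star}}{v_{\rm rot}\sin i}\simeq\frac{2\pi\times 18.2\ R_{\odot}}{2.6\ {\rm km\ s^{-1}}}\simeq 354\ {\rm days},
\]
and since $\sin i\le 1$ this quantity is an upper limit on the true rotational period.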
\mbox{HD 118904} and \mbox{HD 164428} have estimated rotation periods comparable to the RV periods.
In the case of \mbox{HD 44385}, \mbox{HD 97619}, \mbox{HD 106574} and \mbox{HD 202432},
the maximum rotational periods are significantly smaller than the RV periods.
For these four stars our estimated rotational periods provide further support that we are not seeing rotational modulation.
Another explanation for the RV variations in the six stars is a new, unknown form of stellar oscillations.
One possibility is oscillatory convective modes.
These have been proposed to explain the long-period variables \citep{2015MNRAS.452.3863S}, stars that are more evolved than the K giant stars of our work.
Coincidentally, only one of our stars, HD 164428, is rather evolved, with a stellar radius larger than 20 $R_\odot$.
However, given that we know so little about long-period oscillations in K giant stars, it seems that at the present time the most likely explanation for the RV variations in our stars is Keplerian motion induced by planetary companions.
All of our targets are evolved intermediate mass stars.
If the variations are due to planetary companions, our detections add to the sample of exoplanets around stars more massive than the Sun.
Two of our stars have masses $\ge$ 1.5 $M_\odot$.
Of the approximately 2200 exoplanets with good orbits and planet mass determinations, less than 5\% of the host stars have masses greater than 1.5 $M_\odot$ (source: exoplanets.org).
Exoplanets around host stars with M $\ge$ 2 $M_\odot$ have a median $m$ sin $i$ of 2.7 $M_{J}$ and a median orbital period of $\sim$ 400 d.
Thus the companions to our six K giants have properties that are typical for planets around massive stars:
massive giant planets (1.9 -- 8.5 $M_{J}$) with orbital periods of several hundreds of days.
Although the SENS survey is not yet complete, at this stage we can still make a rough estimate
of the planet frequency of our sample.
We have found periodic variations in 31 of our sample of 224 stars.
Among these, 17 stars have confirmed planets, which corresponds to a planet occurrence rate of about 8\%.
If all the RV variations are planetary in nature, then the occurrence rate can be as high as $\approx$ 15\%.
This is largely in line with the expectation that $\sim$10\% of giant stars have planetary companions \citep{2015ApJ...798..112M}.
A more detailed analysis of the statistics can be made once the SENS survey is completed.
\acknowledgments
This work is supported by the KASI (Korea Astronomy and Space Science Institute) through grant No. 2017-1-860-01.
BCL acknowledges partial support by the KASI grant 2017-1-830-03.
Support for MGP and TYB was provided by the KASI under the R\&D program supervised by the Ministry of Science, ICT and Future Planning and by the National Research Foundation of Korea to the Center for Galaxy Evolution Research (No.2017R1A5A1070354). TYB was also supported by BK21 Plus of National Research Foundation of Korea.
S.G. would like to thank NSFC for the financial support through the grant No. U1531121.
This research made use of the SIMBAD database, operated at the CDS, Strasbourg, France.
We thank BOAO for its generous support.
\bibliographystyle{apj}
\section{Introduction}
Analysis on and of rectifiable sets in Euclidean spaces is made possible
by a variety of results, among which some of the most essential are the
Rademacher Theorem, the extension theorem for Lipschitz functions and
Area and Coarea formulae, see e.g.~\cite{EvansGariepy}. Starting from
the 90's, these topics have been studied also in non Euclidean spaces through the
notion of rectifiability in metric spaces introduced by L.~Ambrosio and B.~Kirchheim~\cite{MR1189747, AmbrosioKirchheim}.
There are, however, interesting spaces to which this notion is not adapted.
For instance, the first Heisenberg group $\HH^1$ is purely
$k$-unrectifiable for $k=2,3,4$~\cite[Theorem 7.2]{AmbrosioKirchheim}; similar phenomena occur in non Abelian Carnot groups and more generally in
sub-Riemannian manifolds. Fortunately, in
the setting of Carnot groups intrinsic notions of rectifiability are available, modeled either on intrinsic $C^1$ submanifolds or on the so-called intrinsic Lipschitz graphs~\cite{FS_JGA}. The two notions are in general different~\cite{JNGV_AntiRademacher} but they coincide~\cite[Corollary 7.4]{2020arXiv200714286V} in Heisenberg groups $\HH^n$, where intrinsic rectifiable sets
are now relatively well understood
and results analogue to those
mentioned above are known to hold
\cite{antonelli2021rectifiable_representation,antonelli2021rectifiable_aaa_structure,
chousionis2018intrinsic,corni2021area,DDFO,
fassler2020semmes,JNGV,MagStepTrev,
merlo2019geometry,merlo2020marstrandmattila,NaorYoung,2020arXiv200714286V}.
We stress the fact that these results depend strongly on the particular
Carnot group one studies. This is in sharp contrast with the study of
rectifiability in metric spaces, which strongly relies on the analytic properties
of the Euclidean spaces on which metric rectifiable sets are modeled,
and not so much on the properties of the space itself. There are indeed
Carnot groups for which some results fail (e.g.~the extension and Rademacher theorems for intrinsic Lipschitz graphs \cite{AntonelliMerloUnextendable,JNGV_AntiRademacher}) or are
still unknown.
In this paper we go one step further towards the understanding of rectifiable sets in Heisenberg groups $\HH^n$. Our main result is a Rademacher-type
Theorem for Lipschitz functions defined on intrinsic $C^1$ submanifolds in $\HH^n$, see Theorem~\ref{thm607bf499} below; analogous versions for Lipschitz functions defined on intrinsic Lipschitz graphs or on $\HH$-rectifiable sets in $\HH^n$ are provided later in Section~\ref{sec_proofThmA}, see Corollaries~\ref{cor_RademacherLipgr} and~\ref{cor_RademacherHrectifiablesets}.
We will consider only submanifolds and $\HH$-rectifiable sets {\em of low codimension $m\leq n$}; the other case {\em of low dimension} (i.e., of codimension more than $n$) is more straightforward, as these objects turn out to have standard Euclidean regularity in $\R^{2n+1}$~\cite{antonelli2020intrinsically}.
Before stating Theorem~\ref{thm607bf499}, we need to provide the notion of differentiability along a submanifold. Heisenberg groups and $C^1_\HH$ submanifolds in $\HH^n$ will be introduced in Section~\ref{sec_prelim}. In the following, $d$ denotes a homogeneous distance on $\HH^n$.
\begin{definition}[Differentiability on a submanifold]
Let $S\subset\HH^n$ be a $C^1_\HH$ submanifold
of codimension $m\leq n$; we say that a map $u:S\to\R^\ell$ is {\it tangentially Pansu differentiable along $S$ at $p\in S$}
(cf.~\cite[Definition 2.89]{MR1857292})
if there exists a group morphism $L:\HH^n\to\R^\ell$ such that
\begin{equation}\label{eq_defdifferentiability}
\lim_{\substack{q\to p,\\ q\in S}} \frac{|u(q) - u(p) - L(p^{-1}q)|}{d(p,q)} = 0.
\end{equation}
\end{definition}
The morphism $L$ for which~\eqref{eq_defdifferentiability} holds is, in general, not unique; however, it can be proved that $L$ is uniquely determined on the tangent space $T^\HH_pS$.
This uniqueness is a consequence of statement \ref{item607d35ae} in Proposition~\ref{prop607d3565}, which is equivalent to tangential differentiability.
The restriction $L|_{T^\HH_pS}$ will be called {\it Pansu differential of $u$ at $p$ along $S$} and it will be denoted by $D_\HH^S u(p)$ or $D_\HH^S u_p$.
We can now state our main result; as customary, we denote by $Q=2n+2$ the homogeneous dimension of $\HH^n$, so that the Hausdorff dimension of a $C^1_\HH$ submanifold of codimension $m\leq n$ is $Q-m$.
\begin{thm}[Pansu--Rademacher]\label{thm607bf499}
Let $n,m,\ell$ be positive integers with $ m < n$.
If $S$ is a $C^1_\HH$ submanifold of $\HH^n$ of codimension $m$
and $u:S\to\R^\ell$ is a Lipschitz function,
then
$u$ is tangentially Pansu differentiable at $\sphH^{Q-m}$-a.e.~point of $S$.
\end{thm}
Theorem~\ref{thm607bf499} is not trivial. It does not directly follow from the Pansu Theorem~\cite{Pansu} on the a.e.~differentiability of Lipschitz functions in $\HH^n$: in fact, a Lipschitz function $u:\HH^n\to\R^\ell$ could be {\em nowhere} differentiable on $S$. On the contrary, Theorem~\ref{thm607bf499} asserts that $u$ must be $\sphH^{Q-m}$-a.e. differentiable along the horizontal directions that are tangent to $S$.
In classical Euclidean geometry an analogous result can be easily obtained from
the usual Rademacher Theorem by reasoning in local charts on the submanifold.
In Heisenberg groups $\HH^n$ a similar strategy seems feasible only for submanifolds of codimension 1 with stronger $C^{1,\alpha}_\HH$ regularity, because these submanifolds can be modeled on the Carnot group $\HH^{n-1}\times\R$ (see~\cite[Theorem~1.7]{DDFO}) where Pansu Theorem holds.
Our approach is completely different: Theorem~\ref{thm607bf499} is in fact proved via the use of currents in the Heisenberg group (see Section~\ref{sec_prelim}): although these currents involve the use of Rumin's complex of differential forms, whose construction is highly non-trivial, our proof does not require its most daunting aspects. Let $\cur S$ be the current associated with the submanifold $S$ and without loss of generality assume that $\ell=1$. We consider the blow-up of the current $u\cur S$ at a point $p\in S$ and prove that, for $\sphH^{Q-m}$-a.e.~$p\in S$, the blow-up limit is of the form $L\cur{T^\HH_pS}$, where $T^\HH_pS$ is the homogeneous tangent subgroup to $S$ at $p$ and $L$ is a homogeneous morphism $L:T^\HH_pS\to\R$. Through some minor technicalities (see Proposition~\ref{prop607d3565} and Lemma~\ref{lem607d9517}), this fact implies the tangential differentiability of $u$ along $S$ at $p$.
We must stress the fact that, in Theorem~\ref{thm607bf499}, the assumption that the codimension $m$ is {\em strictly} less than $n$ is crucial, as the following example shows.
\begin{remark}\label{rem_counterexample_m=n}
Consider the $C^1_\HH$ submanifold $S:=\{(x,y,t)\in\HH^1\equiv\R^3:x=0\}$ of codimension 1 in $\HH^1$ and let $u:S\to\R$ be the function $u(0,y,t):=v(t)$, where $v:\R\to\R$ is a $\tfrac12$-H\"older continuous function such that, for every $t\in\R$,
\[
\liminf_{s\to t}\frac{|v(s)-v(t)|}{|s-t|^{1/2}}>0.
\]
For the construction of such a $v$, see e.g.~\cite[Appendix]{JNGV_AntiRademacher} and the references therein. The H\"older continuity of $v$ easily implies the Lipschitz continuity of $u$ on $S$ with respect to the distance $d$. Now, every group morphism $L:\HH^1\to\R$ is such that $L(0,0,t)=0$; taking into account that $S$ is an Abelian subgroup of $\HH^1$ (as a group, it is isomorphic to $\R^2$) we deduce that for every fixed $(0,y,t)\in S$
\[
\liminf_{s\to t}\frac{|u(0,y,s)-u(0,y,t)-L((0,y,t)^{-1}(0,y,s))|}{d((0,y,s),(0,y,t))}= c\liminf_{s\to t}\frac{|v(s)-v(t)|}{|s-t|^{1/2}} >0,
\]
where the constant $c>0$ depends on the distance $d$. In particular, there is no group morphism $L$ for which~\eqref{eq_defdifferentiability} holds, and $u$ is a Lipschitz function that is {\em nowhere} tangentially Pansu differentiable along $S$.
\end{remark}
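For the reader's convenience we record the elementary computation behind the Lipschitz continuity of $u$ claimed in Remark~\ref{rem_counterexample_m=n}. For $p=(0,y,t)$ and $q=(0,y',s)$ in $S$ one has $p^{-1}q=(0,y'-y,s-t)$, and the equivalence of homogeneous norms provides a constant $c'>0$, depending only on $d$, such that $d(p,q)\geq c'|s-t|^{1/2}$; hence
\[
|u(q)-u(p)| = |v(s)-v(t)| \leq C|s-t|^{1/2} \leq \frac{C}{c'}\, d(p,q),
\]
where $C$ is the H\"older constant of $v$.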
We conclude this introduction by stating two consequences of Theorem~\ref{thm607bf499}. The first one is a Lusin-type theorem for Lipschitz functions on $\HH$-rectifiable sets: a Lipschitz function coincides with a $C^1_\HH$ function outside an arbitrarily small set. The tangential Pansu differential along a $\HH$-rectifiable subset, $D_\HH^Ru_p$,
is introduced in Corollary~\ref{cor_RademacherHrectifiablesets}.
\begin{thm}[Lusin]\label{thm607bf4ef}
Let $n,m,\ell\ge1$ with $ m < n$.
Let $R$ be a $\HH$-rectifiable subset of $\HH^n$ with codimension $m$
and $u:R\to\R^\ell$ a Lipschitz function.
For every $\epsilon>0$ there is $g\in C^1_\HH(\HH^n;\R^\ell)$ such that
\[
\sphH^{Q-m}(\{p\in R: u(p)\neq g(p)\text{ or }D_\HH^Ru_p\neq D_\HH^Rg_p\}) < \epsilon .
\]
Moreover, $g$ can be chosen to be Lipschitz continuous on $\HH^n$ with a Lipschitz constant controlled only in terms of $n$ and of the Lipschitz constant of $u$.
\end{thm}
A second consequence of Theorem~\ref{thm607bf499} is a fully general coarea formula on $\HH$-rectifiable sets, Theorem~\ref{thm607bf4fe}.
In our previous work~\cite{JNGV} we proved a coarea formula under the assumption that the ``slicing'' function $u$ is of class $C^1_\HH$; the use of Theorem~\ref{thm607bf4ef} allows us to extend this result to the more general (and more natural) case in which $u$ is Lipschitz continuous.
In fact, our interest in Theorem~\ref{thm607bf499} was originally motivated by Theorem~\ref{thm607bf4fe}, which completes the program started in~\cite{JNGV} at least in Heisenberg groups.
\begin{thm}[Coarea]\label{thm607bf4fe}
Let $n,m,\ell\ge1$ with $ m+\ell \le n$.
There is a continuous positive function $\coarea(\bb P,\alpha)$,
defined for homogeneous subgroups $\bb P$ of $\HH^n$ of codimension $m$
and homogeneous group morphisms $\alpha:\bb P\to\R^\ell$,
such that the following holds.
If $R$ and $u$ are as in Theorem~\ref{thm607bf4ef},
then, for every Borel function $h:R\to[0,+\infty)$,
\[
\int_R h(p) \coarea(T^\HH_pR,D_\HH^Ru_p) \dd\sphH^{Q-m}(p)
= \int_{\R^\ell} \int_{u^{-1}(s)} h(x) \dd\sphH^{Q-m-\ell}(x) \dd\mathscr L^\ell(s) .
\]
Moreover, if the distance $d$ is rotationally invariant\footnote{See~\eqref{eq:seba1} for the definition of rotationally invariant distance.},
then there exists a constant $\textfrak c=\textfrak c(n,m,\ell,d)>0$ such that
\begin{equation*}
\textfrak c \int_R h(p)J^R_H u(p) \, \dd\sphH^{Q-m} (p)=\int_{\R^\ell} \int_{u^{-1}(s)} h(x)\dd\sphH^{Q-m-\ell}(x)\,\dd\mathscr L^\ell(s)
\end{equation*}
where
\[
J^R_H u(p) = (\det(L \circ L^T))^{1/2} \quad \text{ with } \quad L= D_\HH^Ru_p \vert_{T^\HH_pR}.
\]
\end{thm}
\medskip
The paper is structured as follows. Section~\ref{sec_prelim} contains the preliminary material about Heisenberg groups, $C^1_\HH$ submanifolds, $\HH$-rectifiable sets and currents, while Section~\ref{sec_differentiability} is concerned with some technical results about tangential Pansu differentiability. Theorems~\ref{thm607bf499},~\ref{thm607bf4ef} and~\ref{thm607bf4fe} are eventually proved in Sections~\ref{sec_proofThmA},~\ref{sec_proofThmB} and~\ref{sec_proofThmC}, respectively.
\medskip
{\em Acknowledgments.}
During the preparation of this paper we were informed that Theorem~\ref{thm607bf499} also follows from some results contained in a forthcoming paper by G.~de~Philippis, A.~Marchese, A.~Merlo, A.~Pinamonti and F.~Rindler: their method, which follows the approach in~\cite{AlbertiMarchese}, is easier to generalize to other Carnot groups, though possibly less hands-on than ours. We warmly thank them for sharing this information with us.
\section{Preliminaries}\label{sec_prelim}
For an integer $n\ge 1$, the $n$-th {\em Heisenberg group} $\HH^n$ is the nilpotent, connected and simply connected stratified Lie group associated with the step 2 algebra $V=V_1\oplus V_2$ defined by
\begin{align*}
& V_1=\textrm{span}\{X_1,\dots,X_n,Y_1,\dots,Y_n\},\qquad V_2=\textrm{span}\{T\}
\end{align*}
and where the only non-vanishing commutation relations are given by $[X_i,Y_i]=T$ for every $i=1,\dots,n$.
We will always identify $\HH^n$ with its Lie algebra through the exponential map $\exp:V\to\HH^n$.
This induces a diffeomorphism between $\HH^n$ and $\R^{2n+1}$ defined by
\[
\R^n\times \R^n\times \R\ni (x,y,t)\longleftrightarrow \exp(x_1X_1+\dots+x_nX_n+y_1Y_1+\dots+ y_nY_n+tT)\in\HH^n
\]
according to which the group operation reads
\[
(x,y,t)(x',y',t')=(x+x',y+y',t+t'+\tfrac12\textstyle\sum_{j=1}^n(x_jy_j'-x_j'y_j)).
\]
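In particular, $(x,y,t)^{-1}=(-x,-y,-t)$; moreover, $\HH^n$ is non-Abelian: for instance, a direct computation in $\HH^1$ gives the group commutator
\[
(a,0,0)(0,b,0)(a,0,0)^{-1}(0,b,0)^{-1}=(0,0,ab),
\]
in accordance with the commutation relation $[X_1,Y_1]=T$.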
In these coordinates the generators of the algebra read as
\[
X_i=\partial_{x_i}-\frac{y_i}2 \partial_t,\qquad Y_i=\partial_{y_i}+\frac{x_i}2\partial_t,\qquad T=\partial_t
\]
for every $i=1,\dots,n$. In particular, the space $V_1$ is the kernel of the left-invariant {\em contact form} $\theta:=dt+\frac12\sum_{i=1}^n(y_idx_i-x_idy_i)$.
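This can be checked directly: $\theta(X_i)=-\tfrac{y_i}2+\tfrac{y_i}2=0$, $\theta(Y_i)=\tfrac{x_i}2-\tfrac{x_i}2=0$, while $\theta(T)=1$.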
Heisenberg groups are endowed with dilations, i.e., with the one-parameter group of automorphisms $(\delta_\lambda)_{\lambda>0}$ defined by $\delta_\lambda(x,y,t):=(\lambda x,\lambda y,\lambda^2t)$. We endow $\HH^n$ with a left-invariant and homogeneous distance $d$, so that
\[
d(p,q)=d(p'p,p'q)\quad\text{and}\quad d(\delta_\lambda p,\delta_\lambda q)=\lambda d(p,q)\qquad\text{for every }p,p',q\in\HH^n,\lambda>0,
\]
and denote by $B(p,r)$ the open ball of center $p\in\HH^n$ and radius $r>0$.
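Notice that each $\delta_\lambda$ is indeed a group automorphism: the quadratic scaling of the last coordinate is exactly what is needed to match the bilinear term in the group law, since
\[
\delta_\lambda(x,y,t)\,\delta_\lambda(x',y',t')=\Big(\lambda(x+x'),\lambda(y+y'),\lambda^2\big(t+t'+\tfrac12\textstyle\sum_{j=1}^n(x_jy_j'-x_j'y_j)\big)\Big)=\delta_\lambda\big((x,y,t)(x',y',t')\big).
\]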
The Hausdorff dimension of $\HH^n$ is $Q:=2n+2$.
We fix on $V$ the scalar product making the basis $X_1,\dots,X_n,Y_1,\dots,Y_n,T$ orthonormal; for every $k\in\{0,\dots,2n+1\}$ a scalar product is canonically induced on the exterior product $\text{\Large$\wedge$}_k V$. We will denote by $|\cdot|$ the norm associated with such scalar products. Also the dilations $\delta_\lambda$ can be canonically extended to $\text{\Large$\wedge$}_k V$.
Given an open set $U\subset\HH^n$, we say that $f:U\to\R$ is of class $C^1_\HH$ if $f$ is continuous and its horizontal derivatives
\[
\nabla_\HH f:=(X_1f,\dots,X_nf,Y_1f,\dots,Y_nf)
\]
are represented by continuous functions on $U$. In this case we write $f\in C^1_\HH(U)$. We agree that, for every $p\in U$, $\nabla_\HH f(p)\in\R^{2n}$ is identified with the horizontal vector
\[
\nabla_\HH f(p):=X_1f(p)X_1+\dots+Y_nf(p)Y_n \in V_1.
\]
We denote by $ C^1_\HH(U,\R^m)$ the space of functions $f:U\to \R^m$ whose components belong to $C^1_\HH(U)$.
\begin{definition}\label{def_C1Hsubmanifold}
Let $m\in\{1,\dots,n\}$ be fixed. We say that $S\subset \HH^n$ is a submanifold {\em of class} $C^1_\HH$ (or {\em $\HH$-regular submanifold}) of codimension $m$ if, for every $p\in S$, there exist an open neighborhood $U\subset\HH^n$ of $p$ and $f\in C^1_\HH(U,\R^m)$ such that
\[
S\cap U=\{q\in U:f(q)=0\}\quad\text{and}\quad \text{$\nabla_\HH f(q)$ has rank $m$ for all $q\in U$.}
\]
We also define the {\em horizontal normal} $n_S^\HH(p)$ to $S$ at $p$ as the horizontal $m$-vector
\[
n_S^\HH(p):=\frac{\nabla_\HH f_1(p)\wedge\dots\wedge\nabla_\HH f_m(p)}{|\nabla_\HH f_1(p)\wedge\dots\wedge\nabla_\HH f_m(p)|}\in\text{\Large$\wedge$}_m V_1
\]
and the {\em (horizontal) tangent} $t^\HH_S(p):=*n_S^\HH(p)\in \text{\Large$\wedge$}_{2n+1-m} V$.\\
We will consider the {\em boundary} of $S$ defined as
$\partial S:=\overline S \setminus S$.
\end{definition}
In the definition of the tangent multi-vector $t^\HH_S$ the symbol $*$ denotes the Hodge operator from multivector calculus.
It is well known that the blow-up limit of a $C^1_\HH$ submanifold $S$ at $p\in S$ is the homogeneous (i.e., dilation-invariant) subgroup
\[
T^\HH_pS:=\exp(\{X\in V:X\wedge t^\HH_S(p)=0\}).
\]
This means in particular that $\lim_{\lambda\to+\infty} \delta_{1/\lambda}(p^{-1}S)=T^\HH_pS$ in the sense of Kuratowski, see
Section~\ref{sec_differentiability}. We will refer to $T^\HH_pS$ as the {\em homogeneous tangent space} (or simply {\em tangent space}) to $S$ at $p$.
An Implicit Function Theorem~\cite[Theorem~6.5]{MR1871966} is available for $C^1_\HH$ submanifolds. If $S$ is as in Definition~\ref{def_C1Hsubmanifold} and $p\in S$ is fixed, then there exist
\begin{itemize}
\item a {\em horizontal complement} $\V=\V(p)$ to $T^\HH_pS$, i.e., a homogeneous subgroup $\V$ such that $\V\subset V_1$, $\V\cap T^\HH_pS=\{0\}$ and $\HH^n=(T^\HH_pS)\cdot\V$;
\item an open neighborhood $\Omega$ of $p$;
\item a relatively open set $U\subset T^\HH_pS$;
\item a continuous map $\phi:U\to\V$
\end{itemize}
such that $S\cap\Omega$ coincides with the {\em intrinsic graph} $\Gamma_\phi$ of $\phi$ defined by
\begin{equation}\label{eq_intrinsicgraph}
\Gamma_\phi:=\{w\phi(w):w\in U\}.
\end{equation}
See e.g.~\cite{JNGV} and the references therein.
The area formula for such graphs states that there exists a continuous function $\cal A_\phi:U\to(0,+\infty)$ such that for every Borel function $h:S\to[0,+\infty)$
\begin{equation}\label{eq_areaformula}
\int_{S\cap \Omega}h\dd\sphH^{Q-m}=\int_U h(w\phi(w))\cal A_\phi(w)\dd\sphH^{Q-m}(w).
\end{equation}
Recall that the Hausdorff dimension of $S$ (as well as that of $T^\HH_pS$) is $Q-m$; moreover, the spherical Hausdorff measure $\sphH^{Q-m}$ is locally $(Q-m)$-Ahlfors regular on $S$.
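We recall that local $(Q-m)$-Ahlfors regularity means that, for every compact set $K\subset S$, there exist $C\geq1$ and $r_0>0$ such that
\[
C^{-1}r^{Q-m}\leq\sphH^{Q-m}(S\cap B(p,r))\leq C\,r^{Q-m}\qquad\text{for every }p\in K\text{ and }0<r<r_0;
\]
this property allows one to differentiate measures on $S$, and it will be used in the proof of Theorem~\ref{thm607bf499} (see~\eqref{eq_Lebesguepoint}).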
\begin{remark}\label{rem_areafactoris1}
We will later use the fact that, if $\bar w\in T^\HH_pS$ is the unique point such that $p=\bar w\phi(\bar w)$, then $\cal A_\phi(\bar w)=1$. This follows from the very definition of the area factor $\cal A$ for the spherical measure $\sphH^{Q-m}$, see~\cite[Lemma~3.2]{JNGV}.
\end{remark}
\begin{definition}\label{def_Hrectifiable}
Let $m\in\{1,\dots,n\}$ be fixed. We say that $R\subset \HH^n$ is {\em countably $\HH$-rectifiable} of codimension $m$ if there exist countably many $C^1_\HH$ submanifolds $S_i$, $i\in\N$, of codimension $m$ such that
\[
\sphH^{Q-m}\Big(R\setminus\bigcup_{i\in\N} S_i\Big)=0.
\]
We say that $R$ is {\em $\HH$-rectifiable} if, in addition, $\sphH^{Q-m}(R)<+\infty$.
\end{definition}
The following lemma, though very simple, is sometimes overlooked.
\begin{lemma}\label{lem_oneSisenough}
Let $m\leq n$ be fixed. Then, a subset $R\subset\HH^n$ is $\HH$-rectifiable of codimension $m$ if and only if, for every $\varepsilon>0$, there exists a $C^1_\HH$ submanifold $S\subset\HH^n$ of codimension $m$ such that
\begin{equation}\label{eq_unasolasuperficie}
\sphH^{Q-m}(R\setminus S)<\varepsilon.
\end{equation}
\end{lemma}
\begin{proof}
Let $\varepsilon>0$ be fixed and fix $S_i$, $i\in\N$, as in Definition~\ref{def_Hrectifiable}.
Fix also a positive integer $M$ such that
\[
\sphH^{Q-m}\Big(R\setminus\bigcup_{i\leq M} S_i\Big)<\frac\varepsilon2.
\]
We define the $C^1_\HH$ submanifold $S_0':=\{p\in S_0: d(p,\de S_0)>r_0\}$, where $r_0$ is chosen so that
\[
\sphH^{Q-m}(R\cap\de S_0')=0\qquad\text{and}\qquad \sphH^{Q-m}((R\cap S_0)\setminus S_0')<\frac\varepsilon4.
\]
Reasoning by induction, for every $i=1,\dots,M$ one can define $C^1_\HH$ submanifolds
\[
S_i':=\{p\in S_i\setminus \cup_{j<i}\overline{S_j'}: d(p,\de (S_i\setminus \cup_{j<i}\overline{S_j'}))>r_i\}
\]
where we use the fact that $S_i\setminus \cup_{j<i}\overline{S_j'}$ is a $C^1_\HH$ submanifold and $r_i>0$ is chosen so that
\[
\sphH^{Q-m}(R\cap\de S_i')=0\qquad\text{and}\qquad \sphH^{Q-m}(R\cap(S_i\setminus \cup_{j<i}\overline{S_j'})\setminus S_i')<\frac\varepsilon{2^{i+2}}.
\]
We now consider $S:=\cup_{i=0}^M S_i'$, which is a $C^1_\HH$ submanifold because it is a union of finitely many $C^1_\HH$ submanifolds at positive distance from each other. Then
\begin{align*}
\sphH^{Q-m}(R\setminus S)
& \leq \sphH^{Q-m}( R\setminus\cup_{i\leq M} S_i) + \sphH^{Q-m}( R\cap(\cup_{i\leq M} S_i)\setminus(\cup_{j\leq M} S_j'))\\
&< \frac\varepsilon2 + \sphH^{Q-m}(\cup_{i\leq M} ((R\cap S_i)\setminus\cup_{j\leq M} S_j'))\\
&\leq \frac\varepsilon2 + \sphH^{Q-m}(\cup_{i\leq M} (R\cap(S_i\setminus \cup_{j\leq i}{S_j'})))\\
&= \frac\varepsilon2 + \sphH^{Q-m}(\cup_{i\leq M} (R\cap(S_i\setminus \cup_{j<i}\overline{S_j'})\setminus S_i'))\\
&< \varepsilon,
\end{align*}
where we used the fact that $\sphH^{Q-m}(R\cap\de S_j')=0$. This proves one implication; the converse one is trivial.
\end{proof}
\begin{definition}\label{def60af5092}
An {\em approximate tangent space} $T^\HH_p R$ can be defined for a countably $\HH$-rectifiable set $R\subset \HH^n$.
Let $S_i$, $i\in\N$, be as in Definition~\ref{def_Hrectifiable}; then we define
\[
T^\HH_pR := T^\HH_p S_i\qquad\text{if }p\in R\cap S_i\setminus\bigcup_{j<i}S_j.
\]
\end{definition}
Definition~\ref{def60af5092} is well-posed $\sphH^{Q-m}$-a.e. on $R$, see e.g.~\cite[\S2.5]{JNGV}. It turns out that, if $R_1,R_2\subset\HH^n$ are countably $\HH$-rectifiable, then $T^\HH_p R_1=T^\HH_p R_2$ for $\sphH^{Q-m}$-a.e. $p\in R_1\cap R_2$.
We will need a few facts from Rumin's theory of differential forms in $\HH^n$ as well as from the theory of the associated currents.
The exact complex of {\em Heisenberg differential forms}
\[
0\to\R\to\Omega_\HH^0\stackrel{d_c}{\to}\Omega_\HH^1\stackrel{d_c}{\to}\dots\stackrel{d_c}{\to}\Omega_\HH^n\stackrel{d_c}{\to}\Omega_\HH^{n+1}\stackrel{d_c}{\to} \dots \stackrel{d_c}{\to}\Omega_\HH^{2n+1}\to 0
\]
was introduced by M.~Rumin in~\cite{Rumin}; here we will only partially introduce it and, for more details, we refer to~\cite[\S3]{2020arXiv200714286V} and the references therein.
For $k\geq n+1$ we have
\[
\Omega_\HH^k:=\{\omega\text{ smooth $k$-form on }\HH^n: \omega\wedge\theta=\omega\wedge\dd\theta=0\},
\]
and $d_c:\Omega_\HH^k\to\Omega_\HH^{k+1}$ coincides with the usual exterior differential $d$.
Notice that $d\theta=-\sum_{j=1}^n dx_j\wedge dy_j$ is the standard symplectic form in $\R^{2n}$ (up to a sign).
For every $p\in\HH^n$, $\lambda>0$ and $\omega\in \Omega_\HH^k$, $k\geq n+1$, one has
\begin{equation}\label{eq_commute}
d(\omega\circ L_{p,\lambda})=\lambda(d\omega)\circ L_{p,\lambda}, \qquad\text{where }L_{p,\lambda}(x)=\delta_\lambda(px).
\end{equation}
Here, by a slight abuse of notation, we identify $k$-differential forms with functions $\HH^n\to\text{\Large$\wedge$}^kV$.
Formula~\eqref{eq_commute} can be proved by observing that, by the definition of Rumin's spaces,
one can write $\omega=\omega_H\wedge\theta$ for a suitable $\omega_H\in C^\infty(\HH^n,\text{\Large$\wedge$}^{k-1}V_1)$ such that $\omega_H\wedge d\theta=0$; in this way
\[
d\omega=d(\omega_H\wedge\theta)=(d\omega_H)\wedge \theta=(d\omega_H)_H\wedge \theta
\]
for a suitable $(d\omega_H)_H\in C^\infty(\HH^n,\text{\Large$\wedge$}^{k}V_1)$, and we obtain the homogeneity relations
\[
\omega\circ L_{p,\lambda}=\lambda^{-k-1}L_{p,\lambda}^*\omega,\qquad (d\omega)\circ L_{p,\lambda}=\lambda^{-k-2}L_{p,\lambda}^*(d\omega),
\]
where $L_{p,\lambda}^*$ denotes pull-back by $L_{p,\lambda}$. Since pullback and exterior differentiation commute, we eventually achieve
\[
d(\omega\circ L_{p,\lambda})=d(\lambda^{-k-1}L_{p,\lambda}^*\omega)=\lambda\ \lambda^{-k-2}L_{p,\lambda}^*(d\omega)= \lambda (d\omega)\circ L_{p,\lambda}.
\]
Let $\DH^k\subset\Omega_\HH^k$ be the space of Heisenberg $k$-forms with compact support; $d_c$ maps $\DH^k$ to $\DH^{k+1}$. A {\em Heisenberg $k$-current} is, by definition, an element of the dual space to $\DH^k$. If $S\subset\HH^n$ is a $C^1_\HH$ submanifold of codimension $m\leq n$ with $\sphH^{Q-m}\hel S$ locally finite, then $S$ induces a Heisenberg $(2n+1-m)$-current $\cur{S}$ defined by
\[
\cur{S}(\omega)=\int_S \langle t^\HH_S(p)|\omega(p)\rangle \dd\sphH^{Q-m}(p),\qquad\omega\in\DH^{2n+1-m}.
\]
Observe that by definition $\cur S=t_S^\HH\sphH^{Q-m}\hel S$ where, given a Radon measure $\mu$ and a $\mu$-measurable function
$t:\HH^n\to\text{\Large$\wedge$}_k V$, we denote by $t\mu$ the Heisenberg $k$-current
\[
(t\mu)(\omega) = \int \langle t(p)|\omega(p) \rangle \dd\mu(p).
\]
The {\em boundary} of a Heisenberg $k$-current $\mathsf T$ is the Heisenberg $(k-1)$-current $\de_c\mathsf T$ defined by
\[
\de_c\mathsf T(\omega)=\mathsf T(d_c\omega),\qquad\omega\in\DH^{k-1}.
\]
\begin{remark}\label{rem_boundarylocally0}
If $S\subset\HH^n$ is a $C^1_\HH$ submanifold of codimension $m\leq n$, then $\de_c\cur S=0$ locally on $S$, i.e., for every $p\in S$ there exists $r>0$ such that $\de_c\cur S(\omega)=0$ for every $\omega\in\DH^{2n-m}$ with support in $B(p,r)$.
Indeed,
$S$ locally coincides with an entire intrinsic Lipschitz graph on $T^\HH_pS$ by \cite[Theorem~1.5]{2020arXiv200714286V},
and the currents canonically associated with entire intrinsic Lipschitz graphs have null boundary by~\cite[Proposition~7.5]{2020arXiv200714286V}.
\end{remark}
\section{Pansu differentiability on \texorpdfstring{$C^1_\HH$}{} submanifolds}\label{sec_differentiability}
Before stating and proving the following Proposition~\ref{prop607d3565} we need to fix some terminology.
A sequence $\{E_j\}_j$ of subsets of a topological space $X$ converges to $E\subset X$ in the sense of Kuratowski if the following two conditions are satisfied:
\begin{enumerate}
\item
if $x\in E$, then there exist $x_j\in E_j$ such that $x_j\to x$;
\item
if there are $j_k\to\infty$ and $x_k\in E_{j_k}$ such that $x_k\to x$, then $x\in E$.
\end{enumerate}
Accordingly, we say that a one-parameter family $\{E_\lambda\}_{\lambda\ge 1}$ of subsets of $X$ converges to $E$ in the sense of Kuratowski
if, for every sequence $\lambda_j\to\infty$, the sequence $E_{\lambda_j}$ converges to $E$ in the sense of Kuratowski.
In a boundedly compact metric space $X$, Kuratowski limits satisfy standard properties:
the limit set $E$ is always sequentially closed;
the family of compact subsets contained in a fixed bounded set is compact and, within this family, Hausdorff convergence is equivalent to Kuratowski convergence;
every sequence of closed sets admits a convergent subsequence (cf.~\cite[Mr\'owka's Theorem, p.~149]{MR1269778}).
We can now state the following result.
\begin{proposition}\label{prop607d3565}
Let $S$ be a $C^1_\HH$ submanifold of $\HH^n$ of codimension $m\leq n$
and let $u:S\to\R^\ell$ be a function.
Fix $p\in S$ and a homogeneous morphism $L:T^\HH_pS\to\R^\ell$.
The following statements are equivalent:
\begin{enumerate}[label=(\arabic*)]
\item\label{item607f44f1}
$u$ is tangentially Pansu differentiable along $S$ at $p$ and $D_\HH^Su_p=L$;
\item\label{item607d35ae}
The sets
\[
\{(\delta_\lambda(p^{-1}x), \lambda(u(x)-u(p)) ) : x\in S \} \subset \HH^n\times\R^\ell
\]
converge to
\[
\{(x,L(x)):x\in T^\HH_pS\}
\]
in the sense of Kuratowski, as $\lambda\to\infty$.
\item\label{item607d35b3}
Let $U\subset T^\HH_pS$ be an open neighborhood of $0$ and $\phi:U\to\V$ (where $\V\subset V_1$ is a horizontal complement to $T^\HH_pS$) be such that $\Gamma_\phi=\{w\phi(w):w\in U\}\subset p^{-1}S$.
Let $\phi_{\lambda}(w) := \delta_\lambda \phi(\delta_{1/\lambda}w)$; in particular, $\Gamma_{\phi_\lambda}=\delta_\lambda(\Gamma_\phi) \subset \delta_\lambda(p^{-1}S)$
and $\phi_\lambda\to 0$ uniformly on compact sets.
Then, the functions $v_\lambda:\delta_\lambda(U)\to \R^\ell$
\[
v_\lambda(w) := \lambda(u(p\delta_{1/\lambda}(w\phi_\lambda(w)))-u(p))
\]
converge uniformly on compact sets to $L$, as $\lambda\to\infty$.
\end{enumerate}
If, moreover, $u$ is Lipschitz continuous, the previous statements are equivalent to the following one:
\begin{enumerate}[resume,label=(\arabic*)]
\item\label{item607d35b9}
If $\tilde u:\HH^n\to\R^\ell$ is a Lipschitz extension of $u$, then $\tilde u|_{T^\HH_pS}$ is Pansu differentiable (as a map between homogeneous groups) at $0$ with differential $L$.
\end{enumerate}
\end{proposition}
\begin{proof}
Without loss of generality,
we assume $p=0$ and $u(0)=0$.
The equivalence of~\ref{item607f44f1} and~\ref{item607d35ae} is an easy exercise.
Next, notice that, for any neighborhood $\Omega\subset\HH^n\times\R^\ell$ of $(0,0)$
and for $\lambda$ large enough,
\[
\{(\delta_\lambda(x), \lambda u(x) ) : x\in S \}\cap\Omega
= \{(w\phi_\lambda(w) , v_\lambda(w)) : w\in \delta_\lambda U\} \cap \Omega.
\]
Therefore, \ref{item607d35ae} and \ref{item607d35b3} are equivalent.
Finally, we show that \ref{item607d35b3} is equivalent to \ref{item607d35b9} in case $u$ is Lipschitz continuous.
The Pansu differentiability of $\tilde u|_{T^\HH_0S}$ at $0$ with differential $L$
is equivalent to the locally uniform convergence on $T^\HH_0S$ of $\tilde u_\lambda(x) := \lambda \tilde u(\delta_{1/\lambda}x)$ to $L(x)$, as $\lambda\to\infty$.
Notice that, if $C$ is a Lipschitz constant for $\tilde u$, then
\begin{align*}
|\tilde u_\lambda(w) - v_\lambda(w)|
&= \lambda |\tilde u(\delta_{1/\lambda}w) - u(\delta_{1/\lambda}(w\phi_\lambda(w)) | \\
&\le C\lambda d(\delta_{1/\lambda}w , \delta_{1/\lambda}(w\phi_\lambda(w))) \\
&= C d(0,\phi_\lambda(w)) .
\end{align*}
Since $\phi_\lambda(w)\to 0$ locally uniformly, we conclude that $\tilde u_\lambda\to L$ if and only if $v_\lambda\to L$, as $\lambda\to\infty$.
\end{proof}
Before proving the next technical lemma let us fix some notation.
Given $q\in\HH^n\equiv V_1\oplus V_2$ we denote by $q_H\in V_1$ the unique element such that $q-q_H\in V_2$. Recall that a scalar product $\cdot$ has been fixed on $V$. It is well-known that, if $\W\subset\HH^n$ is a homogeneous subgroup of codimension $m\leq n$ and $L:\W\to\R$ is a homogeneous morphism, then there exists a unique $v\in \W\cap V_1$ such that
\[
L(q)= v\cdot q_H\quad\text{for every }q\in\W.
\]
In case $\W=T^\HH_pS$ for some $C^1_\HH $ submanifold $S$ and $L=D_\HH^Su_p$ is the tangential Pansu differential along $S$ at $p\in S$ of some $u:S\to\R$, the vector $v$ introduced before is called {\it horizontal gradient along $S$} of $u$ at $p$ and it is denoted by $\nabla_\HH^Su(p)\in T^\HH_pS$. Observe that $\nabla_\HH^Su$ can be interpreted as a $V_1$-valued map defined on the set of tangential Pansu differentiability points along $S$ of $u$.
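For the reader's convenience, we sketch the elementary reason behind this representation. First, $L$ vanishes on $\W\cap V_2$: it is additive there and, by homogeneity, $L(nq)=L(\delta_{\sqrt n}\,q)=\sqrt n\,L(q)$ for every central $q\in\W$ and $n\in\N$, while additivity gives $L(nq)=nL(q)$; hence $L(q)=0$. Second, for $q,q'\in\W\cap V_1$ one has $qq'=(q+q')\big(\tfrac12[q,q']\big)$ with $\tfrac12[q,q']=(q+q')^{-1}(qq')\in\W\cap V_2$, so that $L$ is additive, and by homogeneity linear, on $\W\cap V_1$; the vector $v$ is then the element of $\W\cap V_1$ representing the linear functional $L|_{\W\cap V_1}$ with respect to the fixed scalar product. Finally, $L(q)=L(q_H)=v\cdot q_H$ for every $q\in\W$, because $q=q_H\,(q-q_H)$ with $q-q_H\in\W\cap V_2$.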
\begin{lemma}\label{lem_Borel}
Let $S$ be a $C^1_\HH$ submanifold of $\HH^n$ of codimension $m\leq n$
and let $u:S\to\R$ be a Borel function. Then
\begin{enumerate}
\item[(i)] the set $D\subset S$ of points where $u$ is tangentially Pansu differentiable along $S$ is a Borel set;
\item[(ii)] the map $\nabla_\HH^S u:D\to V_1$ is Borel.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $L_k$, $k=1,2,\dots$, be a countable dense family of homogeneous morphisms $\HH^n\to \R$. The set of differentiability points $D$ can be written as
\begin{align*}
D &= \left \{ p \in S: \exists\text{ homogeneous morphism }L:\HH^n\to \R\text{ s.t. }\forall \epsilon >0\ \lim_{r\to 0}\sup_{\substack{q\in B(p,r)\cap S\\ q\neq p}} \dfrac{| u(q)-u(p) - L(p^{-1}q)|}{d(p,q)} < \epsilon \right \}\\
&= \bigcap_{j=1}^{\infty} \bigcup_{k=1}^\infty \left \{ p \in S: \lim_{r\to 0} \sup_{\substack{q\in B(p,r)\cap S\\ q\neq p}} \dfrac{| u(q)-u(p) - L_k(p^{-1}q)|}{d(p,q)} < \dfrac{1}{j}\right \}.
\end{align*}
Hence $D$ is Borel. To prove that the horizontal gradient along $S$ is a Borel map,
let $A$ be a closed subset of $V_1$ and let $v_k$, $k=1,2,\dots$ be a dense countable subset of $A$.
There holds
\begin{align*}
& \{p\in S: \nabla_\HH^Su_p \in A\} \ =\ \{p\in S: \nabla_\HH^Su_p \in \overline A\}\\
=&\bigcap_{j=1}^{\infty} \bigcup_{k=1}^\infty \left \{ p \in S: \lim_{r\to 0} \sup_{\substack{q\in B(p,r)\cap S\\ q\neq p}} \dfrac{| u(q)-u(p) -v_k \cdot (p^{-1}q)_H|}{d(p,q)} < \dfrac{1}{j}\right \},
\end{align*}
so that the map $p\mapsto \nabla_\HH^S u_p$ is Borel measurable on $D \subset S$.
\end{proof}
\section{Proof of Theorem~\ref{thm607bf499}}\label{sec_proofThmA}
In the following lemma, as well as in the sequel, limits of currents are understood with respect to the standard weak-* topology on the space of currents, i.e., $\mathsf T_j\to\mathsf T$ if and only if $\mathsf T_j(\omega)\to\mathsf T(\omega)$ for every test Heisenberg form $\omega$.
Moreover, given a $C^1_\HH$-submanifold $S$ of codimension $m\leq n$
and a function $u:S\to\R$, locally integrable with respect to $\sphH^{Q-m}\hel S$, we denote by $u\cur S$ the $(2n+1-m)$-Heisenberg current
\[
(u\cur{S})(\omega)
:=
\int_S u\: \langle t^\HH_S\, |\, \omega\rangle \dd\sphH^{Q-m},\qquad\omega\in\DH^{2n+1-m} .
\]
\begin{lemma}\label{lem607d9517}
Let $S\subset\HH^n$ be a $C^1_\HH$ submanifold of codimension $m\leq n$ and
$u:S\to\R$ a Lipschitz function.
Let $p\in S$ be fixed and, for $\lambda>0$, let $U,\V,\phi,\phi_\lambda$ and $v_\lambda$ be as in Proposition~\ref{prop607d3565}~\ref{item607d35b3}; define also
\begin{align*}
S_{\lambda} &:= \delta_\lambda(p^{-1}S) , \\
u_{\lambda}(x) &:=\lambda(u(p\delta_{1/\lambda}x)-u(p)),
\end{align*}
so that $v_\lambda(w)=u_\lambda(w\phi_\lambda(w))$.
Assume that $\lambda_j$ is a sequence such that $\lambda_j\to\infty$
and $v_{\lambda_j}$ converges locally uniformly on $T^\HH_pS$ to $v: T^\HH_pS\to\R$;
then
\[
\lim_{j\to\infty} u_{\lambda_j} \cur{S_{\lambda_j}} = v\cur{T^\HH_pS} .
\]
\end{lemma}
\begin{proof}
We denote by $\areaf_{\phi_\lambda}$ the area factor of $\phi_\lambda$, see~\eqref{eq_areaformula}.
For every $\omega\in\DH^{2n+1-m}$ and for $j$ large enough we have
\begin{align*}
u_{\lambda_j} \cur{S_{\lambda_j}}(\omega)
&= \int_{S_{\lambda_j}}
u_{\lambda_j}(x)
\langle t^\HH_{S_{\lambda_j}}(x) | \omega(x) \rangle
\dd \sphH^{Q-m}(x) \\
&= \int_{\delta_{\lambda_j} U}
v_{\lambda_j}(w)
\langle t^\HH_{S_{\lambda_j}}(w\phi_{\lambda_j}(w)) | \omega(w\phi_{\lambda_j}(w)) \rangle
\areaf_{\phi_{\lambda_j}}(w)
\dd\sphH^{Q-m}(w).
\end{align*}
As $j$ varies, the integrands in the last expression form a sequence of functions, supported in a fixed compact subset of $T^\HH_pS$, that converges uniformly to
\[
w\mapsto v(w) \langle t^\HH_{S}(p) | \omega(w) \rangle,
\]
where we also used Remark~\ref{rem_areafactoris1} together with the fact that $\areaf_{\phi_{\lambda_j}}(w)=\areaf_\phi(\delta_{1/\lambda_j}w)$. This is sufficient to conclude.
\end{proof}
In the following lemma, given a covector $\alpha\in\text{\Large$\wedge$}^1 V_1$ we consider the homogeneous morphism
\begin{equation}\label{eq_Lalfa}
L_\alpha:\HH^n\to\R,\quad L_\alpha(p):=\alpha(p)
\end{equation}
obtained by identifying $\HH^n$ with $V$ and setting $L_\alpha|_{V_2}:=0$. Observe that $dL_\alpha=\alpha$, where the 1-covector $\alpha$ is identified with a left-invariant 1-form. Moreover, given a $C^1_\HH$ submanifold $S$ of codimension $m< n$ and a 1-form $\alpha$, we denote by $\cur S\hel\alpha$ the Heisenberg $(2n-m)$-current defined by
\[
\cur S\hel\alpha( \omega) =\int_S\langle t^\HH_S|\alpha\wedge\omega\rangle\dd\sphH^{Q-m},\qquad\omega\in\DH^{2n-m}.
\]
Clearly, when $\alpha$ is smooth this is equivalent to $\cur S\hel\alpha( \omega) =\cur S(\alpha \wedge \omega)$; observe that if $\omega\in \DH^{2n-m}$, then $\alpha\wedge \omega \in \DH^{2n+1-m}$ by definition of Heisenberg forms and because $m<n$.
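Indeed, since $m<n$ both degrees $2n-m$ and $2n+1-m$ are at least $n+1$, and if $\omega\wedge\theta=\omega\wedge d\theta=0$ then also $(\alpha\wedge\omega)\wedge\theta=\alpha\wedge(\omega\wedge\theta)=0$ and $(\alpha\wedge\omega)\wedge d\theta=\alpha\wedge(\omega\wedge d\theta)=0$, so that $\alpha\wedge\omega$ is a compactly supported Heisenberg form of degree $2n+1-m$.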
\begin{lemma}\label{lem607d90a0}
Let $\W\subset\HH^n$ be a homogeneous subgroup of codimension $m< n$.
Let $u:\W\to\R$ be a measurable function and let $\alpha\in\text{\Large$\wedge$}^1 V_1$ be such that
$\de_c(u\cur \W)=-\cur\W\hel\alpha$, where we identify the covector $\alpha$ with a left-invariant 1-form.
Then there exists $c\in\R$ such that $u(w) = c+ L_\alpha(w)$ for $\sphH^{Q-m}$-a.e.~$w\in\W$.
\end{lemma}
\begin{proof}
If $\alpha=0$ this is a consequence of the Constancy Theorem in~\cite[Theorem~1.7]{2020arXiv200714286V}.
If $\alpha\neq 0$, we use the fact that $\de_c\cur\W=0$ (see e.g.~\cite[Proposition~1.9]{2020arXiv200714286V}) to deduce that for every $\omega\in\DH^{2n-m}$
\begin{align*}
0=\cur\W(d(L_\alpha\omega))=\cur\W(\alpha\wedge\omega+L_\alpha d\omega)=(\cur\W\hel\alpha)(\omega)+(L_\alpha\cur\W)(d\omega),
\end{align*}
i.e., $\de_c(L_\alpha\cur\W)=-\cur\W\hel\alpha$.
This implies that $\de_c((u-L_\alpha)\cur\W)=0$ and the statement follows from the Constancy Theorem again.
\end{proof}
\begin{remark}\label{rem_constifucont}
Clearly, when $u$ is continuous the constant $c$ provided by Lemma~\ref{lem607d90a0} is $c=u(0)$.
\end{remark}
\begin{lemma}\label{lem607c0430}
Let $S\subset\HH^n$ be a $C^1_\HH$ submanifold of codimension $m< n$
and $u:S\to\R$ be a Lipschitz function.
Then there exists a 1-form $\alpha\in L^\infty(S,\text{\Large$\wedge$}^1V_1)$ such that
\begin{equation}\label{eq607c0436}
\de_c(u\cur{S}) (\omega)
= -\cur S\hel\alpha(\omega)\qquad\forall\:\omega\in\DH^{2n-m}\text{ such that }\spt\:\omega\subset\HH^n\setminus\de S.
\end{equation}
If $\alpha_1$ and $\alpha_2$ both satisfy~\eqref{eq607c0436},
then $\alpha_1(p)|_{T^\HH_pS} = \alpha_2(p)|_{T^\HH_pS}$, for $\sphH^{Q-m}$-a.e.~$p\in S$.
\end{lemma}
\begin{proof}
By the McShane-Whitney extension theorem we can extend $u$ to a Lipschitz function $\HH^n\to\R$.
Let $(u_j)_j$ be a sequence of smooth functions\footnote{These functions can be easily produced e.g. by group convolution.} that converge uniformly to $u$ and such that the Lipschitz constant of $u_j$ is bounded uniformly in $j$.
Write $d_\HH u_j:=\sum_{i=1}^n(X_iu_j)dx_i+(Y_iu_j)dy_i$; the uniform Lipschitz continuity of $u_j$ implies that $d_\HH u_j$ is uniformly bounded, hence (up to passing to a subsequence) there exists $\alpha\in L^\infty(S;\text{\Large$\wedge$}^1V_1)$ such that $d_\HH u_j$ converges weakly-* to $ \alpha$ in $L^\infty(S;\text{\Large$\wedge$}^1V_1)$. Let us prove that~\eqref{eq607c0436} holds for such $\alpha$.
Let $\omega\in\DH^{2n-m}$ be such that $\spt\:\omega\subset\HH^n\setminus\de S$; by using Remark~\ref{rem_boundarylocally0} and a standard partition-of-unity argument one can prove that there exists an open neighborhood $\Omega$ of $\spt\:\omega$ such that $(\de_c\cur S )\hel \Omega=0$.
Noticing that $du_j=d_\HH u_j+(Tu_j)\theta$ we have
\begin{equation}\label{eq_trattore}
\begin{split}
\de_c(u\cur{S})(\omega)
&= (u\cur{S})(d\omega)
= \lim_{j\to\infty} (u_j\cur{S})(d\omega)
= \lim_{j\to\infty} \cur{S}(u_j d\omega)\\
&= \lim_{j\to\infty} \cur{S}( d(u_j\omega)-du_j\wedge\omega)
=- \lim_{j\to\infty} \cur{S}(d_\HH u_j\wedge\omega),
\end{split}
\end{equation}
where we used the equalities $(\de\cur S )\hel \Omega = (\de_c\cur S )\hel \Omega=0$ and $\omega\wedge\theta=0$. Therefore
\begin{align*}
\de_c(u\cur{S})(\omega)
&=-\lim_{j\to\infty} \int_S \langle t^\HH_S | (\dd_\HH u_j)\wedge\omega \rangle \dd\sphH^{Q-m}
=- \int_S \langle t^\HH_S | \alpha\wedge\omega \rangle \dd\sphH^{Q-m},
\end{align*}
which is~\eqref{eq607c0436}.
As for the last statement, let us introduce the following standard notation: if $t\in\text{\Large$\wedge$}_kV$ and $\alpha\in\text{\Large$\wedge$}^1 V$, then $t\lrcorner\alpha$ denotes the element of $\text{\Large$\wedge$}_{k-1}V$ defined for each $\omega\in \text{\Large$\wedge$}^{k-1}V$ by $\langle t\lrcorner\alpha|\omega \rangle=\langle t|\alpha\wedge\omega \rangle$.
It is now enough to observe that the equality $t^\HH_S\lrcorner(\alpha_1-\alpha_2)=0$ holds $\sphH^{Q-m}$-a.e. on $S$, and the statement follows.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm607bf499}]
Passing to the components of $u:S\to\R^\ell$ separately, we can assume $\ell=1$.
Let $\alpha$ be as in Lemma~\ref{lem607c0430}.
Since $\sphH^{Q-m}\hel S $ is locally $(Q-m)$-Ahlfors regular, for $\sphH^{Q-m}$-a.e. $p\in S$ we have
\begin{equation}\label{eq_Lebesguepoint}
\lim_{r\to0^+} \frac{1}{r^{Q-m}}\int_{S\cap B(p,r)}|\alpha-\alpha(p)|\dd\sphH^{Q-m}
= 0.
\end{equation}
We fix such a $p$ and prove that $u$ is Pansu differentiable along $S$ at $p$ with differential (recall~\eqref{eq_Lalfa}) $D_\HH^Su_p=L_{\alpha(p)}|_{T^\HH_pS}$, which is uniquely defined by Lemma~\ref{lem607c0430};
this will be enough to conclude.
For $\lambda>0$, let $U,\V,\phi,\phi_\lambda$ and $v_\lambda$ be as in Proposition~\ref{prop607d3565}~\ref{item607d35b3}; let also $S_\lambda$ and $u_\lambda$ be as in Lemma~\ref{lem607d9517}.
By Proposition~\ref{prop607d3565}, we have to prove that $v_\lambda$ converges to $L_{\alpha(p)}$ locally uniformly on $T^\HH_pS$; to this end, we assume that $\lambda_j\to\infty$ is a sequence such that the functions $v_{\lambda_j}$ converge locally uniformly to some map $v:T^\HH_pS\to\R$ and we prove that $v=L_{\alpha(p)}|_{T^\HH_pS}$.
The existence of converging subsequences for the family $(v_\lambda)_\lambda$ follows from a standard Ascoli-Arzel\`a argument and the uniform continuity of the maps $(\phi_\lambda)_\lambda$, see~\cite[Proposition~3.8]{FS_JGA}.
For $\omega\in\DH^{2n-m}$ we have
\begin{equation*}
\begin{split}
(u_{\lambda_j}\cur{S_{\lambda_j}})(d\omega)
&= \int_{S_{\lambda_j}}{\lambda_j} (u(p\delta_{1/{\lambda_j}}x)-u(p))\ \langle t^\HH_{S_{\lambda_j}}(x)|d\omega(x)\rangle\dd\sphH^{Q-m}(x)\\
&= \lambda_j^{Q-m}\int_S (u(y)-u(p))\;\langle t^\HH_{S}(y)|{\lambda_j}(d\omega)(\delta_{\lambda_j}(p^{-1}y))\rangle\dd\sphH^{Q-m}(y)\\
&= \lambda_j^{Q-m}\int_S (u(y)-u(p))\;\langle t^\HH_{S}(y)|d(\omega\circ L_{p^{-1},\lambda_j})(y)\rangle\dd\sphH^{Q-m}(y),
\end{split}
\end{equation*}
where we set $L_{p^{-1},\lambda}(y):=\delta_{\lambda}(p^{-1}y)$ and used~\eqref{eq_commute}. For large enough $j$ the test form $d(\omega\circ L_{p^{-1},\lambda_j})$ has support in $\HH^n\setminus \de S$: this gives $(\de\cur S)(\omega\circ L_{p^{-1},\lambda_j})=0$, thus
\begin{equation*}
\begin{split}
(u_{\lambda_j}\cur{S_{\lambda_j}})(d\omega)
&= \lambda_j^{Q-m}\int_S u(y)\;\langle t^\HH_{S}(y)|d(\omega\circ L_{p^{-1},\lambda_j})(y)\rangle\dd\sphH^{Q-m}(y)\\
&=\lambda_j^{Q-m}\de(u\cur S)(\omega\circ L_{p^{-1},\lambda_j}).
\end{split}
\end{equation*}
The definition of $\alpha$ (Lemma~\ref{lem607c0430}) yields
\begin{equation*}
\begin{split}
(u_{\lambda_j}\cur{S_{\lambda_j}})(d\omega)
&= -\lambda_j^{Q-m}\int_S \langle t^\HH_{S}(y)|\alpha(y)\wedge(\omega\circ L_{p^{-1},\lambda_j})(y)\rangle\dd\sphH^{Q-m}(y)
\end{split}
\end{equation*}
and, if $R>0$ is such that $\spt\:\omega\subset B(0,R)$, we obtain from~\eqref{eq_Lebesguepoint}
\begin{equation*}
\begin{split}
(u_{\lambda_j}\cur{S_{\lambda_j}})(d\omega)
&= -\lambda_j^{Q-m}\int_{S\cap B(p,R/\lambda_j)} \langle t^\HH_{S}(y)|\alpha(y)\wedge\omega(\delta_{\lambda_j}(p^{-1}y))\rangle\dd\sphH^{Q-m}(y)\\
&= -\lambda_j^{Q-m}\int_{S\cap B(p,R/\lambda_j)} \langle t^\HH_{S}(y)|\alpha(p)\wedge\omega(\delta_{\lambda_j}(p^{-1}y))\rangle\dd\sphH^{Q-m}(y) +o(1).
\end{split}
\end{equation*}
We now use Lemma~\ref{lem607d9517} to deduce that, for every test form $\omega\in\DH^{2n-m}$,
\begin{equation*}
\begin{split}
\de(v\cur{T^\HH_pS})(\omega)
=\,& v\cur{T^\HH_pS}(d\omega)=\lim_{j\to\infty}u_{\lambda_j}\cur{S_{\lambda_j}}(d\omega)\\
=\,&- \lim_{j\to\infty} \lambda_j^{Q-m}\int_{S} \langle t^\HH_{S}(y)|\alpha(p)\wedge\omega(\delta_{\lambda_j}(p^{-1}y))\rangle\dd\sphH^{Q-m}(y)\\
=\,& - \lim_{j\to\infty}\int_{S_{\lambda_j}} \langle t^\HH_{S_{\lambda_j}}(x)|\alpha(p)\wedge\omega(x)\rangle\dd\sphH^{Q-m}(x)\\
=\,& - \lim_{j\to\infty}\int_{\delta_{\lambda_j}U} \langle t^\HH_{S_{\lambda_j}}(w\phi_{\lambda_j}(w))|\alpha(p)\wedge\omega(w\phi_{\lambda_j}(w))\rangle\areaf_{\phi_{\lambda_j}}(w)\dd\sphH^{Q-m}(w)\\
=\,& -\int_{T^\HH_pS} \langle t^\HH_{S}(p)|\alpha(p)\wedge\omega\rangle\dd\sphH^{Q-m},
\end{split}
\end{equation*}
where we used Remark~\ref{rem_areafactoris1} and the fact that the area factor verifies $\areaf_{\phi_{\lambda_j}}(w)=\areaf_\phi(\delta_{1/\lambda_j}w)$. We have therefore proved that $\de(v\cur{T^\HH_pS})=-\cur{T^\HH_pS}\hel\alpha(p)$; since $v_\lambda(0)=0$ for every positive $\lambda$, we obtain $v(0)=0$, and Lemma~\ref{lem607d90a0} (together with Remark~\ref{rem_constifucont}) implies that $v=L_{\alpha(p)}$ on $T^\HH_pS$, as claimed.
Since the family $(v_\lambda)_\lambda$ is precompact with respect to locally uniform convergence and every convergent subsequence has the same limit $L_{\alpha(p)}$, we conclude that $v_\lambda\to L_{\alpha(p)}$ locally uniformly as $\lambda\to\infty$; by Proposition~\ref{prop607d3565}, this completes the proof.
\end{proof}
The following result, which we state without proof, is a standard consequence of Theorem~\ref{thm607bf499} together with the Rademacher Theorem for intrinsic Lipschitz graphs in Heisenberg groups~\cite{2020arXiv200714286V}.
We do not recall here the definition of {\em intrinsic Lipschitz graphs} in Heisenberg groups: see e.g.~\cite{2020arXiv200714286V}.
\begin{corollary}\label{cor_RademacherLipgr}
Let $\Gamma\subset\HH^n$ be an intrinsic Lipschitz graph of codimension $m<n$ and let $u:\Gamma\to\R^\ell$ be Lipschitz continuous; then, for $\sphH^{Q-m}$-a.e. $p\in \Gamma$ there exists a homogeneous morphism $L=L(p):\HH^n\to\R^\ell$ such that
\[
\lim_{\substack{q\to p\\q\in \Gamma}} \frac{|u(q) - u(p) - L(p^{-1}q)|}{d(p,q)} = 0.
\]
Moreover, the restriction $L(p)|_{T^\HH_p\Gamma}$ is uniquely defined.
\end{corollary}
A version of Theorem~\ref{thm607bf499} for $\HH$-rectifiable sets reads as follows.
\begin{corollary}\label{cor_RademacherHrectifiablesets}
Let $R\subset\HH^n$ be countably $\HH$-rectifiable of codimension $m<n$ and let $u:R\to\R^\ell$ be Lipschitz continuous; then, for $\sphH^{Q-m}$-a.e. $p\in R$ there exists a unique homogeneous morphism $D^R_\HH u_p:T^\HH_pR \to\R^\ell$ such that the following holds. If $\tilde u:\HH^n\to\R^\ell$ is a Lipschitz continuous function such that $\tilde u|_R=u$, then,
for $\sphH^{Q-m}$-a.e.~$p\in R$,
\begin{equation}\label{eq_differentiabilityonHrectifiable}
\lim_{\substack{q\to p\\q\,\in\, p\,T^\HH_pR}} \frac{|\tilde u(q) - \tilde u(p) - D^R_\HH u_p(p^{-1}q)|}{d(p,q)} = 0.
\end{equation}
\end{corollary}
\begin{proof}
Using the notation of approximate tangent space $T^\HH_pR$ in Definition~\ref{def60af5092},
Theorem~\ref{thm607bf499} claims that,
for every $i\in\N$, there is a $\sphH^{Q-m}$-null set $N_i\subset S_i$ so that $\tilde u$ is tangentially Pansu differentiable along $S_i$ at every $p\in S_i\setminus N_i$.
Therefore, for $\sphH^{Q-m}$-a.e.~$p\in R$, there is a $C_\HH^1$-submanifold $S_i$ such that $p\in S_i\setminus N_i$ and $T^\HH_pR=T^\HH_pS_i$.
Then~\eqref{eq_differentiabilityonHrectifiable} follows from item~\ref{item607d35b9} of Proposition~\ref{prop607d3565}.
\end{proof}
\begin{remark}
In~\eqref{eq_differentiabilityonHrectifiable}, the restriction to points $q$ in the affine tangent plane $p\,T^\HH_pR$ is necessary: this is a phenomenon that occurs also in Euclidean geometry. Consider in fact a sequence $(S_i)_{i\in\N}$ of segments in the plane $\R^2$ such that
\[
\text{$S_0$ joins $(0,0)$ and $(1,0)\qquad$and$\qquad R:=\bigcup_{i\in\N}S_i$ is dense in $\R^2$.}
\]
We can also assume that $\scr H^1(R)<\infty$, so that $R$ is 1-rectifiable. Consider the Lipschitz function $u(x,y)=|y|$; then, the density of $R$ implies that for every $p\in S_0$ there exists no linear map $L:\R^2\to\R$ such that
\[
\lim_{\substack{q\to p\\q\,\in\, R}} \frac{| u(q) - u(p) - L(q-p)|}{|q-p|} = 0.
\]
A way to circumvent this problem is to use the notion of \textit{approximate differentiability}.
\end{remark}
\section{Proof of Theorem~\ref{thm607bf4ef}}\label{sec_proofThmB}
The fundamental tool we use for proving Theorem~\ref{thm607bf4ef} is the Whitney Extension Theorem~\cite[Theorem~6.8]{MR1871966}. We denote by $\HM$ the space of homogeneous morphisms $L:\HH^n\to\R^\ell$ endowed with the natural topology induced (for instance) by the distance
\[
\rho(L,L'):=\sup\{|L(p)-L'(p)|:p\in B(0,1)\}\quad L,L'\in\HM.
\]
Recall also that, for every $L\in\HM$, there exists a linear map $M_L:\R^{2n}\to\R^\ell$ such that $L(p)=M_L(p_1,\dots,p_{2n})$ for every $p=\exp(p_1X_1+\dots+p_{2n}Y_n+p_{2n+1}T)\in\HH^n$: with this identification, the Whitney Extension Theorem can be written as follows.
\begin{theorem}[{\cite[Theorem 6.8]{MR1871966}}]\label{thm_Whitney}
Let $F\subset\HH^n$ be a closed set and let $u:F\to\R^\ell$ and $L:F\to\HM$ be continuous; assume that for every compact set $K\subset F$
\[
\lim_{r\to 0^+} \sup\left\{\frac{|u(q)-u(p)-L(p)(p^{-1}q)|}{d(p,q)}:p,q\in K, 0<d(p,q)<r \right\}=0.
\]
Then, there exists $\tilde u\in C^1_\HH(\HH^n;\R^\ell)$ such that $\tilde u|_F=u$ and $D_\HH\tilde u=L$ on $F$.
\end{theorem}
\begin{remark}\label{rem_WhitneyLipschitz}
Although not explicitly stated in~\cite[Theorem~6.8]{MR1871966}, the following fact is a consequence of the construction performed in its proof: if $u$ is Lipschitz continuous on $F$, then the $C^1_\HH$ extension $\tilde u:\HH^n\to\R^\ell$ can be chosen to be also Lipschitz continuous. Moreover, the Lipschitz constant of $\tilde u$ is controlled from above in terms of $n$ and of the Lipschitz constant of $u$ only.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm607bf4ef}]
Extend $u$ to a Lipschitz $\R^\ell$-valued function defined on the whole $\HH^n$; by Lemma~\ref{lem_oneSisenough} it is not restrictive to assume that $R$ is actually a $C^1_\HH$ submanifold $S$ of codimension $m$. By Theorem~\ref{thm607bf499} and Lemma~\ref{lem_Borel}, the set $D\subset S$ of points where $u$ is tangentially Pansu differentiable along $S$ is a Borel set such that $\sphH^{Q-m}(S\setminus D)=0$. By the standard Lusin Theorem, there exists a closed set $C\subset D$ such that $\sphH^{Q-m}(S\setminus C)<\varepsilon/2$ and the restriction $\nabla_\HH^S u|_C:C\to (V_1)^\ell$ is continuous. Using the notation $q_H$ and $\cdot$ introduced before Lemma~\ref{lem_Borel}, the continuous map $L:C\to\HM$ defined by
\[
L(p)(q):=q_H\cdot\nabla_\HH^S u(p)\qquad\text{for every }p\in C,q\in\HH^n
\]
has the property that, for every $p\in C$,
\begin{equation}\label{eq_limiteperWhitney}
\lim_{\substack{q\to p,\\ q\in C}} \frac{|u(q)-u(p)-L(p)(p^{-1}q)|}{d(p,q)}=0.
\end{equation}
By the Severini-Egorov Theorem, there exists a closed set $F\subset C$ such that $\sphH^{Q-m}(S\setminus F)<\varepsilon$ and the convergence in~\eqref{eq_limiteperWhitney} is uniform on compact subsets of $F$.
To conclude the proof, it suffices to apply Theorem~\ref{thm_Whitney} and recall Remark~\ref{rem_WhitneyLipschitz}.
\end{proof}
\section{Proof of Theorem~\ref{thm607bf4fe}}\label{sec_proofThmC}
We recall that a homogeneous distance $d$ on $\HH^n$ is {\em rotationally invariant}
if
\begin{equation}\label{eq:seba1}
d(0,(x,y,t))=d(0,(x',y',t))\qquad\text{whenever }|(x,y)|=|(x',y')|,
\end{equation}
where $|\cdot|$ is the Euclidean norm in $\R^{2n}$.
\begin{proof}[Proof of Theorem~\ref{thm607bf4fe}]
By standard arguments, we can without loss of generality assume that $R$ is a $C^1_\HH$ submanifold $S$ of codimension $m$. By Theorem~\ref{thm607bf4ef}, for every positive integer $i$ there exists $g_i\in C^1_\HH(\HH^n;\R^\ell)$ such that
\[
\sphH^{Q-m}(B_i)<2^{-i-1},\quad\text{where }B_i:=\{p\in S: u(p)\neq g_i(p)\text{ or }D_\HH^Su(p)\neq D_\HH^S{g_i}(p)\}.
\]
Moreover, by Remark~\ref{rem_WhitneyLipschitz} we can assume that the Lipschitz constants of $g_i$ are uniformly bounded. Let $C_i:=\cup_{j\geq i}B_j\subset S$ and $C_\infty:=\cap_{i} C_i$; observe that $\sphH^{Q-m}(C_i)<2^{-i}$ and $\sphH^{Q-m}(C_\infty)=0$. By the coarea formula in~\cite[Theorem 1.7]{JNGV} we obtain for every Borel function $h:S\to[0,+\infty)$
\begin{align*}
&\int_S \chi_{S\setminus C_i}(p)h(p)\coarea(T^\HH_pS,D_\HH^S {g_i}_p) \, \dd\sphH^{Q-m} (p)\\
= & \int_{\R^\ell} \int_{S\cap g_i^{-1}(s)} \chi_{S\setminus C_i}h\dd\sphH^{Q-m-\ell}\,\dd\mathscr L^\ell(s),
\end{align*}
where $\chi_{S\setminus C_i}$ is the characteristic function of $S\setminus C_i$ (which is a Borel subset of $S$) and $\coarea$ denotes the (continuous) coarea factor introduced in~\cite[Proposition~4.5]{JNGV}. The previous formula is the same as
\begin{align*}
\int_{S\setminus C_i}h(p)\coarea(T^\HH_pS,D_\HH^S u_p) \, \dd\sphH^{Q-m} (p)
= \int_{\R^\ell} \int_{(S\setminus C_i)\cap u^{-1}(s)} h\dd\sphH^{Q-m-\ell}\,\dd\mathscr L^\ell(s).
\end{align*}
Recalling that $\sphH^{Q-m}(C_\infty)=0$ and that $S\setminus C_i\nearrow S\setminus C_\infty$ as $i\to\infty$, by monotone convergence we obtain
\begin{align*}
\int_{S}h(p)\coarea(T^\HH_pS,D_\HH^S u_p) \, \dd\sphH^{Q-m} (p) = &
\int_{S\setminus C_\infty}h(p)\coarea(T^\HH_pS,D_\HH^S u_p) \, \dd\sphH^{Q-m} (p)\\
= & \int_{\R^\ell} \int_{(S\setminus C_\infty)\cap u^{-1}(s)} h\dd\sphH^{Q-m-\ell}\,\dd\mathscr L^\ell(s)\\
= & \int_{\R^\ell} \int_{S\cap u^{-1}(s)} h\dd\sphH^{Q-m-\ell}\,\dd\mathscr L^\ell(s).
\end{align*}
In the last equality we used the fact that $\sphH^{Q-m-\ell}( C_\infty\cap u^{-1}(s))=0$ for $\mathscr L^\ell$-a.e. $s\in\R^\ell$: this is a consequence of the
{\it coarea inequality} (see e.g.~\cite[Lemma~4.3]{JNGV} and the references therein), which implies that for a suitable $K>0$
\[
\int_{\R^\ell} \sphH^{Q-m-\ell}( C_\infty\cap u^{-1}(s))\,\dd\mathscr L^\ell(s) \leq K\sphH^{Q-m}(C_\infty)=0.
\]
In order to prove the last statement in Theorem~\ref{thm607bf4fe}, it is enough to reason as above and use the coarea formula proved in~\cite[Theorem 1.7]{JNGV} for rotationally invariant distances.
This concludes the proof.
\end{proof}
\printbibliography
\end{document}
|
1,314,259,995,571 | arxiv | \section{Introduction}
\vspace{-1.25mm}
The properties of vector mesons at finite baryon density, such as their mass
and decay width, have attracted considerable experimental and theoretical
interest over the last few decades~\cite{vectormesonsinnuclmatt}, in part
due to their potential to carry information on the partial restoration of
chiral symmetry and the possible role of QCD van der Waals forces in the
binding of quarkonia to nuclei~\cite{vanderwaals}.
However, a unified experimental consensus has not yet been reached for the
$\phi$ meson~\cite{phipptiesnuclmatt}, and further studies are
needed~\cite{JPARCE29Proposal,JLabphiProposal}.
The study of, for example, the $\phi$--nucleus bound
states~\cite{JPARCE29Proposal, JLabphiProposal} is expected to provide
information on the $\phi$ properties at finite density, since a downward
mass shift of the $\phi$ in a nucleus is directly connected with the
existence of an attractive potential between the $\phi$ and the nucleus
where it has been produced.
Various authors predict a small downward shift of
the in-medium $\phi$ mass and a large broadening of its decay width~\cite{phipptiestheory} at normal nuclear matter density.
In Ref.~\cite{Cobos-Martinez:2017vtr} we computed the $\phi$ mass shift and
decay width in nuclear matter by evaluating the $K\overline{K}$ loop
contribution to the $\phi$ self-energy, with the in-medium $K$ and
$\overline{K}$ masses calculated using the quark-meson coupling
(QMC) model~\cite{Tsushima:1997df}. This study was extended
in Ref.~\cite{Cobos-Martinez:2017woo} by computing the $\phi$--nucleus bound
state energies and absorption with complex potentials. Results for the
$^{197}\text{Au}$ nucleus are presented here for the first time. Furthermore, we
update results for the $J/\Psi$ vector meson, also adding the $^{197}\text{Au}$
nucleus for the first time.
\section{$\Phi$-meson in nuclear matter and $\Phi$-meson--nucleus bound states}
We compute the $\phi$ self-energy $\Pi_{\phi}$ in vacuum and in nuclear matter~\cite{Cobos-Martinez:2017vtr} using an effective Lagrangian approach,
considering only the $\phi K\overline{K}$ vertex~\cite{Cobos-Martinez:2017vtr},
since we expect that a large fraction of the density dependence of $\Pi_{\phi}$
arises from the in-medium modification of the $K\overline{K}$ intermediate state,
\begin{equation}
\label{eqn:Lpkk}
\mathcal{L}_{\phi K\overline{K}}=\mi g_{\phi}\phi^{\mu}
[\overline{K}(\partial_{\mu}K)-(\partial_{\mu}\overline{K})K],
\end{equation}
\noindent where $K$ and $\overline{K}$ are isospin doublets and $\phi^{\mu}$ is
the $\phi$ meson vector field.
The contribution from \eqn{eqn:Lpkk} to $\Pi_{\phi}(p)$ is given by
\begin{equation}
\label{eqn:phise}
\mi\Pi_{\phi}(p)=-(8/3)g_{\phi}^{2}\int\dfd{4}{4}{q}\vec{q}^{\,2}
D_{K}(q)D_{K}(q-p),
\end{equation}
\noindent where $D_{K}(q)=1/(q^{2}-m_{K}^{2}+\mi\epsilon)$ is the
kaon propagator; $p=(p^{0}=m_{\phi},\vec{0})$ for a $\phi$ at rest,
$m_{\phi}$ its mass; $m_{K} (=m_{\overline{K}})$ the kaon mass; and
$g_{\phi}= 4.539$~\cite{Cobos-Martinez:2017vtr} the coupling constant.
The integral in \eqn{eqn:phise} is divergent and will be regulated using a
dipole form factor, with cutoff parameter
$\Lambda_{K}$~\cite{Cobos-Martinez:2017vtr}. The dependence of our results
on the value of $\Lambda_{K}$ is studied below.
The mass and decay width of the $\phi$ in vacuum ($m_{\phi}$ and
$\Gamma_{\phi}$), as well as in nuclear matter ($m_{\phi}^{*}$ and
$\Gamma_{\phi}^{*}$), are determined~\cite{Cobos-Martinez:2017vtr} from
\begin{equation}
\label{eqn:phippties}
m_{\phi}^{2}=(m_{\phi}^{0})^{2}+\operatorname{Re}\Pi_{\phi}(m_{\phi}^{2}),
\quad \Gamma_{\phi}=-(1/m_{\phi})\operatorname{Im}\Pi_{\phi}(m_{\phi}^{2}).
\end{equation}
The density dependence of the $\phi$ mass and decay width is driven by the
interactions of the $K\overline{K}$ intermediate state with the nuclear medium, which
we calculate in the QMC model~\cite{Tsushima:1997df,Saito:2005rv}.
In \fig{fig:nuclmatt} (left panel) we present the in-medium kaon
Lorentz scalar mass as a function of the baryon density. At normal nuclear
matter density, $\rho_{0}= 0.15$ fm$^{-3}$, $m_{K}^{*}$ has decreased by 13\%.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[scale=0.203]{mK_in_medium.eps} &
\includegraphics[scale=0.177]{Dmphi_in_medium.eps} &
\includegraphics[scale=0.177]{Gphi_in_medium.eps}
\end{tabular}
\caption{\label{fig:nuclmatt} Left panel: In-medium kaon mass; center and right panels: $\phi$ mass shift and decay width.}
\end{center}
\end{figure}
In \fig{fig:nuclmatt} we present the $\phi$ mass shift (center panel)
and decay width (right panel) as a function of the nuclear matter density,
$\rho_{B}$, for three values of $\Lambda_{K}$.
For the largest value of $\rho_{B}$, the downward mass shift turns out to be
a few percent at most for all $\Lambda_{K}$. On the other hand,
$\Gamma_{\phi}^{*}$ depends strongly on the nuclear density, increasing
by up to a factor of $\approx 20$ for the largest value of $\rho_{B}$.
These results open the experimental possibility to study the binding and
absorption of the $\phi$ in nuclei.
We now investigate the situation where the $\phi$ meson is ``placed'' inside
a nucleus~\cite{Cobos-Martinez:2017woo}.
The nuclear density distributions for all nuclei but $^{4}$He are obtained
in the QMC model~\cite{Saito:1996sf}. For $^{4}$He we use
Ref.~\cite{Saito:1997ae}. Using a local density approximation, the $\phi$--nucleus potential for a nucleus $A$ is given by
\begin{equation}
\label{eqn:Vcomplex}
V_{\phi A}(r)= U_{\phi}(r)-(\mi/2)W_{\phi}(r),
\end{equation}
\noindent where $r$ is the distance from the center of the nucleus and
$U_{\phi}(r)=m^{*}_{\phi}(\rho_{B}^{A}(r))-m_\phi$ and $W_{\phi}(r)=\Gamma_{\phi}(\rho_{B}^{A}(r))$ are, respectively, the $\phi$
mass shift and decay width inside nucleus $A$, with $\rho_{B}^{A}(r)$
the baryon density distribution of nucleus $A$.
\begin{figure}
\centering
{\renewcommand{\arraystretch}{4.00}%
\begin{tabular}{ccc}
\includegraphics[scale=0.187]{Vphi_He4.eps} &
\includegraphics[scale=0.187]{Vphi_C12.eps} &
\includegraphics[scale=0.187]{Vphi_O16.eps} \\
\includegraphics[scale=0.187]{Vphi_Au197.eps} &
\includegraphics[scale=0.187]{Vphi_Pb208.eps} &
\includegraphics[scale=0.187]{Gphi_He4.eps} \\
\includegraphics[scale=0.187]{Gphi_C12.eps} &
\includegraphics[scale=0.187]{Gphi_O16.eps} &
\includegraphics[scale=0.187]{Gphi_Au197.eps} \\
\includegraphics[scale=0.187]{Gphi_Pb208.eps} \\
\end{tabular}}
\caption{\label{fig:phinuclpot} Real $U_{\phi}(r)$ and imaginary $W_{\phi}(r)$
parts of the $\phi$--nucleus potentials for various nuclei.}
\end{figure}
In \fig{fig:phinuclpot} we show the $\phi$ potentials for some selected nuclei. We note that the results for the $^{197}\text{Au}$ nucleus are
presented here for the first time. One can see that the depth of $U_{\phi}(r)$ is sensitive to $\Lambda_{K}$, but $W_{\phi}(r)$ is not.
Using these complex potentials, we calculate the $\phi$ single-particle
energies and absorption widths for various nuclei, considering the situation
where the $\phi$ is produced at rest. Then, under this condition, solving
the Proca equation becomes equivalent to solving the Klein-Gordon equation
\begin{equation}
\label{eqn:kge}
(-\nabla^{2} + \mu^{2} + 2\mu V(r))\phi(\vec{r})
= \mathcal{E}^{2}\phi(\vec{r}),
\end{equation}
where $\mu$ is the reduced mass of the system in vacuum, and $V (r)$ is
given by \eqn{eqn:Vcomplex}.
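As an illustration of how \eqn{eqn:kge} can be solved in practice in the $s$-wave case, we sketch below a minimal numerical procedure based on a finite-difference discretization of the reduced radial equation $-u''(r)+2\mu V(r)u(r)=(\mathcal{E}^{2}-\mu^{2})u(r)$, with $u(r)=r\phi(r)$, followed by diagonalization of the resulting complex (non-Hermitian) matrix. We stress that the Woods-Saxon-like profile and all numerical values in the snippet are illustrative placeholders only, and are {\em not} the QMC-derived potentials of \fig{fig:phinuclpot} used for the results reported below.
\begin{verbatim}
import numpy as np

hbarc = 197.327            # MeV fm
mu    = 960.0              # illustrative reduced mass (MeV), placeholder value
U0, W0, R0, a = -30.0, 30.0, 5.0, 0.5   # illustrative potential parameters

def V(r):
    # illustrative complex Woods-Saxon profile, NOT the QMC potential
    f = 1.0/(1.0 + np.exp((r - R0)/a))
    return (U0 - 0.5j*W0)*f

N, rmax = 2000, 25.0
r = np.linspace(rmax/N, rmax, N)
h = r[1] - r[0]

# -u'' + (2*mu*V/hbarc^2) u = eps u,  with eps = (E^2 - mu^2)/hbarc^2
main = 2.0/h**2 + 2.0*mu*V(r)/hbarc**2
off  = -np.ones(N - 1)/h**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eps = np.linalg.eigvals(H)              # complex eigenvalues (fm^-2)
E   = np.sqrt(mu**2 + eps*hbarc**2)     # complex energies (MeV)
for Ec in sorted(E[E.real < mu], key=lambda z: z.real):
    print("E = %7.2f MeV,  Gamma = %6.2f MeV" % (Ec.real - mu, -2.0*Ec.imag))
\end{verbatim}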
The calculated bound state energies ($E$) and absorption widths
($\Gamma$)~\cite{Cobos-Martinez:2017woo}, related to the complex
eigenvalue $\mathcal{E}$ by $E= \operatorname{Re}\mathcal{E}-\mu$ and
$\Gamma= -2\operatorname{Im}\mathcal{E}$, respectively, are given in \tab{tab:bsenergies}
with and without $W_{\phi}(r)$.
When $W_{\phi}(r)=0$ the $\phi$ is expected to form bound states with
all the nuclei studied (values in parentheses). However, $E$ depends on
$\Lambda_{K}$, the binding becoming deeper as $\Lambda_{K}$ increases. For $W_{\phi}(r)\ne 0$ the situation
changes considerably. Whether or not the bound states can be observed
experimentally is sensitive to the value of $\Lambda_{K}$. However, for the
largest value of $\Lambda_{K}$, which yields the deepest potentials, the
$\phi$ is expected to form bound states with all the nuclei studied.
However, since the so-called dispersive effect of the absorptive potential
is repulsive, the bound states disappear completely in some cases, even though
they were found when $W_{\phi}(r)=0$. This feature is obvious for the $^{4}$He
nucleus, making it especially relevant to the future experiments, planned at
J-PARC and JLab using light and medium-heavy
nuclei~\cite{JPARCE29Proposal,JLabphiProposal}.
\begin{table}
\centering
\begin{minipage}[t]{0.55\textwidth}
\centering
\addtolength{\tabcolsep}{-4pt}
\renewcommand{\arraystretch}{0.2}
\tiny
\begin{tabular}{ll|rr|rr|rr}
\hline \hline
& & \multicolumn{2}{c|}{$\Lambda_{K}=2000$} &
\multicolumn{2}{c}{$\Lambda_{K}=3000$} &
\multicolumn{2}{|c}{$\Lambda_{K}=4000$} \\
\hline
& $n\ell$ & $E$ & $\Gamma/2$ & $E$ & $\Gamma/2$ & $E$ & $\Gamma/2$ \\
\hline
$^{4}_{\phi}\text{He}$ & 1s & n (-0.8) & n & n (-1.4) & n & -1.0 (-3.2) & 8.3 \\
\hline
$^{12}_{\phi}\text{C}$ & 1s & -2.1 (-4.2) & 10.6 & -6.4 (-7.7) & 11.1 & -9.8
(-10.7) & 11.2 \\
\hline
$^{16}_{\phi}\text{O}$ & 1s & -4.0 (-5.9) & 12.3 & -8.9 (-10.0) & 12.5 & -12.6
(-13.4) & 12.4 \\
& 1p & n (n) & n & n (n) & n & n (-1.5) & n \\
\hline
$^{197}_{\phi}\text{Au}$
& 1s & -14.6 (-15.0) & 16.9 & -20.5 (-20.8) & 16.1 & -25.0 (-25.2) & 15.5 \\
& 1p & -10.9 (-11.6) & 16.2 & -16.7 (-17.2) & 15.5 & -21.1 (-21.4) & 15.0 \\
& 1d & -6.4 (-7.5) & 15.2 & -12.0 (-12.7) & 14.8 & -16.3 (-16.7) & 14.4 \\
& 2s & -4.6 (-6.1) & 14.6 & -10.1 (-11.0) & 14.3 & -14.3 (-14.9) & 14.0 \\
& 2p & n (-1.3) & n & -3.9 (-5.3) & 13.0 & -7.9 (-8.8) & 12.9 \\
& 2d & n (n) & n & n (n) & n & -1.1 (-2.7) & 11.4 \\
\hline
$^{208}_{\phi}\text{Pb}$ & 1s & -15.0 (-15.5) & 17.4 & -21.1 (-21.4) & 16.6 &
-25.8 (-26.0) & 16.0 \\
& 1p & -11.4 (-12.1) & 16.7 & -17.4 (-17.8) & 16.0 & -21.9 (-22.2) & 15.5 \\
& 1d & -6.9 (-8.1) & 15.7 & -12.7 (-13.4) & 15.2 & -17.1 (-17.6) & 14.8 \\
& 2s & -5.2 (-6.6) & 15.1 & -10.9 (-11.7) & 14.8 & -15.2 (-15.8) & 14.5 \\
& 2p & n (-1.9) & n & -4.8 (-6.1) & 13.5 & -8.9 (-9.8) & 13.4 \\
& 2d & n (n) & n & n (-0.7) & n & -2.2 (-3.7) & 11.9 \\
\hline \hline
\end{tabular}
\end{minipage}\quad
\begin{minipage}[t]{0.425\textwidth}
\centering
\addtolength{\tabcolsep}{-4pt}
\renewcommand{\arraystretch}{0.72}
\tiny
\begin{tabular}{ll|r|r|r}
\hline \hline
& & \multicolumn{3}{c}{Cutoff $\Lambda_{D}$} \\
\hline
& $n\ell$ & 2000 & 4000 & 6000\\
\hline
& & $E$ & $E$ & $E$ \\
\hline
$^{4}_{J/\Psi}\text{He}$
& 1s & n & -0.70 & -5.52 \\
\hline
$^{12}_{J/\Psi}\text{C}$
& 1s & -0.53 & -4.47 & -11.28 \\
\hline
$^{16}_{J/\Psi}\text{O}$
& 1s & -1.03 & -5.73 & -13.12 \\
\hline
$^{197}_{J/\Psi}\text{Au}$
& 1s & -4.09 & -10.49 & -19.09 \\
& 1p & -2.98 & -9.18 & -17.64 \\
& 1d & -1.66 & -7.53 & -15.80 \\
& 2s & -1.23 & -6.87 & -15.00 \\
& 1f & -0.20 & -5.64 & -13.66 \\
\hline
$^{208}_{J/\Psi}\text{Pb}$
& 1s & -4.26 & -10.84 & -19.67 \\
& 1p & -3.16 & -9.53 & -18.23 \\
& 1d & -1.84 & -7.91 & -16.41 \\
& 2s & -1.41 & -7.26 & -15.64 \\
& 1f & -0.39 & -6.04 & -14.30 \\
& 2p & -0.05 & -5.11 & -13.18 \\
\hline \hline
\end{tabular}
\end{minipage}
\caption{\label{tab:bsenergies}$\Phi$- and $J/\Psi$-nuclear bound state
energies ($E$) and absorption widths ($\Gamma$).
Units are in MeV.}
\end{table}
\section{Nuclear-bound $J/\Psi$}
Following the same procedure as in the $\phi$ meson case, here we update
results for the $J/\Psi$-nuclear bound states, considering only the lightest
intermediate state in the $J/\Psi$ self-energy, namely the $D\overline{D}$
loop.
In the original studies~\cite{JPsiBoundStates}, the $J/\Psi$ self-energy
intermediate states involved the $D$, $\overline{D}$, $D^{*}$, and
$\overline{D^{*}}$ mesons. However, it turned out that the $J/\Psi$
self-energy has larger contributions from the loops involving the $D^{*}$,
and $\overline{D^{*}}$ mesons, which is unexpected; see Krein {\it et al}
in Ref.~\cite{vectormesonsinnuclmatt} for details on the issues involved.
In \tab{tab:bsenergies}, right panel, we present our updated results for
the $J/\Psi$-nuclear bound states, adding also the $^{197}\text{Au}$
nucleus for the first time.
We note that we have set the strong interaction width of the $J/\Psi$ to
zero~\cite{JPsiBoundStates}, and therefore the $J/\Psi$ potentials are real
for all nuclei.
From these results, we expect that the $J/\Psi$ meson will form nuclear
bound states for nearly all the nuclei considered, with the exception of some
cases for $^{4}$He, and that the signal for the formation should be experimentally
very clear, provided that the $J/\Psi$ meson is produced in recoilless
kinematics.
Thus, it will be possible to search for the bound states in a $^{208}$Pb nucleus at JLab after the 12 GeV upgrade.
\section{Summary}
We have presented results for the $\phi$- and $J/\Psi$-nuclear bound states,
where the vector meson potentials in nuclei have been obtained in the local
density approximation from the vector meson self-energy in nuclear matter.
The in-medium $K$ and $D$ masses as well as the nuclear density
distributions for all nuclei but $^{4}$He are obtained in the QMC model.
From our results, we expect that the $\phi$ and $J/\Psi$ vector mesons should form bound states for all five nuclei studied, provided that these vector mesons are produced in (nearly) recoilless kinematics.
|
1,314,259,995,572 | arxiv | \section{Introduction}
Transmitting power over long distances with minimal losses is one of the greatest challenges in today's power transmission systems. The strongly rising share of renewables has increased the distances between power generation and consumption, which is a driving factor behind long-distance power transmission. One such example is large-scale off-shore wind farms, which often require power to be transmitted in cables over long distances to the mainland power grid \cite{breseti2007HVDC}. High-voltage direct current (HVDC) power transmission is a commonly used technology for long-distance power transmission. Its higher investment costs compared to AC transmission lines are compensated by its lower resistive losses for sufficiently long distances \cite{melhem2013electricity}. The break-even point, i.e., the point where the total construction and operation costs of overhead HVDC and AC lines are equal, is typically 500-800 km \cite{padiyar1990hvdc}. However, for cables, the break-even point is typically less than 50 km \cite{Hertem2010technical}. Increased use of HVDC for electrical power transmission suggests that future HVDC transmission systems are likely to consist of multiple terminals connected by several HVDC transmission lines \cite{Haileselassie2013Power}. Such systems are referred to as Multi-terminal HVDC (MTDC) systems in the literature. The main technical obstacle to overcome in order to realize such MTDC systems is the development of a DC breaker \cite{Franck2011HVDC}. There are a few advanced ideas to realize this device in the near future \cite{callavik2012hybrid}.
Maintaining an adequate DC voltage is the single most important practical control problem for HVDC transmission systems. If the DC voltage deviates too far from the nominal operational voltage, equipment could be damaged, resulting in loss of power transmission capability and high costs.
Many existing AC transmission grids are connected through HVDC links, usually used for bulk power transfer between the AC areas. The fast operation of the DC converters, however, would also enable frequency regulation of one of the connected AC grids through the HVDC link. One practical example of this is the island of Gotland in Sweden, which is only connected to the main Nordic grid through an HVDC cable \cite{axelsson2001gotland}. However, since the main Nordic AC grid has orders of magnitude higher inertia than the AC grid of Gotland, the influence of the frequency regulation on the main grid will be negligible.
By connecting several AC grids by an MTDC system, primary frequency regulation reserves may be shared, which reduces the need for frequency regulation reserves in the individual AC systems \cite{li2008frequency}. In \cite{dai2010impact}, distributed control algorithms have been applied to share primary frequency control reserves of asynchronous AC transmission systems connected through an MTDC system. However, the proposed controller requires a slack bus to control the DC voltage, defeating the purpose of distributing the primary frequency regulation reserves. In \cite{Andreasson2014_IFAC, andreasson2014control}, distributed controllers for secondary voltage control of MTDC systems are proposed, which do not rely on a slack bus. Both of the aforementioned controllers however rely on the presence of a communication network. While a communication network might already be present, it introduces the issue of time delays, due to large geographical distances in MTDC systems, and has a certain outage risk. The impacts of delays have been analyzed in \cite{dai2010impact}, and have been found to seriously degrade performance and destabilize the power system.
A distributed controller without the need of a slack bus is proposed in \cite{dai2013voltage}. Stability of the equilibrium is guaranteed in the absence of communication delays. However, the voltage dynamics of the HVDC system are neglected. Moreover, the implementation of the controller is not realistic, as every local controller needs to access the DC voltages of all terminals.
In \cite{dai2011voltage} and \cite{silva2012provision}, decentralized controllers are employed to share primary frequency control reserves. In \cite{silva2012provision} no stability analysis of the closed-loop system is performed, whereas \cite{dai2011voltage} guarantees stability of the equilibrium provided that the connected AC areas have identical parameters. In \cite{taylordecentralized}, optimal decentralized controllers for AC systems connected by HVDC systems are derived. In all aforementioned references the voltage dynamics of the HVDC system are neglected.
Due to the inherent difficulties of time-delays, we propose a decentralized proportional controller for distributing primary frequency control reserves, which relies only on local measurements. The controller is shown to distribute the primary frequency control reserves between the connected AC systems, while maintaining an adequate DC voltage. In contrast to \cite{dai2011voltage}, we prove that the equilibrium of the closed-loop system is globally asymptotically stable for any set of system parameters and controller gains by using Lyapunov arguments. We also explicitly model the voltage dynamics of the MTDC system, and extend our result to AC systems consisting of multiple generators in simulations. Due to inherent properties of proportional controllers, the steady-state values of the voltages and frequencies will deviate from their reference values. We quantify these deviations by provable upper bounds.
The remainder of this paper is organized as follows. In Section \ref{sec:prel}, the mathematical notation is defined. In Section \ref{sec:model}, the system model and the control objectives are defined. In Section \ref{sec:dec_control}, a decentralized proportional controller for distributing primary frequency control is analyzed. In Section \ref{sec:equilibrium}, the equilibrium of the closed-loop system is analyzed.
In Section \ref{sec:simulations}, simulations of the distributed controller on a four-terminal MTDC test system are provided, showing the effectiveness of the proposed controller. The paper ends with a discussion and concluding remarks in Section \ref{sec:discussion}.
\section{Preliminaries}
\label{sec:prel}
Let $\mathcal{G}$ be a graph. Denote by $\mathcal{V}=\{ 1,\hdots, n \}$ the vertex set of $\mathcal{G}$, and by $\mathcal{E}=\{ 1,\hdots, m \}$ the edge set of $\mathcal{G}$. Let $\mathcal{N}_i$ be the set of neighboring vertices to $i \in \mathcal{V}$.
Denote by $\mathcal{B}$ the vertex-edge incidence matrix of $\mathcal{G}$, and let $\mathcal{L}_W=\mathcal{B}W\mathcal{B}^T$ be the weighted Laplacian matrix of $\mathcal{G}$, with edge-weights given by the elements of the diagonal matrix $W$. We denote the space of real-valued $n\times m$ matrices by $\mathbb{R}^{n\times m}$.
Let $\mathbb{C}^-$ denote the open left half complex plane, and $\bar{\mathbb{C}}^-$ its closure. We denote by $c_{n\times m}$ a vector or matrix of dimension $n\times m$, whose elements are all equal to $c$. For a symmetric matrix $A$, $A>0 \;(A\ge 0)$ is used to denote that $A$ is positive (semi) definite. $I_{n}$ denotes the identity matrix of dimension $n$. For simplicity, we will often drop the notation of time dependence of variables, i.e., $x(t)$ will be denoted $x$. Let $\norm{\cdot}_\infty$ denote the maximal absolute value of the elements of a vector.
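To make the graph-theoretic notation concrete, the following small Python/numpy sketch (added purely for illustration) constructs the weighted Laplacian $\mathcal{L}_W=\mathcal{B}W\mathcal{B}^T$ for a hypothetical three-vertex cycle with arbitrary positive edge weights and confirms its basic properties.
\begin{verbatim}
import numpy as np

# Incidence matrix B of a three-vertex cycle; the columns correspond to the
# edges (1,2), (1,3) and (2,3) with an arbitrary orientation.
B = np.array([[ 1,  1,  0],
              [-1,  0,  1],
              [ 0, -1, -1]])
W = np.diag([1.0, 2.0, 1.0])          # arbitrary positive edge weights
L_W = B @ W @ B.T                     # weighted Laplacian L_W = B W B^T

print(np.allclose(L_W.sum(axis=1), 0))   # every row sums to zero
print(np.linalg.eigvalsh(L_W))           # eigenvalues >= 0, smallest one is 0
\end{verbatim}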
\section{Model and problem setup}
\label{sec:model}
We will here give a unified model for an MTDC system interconnected with several asynchronous AC systems.
We consider an MTDC transmission system consisting of $n$ converters, each connecting to an AC system, denoted $1, \dots, n$. The converters are assumed to be connected by an MTDC transmission grid. The dynamics of converter $i$ are assumed to be given by
\begin{align}
\begin{aligned}
C_i \dot{V}_i &= -\sum_{j\in \mathcal{N}_i} \frac{1}{R_{ij}}(V_i -V_j) + I_i^{\text{inj}} ,
\end{aligned}
\label{eq:voltage}
\end{align}
where $V_i$ is the voltage of converter $i$, $C_i>0$ is its capacitance, and $I_i^{\text{inj}}$ is the current injected from the AC grid connected to the DC converter. The constant $R_{ij}$ denotes the resistance of the HVDC transmission line connecting the converters $i$ and $j$.
The graph corresponding to the HVDC line connections is assumed to be connected.
The AC system is assumed to consist of a single generator which is connected to the corresponding DC converter, representing an aggregate model of the AC grid. The dynamics of the AC system are given by the swing equation \cite{machowski2008power}:
\begin{align}
m_i \dot{\omega}_i &= -K_i^{\text{droop}} (\omega_i-\omega^{\text{ref}}) + P_i^\text{nom} + P_i^{{m}} - P_i^{\text{inj}}, \label{eq:frequency}
\end{align}
where $\omega_i$ is the frequency of the generator, $\omega^{\text{ref}}$ is the reference frequency and $m_i>0$ is its moment of inertia. The constant $P_i^\text{nom}$ is the nominal generated power of generator $i$, $P^m_i$ is the uncontrolled deviation from the nominal generated power, $P_i^{\text{inj}}$ is the power injected to the DC system through the converter and $K_i^{\text{droop}}>0$ is the gain of the frequency droop controller of the generator.
We define $P^\text{droop}_i=-K_i^{\text{droop}} (\omega_i-\omega^{\text{ref}})$, and state the control objective.
\begin{objective}
\label{obj:1}
The primary frequency control action should be distributed fairly amongst the generators, i.e.
\begin{align*}
\lim_{t\rightarrow \infty } \left| P_i^{\text{droop}}(t) + \frac{1}{n} \sum_{i=1}^n P_i^m \right| \le e^{\text{droop}} \quad \forall i = 1, \dots, n,
\end{align*}
where $e^{\text{droop}}$ is a given scalar.
Furthermore, the frequencies of the AC systems, as well as the converter voltages, should not deviate too far from their nominal values, i.e.
\begin{align*}
\lim_{t\rightarrow \infty} |V_i(t)-V_i^{\text{ref}}| &\le e^{{V} } \quad \forall i = 1, \dots, n \\
\lim_{t\rightarrow \infty} |\omega_i(t)-\omega^{\text{ref}}| &\le e^{{\omega} } \quad \forall i = 1, \dots, n, \\
\end{align*}
where $V_i^{\text{ref}}$ is the reference DC voltage of converter $i$, $\omega^{\text{ref}}$ is the reference frequency and $e^{{V}}$ and $e^{{\omega} }$ are given scalars.
\end{objective}
\section{Decentralized MTDC control}
\label{sec:dec_control}
In this section we propose a decentralized controller for the frequency control of AC systems connected through an MTDC network. This controller does not rely on a single voltage regulator for the MTDC system, but the voltage regulation is distributed among all converters.
The local controller governing the power injections into the MTDC network is given by
\begin{align}
\label{eq:voltage_control}
\begin{aligned}
P_i^{\text{inj}} = P_i^{\text{inj, nom}} + K_i^{{\omega}} (\omega_i - \omega^{\text{ref}}) + K_i^{{V}}(V_i^{\text{ref}}-V_i),
\end{aligned}
\end{align}
where $P_i^{\text{inj, nom}}$ is the nominal injected power, and $K_i^{{\omega}}>0$ and $K_i^{{V}}>0$ are positive controller gains for all $i=1, \dots, n$. The HVDC converter is assumed to be perfect and instantaneous, i.e., injected power on the AC side is immediately converted to DC power without losses. Furthermore, the dynamics of the converter are ignored, implying that the converter tracks the output of controller \eqref{eq:voltage_control} perfectly. This assumption is reasonable due to the dynamics of the converter typically being orders of magnitude faster than the AC dynamics.
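As a concrete illustration of the control law \eqref{eq:voltage_control}, the short Python snippet below (our own sketch; all numerical values are hypothetical and not taken from the paper) evaluates the injected power from purely local measurements.
\begin{verbatim}
def injected_power(omega_i, V_i, omega_ref, V_ref_i,
                   P_inj_nom_i, K_omega_i, K_V_i):
    """Decentralized control law: uses only local frequency and DC voltage."""
    return (P_inj_nom_i
            + K_omega_i * (omega_i - omega_ref)
            + K_V_i * (V_ref_i - V_i))

# Example call with made-up per-unit values.
print(injected_power(omega_i=0.998, V_i=1.001, omega_ref=1.0, V_ref_i=1.0,
                     P_inj_nom_i=0.5, K_omega_i=501.0, K_V_i=100.0))
\end{verbatim}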
The relation between the injected HVDC current and the injected AC power is thus given by
\begin{align}
V_iI_i^{\text{inj}} = P_i^{\text{inj}}. \label{eq:power-current_nonlinear}
\end{align}
By assuming that all voltages are at the same nominal value, i.e., $V_i=V^{\text{nom}}$ for all $i=1, \dots, n$ in the above equation, the following linear relation is obtained
\begin{align}
V^{\text{nom}}I_i^{\text{inj}} = P_i^{\text{inj}}. \label{eq:power-current}
\end{align}
Combining the voltage dynamics \eqref{eq:voltage}, the frequency dynamics \eqref{eq:frequency}, the voltage controller \eqref{eq:voltage_control} and the power-current relationship \eqref{eq:power-current}, we obtain the following closed-loop dynamics
\begin{align}
\begin{bmatrix}
\dot{\omega} \\ \dot{V}
\end{bmatrix}
&= \underbrace{\begin{bmatrix}
-M(K^\omega + K^{\text{droop}}) & MK^V \\
\frac{1}{V^{\text{nom}}}EK^\omega & -E\left(\mathcal{L}_R + \frac{K^V}{V^{\text{nom}}} \right)
\end{bmatrix}}_{\triangleq A}
\begin{bmatrix}
\omega \\ V
\end{bmatrix} \nonumber \\
&+ \begin{bmatrix}
M\left((K^\omega + K^{\text{droop}}) \omega^{\text{ref}}1_{n\times 1} - K^V V^{\text{ref}} \right) \\
E\left(\frac{1}{V^\text{nom}} K^V V^\text{ref} -\frac{\omega^{\text{ref}}}{{V^{\text{nom}}}} K^\omega1_{n\times 1} \right)
\end{bmatrix} \nonumber \\
&+
\begin{bmatrix}
M(P^m + P^\text{nom}-P^{\text{inj, nom}}) \\
\frac{1}{V^{\text{nom}}} E P^{\text{inj, nom}}
\end{bmatrix}
\label{eq:cl_dynamics_vec}
\end{align}
where
$ \omega = [\omega_1, \dots, \omega_n]^T$,
$ V =[V_1, \dots, V_n]^T$,
$M=\diag({m_1}^{-1}, \hdots , {m_n}^{-1})$ is a matrix of inverse generator inertia,
$E=\diag([C_1^{-1}, \dots, C_n^{-1}])$ is a matrix of electrical elastances,
$K^\omega = \diag([K^\omega_1, \dots, K^\omega_n])$,
$K^{\text{droop}} = \diag([K^{\text{droop}}_1, \dots$, $K^{\text{droop}}_n])$,
$K^V = \diag([K^V_1, \dots, K^V_n])$,
$P^\text{nom} =[P^\text{nom}_1,\hdots, P^\text{nom}_n]^T$,
$P^\text{inj, nom} =[P^\text{inj, nom}_1,\hdots, P^\text{inj, nom}_n]^T$, $P^m =[P^m_1,\hdots, P^m_n]^T$,
and $\mathcal{L}_R$ is the weighted Laplacian matrix of the graph representing the HVDC transmission lines, denoted $\mathcal{G}_R$, whose edge-weights are given by the conductances $\frac{1}{R_{ij}}$. The following assumption is made on the nominal generated power and the nominal injected power.
\begin{assumption}
\label{ass:balances_power}
$P^\text{nom}=P^\text{inj, nom}$.
\end{assumption}
\begin{remark}
Assumption~\ref{ass:balances_power} implies that the reference frequency and reference voltages define an equilibrium of the closed-loop system when the deviation from the nominal power generation is zero.
\end{remark}
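As a numerical illustration of the closed-loop model \eqref{eq:cl_dynamics_vec}, the following Python/numpy sketch (added for illustration; every parameter value is invented) assembles the system matrix $A$ for a three-terminal example and checks that all of its eigenvalues have negative real parts.
\begin{verbatim}
import numpy as np

n, V_nom = 3, 1.0
M  = np.diag(1.0 / np.array([10.0, 12.0,  8.0]))   # inverse inertias 1/m_i
E  = np.diag(1.0 / np.array([0.05, 0.05, 0.05]))   # elastances 1/C_i
Kw = np.diag([501.0] * n)                          # K^omega
Kd = np.diag([667.0] * n)                          # K^droop
Kv = np.diag([100.0] * n)                          # K^V (hypothetical value)

B   = np.array([[1, 1, 0], [-1, 0, 1], [0, -1, -1]])    # three-vertex cycle
L_R = B @ np.diag(1.0 / np.array([0.0015, 0.0045, 0.0015])) @ B.T

A = np.block([[-M @ (Kw + Kd),   M @ Kv],
              [ E @ Kw / V_nom, -E @ (L_R + Kv / V_nom)]])
print(np.linalg.eigvals(A).real.max())   # negative: the equilibrium is stable
\end{verbatim}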
We define the incremental frequencies and voltages as
\begin{align}
\hat{ \omega} &= \omega-\omega^{\text{ref}}1_{n\times 1} \label{eq:delta_omega}
\\
\hat{V} &= V- V^{\text{ref}} \label{eq:delta_V}.
\end{align}
By Assumption \ref{ass:balances_power}, the decentralized MTDC control system given by \eqref{eq:cl_dynamics_vec}, can be written as
\begin{align}
\begin{bmatrix}
\dot{\hat{\omega}} \\ \dot{\hat{V}}
\end{bmatrix}
&= {\begin{bmatrix}
-M(K^\omega + K^{\text{droop}}) & MK^V \\
\frac{1}{V^{\text{nom}}}EK^\omega & -E\left(\mathcal{L}_R + \frac{K^V}{V^{\text{nom}}} \right)
\end{bmatrix}}
\begin{bmatrix}
\hat{\omega} \\ \hat{V}
\end{bmatrix} \nonumber \\
&+
\begin{bmatrix}
M P^m \\
0_{n\times 1}
\end{bmatrix}. \label{eq:cl_dynamics_vec_delta}
\end{align}
Assume that the system matrix of \eqref{eq:cl_dynamics_vec_delta}, $A$, is full-rank, which ensures that a unique equilibrium of \eqref{eq:cl_dynamics_vec_delta} exists. Denote this equilibrium by $x_{0}=[\omega_{0}^T, V_{0}^T]^T$. Define $\bar{x}\triangleq [\bar{\omega}^T, \bar{V}^T]^T =[\hat{\omega}^T, \hat{V}^T]^T - [\omega_{0}^T, V_{0}^T]^T$.
Now:
\begin{align}
\dot{\bar{x}} = A \bar{x} \label{eq:dynamics_A_decentralized_shifted}
\end{align}
with the origin as the unique equilibrium of the above dynamical system. We are now ready to show the main stability result of this section.
\begin{theorem}
\label{th:stability_passivity_1}
The equilibrium of the decentralized MTDC control system given by \eqref{eq:cl_dynamics_vec_delta} is globally asymptotically stable.
\end{theorem}
\begin{proof}
First consider the Lyapunov function candidate
\begin{align}
W(\bar{\omega}, \bar{V}) &= \frac 12 \bar{\omega}^T K^\omega (K^V)^{-1} M^{-1}\bar{\omega} + \frac{V^\text{nom}}{2} \bar{V}^T C \bar{V}, \label{eq:lyap_hvdc_decentralized_projected}
\end{align}
where $C=\diag([C_1, \dots, C_n])$.
Clearly $W(\bar{\omega}, \bar{V})$ is positive definite and radially unbounded. Differentiating \eqref{eq:lyap_hvdc_decentralized_projected} with respect to time along trajectories of \eqref{eq:dynamics_A_decentralized_shifted}, we obtain
\begin{align*}
&\dot{W}(\bar{\omega}, \bar{V}) \\
&= \bar{\omega}^T K^\omega (K^V)^{-1} M^{-1}\dot{\bar{\omega}} + V^\text{nom} \bar{V}^T C \dot{\bar{V}} \\
&= \bar{\omega}^T \big( -K^\omega (K^V)^{-1}(K^\omega + K^\text{droop})\bar{\omega} + K^\omega \bar{V} \big) \\
&\;\;\;\; + \bar{V}^T \Big( K^\omega \bar{\omega} - (V^\text{nom}\mathcal{L}_R {+} K^V)\bar{V} \Big) \\
&= -\bar{\omega}^T K^\omega (K^V)^{-1}(K^\omega + K^\text{droop})\bar{\omega} \\
&\;\;\;\; + 2 \bar{\omega}^T K^\omega \bar{V} - \bar{V}^T (V^\text{nom}\mathcal{L}_R + K^V)\bar{V} \\
&\le - \begin{bmatrix}
\bar{\omega}^T & \bar{V}^T
\end{bmatrix}
\underbrace{\begin{bmatrix}
K^\omega (K^V)^{-1}(K^\omega + K^\text{droop}) & -K^\omega \\
-K^\omega & K^V
\end{bmatrix}}_{\triangleq Q_1}
\begin{bmatrix}
\bar{\omega} \\ \bar{V}
\end{bmatrix}.
\end{align*}
The inequality in the last step follows since $\mathcal{L}_R \ge 0$. Thus $\dot{W}(\bar{\omega}, \bar{V})< 0$ for all $(\bar{\omega}, \bar{V}) \neq 0$ if the symmetric matrix $Q_1$ is positive definite. By applying the Schur complement condition for positive definiteness, $Q_1$ is positive definite iff
\begin{eqnarray*}
K^\omega (K^V)^{-1}(K^\omega + K^\text{droop}) - K^\omega (K^V)^{-1} K^\omega \\
= K^\omega (K^V)^{-1} K^\text{droop} > 0.
\end{eqnarray*}
Hence $Q_1$ is always positive definite, and thus $\dot{W}(\bar{\omega}, \bar{V})< 0$ for all $(\bar{\omega}, \bar{V}) \neq 0$, which concludes the proof.
\end{proof}
\begin{remark}
Note that the equilibrium of \eqref{eq:cl_dynamics_vec_delta} being globally asymptotically stable implies that all eigenvalues of $A$ lie in the open left half plane, which ensures that the previous assumption that $A$ is full rank is valid.
\end{remark}
\begin{remark}
Note that Theorem~\ref{th:stability_passivity_1} only guarantees the stability of the equilibrium. It does however not guarantee that Objective~\ref{obj:1} is fulfilled.
\end{remark}
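As a quick numerical sanity check of the Schur complement argument in the proof of Theorem~\ref{th:stability_passivity_1}, the sketch below (illustrative only; the gains are arbitrary positive numbers) verifies that $Q_1$ is positive definite.
\begin{verbatim}
import numpy as np

n  = 3
Kw = np.diag([501.0] * n)     # K^omega
Kd = np.diag([667.0] * n)     # K^droop
Kv = np.diag([100.0] * n)     # K^V (hypothetical value)

Q1 = np.block([[Kw @ np.linalg.inv(Kv) @ (Kw + Kd), -Kw],
               [-Kw,                                 Kv]])
# Q1 is symmetric, so positive definiteness <=> smallest eigenvalue > 0.
print(np.linalg.eigvalsh(Q1).min() > 0)
\end{verbatim}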
\section{Equilibrium analysis}
\label{sec:equilibrium}
We will now study the globally asymptotically stable equilibrium of \eqref{eq:cl_dynamics_vec_delta}, in order to bound the asymptotic voltage and frequency deviations from the reference values. We will furthermore show that the generated power in the AC grids will be shared fairly amongst the generators.
We make the following additional assumptions on the controller gains, in order to draw conclusions about the equilibrium of \eqref{eq:cl_dynamics_vec_delta}.
\begin{assumption}
\label{ass:scalar_1} The controller gains satisfy
$K^\omega_i=k^\omega, K^\text{droop}_i=k^\text{droop}, K^V_i=k^V \; \forall i=1, \dots, n$.
\end{assumption}
With the above assumptions, and having established stability of the closed-loop system \eqref{eq:cl_dynamics_vec}, we are ready to analyze its equilibrium.
\begin{theorem}
\label{th:equilibrium}
If Assumptions~\ref{ass:balances_power} and \ref{ass:scalar_1} hold, then Objective \ref{obj:1} is satisfied with the following coefficients
\begin{align*}
e^{\text{droop}} &=\frac{k^\text{droop}\max_i \left| P^m_i \right|}{k^\text{droop}+k^\omega} \left( (n-1) + \frac{k^V}{V^\text{nom}} \sum_{i=2}^n \frac{1}{\lambda_i(\mathcal{L}_R)} \right) \\
e^V &= \frac{k^\omega \left|1_{1\times n}P^m\right|}{nk^\text{droop}k^V} + \frac{k^\omega \max_i \left| P^m_i \right| }{(k^\omega + k^\text{droop})V^\text{nom}} \sum_{i=2}^n \frac{1}{\lambda_i(\mathcal{L}_R)} \\
e^\omega &= \frac{1}{n k^\text{droop}} \left| \sum_{i=1}^n P^m_i \right| \\
&\;\;\;\; + \frac{\max_i |P^m_i|}{k^\text{droop}+k^\omega} \left( (n-1) + \frac{k^V}{V^\text{nom}} \sum_{i=2}^n \frac{1}{\lambda_i(\mathcal{L}_R)} \right).
\end{align*}
\end{theorem}
\begin{remark}
The error bounds $e^{\text{droop}}$ and $e^\omega$ can simultaneously be made arbitrarily small by choosing appropriate controller gains. However, the voltage error bound $e^V$ is lower bounded by a constant. This is of course due to the necessity of a voltage drop across an HVDC line in order to have a power flow through it.
\end{remark}
\begin{proof}
Consider the equilibrium of \eqref{eq:cl_dynamics_vec}. Let $\hat{\omega}$ and $\hat{V}$ be defined by \eqref{eq:delta_omega} -- \eqref{eq:delta_V}. By Assumption~\ref{ass:balances_power}, we obtain the following expression
\begin{align}
\label{eq:eq_delta_coordinates}
\begin{bmatrix}
-(K^\omega+K^\text{droop}) & K^V \\
K^\omega & -(K^V+ V^\text{nom} \mathcal{L}_R)
\end{bmatrix}
\begin{bmatrix}
\hat{\omega} \\
\hat{V}
\end{bmatrix}
&=
\begin{bmatrix}
-P^m \\
0_{n\times 1}
\end{bmatrix}.
\end{align}
By multiplying the last $n$ rows of \eqref{eq:eq_delta_coordinates} with $\frac{k^\omega+ k^\text{droop}}{k^\omega}$ and adding to the first $n$ rows of \eqref{eq:eq_delta_coordinates}, we obtain by Assumption~\ref{ass:scalar_1}
\begin{align}
\label{eq:Delta_V_eq}
\underbrace{\left( \frac{(k^\omega+k^\text{droop})V^\text{nom}}{k^\omega} \mathcal{L}_R + \frac{k^\text{droop}k^V}{k^\omega}I_n \right)}_{\triangleq A_1}\hat{V} = P^m.
\end{align}
We write $\hat{V} = \sum_{i=1}^n a^1_i v^1_i$, where $v^1_i$ is the $i$th eigenvector of $A_1$, with the corresponding eigenvalue $\lambda^1_i$. Note that the coefficients $a^1_i$ are unique, since $A_1$ is symmetric, implying that its eigenvectors form an orthonormal basis. Substituting the eigenvector decomposition of $\hat{V}$ in \eqref{eq:Delta_V_eq} yields
\begin{align*}
A_1\hat{V} = A_1 \sum_{i=1}^n a^1_i v^1_i = \sum_{i=1}^n \lambda^1_i a^1_i v^1_i = P^m,
\end{align*}
which implies
\begin{align*}
a^1_i=\frac{(v^1_i)^TP^m}{\lambda^1_i}.
\end{align*}
Let the eigenvalues be ordered by their increasing values. Clearly $\lambda^1_1= \frac{k^\text{droop}k^V}{k^\omega}$ and $v^1_1=\frac{1}{\sqrt{n}} 1_{n\times 1}$. This implies
\begin{align}
\label{eq:Delta_V_ss}
\hat{V} = \frac{k^\omega 1_{1\times n}P^m}{nk^\text{droop}k^V} 1_{n\times 1} + \sum_{i=2}^n \frac{(v^1_i)^TP^m}{\lambda^1_i}v^1_i.
\end{align}
By noting that
\begin{align}
\begin{aligned}
\lambda^1_i &= \frac{(k^\omega + k^\text{droop})V^\text{nom} \lambda_i(\mathcal{L}_R)+k^\text{droop}k^V}{k^\omega} \\
&\ge \frac{(k^\omega + k^\text{droop})V^\text{nom} \lambda_i(\mathcal{L}_R)}{k^\omega},
\end{aligned}
\label{eq:lambda_1_bound}
\end{align}
where $\lambda_i(\mathcal{L}_R)$ is the $i$th eigenvalue of $\mathcal{L}_R$,
we obtain the following bound on $\hat{V}$
\begin{align*}
\norm{\hat{V}}_\infty &\le \frac{k^\omega \left|1_{1\times n}P^m\right|}{nk^\text{droop}k^V} + \frac{\max_i \left| P^m_i \right| k^\omega}{(k^\omega + k^\text{droop})V^\text{nom}} \sum_{i=2}^n \frac{1}{\lambda_i(\mathcal{L}_R)} \\
& \le \frac{k^\omega \left|\sum_{i=1}^nP^m_i\right|}{nk^\text{droop}k^V} + \frac{\max_i \left| P^m_i \right|}{V^\text{nom}} \sum_{i=2}^n \frac{1}{\lambda_i(\mathcal{L}_R)}.
\end{align*}
From the first $n$ rows of \eqref{eq:eq_delta_coordinates}, and by substituting the expression for $\hat{V}$ from \eqref{eq:Delta_V_ss}, we have
\begin{align}
\begin{aligned}
\hat{\omega} &= \frac{k^V\hat{V} + P^m}{k^\omega+ k^\text{droop}} = \frac{1}{k^\omega + k^\text{droop}} \Bigg( \frac{k^\omega 1_{1\times n}P^m}{nk^\text{droop}} 1_{n\times 1} \\
&\;\;\;\;+ \sum_{i=2}^n \frac{k^V(v^1_i)^TP^m}{\lambda^1_i}v^1_i + P^m \Bigg).
\end{aligned}
\label{eq:Delta_omega}
\end{align}
By using the bound on $\lambda^1_i$ from \eqref{eq:lambda_1_bound}, we obtain
\begin{align*}
\norm{\hat{\omega}}_\infty &\le \frac{1}{k^\text{droop}} \Bigg( \frac{\left|\sum_{i=1}^n P^m_i\right|}{n} \\
& \;\;\;\; + \max_i |P^m_i|\Bigg( 1 + \frac{k^V}{V^\text{nom}} \sum_{i=2}^n \frac{1}{\lambda_i(\mathcal{L}_R)} \Bigg) \Bigg).
\end{align*}
Consider now the power generated by the voltage droop controller. By \eqref{eq:Delta_omega} we obtain
\begin{align*}
&{P^\text{droop} + \frac{1}{n} \sum_{i=1}^n P_i^m 1_{n\times 1}} = {-k^\text{droop}\hat{\omega} + \frac{1_{1\times n} P^m}{n} 1_{n\times 1} } \\
&= \frac{k^\text{droop}}{k^\omega+k^\text{droop}} \Bigg(- \frac{1}{n}\sum_{i=1}^n {P^m_i}1_{n\times 1} + P^m \\
&\;\;\;\;\;{+} \sum_{i=2}^n \frac{k^V(v^1_i)^TP^m}{\lambda^1_i}v^1_i \Bigg) .
\end{align*}
By using the bound on $\lambda^1_i$ in \eqref{eq:lambda_1_bound}, we obtain
\begin{align*}
&\norm{{P^\text{droop} + \frac{1}{n} \sum_{i=1}^n P_i^m 1_{n\times 1}}}_\infty \le \frac{k^\text{droop}}{k^\omega+k^\text{droop}} \Bigg( \\
&\;\;\;\; \max_i \left| P^m_i \right|\left( 1+ \sum_{i=2}^n \frac{k^\omega}{(k^\omega + k^\text{droop})V^\text{nom} \lambda_i(\mathcal{L}_R)} \right) \Bigg) \\
& \le \frac{k^\text{droop}}{k^\omega+k^\text{droop}} \max_i \left| P^m_i \right|\Bigg( 1+ \frac{1}{V^\text{nom}}\sum_{i=2}^n \frac{1}{ \lambda_i(\mathcal{L}_R)} \Bigg),
\end{align*}
which completes the proof.
\end{proof}
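The bounds of Theorem~\ref{th:equilibrium} are easy to evaluate numerically once the nonzero eigenvalues of $\mathcal{L}_R$ are known. The following Python sketch (illustrative only; the gains, line conductances and the disturbance are hypothetical) computes $e^{\text{droop}}$, $e^V$ and $e^\omega$ for a three-terminal example.
\begin{verbatim}
import numpy as np

k_omega, k_droop, k_V, V_nom = 501.0, 667.0, 100.0, 1.0   # hypothetical gains
P_m = np.array([0.1, 0.0, 0.0])                           # disturbance (p.u.)
n = P_m.size

B   = np.array([[1, 1, 0], [-1, 0, 1], [0, -1, -1]])
L_R = B @ np.diag(1.0 / np.array([0.0015, 0.0045, 0.0015])) @ B.T
lam = np.sort(np.linalg.eigvalsh(L_R))       # 0 = lam_1 <= lam_2 <= ...
s   = np.sum(1.0 / lam[1:])                  # sum over 1/lambda_i, i >= 2

Pmax, Psum = np.abs(P_m).max(), abs(P_m.sum())
e_droop = k_droop * Pmax / (k_droop + k_omega) * ((n - 1) + k_V / V_nom * s)
e_V     = (k_omega * Psum / (n * k_droop * k_V)
           + k_omega * Pmax / ((k_omega + k_droop) * V_nom) * s)
e_omega = (Psum / (n * k_droop)
           + Pmax / (k_droop + k_omega) * ((n - 1) + k_V / V_nom * s))
print(e_droop, e_V, e_omega)
\end{verbatim}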
\section{Simulations}
\label{sec:simulations}
\newlength\figureheight
\newlength\figurewidth
\setlength\figureheight{4.4cm}
\setlength\figurewidth{6.6cm}
In this section, we simulate the proposed controller on an MTDC grid connecting three asynchronous AC areas, whose main purpose is bulk power transfer between the AC areas. The test grid consists of three 6-bus AC grids, described in detail in \cite{wollenberg2006power}, connected with a 3-bus MTDC grid. In Figure \ref{fig:testgrid}, the topology of the interconnected MTDC-AC grid is shown.
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\columnwidth]{Wood_Base6Bus_fullgrid_trim.pdf}
\caption{Test grid consisting of 3 AC areas, connected by an MTDC grid consisting of 3 converter stations and 3 DC lines.}
\label{fig:testgrid}
\end{figure}
\begin{table}
\centering
\caption{HVDC grid line parameters}
\label{tab:HVDCgridParameter}
\begin{tabular}{llll}\toprule
From & To & Resistance [p.u.]& Reactance [p.u.] \\ \midrule
1 & 2 & 0.0015 & 0.01 \\
1 & 3 & 0.0045 & 0.03 \\
2 & 3 & 0.0015 & 0.01 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}\centering
\raa{1.3}
\caption{Controller Parameter}
\label{tab:ControllerParameter}
\begin{tabular}{@{}llllll@{}}\toprule
$K^{\omega}_1$ &$K^{\omega}_2$&$K^{\omega}_3$& $K^\text{droop}_1$ &$K^\text{droop}_2$ &$K^\text{droop}_3$\\ \midrule
501&501& 501&667&667 & 667 \\
\bottomrule
\end{tabular}
\end{table}
Each converter station is controlled with \eqref{eq:voltage_control}.
While the converter dynamics are ignored due to their fast nature, the nonlinear relation \eqref{eq:power-current_nonlinear} is used to relate the injected AC powers with the HVDC currents.
The physical system parameters and the controller parameters are given in Tables \ref{tab:HVDCgridParameter} and \ref{tab:ControllerParameter}, respectively.
The simulation was conducted using an extended version of MatDyn \cite{matsch}, which also takes the HVDC dynamics into account. The simulation starts in steady state, and at time $1$ s a step change in load from 0.7 p.u. (per-unit) to 0.8 p.u. occurs at bus 4 in AC area 1.
The local frequency droop controllers at the generators react immediately to the resulting frequency drop and increase the generated power. The frequencies of the generators are shown in Figure \ref{fig:generatorspeeds}.
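While the results reported below are obtained with detailed multi-machine AC models, the qualitative behaviour can already be reproduced with the aggregate incremental model \eqref{eq:cl_dynamics_vec_delta}. The following Python sketch (our own simplified illustration; the parameters are invented and do not correspond to the test grid) evaluates the exact step response of the linear model to a load increase in area 1, modeled as a negative generation deviation $P^m_1$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

n, V_nom = 3, 1.0
M  = np.diag(1.0 / np.array([10.0, 12.0,  8.0]))
E  = np.diag(1.0 / np.array([0.05, 0.05, 0.05]))
Kw, Kd, Kv = np.diag([501.0]*n), np.diag([667.0]*n), np.diag([100.0]*n)
B   = np.array([[1, 1, 0], [-1, 0, 1], [0, -1, -1]])
L_R = B @ np.diag(1.0 / np.array([0.0015, 0.0045, 0.0015])) @ B.T

A = np.block([[-M @ (Kw + Kd),   M @ Kv],
              [ E @ Kw / V_nom, -E @ (L_R + Kv / V_nom)]])
P_m = np.array([-0.1, 0.0, 0.0])                 # load increase in area 1
u   = np.concatenate([M @ P_m, np.zeros(n)])

# For xdot = A x + u with x(0) = 0 the solution is x(t) = A^{-1}(e^{At}-I)u.
for t in (1.0, 5.0, 30.0):
    x = np.linalg.solve(A, (expm(A * t) - np.eye(2 * n)) @ u)
    print(t, "freq dev:", x[:n], "volt dev:", x[n:])
print("steady state:", -np.linalg.solve(A, u))
\end{verbatim}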
\begin{figure}[th]
\input{Speeds2.tikz}
\caption{Frequencies of the generator areas.}
\label{fig:generatorspeeds}
\end{figure}
After a few seconds all generator frequencies within the same AC area synchronize, and after about $30$ s the frequencies converge to the new equilibrium. The frequency deviation is larger in AC area $1$ than in the remaining AC areas, but the differences are rather small, in accordance with Theorem~\ref{th:equilibrium}.
Figure \ref{fig:GeneratorDelta} shows the changes in the power output of the generators. The disturbance is shared among all generators.
The injected powers through the converter are shown in Figure \ref{fig:converterpower}. Since the converter dynamics are much faster than the AC systems, they are neglected in the simulation and it is assumed that the converter power tracks the controller output perfectly.
\begin{figure}[th]
\center
\input{GeneratorPowerDelta2.tikz}
\caption{Incremental generator power levels.}
\label{fig:GeneratorDelta}
\end{figure}
\begin{figure}[th]
\center
\input{TermialPowers2.tikz}
\caption{Injected power levels at the converters.}
\label{fig:converterpower}
\end{figure}
Due to the increased load, the DC voltages of all converters increase, see Figure \ref{fig:convertervoltages}. However, as predicted by Theorem~\ref{th:equilibrium}, both the absolute and relative voltage deviations are bounded.
\begin{figure}[th!]
\center
\input{DCVoltages2.tikz}
\caption{Voltages of the DC converters.}
\label{fig:convertervoltages}
\end{figure}
\section{Discussion and Conclusions}
\label{sec:discussion}
In this paper we have proposed a decentralized proportional controller for sharing primary frequency control reserves in asynchronous AC systems connected through an MTDC system. The controller uses the local frequency in the AC grid and the local DC voltage as inputs in order to control the power injections into the MTDC grid. The resulting equilibrium of the closed-loop system is shown to be globally asymptotically stable using Lyapunov arguments, regardless of the controller parameters. It is also shown that the DC voltages and AC frequencies at the equilibrium are close to their nominal values. Furthermore, the generated power from the primary frequency control is shared approximately fairly between the AC areas. The deviation from perfectly fair power sharing is quantified.
The proposed controller was simulated on a test system consisting of 3 AC areas combined with an MTDC grid to demonstrate its effectiveness. The paper constitutes a first step towards utilizing the increased flexibility which future MTDC grids will provide to the connected AC systems. Future work will focus on extending the primary proportional controller with secondary controllers, where communication and integral action will be necessary to eliminate static control errors. An extensive simulation study on more realistic grid topologies and dynamical models is also ongoing work.
\section{Introduction}
Transmitting power over long distances with minimal losses is one of the greatest challenges in today's power transmission systems. The strong rising share of renewables increased the distances between power generation and consumption. This is a driving factor behind long-distance power transmission. One such example are large-scale off-shore wind farms, which often require power to be transmitted in cables over long distances to the mainland power grid \cite{breseti2007HVDC}. High-voltage direct current (HVDC) power transmission is a commonly used technology for long-distance power transmission. Its higher investment costs compared to AC transmission lines are compensated by its lower resistive losses for sufficiently long distances \cite{melhem2013electricity}. The break-even point, i.e., the point where the total construction and operation costs of overhead HVDC and AC lines are equal, is typically 500-800 km \cite{padiyar1990hvdc}. However, for cables, the break-even point is typically less than 50 km \cite{Hertem2010technical}. Increased use of HVDC for electrical power transmission suggests that future HVDC transmission systems are likely to consist of multiple terminals connected by several HVDC transmission lines \cite{Haileselassie2013Power}. Such systems are referred to as Multi-terminal HVDC (MTDC) systems in the literature. The main technical obstacle to overcome in order to realize such MTDC is the development of a DC breaker \cite{Franck2011HVDC}. There are a few advanced ideas to realize this device in the near future \cite{callavik2012hybrid}.
Maintaining an adequate DC voltage is the single most important practical control problem for HVDC transmission systems. If the DC voltage deviates too far from the nominal operational voltage, equipment could be damaged, resulting in loss of power transmission capability and high costs.
Many existing AC transmission grids are connected through HVDC links, usually used for bulk power transfer between the AC areas. The fast operation of the DC converters however would also enable frequency regulation of one of the connected AC grids through the HVDC link. One practical example of this is the island of Gotland in Sweden, which is only connected to the main Nordic grid through an HVDC cable \cite{axelsson2001gotland}. However, since the main Nordic AC grid has orders of magnitudes higher inertia than the AC grid of Gotland, the influence of the frequency regulation on the main grid will be negligible.
By connecting several AC grids by an MTDC system, primary frequency regulation reserves may be shared, which reduces the need for frequency regulation reserves in the individual AC systems \cite{li2008frequency}. In \cite{dai2010impact}, distributed control algorithms have been applied to share primary frequency control reserves of asynchronous AC transmission systems connected through an MTDC system. However, the proposed controller requires a slack bus to control the DC voltage, defeating the purpose of distributing the primary frequency regulation reserves. In \cite{Andreasson2014_IFAC, andreasson2014control}, distributed controllers for secondary voltage control of MTDC systems are proposed, which do not rely on a slack bus. Both of the aforementioned controllers however rely on the presence of a communication network. While a communication network might already be present, it introduces the issue of time delays, due to large geographical distances in MTDC systems, and has a certain outage risk. The impacts of delays have been analyzed in \cite{dai2010impact}, and have been found to seriously degrade performance and destabilize the power system.
A distributed controller without the need of a slack bus is proposed in \cite{dai2013voltage}. Stability of the equilibrium is guaranteed in the absence of communication delays. However, the voltage dynamics of the HVDC system are neglected. Moreover, the implementation of the controller is not realistic, as every local controller needs to access the DC voltages of all terminals.
In \cite{dai2011voltage} and \cite{silva2012provision}, decentralized controllers are employed to share primary frequency control reserves. In \cite{silva2012provision} no stability analysis of the closed-loop system is performed, whereas \cite{dai2011voltage} guarantees stability of the equilibrium provided that the connected AC areas have identical parameters. In \cite{taylordecentralized}, optimal decentralized controllers for AC systems connected by HVDC systems are derived. In all aforementioned references the voltage dynamics of the HVDC system are neglected.
Due to the inherent difficulties of time-delays, we propose a decentralized proportional controller for distributing primary frequency control reserves, which relies only on local measurements. The controller is shown to distribute the primary frequency control reserves between the connected AC systems, while maintaining an adequate DC voltage. In contrast to \cite{dai2011voltage}, we prove that the equilibrium of the closed-loop system is globally asymptotically stable for any set of system parameters and controller gains by using Lyapunov arguments. We also explicitly model the voltage dynamics of the MTDC system, and extend our result to AC systems consisting of multiple generators in simulations. Due to inherent properties of proportional controllers, the steady-state values of the voltages and frequencies will deviate from their reference values. We quantify these deviations by provable upper bounds.
The remainder of this paper is organized as follows. In Section \ref{sec:prel}, the mathematical notation is defined. In Section \ref{sec:model}, the system model and the control objectives are defined. In Section \ref{sec:dec_control}, a decentralized proportional controller for distributing primary frequency control is analyzed. In Section \ref{sec:equilibrium}, the equilibrium of the closed-loop system is analyzed.
In Section \ref{sec:simulations}, simulations of the distributed controller on a four-terminal MTDC test system are provided, showing the effectiveness of the proposed controller. The paper ends with a discussion and concluding remarks in Section \ref{sec:discussion}.
\section{Preliminaries}
\label{sec:prel}
Let $\mathcal{G}$ be a graph. Denote by $\mathcal{V}=\{ 1,\hdots, n \}$ the vertex set of $\mathcal{G}$, and by $\mathcal{E}=\{ 1,\hdots, m \}$ the edge set of $\mathcal{G}$. Let $\mathcal{N}_i$ be the set of neighboring vertices to $i \in \mathcal{V}$.
Denote by $\mathcal{B}$ the vertex-edge incidence matrix of $\mathcal{G}$, and let $\mathcal{\mathcal{L}_W}=\mathcal{B}W\mathcal{B}^T$ be the weighted Laplacian matrix of $\mathcal{G}$, with edge-weights given by the elements of the diagonal matrix $W$. We denote the space of real-valued $n\times m$-valued matrices by $\mathbb{R}^{n\times m}$.
Let $\mathbb{C}^-$ denote the open left half complex plane, and $\bar{\mathbb{C}}^-$ its closure. We denote by $c_{n\times m}$ a vector or matrix of dimension $n\times m$, whose elements are all equal to $c$. For a symmetric matrix $A$, $A>0 \;(A\ge 0)$ is used to denote that $A$ is positive (semi) definite. $I_{n}$ denotes the identity matrix of dimension $n$. For simplicity, we will often drop the notion of time dependence of variables, i.e., $x(t)$ will be denoted $x$ for simplicity. Let $\norm{\cdot}_\infty$ denote the maximal absolute value of the elements of a vector.
\section{Model and problem setup}
\label{sec:model}
We will here give a unified model for an MTDC system interconnected with several asynchronous AC systems.
We consider an MTDC transmission system consisting of $n$ converters, each connecting to an AC system, denoted $1, \dots, n$. The converters are assumed to be connected by an MTDC transmission grid. The dynamics of converter $i$ is assumed to be given by
\begin{align}
\begin{aligned}
C_i \dot{V}_i &= -\sum_{j\in \mathcal{N}_i} \frac{1}{R_{ij}}(V_i -V_j) + I_i^{\text{inj}} ,
\end{aligned}
\label{eq:voltage}
\end{align}
where $V_i$ is the voltage of converter $i$, $C_i>0$ is its capacitance, $I_i^{\text{inj}}$ is the injected current from an AC grid connected to the DC converter. The constant $R_{ij}$ denotes the resistance of the HVDC transmission line connecting the converters $i$ and $j$.
The graph corresponding to the HVDC line connections is assumed to be connected.
The AC system is assumed to consist of a single generator which is connected to the corresponding DC converter, representing an aggregate model of the AC grid. The dynamics of the AC system are given by the swing equation \cite{machowski2008power}:
\begin{align}
m_i \dot{\omega}_i &= -K_i^{\text{droop}} (\omega_i-\omega^{\text{ref}}) + P_i^\text{nom} + P_i^{{m}} - P_i^{\text{inj}}, \label{eq:frequency}
\end{align}
where $\omega_i$ is the frequency of the generator, $\omega^{\text{ref}}$ is the reference frequency and $m_i>0$ is its moment of inertia. The constant $P_i^\text{nom}$ is the nominal generated power of generator $i$, $P^m_i$ is the uncontrolled deviation from the nominal generated power, $P_i^{\text{inj}}$ is the power injected to the DC system through the converter and $K_i^{\text{droop}}>0$ is the gain of the frequency droop controller of the generator.
We define $P^\text{droop}_i=-K_i^{\text{droop}} (\omega_i-\omega^{\text{ref}})$, and state the control objective.
\begin{objective}
\label{obj:1}
The primary frequency control action should be distributed fairly amongst the generators, i.e.
\begin{align*}
\lim_{t\rightarrow \infty } \left| P_i^{\text{droop}}(t) + \frac{1}{n} \sum_{i=1}^n P_i^m \right| \le e^{\text{droop}} \quad \forall i = 1, \dots, n,
\end{align*}
where $e^{\text{droop}}$ is a given scalar.
Furthermore, the frequencies of the AC systems, as well as the converter voltages, should not deviate too far from their nominal values, i.e.
\begin{align*}
\lim_{t\rightarrow \infty} |V_i(t)-V_i^{\text{ref}}| &\le e^{{V} } \quad \forall i = 1, \dots, n \\
\lim_{t\rightarrow \infty} |\omega_i(t)-\omega^{\text{ref}}| &\le e^{{\omega} } \quad \forall i = 1, \dots, n, \\
\end{align*}
where $V_i^{\text{ref}}$ is the reference DC voltage of converter $i$, $\omega^{\text{ref}}$ is the reference frequency and $e^{{V}}$ and $e^{{\omega} }$ are given scalars.
\end{objective}
\section{Decentralized MTDC control}
\label{sec:dec_control}
In this section we propose a decentralized controller for the frequency control of AC systems connected through an MTDC network. This controller does not rely on a single voltage regulator for the MTDC system, but the voltage regulation is distributed among all converters.
The local controller governing the power injections into the MTDC network is given by
\begin{align}
\label{eq:voltage_control}
\begin{aligned}
P_i^{\text{inj}} = P_i^{\text{inj, nom}} + K_i^{{\omega}} (\omega_i - \omega^{\text{ref}}) + K_i^{{V}}(V_i^{\text{ref}}-V_i),
\end{aligned}
\end{align}
where $P_i^{\text{inj, nom}}$ is the nominal injected power, and $K_i^{{\omega}}>0$ and $K_i^{{V}}>0$ are positive controller gains for all $i=1, \dots, n$. The HVDC converter is assumed to be perfect and instantaneous, i.e., injected power on the AC side is immediately converted to DC power without losses. Furthermore the dynamics of the converter are ignored, implying that the converter tracks the output of controller \eqref{eq:voltage_control} perfectly. This assumption is reasonable due to the dynamics of the converter typically being orders of magnitudes faster than the AC dynamics.
The relation between the injected HVDC current and the injected AC power is thus given by
\begin{align}
V_iI_i^{\text{inj}} = P_i^{\text{inj}}. \label{eq:power-current_nonlinear}
\end{align}
By assuming that all voltages are at the same nominal value, i.e., $V_i=V^{\text{nom}}$ for all $i=1, \dots, n$ in the above equation, the following linear relation is obtained
\begin{align}
V^{\text{nom}}I_i^{\text{inj}} = P_i^{\text{inj}}. \label{eq:power-current}
\end{align}
Combining the voltage dynamics \eqref{eq:voltage}, the frequency dynamics \eqref{eq:frequency}, the voltage controller \eqref{eq:voltage_control} and the power-current relationship \eqref{eq:power-current}, we obtain the following closed-loop dynamics
\begin{align}
\begin{bmatrix}
\dot{\omega} \\ \dot{V}
\end{bmatrix}
&= \underbrace{\begin{bmatrix}
-M(K^\omega + K^{\text{droop}}) & MK^V \\
\frac{1}{V^{\text{nom}}}EK^\omega & -E\left(\mathcal{L}_R + \frac{K^V}{V^{\text{nom}}} \right)
\end{bmatrix}}_{\triangleq A}
\begin{bmatrix}
\omega \\ V
\end{bmatrix} \nonumber \\
&+ \begin{bmatrix}
M\left((K^\omega + K^{\text{droop}}) \omega^{\text{ref}}1_{n\times 1} - K^V V^{\text{ref}} \right) \\
E\left(\frac{1}{V^\text{nom}} K^V V^\text{ref} -\frac{\omega^{\text{ref}}}{{V^{\text{nom}}}} K^\omega1_{n\times 1} \right)
\end{bmatrix} \nonumber \\
&+
\begin{bmatrix}
M(P^m + P^\text{nom}-P^{\text{inj, nom}}) \\
\frac{1}{V^{\text{nom}}} E P^{\text{inj, nom}}
\end{bmatrix}
\label{eq:cl_dynamics_vec}
\end{align}
where
$ \omega = [\omega_1, \dots, \omega_n]^T$,
$ V =[V_1, \dots, V_n]^T$,
$M=\diag({m_1}^{-1}, \hdots , {m_n}^{-1})$ is a matrix of inverse generator inertia,
$E=\diag([C_1^{-1}, \dots, C_n^{-1}])$ is a matrix of electrical elastances,
$K^\omega = \diag([K^\omega_1, \dots, K^\omega_n])$,
$K^{\text{droop}} = \diag([K^{\text{droop}}_1, \dots$, $K^{\text{droop}}_n])$,
$K^V = \diag([K^V_1, \dots, K^V_n])$,
$P^\text{nom} =[P^\text{nom}_1,\hdots, P^\text{nom}_n]^T$,
$P^\text{inj, nom} =[P^\text{inj, nom}_1,\hdots, P^\text{inj, nom}_n]^T$, $P^m =[P^m_1,\hdots, P^m_n]^T$,
and $\mathcal{L}_R$ is the weighted Laplacian matrix of the graph representing the HVDC transmission lines, denoted $\mathcal{G}_R$, whose edge-weights are given by the conductances $\frac{1}{R_{ij}}$. The following assumption is made on the nominal generated power and the nominal injected power.
\begin{assumption}
\label{ass:balances_power}
$P^\text{nom}=P^\text{inj, nom}$.
\end{assumption}
\begin{remark}
Assumption~\ref{ass:balances_power} implies that the reference frequency and reference voltages define an equilibrium of the closed-loop system when the deviation from the nominal power generation is zero.
\end{remark}
We define the incremental frequencies and voltages as
\begin{align}
\hat{ \omega} &= \omega-\omega^{\text{ref}}1_{n\times 1} \label{eq:delta_omega}
\\
\hat{ V} &= V- V^{ref} \label{eq:delta_V}.
\end{align}
By Assumption \ref{ass:balances_power}, the decentralized MTDC control system given by \eqref{eq:cl_dynamics_vec}, can be written as
\begin{align}
\begin{bmatrix}
\dot{\hat{\omega}} \\ \dot{\hat{V}}
\end{bmatrix}
&= {\begin{bmatrix}
-M(K^\omega + K^{\text{droop}}) & MK^V \\
\frac{1}{V^{\text{nom}}}EK^\omega & -E\left(\mathcal{L}_R + \frac{K^V}{V^{\text{nom}}} \right)
\end{bmatrix}}
\begin{bmatrix}
\hat{\omega} \\ \hat{V}
\end{bmatrix} \nonumber \\
&+
\begin{bmatrix}
M P^m \\
0_{n\times 1}
\end{bmatrix}. \label{eq:cl_dynamics_vec_delta}
\end{align}
Assume that the system matrix of \eqref{eq:cl_dynamics_vec_delta}, $A$, is full-rank, which ensures that unique equilibrium of \eqref{eq:cl_dynamics_vec_delta} exists. Denote this equilibria $x_{0}=[\omega_{0}^T, V_{0}^T]^T$. Define $\bar{x}\triangleq [\bar{\omega}^T, \bar{V}^T]^T =[\hat{\omega}^T, \hat{V}^T]^T - [\omega_{0}^T, V_{0}^T]^T$.
Now:
\begin{align}
\dot{\bar{x}} = A \bar{x} \label{eq:dynamics_A_decentralized_shifted}
\end{align}
with the origin as the unique equilibrium of the above dynamical system. We are now ready to show the main stability result of this section.
\begin{theorem}
\label{th:stability_passivity_1}
The equilibrium of the decentralized MTDC control system given by \eqref{eq:cl_dynamics_vec_delta} is globally asymptotically stable.
\end{theorem}
\begin{proof}
First consider the Lyapunov function candidate
\begin{align}
W(\bar{\omega}, \bar{V}) &= \frac 12 \bar{\omega}^T K^\omega (K^V)^{-1} M^{-1}\bar{\omega} + \frac{V^\text{nom}}{2} \bar{V}^T C \bar{V}, \label{eq:lyap_hvdc_decentralized_projected}
\end{align}
where $C=\diag([C_1, \dots, C_n])$.
Clearly $W(\bar{\omega}, \bar{V})$ is positive definite and radially unbounded. Differentiating \eqref{eq:lyap_hvdc_decentralized_projected} with respect to time along trajectories of \eqref{eq:dynamics_A_decentralized_shifted}, we obtain
\begin{align*}
&\dot{W}(\bar{\omega}, \bar{V}) \\
&= \bar{\omega}^T K^\omega (K^V)^{-1} M^{-1}\dot{\bar{\omega}} + V^\text{nom} \bar{V}^T E \dot{\bar{V}} + \bar{\eta}' \dot{\bar{\eta}}' \\
&= \bar{\omega}^T \big( -K^\omega (K^V)^{-1}(K^\omega + K^\text{droop})\bar{\omega} + K^\omega \bar{V} \big) \\
&\;\;\;\; + \bar{V}^T \Big( K^\omega \bar{\omega} - (V^\text{nom}\mathcal{L}_R {+} K^V)\bar{V} \Big) \\
&= -\bar{\omega}^T \big( -K^\omega (K^V)^{-1}(K^\omega + K^\text{droop})\bar{\omega} \\
&\;\;\;\; + 2 \bar{\omega}^T K^\omega \bar{V} - \bar{V}^T (V^\text{nom}\mathcal{L}_R + K^V)\bar{V} \\
&= - \begin{bmatrix}
\bar{\omega}^T & \bar{V}^T
\end{bmatrix}
\underbrace{\begin{bmatrix}
K^\omega (K^V)^{-1}(K^\omega + K^\text{droop}) & -K^\omega \\
-K^\omega & K^V
\end{bmatrix}}_{\triangleq Q_1}
\begin{bmatrix}
\bar{\omega} \\ \bar{V}
\end{bmatrix}.
\end{align*}
Clearly $\dot{W}(\bar{\omega}, \bar{V})< 0$ iff the symmetric matrix $Q_1$ is positive definite. By applying the Schur complement condition for positive definiteness, $Q_1$ is positive definite iff
\begin{eqnarray*}
K^\omega (K^V)^{-1}(K^\omega + K^\text{droop}) - K^\omega (K_V)^{-1} K^\omega \\
= K^\omega (K^V)^{-1} K^\text{droop} > 0.
\end{eqnarray*}
Hence $Q_1$ is always positive definite, and thus $\dot{W}(\bar{\omega}, \bar{V}< 0$, which concludes the proof.
\end{proof}
\begin{remark}
Note that the equilibrium of \eqref{eq:cl_dynamics_vec_delta} being globally asymptotically stable implies that all the eigenvalues of $A$ are stable, which ensures that the previous assumption that $A$ is full rank, is valid.
\end{remark}
\begin{remark}
Note that Theorem~\ref{th:stability_passivity_1} only guarantees the stability of the equilibrium. It does however not guarantee that Objective~\ref{obj:1} is fulfilled.
\end{remark}
\section{Equilibrium analysis}
\label{sec:equilibrium}
We will now study the globally asymptotically stable equilibrium of \eqref{eq:cl_dynamics_vec_delta}, in order to bound the asymptotic voltage and frequency deviations from the reference values. We will furthermore show that the generated power in the AC grids will be shared fairly amongst the generators.
We make the following additional assumptions on the controller gains, in order to draw conclusions about the equilibrium of \eqref{eq:cl_dynamics_vec_delta}.
\begin{assumption}
\label{ass:scalar_1} The controller gains satisfy
$K^\omega_i=k^\omega, K^\text{droop}_i=k^\text{droop}, K^V_i=k^V \; \forall i=1, \dots, n$.
\end{assumption}
With the previous assumptions made, and having provided necessary stability conditions for the closed-loop system \eqref{eq:cl_dynamics_vec}, we are ready to analyze its equilibrium.
\begin{theorem}
\label{th:equilibrium}
Assume that Assumptions~\ref{ass:balances_power} and \ref{ass:scalar_1} hold, then Objective \ref{obj:1} is satisfied for the following coefficients
\begin{align*}
e^{\text{gen}} &=\frac{k^\text{droop}\max_i P^m_i}{k^\text{droop}+k^\omega} \left( (n-1) + \frac{k^V}{V^\text{nom}} \sum_{i=2}^n \frac{1}{\lambda_i(\mathcal{L}_R)} \right) \\
e^V &= \frac{k^\omega \left|1_{1\times n}P^m\right|}{nk^\text{droop}k^V} + \frac{k^\omega \max_i \left| P^m_i \right| }{(k^\omega + k^\text{droop})V^\text{nom}} \sum_{i=2}^n \frac{1}{\lambda_i(\mathcal{L}_R)} \\
e^\omega &= \frac{1}{n k^\text{droop}} \left| \sum_{i=1}^n P^m_i \right| \\
&\;\;\;\; + \frac{\max_i |P^m_i|}{k^\text{droop}+k^\omega} \left( (n-1) + \frac{k^V}{V^\text{nom}} \sum_{i=2}^n \frac{1}{\lambda_i(\mathcal{L}_R)} \right).
\end{align*}
\end{theorem}
\begin{remark}
The error bounds $e^{\text{droop}}$ and $e^\omega$ can simultaneously be made arbitrarily small by choosing appropriate controller gains. However, the voltage error bound $e^V$ is lower bounded by a constant. This is of course due to the necessity of a relative voltage drop for having a power flow between in an HVDC line.
\end{remark}
\begin{proof}
Consider the equilibrium of \eqref{eq:cl_dynamics_vec}. Let $\hat{\omega}$ and $\hat{V}$ be defined by \eqref{eq:delta_omega} -- \eqref{eq:delta_V}. By Assumption~\ref{ass:balances_power}, we obtain the following expression
\begin{align}
\label{eq:eq_delta_coordinates}
\begin{bmatrix}
-(K^\omega+K^\text{droop}) & K^V \\
K^\omega & -(K^V+ V^\text{nom} \mathcal{L}_R)
\end{bmatrix}
\begin{bmatrix}
\hat{\omega} \\
\hat{V}
\end{bmatrix}
&=
\begin{bmatrix}
-P^m \\
0_{n\times 1}
\end{bmatrix}.
\end{align}
By multiplying the last $n$ rows of \eqref{eq:eq_delta_coordinates} with $\frac{k^\omega+ k^\text{droop}}{k^\omega}$ and adding to the first $n$ rows of \eqref{eq:eq_delta_coordinates}, we obtain by Assumption~\ref{ass:scalar_1}
\begin{align}
\label{eq:Delta_V_eq}
\underbrace{\left( \frac{(k^\omega+k^\text{droop})V^\text{nom}}{k^\omega} \mathcal{L}_R + \frac{k^\text{droop}k^V}{k^\omega}I_n \right)}_{\triangleq A_1}\hat{V} = P^m.
\end{align}
We write $\hat{V} = \sum_{i=1}^n a^1_i v^1_i$, where $v^1_i$ is the $i$th eigenvector of $A_1$, with the corresponding eigenvalue $\lambda^1_i$. Note that the coefficients $a^1_i$ are unique, since $A_1$ is symmetric, implying that its eigenvectors form an orthonormal basis. Substituting the eigenvector decomposition of $\hat{V}$ in \eqref{eq:Delta_V_eq} yields
\begin{align*}
A_1\hat{V} = A_1 \sum_{i=1}^n a^1_i v^1_i = \sum_{i=1}^n \lambda^1_i a^1_i v^1_i = P^m,
\end{align*}
which implies
\begin{align*}
a^1_i=\frac{(v^1_i)^TP^m}{\lambda^1_i}.
\end{align*}
Let the eigenvalues be ordered by their increasing values. Clearly $\lambda^1_1= \frac{k^\text{droop}k^V}{k^\omega}$ and $v^1_1=\frac{1}{\sqrt{n}} 1_{n\times 1}$. This implies
\begin{align}
\label{eq:Delta_V_ss}
\hat{V} = \frac{k^\omega 1_{1\times n}P^m}{nk^\text{droop}k^V} 1_{n\times 1} + \sum_{i=2}^n \frac{(v^1_i)^TP^m}{\lambda^1_i}v^1_i.
\end{align}
By noting that
\begin{align}
\begin{aligned}
\lambda^1_i &= \frac{(k^\omega + k^\text{droop})V^\text{nom} \lambda_i(\mathcal{L}_R)+k^\text{droop}k^V}{k^\omega} \\
&\ge \frac{(k^\omega + k^\text{droop})V^\text{nom} \lambda_i(\mathcal{L}_R)}{k^\omega},
\end{aligned}
\label{eq:lambda_1_bound}
\end{align}
where $\lambda_i(\mathcal{L}_R)$ is the $i$th eigenvalue of $\mathcal{L}_R$,
we obtain the following bound on $\hat{V}$
\begin{align*}
\norm{\hat{V}}_\infty &\le \frac{k^\omega \left|1_{1\times n}P^m\right|}{nk^\text{droop}k^V} + \frac{\max_i \left| P^m_i \right| k^\omega}{(k^\omega + k^\text{droop})V^\text{nom}} \sum_{i=2}^n \frac{1}{\lambda_i(\mathcal{L}_R)} \\
& \le \frac{k^\omega \left|\sum_{i=1}^nP^m_i\right|}{nk^\text{droop}k^V} + \frac{\max_i \left| P^m_i \right|}{V^\text{nom}} \sum_{i=2}^n \frac{1}{\lambda_i(\mathcal{L}_R)}.
\end{align*}
From the first $n$ rows of \eqref{eq:eq_delta_coordinates}, and by substituting the expression for $\hat{V}$ from \eqref{eq:Delta_V_ss}, we have
\begin{align}
\begin{aligned}
\hat{\omega} &= \frac{k^V\hat{V} + P^m}{k^\omega+ k^\text{droop}} = \frac{1}{k^\omega + k^\text{droop}} \Bigg( \frac{k^\omega 1_{1\times n}P^m}{nk^\text{droop}} 1_{n\times 1} \\
&\;\;\;\;+ \sum_{i=2}^n \frac{k^V(v^1_i)^TP^m}{\lambda^1_i}v^1_i + P^m \Bigg).
\end{aligned}
\label{eq:Delta_omega}
\end{align}
By using the bound on $\lambda^1_i$ from \eqref{eq:lambda_1_bound}, we obtain
\begin{align*}
\norm{\hat{\omega}}_\infty &\le \frac{1}{k^\text{droop}} \Bigg( \frac{\left|\sum_{i=1}^n P^m_i\right|}{n} \\
& \;\;\;\; + \max_i |P^m_i|\Bigg( 1 + \frac{k^V}{V^\text{nom}} \sum_{i=2}^n \frac{1}{\lambda_i(\mathcal{L}_R)} \Bigg) \Bigg).
\end{align*}
Consider now the power generated by the voltage droop controller. By \eqref{eq:Delta_omega} we obtain
\begin{align*}
&{P^\text{droop} + \frac{1}{n} \sum_{i=1}^n P_i^m 1_{n\times 1}} = {-k^\text{droop}\hat{\omega} + \frac{1_{1\times n} P^m}{n} 1_{n\times 1} } \\
&= \frac{k^\text{droop}}{k^\omega+k^\text{droop}} \Bigg(- \frac{1}{n}\sum_{i=1}^n {P^m_i}1_{n\times 1} + P^m \\
&\;\;\;\;\;{+} \sum_{i=2}^n \frac{k^V(v^1_i)^TP^m}{\lambda^1_i}v^1_i \Bigg) .
\end{align*}
By using the bound on $\lambda^1_i$ in \eqref{eq:lambda_1_bound}, we obtain
\begin{align*}
&\norm{{P^\text{droop} + \frac{1}{n} \sum_{i=1}^n P_i^m 1_{n\times 1}}}_\infty \le \frac{k^\text{droop}}{k^\omega+k^\text{droop}} \Bigg( \\
&\;\;\;\; \max_i \left| P^m_i \right|\left( 1+ \sum_{i=2}^n \frac{k^\omega}{(k^\omega + k^\text{droop})V^\text{nom} \lambda_i(\mathcal{L}_R)} \right) \Bigg) \\
& \le \frac{k^\text{droop}}{k^\omega+k^\text{droop}} \max_i \left| P^m_i \right|\Bigg( 1+ \frac{1}{V^\text{nom}}\sum_{i=2}^n \frac{1}{ \lambda_i(\mathcal{L}_R)} \Bigg),
\end{align*}
which completes the proof.
\end{proof}
\section{Simulations}
\label{sec:simulations}
\newlength\figureheight
\newlength\figurewidth
\setlength\figureheight{4.4cm}
\setlength\figurewidth{6.6cm}
In this section, we simulate the proposed controller on an MTDC grid connecting three asynchronous AC areas, whose main purpose is bulk power transfer between the AC areas. The test grid consists of tree 6 bus AC grids, described in detail in \cite{wollenberg2006power}, connected with a 3 bus MTDC grid. In Figure \ref{fig:testgrid}, the topology of the interconnected MTDC-AC grid is shown.
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\columnwidth]{Wood_Base6Bus_fullgrid_trim.pdf}
\caption{Test grid consisting of 3 AC areas, connected by an MTDC grid consisting of 3 converter stations and 3 DC lines.}
\label{fig:testgrid}
\end{figure}
\begin{table}
\centering
\caption{HVDC grid line parameters}
\label{tab:HVDCgridParameter}
\begin{tabular}{llll}\toprule
From & To & Resistance [p.u.]& Reactance [p.u.] \\ \midrule
1 & 2 & 0.0015 & 0.01 \\
1 & 3 & 0.0045 & 0.03 \\
2 & 3 & 0.0015 & 0.01 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}\centering
\raa{1.3}
\caption{Controller Parameter}
\label{tab:ControllerParameter}
\begin{tabular}{@{}llllll@{}}\toprule
$K^{\omega}_1$ &$K^{\omega}_2$&$K^{\omega}_3$& $K^\text{droop}_1$ &$K^\text{droop}_2$ &$K^\text{droop}_3$\\ \midrule
501&501& 501&667&667 & 667 \\
\bottomrule
\end{tabular}
\end{table}
Each converter station is controlled with \eqref{eq:voltage_control}.
While the converter dynamics are ignored due to their fast nature, the nonlinear relation \eqref{eq:power-current_nonlinear} is used to relate the injected AC powers with the HVDC currents.
The physical system parameters and the controller parameters are given in Table \ref{tab:HVDCgridParameter}, \ref{tab:ControllerParameter}, respectively.
The simulation was conducted by using an extended version of MatDyn \cite{matsch}, taking also the HVDC dynamics into account. The simulation starts in steady-state, and at time $1$ s an immediate change in load from 0.7 p.u. (per-unit) to 0.8 p.u. occurs at bus 4 in the AC area 1.
The local frequency controllers at the generators react immediately to the resulting frequency drop by adjusting the generated power. The frequencies of the generators are shown in Figure \ref{fig:generatorspeeds}.
\begin{figure}[th]
\input{Speeds2.tikz}
\caption{Frequencies of the generator areas.}
\label{fig:generatorspeeds}
\end{figure}
After a few seconds, all generator frequencies within the same AC area synchronize, and after about $30$ s the frequencies converge to the new equilibrium. The frequency deviation is larger in AC area $1$ than in the remaining AC areas, but the differences are rather small, in accordance with Theorem~\ref{th:equilibrium}.
Figure \ref{fig:GeneratorDelta} shows the changes in the power output of the generators. The disturbance is shared among all generators.
The injected powers through the converter are shown in Figure \ref{fig:converterpower}. Since the converter dynamics are much faster than the AC systems, they are neglected in the simulation and it is assumed that the converter power tracks the controller output perfectly.
\begin{figure}[th]
\center
\input{GeneratorPowerDelta2.tikz}
\caption{Incremental generator power levels.}
\label{fig:GeneratorDelta}
\end{figure}
\begin{figure}[th]
\center
\input{TermialPowers2.tikz}
\caption{Injected power levels at the converters.}
\label{fig:converterpower}
\end{figure}
Due to the increased load, the DC voltages of all converters increase, see Figure \ref{fig:convertervoltages}. However, as predicted by Theorem~\ref{th:equilibrium}, both the absolute and relative voltage deviations are bounded.
\begin{figure}[th!]
\center
\input{DCVoltages2.tikz}
\caption{Voltages of the DC converters.}
\label{fig:convertervoltages}
\end{figure}
\section{Discussion and Conclusions}
\label{sec:discussion}
In this paper we have proposed a decentralized proportional controller for sharing primary frequency control reserves in asynchronous AC systems connected through an MTDC system. The controller uses the local frequency in the AC grid and the local DC voltage as inputs in order to control the power injections into the MTDC grid. The resulting equilibrium of the closed-loop system is shown to be globally asymptotically stable by using Lyapunov arguments, regardless of the controller parameters. It is also shown that the DC voltages and AC frequencies at the equilibrium are close to their nominal values. Furthermore, the power generated by primary frequency control is shared approximately fairly among the AC areas. The deviation from perfectly fair power sharing is quantified.
The proposed controller was simulated on a test system consisting of 3 AC areas combined with an MTDC grid to demonstrate its effectiveness. The paper constitutes a first step towards utilizing the increased flexibility which future MTDC grids will provide to the connected AC systems. Future work will focus on extending the primary proportional controller with secondary controllers, where communication and integral action will be necessary to eliminate static control errors. An extensive simulation study on more realistic grid topologies and dynamical models is also ongoing work.
|
1,314,259,995,573 | arxiv |
\section{Algorithms}
\label{sec:alg}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{fig_dynamic.pdf}
\caption{(a): The negative database $D_-$ holding only rows containing a pattern $P$. (b): The negative database $D_-$ holding only rows containing a pattern $P \cup \set{i}$. This can be obtained by deleting the row where the column of $i$ is $0$ in the database in (a). (c): The negative database $D_-$ holding only rows containing a pattern $P \cup \set{k}$. (d): A search tree according to the dictionary order of items. (e): A search tree according to pattern frequency. Since the frequency of $P \cup \set{k}$, which adds $k$ to the pattern, is smaller than the frequency obtained by adding any other item, $k$ is added preferentially. At this point, $P \cup \set{i,k}$ and $P \cup \set{j,k}$ can be pruned by minimality because the frequency in the negative class does not change from $P \cup \set{k}$.}
\label{fig:dynamic}
\end{figure}
Our proposed algorithm is based on a depth-first search algorithm such as the CP-tree~\cite{Fan_TKDE2006}. During the search, pruning is possible when $Sup_-(P) = Sup_-(P + \{a\})$, based on the minimality constraint and the constraint expression (the correctness of this pruning will be proved later)~\cite{Loekito_KDD2006}. The key idea of our algorithm is to choose a search order in which variables satisfying this pruning rule are encountered early, so that unnecessary searches are avoided.
Figure~\ref{fig:dynamic} shows the basic idea. When variables $i$ and $j$ with $Sup_{D_-}(P + \{i\}) < Sup_{D_-}(P + \{j\})$ are given as the next search candidates, the search can be pruned more quickly by exploring $P + \{i\}$ first.
Therefore, the search is performed with priority given to the item $i \in B$ with the smallest $Sup_{D_-}(P + \{i\})$, where $P$ is the currently searched pattern and $B$ is the set of candidate items to be added next.
This means that instead of searching variables in a predetermined static order, the search order is determined dynamically according to the currently searched pattern $P$.
In order to implement this, it is important to enable fast support counting by always reducing the database so that it contains only the records containing the pattern $P$ currently being searched. An enumeration method for contrast patterns using ZDDs with pruning and database reduction has been proposed. However, because a ZDD is constructed with a static variable order fixed in advance, it is difficult to extend this approach to dynamic ordering.
In this section, we first show the pruning rules used in this algorithm, and then give a data structure that uses dancing links to efficiently execute dynamic variable ordering.
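As a minimal sketch of the dynamic ordering described above (not the actual implementation, which counts supports on the reduced DRMX matrices introduced below), the choice of the next branching item can be written as follows, where \texttt{sup\_neg} is an assumed helper returning $Sup_{D_-}$ of a pattern:
\begin{verbatim}
# Sketch: dynamic item ordering -- branch on the item whose addition
# minimizes the negative support of the current pattern P.
# "sup_neg" is a hypothetical helper returning Sup_{D_-} of a pattern.
def next_item(P, B, sup_neg):
    return min(B, key=lambda i: sup_neg(P | {i}))
\end{verbatim}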
\subsection{Pruning rules}
In this subsection, we explain the types of pruning used in the proposed algorithm.
\subsubsection{Pruning 1: for minimality constraint}
This is an obvious pruning rule. If we find a pattern $P$ that satisfies the constraints other than the minimality constraint, we do not need to search any pattern $Q \supset P$.
\subsubsection{Pruning 2: for $EP$ and $CP$ by lower bound}
Let $P \in 2^I$ be any pattern at the current iteration. We define the set of descendants by $Desc(P) := \set{ Q \in 2^I \mid P \subseteq Q }$.
We define the upper bound and the lower bound of the values of $f$ on all descendants of $P$ by
\begin{align*}
GUB_{D_+, D_-}[f](P)
&:= GUB[f](Desc(P))
\\&= \max \set{ f(Q) \mid Q \in 2^I, P \subseteq Q }.
\end{align*}
\begin{align*}
GLB_{D_+, D_-}[f](P)
&:= GLB[f](Desc(P))
\\&= \min \set{ f(Q) \mid Q \in 2^I, P \subseteq Q }.
\end{align*}
If $B$ is the set of all unsearched items, the lower bound of the negative frequency is $lb\_occ_- = Sup_-(P \cup B)$ due to the monotonicity of frequency.
Similarly, the upper bound of the positive frequency is $ub\_occ_+ = Sup_+(P)$.
Let $f = GR = Sup_+(P) / Sup_-(P)$; that is, under the growth rate constraint, the upper bound over all descendants satisfies $GUB_{D_+, D_-}[GR](P) \leq ub\_occ_+ / lb\_occ_-$. Thus, pruning is possible when this upper bound is smaller than the given parameter $\theta$. The pruning using the lower bound of the negative frequency and the upper bound of the positive frequency can be used similarly for the contrast constraint and the chi-square value constraint.
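A minimal sketch of this bound-based check is given below; \texttt{sup\_pos} and \texttt{sup\_neg} are assumed helpers returning $Sup_+$ and $Sup_-$ of a pattern, whereas the actual implementation uses the DRMX counting operations described later.
\begin{verbatim}
# Sketch of Pruning 2: prune all descendants of P if no descendant can
# satisfy the contrast and growth-rate constraints.  sup_pos / sup_neg are
# hypothetical helpers returning Sup_+ and Sup_- of a pattern.
def can_prune_by_bounds(P, B, sup_pos, sup_neg, sigma_neg, theta):
    lb_occ_neg = sup_neg(P | B)   # lower bound of Sup_- over all descendants
    ub_occ_pos = sup_pos(P)       # upper bound of Sup_+ over all descendants
    if lb_occ_neg > sigma_neg:    # Sup_- <= sigma_- can never hold
        return True
    if lb_occ_neg > 0 and ub_occ_pos / lb_occ_neg < theta:
        return True               # GR >= theta can never hold
    return False
\end{verbatim}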
\subsubsection{Pruning 3: Safe pruning for minimal constrained patterns based on negative conservative elements}
We show the soundness of a pruning strategy using the negative occurrences.
\begin{definition}[Condition C1']
For any pattern $P$, and any $a \in I\cup \neg I$, if $Sup_-(P) = Sup_-(P+a)$ then the implication $P+a \in \mathbb C \implies P \in \mathbb C$ holds.
\end{definition}
\begin{definition}[Condition C2']
If $Occ_+(P) = Occ_+(Q)$ and $Occ_-(P) = Occ_-(Q)$, then the equivalence $P \in \mathbb C \iff Q \in \mathbb C$ holds.
\end{definition}
Now, we have the following lemma and theorem.
\begin{lemma}[Soundness of pruning for $\mathbb C$-patterns]\label{lemma:pr3}: Suppose that a constraint $\mathbb C$ satisfies Conditions C1' and C2' above. Let $P$ be any pattern and any item $a \notin P$. Suppose that $Sup_-(P) = Sup_-(P+a)$. For any pattern $Z$ that is an extension of $P+a$, where $P+a \subseteq Z$, $Z \in \mathbb C \implies Z\setminus\{a\} \in \mathbb C$.
\end{lemma}
\begin{theorem}[Soundness of pruning for minimal $\mathbb C$-patterns]: Suppose that a constraint $\mathbb C$ satisfies Conditions C1' and C2' above. Let $P$ be any pattern and $a \notin P$ any item, and suppose that $Sup_-(P) = Sup_-(P+a)$. Then, no extension $Z$ of the pattern $(P+a)$ satisfies the minimality constraint $\mathbb{MINC}$ w.r.t. $\mathbb{C}$. That is, for any $Z$, the condition $P+a \subseteq Z$ implies that $Z \notin \mathbb{MINC}$.
\end{theorem}
\begin{proof}
We assume the conditions C1' and C2', and that a pattern $P$ satisfies $Sup_-(P) = Sup_-(P+a)$. Now, suppose for contradiction that $Z \in \mathbb{MINC}$ for some (possibly identical) extension $Z$ of $(P+a)$. Since $\mathbb{MINC}\subseteq \mathbb{C}$, we have $Z \in \mathbb{C}$. Then, it immediately follows from Lemma~\ref{lemma:pr3} that $Z\setminus\{a\} \in \mathbb{C}$. Since $a \in Z$, the set $Z\setminus\{a\}$ is a strict subset of $Z$. Hence, $Z$ cannot be minimal in $\mathbb{C}$, i.e., $Z \notin \mathbb{MINC}$.
\end{proof}
At any unsuccessful iteration on $P$ such that $P \notin \mathbb C$, if the condition $Sup_-(P) = Sup_-(P+a)$ holds, then we can prune all the descendants of $(P+a)$, and then backtrack to the parent $P$.
\subsection{Dynamically reducible binary matrices}
In this subsection, we propose a representation of a binary matrix, called a dynamically reducible binary matrix (DRMX),
which allows efficient modification and undo operations on a transaction database, as needed for dynamic item ordering during the backtrack search for candidate patterns. To achieve this goal, we employ the dancing links data structure of Knuth~\cite{Knuth_2000}.
\begin{definition}[D1]
The DRMX data structure $\mathbb M$ stores a transaction database $M = (T, I, R)$ and supports the following operations, where we refer to a tuple and an item as a row and a column, respectively.
\end{definition}
\begin{itemize}
\item $\mathbb M := DRMX.create(M)$: Create a new DRMX storing a given transaction database $M$.
\item $\mathbb M.deleteRow(i)$: Remove the row with rid $i$ from the matrix.
\item $\mathbb M.deleteColumn(j)$: Remove the column with cid $j$ from the matrix.
\item $\mathbb M.checkpoint()$: Push the current state of the matrix on the undo-stack.
\item $\mathbb M.undo(i)$: Pop $i$ times from the undo-stack, and then recover the state of the matrix $M$ at the time of the checkpoint.
\item $\mathbb M.countRows(P)$: Return the number of rows that have $1$ in every column $j\in P$ of the matrix $M$. If $P = \emptyset$ then return the number of rows in the matrix $M$.
\end{itemize}
Dancing links can perform these operations efficiently. In the next subsection we describe our algorithm in pseudo code using these operations.
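The sketch below illustrates the interface of Definition D1 with a deliberately simplified implementation (active row/column sets plus an undo stack). It is only meant to clarify the semantics of the operations; the actual DRMX relies on Knuth's dancing links, whose constant-time unlink/relink operations are not reproduced here.
\begin{verbatim}
# Simplified stand-in for DRMX (illustration only, not dancing links).
class SimpleReducibleMatrix:
    def __init__(self, rows):                 # rows: list of sets of column ids
        self.rows = rows
        self.active_rows = set(range(len(rows)))
        self.active_cols = set().union(*rows) if rows else set()
        self.undo_stack = []

    def checkpoint(self):                     # push current state on the stack
        self.undo_stack.append((set(self.active_rows), set(self.active_cols)))

    def undo(self, i=1):                      # pop i times to restore a checkpoint
        for _ in range(i):
            self.active_rows, self.active_cols = self.undo_stack.pop()

    def delete_row(self, rid):
        self.active_rows.discard(rid)

    def delete_column(self, cid):             # mark the column inactive
        self.active_cols.discard(cid)

    def count_rows(self, P=frozenset()):
        # rows containing all columns in P (P is assumed to use active columns)
        return sum(1 for r in self.active_rows if P <= self.rows[r])
\end{verbatim}
A checkpoint/undo pair would bracket each pair of recursive calls in the mining algorithm, mirroring the database modifications and their restoration in the pseudocode below.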
\subsection{Pseudo codes of our algorithm}
Pseudocode of our algorithm is shown in Algorithms~\ref{alg:main} and~\ref{alg:mine}. In Algorithm~\ref{alg:main}, the solution candidates are mined on line~3, and the minimal solutions are then extracted on line~4. A method using BDDs has been proposed for narrowing down the candidates to the minimal solutions~\cite{Toda_EA2013}. Algorithm~\ref{alg:mine} is the main mining algorithm.
\IncMargin{1em}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\SetFuncSty{textrm}
\SetCommentSty{textrm}
\SetKwFunction{MiningMCP}{{\scshape MiningMCP}}
\SetKwFunction{FindCandidates}{{\scshape FindCandidates}}
\SetKwFunction{ExtractMinimalPatterns}{{\scshape ExtractMinimalPatterns}}
\SetKwProg{myfunc}{}{}{}
\begin{algorithm}[th]
\label{alg:main}
\caption{Main function for mining $\mathbb{MINCP}$}
\Input{A pair $D_+, D_- \subseteq 2^{I}$ of positive and negative datasets represented in the DRMX data structure, a tuple $\Theta = (\sigma_+, \sigma_-, \theta, \gamma)$ of mining parameters (See above for the meaning of symbols). }
\Output{The set $MCP \subseteq 2^{I}$ of all and only minimal constrained patterns.}
\myfunc{\MiningMCP{$D, I, \Theta$}}{
$(D_+, D_-) \gets DRMX.create(D)$ \;
$CP \gets \FindCandidates(\emptyset, I, \mathbb D_+, D_-, \Theta)$ \;
$MCP \gets \ExtractMinimalPatterns(CP)$ \;
\Return $MCP$ \;
}
\end{algorithm}
\DecMargin{1em}
\newcommand{\deli}[1]{\setminus\set{#1}}
\IncMargin{1em}
\begin{algorithm}[h!]
\small
\label{alg:mine}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\SetFuncSty{textrm}
\SetCommentSty{textrm}
\SetKwFunction{FindCandidates}{{\scshape FindCandidates}}
\SetKwProg{myfunc}{}{}{}
\caption{An algorithm for finding candidates for minimal contrast patterns in $\mathbb{MINCP}$ under the following mining parameters: $\sigma_+, \sigma_-$: positive and negative support thresholds, $\theta$: minimum growth-rate threshold, $\gamma$: minimum $\chi^2$-value threshold. }
\myfunc{\FindCandidates{$P, B, D_+, D_-, \Theta$}}{
$CP \gets \emptyset$ \;
$(occ_+, occ_-) \gets (D_+.countRow(), D_-.countRow())$ \tcp*{positive and negative frequencies}
\tcp{Pruning1: Pruning with minimal constraints}
\If{$isCP(occ_+, occ_-, \Theta) = true$}{
$CP \gets CP \cup \set{P}$ \tcp*{discover patterns}
\Return{CP} \;
}
\tcp{Pruning2: Pruning using the lower bound of the negative frequency of all descendants of $P$}
$lb\_occ_- \gets D_-.countRow(B)$ \tcp*{the lower bound for the negative frequency over all descendants of $P$}
\If{ $(lb\_occ_- > \sigma_-)$ }{
\tcp{Prune all descendants of $P$}
return \;
}
\tcp{Pruning 2 (continued): Pruning EPs using the upper bound of the GR over all descendants of $P$}
$ub\_occ_+ \gets D_+.countRow()$ \tcp*{the upper bound of the positive frequency over all descendants of $P$}
$ub\_gr \gets ub\_occ_+/lb\_occ_-$ \tcp*{Maximum GR of all descendants of $P$}
\If{ $ub\_gr < \theta$ }{
return \tcp*{Prunes all descendants of $P$}
}
\tcp{Pruning 3: Database reduction based on the minimality constraint}
\For{each $i \in B$}{
\If{$(D_-.countRow() = D_-.countRow(i))$}{
$B \gets B \setminus i$ \;
$D_k.deleteColumn(i)$, $\forall k\in\set{+,-}$ \;
}
}
\If{$B = \emptyset$}{
\Return{CP}
}
\tcp{Dynamic Ordering based on the frequency of $P\cup\set{i}$ on $D_-$}
$i_* \gets \mathop{\rm arg~min}\limits_{i \in B} ( D_-.countRow(i ))$
\tcp*{Select the least frequent item on $D_-$}
Record the current snapshot of $D_k$ as a checkpoint $\tau$\;
\tcp{Branch 0}
$D_k \gets D_k\deli{i_*}$, $\forall k\in\set{+,-}$ \tcp*{fast implementation by DRMX}
$CP \gets CP \; \cup$ \FindCandidates{$P, B\deli{i_*}, D_+, D_-, \Theta$} \;
\vspace{1em}
\tcp{Branch 1}
$D_k \gets \set{ t \in D_k \mid P\cup\set{i_*} \subseteq t }$, $\forall k\in\set{+,-}$ \;
$CP \gets CP \;\cup$ \FindCandidates{$P\cup\set{i_*}, B\deli{i_*}, D_+, D_-, \Theta$} \;
Undo the modification of $D_k$ at $\tau$\;
\Return{CP}
}
\end{algorithm}
\DecMargin{1em}
\section{Conclusions}
\label{sec:conc}
In this paper, we considered the problem of constrained pattern mining. We proposed dynamic variable ordering during the pattern search, together with a dancing-links-based data structure to support it. Through computational experiments on real datasets, we observed that our algorithm outperforms existing algorithms on dense databases.
\section{Experimental results}
\label{sec:exp}
We examine the following three experiments in this section.
The first is a speed comparison between heuristics with various static variable orders and the proposed dynamic variable ordering.
The second is a speed comparison with other methods.
Finally, we evaluate a classification model using the patterns that we actually mined.
We implemented our algorithm using C++.
All CPU time is measured on a Linux workstation with an Intel Xeon E5-2680 v2 2.80GHz CPU and 400GB memory.
\subsection{Experiments 1: Effectiveness of dynamic ordering}
In this experiment, we investigate the speed difference between the static variable order and the dynamic variable order in our proposed method.
Table~\ref{tbl:dataset-enum} gives the datasets used for performance evaluation in this and the next subsections.
They are all from the CP4IM dataset\footnote{https://dtai.cs.kuleuven.be/CP4IM/datasets/} with more than 50 items and more than 200 examples.
The column ``density'' shows the average percentage of the number of items in an example over the number of all items.
Mushroom and Splice-1 are relatively sparse, German-credit is moderate, and the rest are dense.
The columns ``\#JEPs'' and ``\#SJEPs'' indicate the number of jumping emerging patterns and strong jumping emerging patterns, respectively, when minimum support threshold is set to 0.02 times the number of positive examples.
\begin{table}[!ht]\centering
\caption{Datasets used in Experiments 1 and 2.}
\label{tbl:dataset-enum}
\begin{tabular}{crrrrr}
name & \#item & \#example & density & \#JEPs & \#SJEPs \\
\hline
Mushroom & 119 & 8124 & 18\% & 21574290 & 1353 \\
Splice-1 & 287 & 3190 & 21\% & 377330 & 179810 \\
German-credit & 112 & 1000 & 34\% & 2410029163 & 148303 \\
Kr-vs-kp & 73 & 3196 & 49\% & 129786095160 & 7283 \\
Hypothyroid & 88 & 3247 & 50\% & 40807701172704 & 1966 \\
Anneal & 93 & 812 & 45\% & 34803198050304 & 3906 \\
Heart-cleveland & 95 & 296 & 47\% & 29701186840434 & 946235 \\
Australian-credit & 125 & 653 & 41\% & 261786633471699 & 2057646 \\
Audiology & 148 & 216 & 45\% & \textit{unknown} & 2858 \\
\end{tabular}
\end{table}
\begin{table}[!ht]
\centering
\caption{Datasets used in Experiment 3.}
\begin{tabularx}{\columnwidth}{cXXX}
name & \#sample & {\#feature (not binarized)} & \#target class \\ \hline
Banknote Authentication & 1372 & 5 & 1 \\
Breast Tissue& 106 & 10 & 6 \\
Glass Identification & 214 & 10 & 6\\
Iris & 150 & 4 & 3 \\
Wireless Indoor Localization (Wifi) & 2000 & 7 & 4 \\
Yeast & 1484 & 8 & 9 \\
\end{tabularx}
\label{tbl:dataset-pred}
\end{table}
\begin{figure}[t]\centering
\includegraphics[width=.32\columnwidth]{fig/mushroom-methods.pdf}\hfill
\includegraphics[width=.32\columnwidth]{fig/splice-1-methods.pdf}\hfill
\includegraphics[width=.32\columnwidth]{fig/german-credit-methods.pdf}\\\medskip
\includegraphics[width=.32\columnwidth]{fig/kr-vs-kp-methods.pdf}\hfill
\includegraphics[width=.32\columnwidth]{fig/hypothyroid-methods.pdf}\hfill
\includegraphics[width=.32\columnwidth]{fig/anneal-methods.pdf}\\\medskip
\includegraphics[width=.32\columnwidth]{fig/heart-cleveland-methods.pdf}\hfill
\includegraphics[width=.32\columnwidth]{fig/australian-credit-methods.pdf}\hfill
\includegraphics[width=.32\columnwidth]{fig/audiology-methods.pdf}\\
\caption{Comparison of the mining time for minimal emerging patterns with $\theta=9$, using combinations of the pruning rule 2 (LB) or 3 (NC) and static or dynamic ordering.}
\label{fig:pruning-and-ordering}
\end{figure}
\begin{figure}[h!]\centering
\includegraphics[width=.32\columnwidth]{fig/mushroom-miners.pdf}\hfill
\includegraphics[width=.32\columnwidth]{fig/splice-1-miners.pdf}\hfill
\includegraphics[width=.32\columnwidth]{fig/german-credit-miners.pdf}\\\medskip
\includegraphics[width=.32\columnwidth]{fig/kr-vs-kp-miners.pdf}\hfill
\includegraphics[width=.32\columnwidth]{fig/hypothyroid-miners.pdf}\hfill
\includegraphics[width=.32\columnwidth]{fig/anneal-miners.pdf}\\\medskip
\includegraphics[width=.32\columnwidth]{fig/heart-cleveland-miners.pdf}\hfill
\includegraphics[width=.32\columnwidth]{fig/australian-credit-miners.pdf}\hfill
\includegraphics[width=.32\columnwidth]{fig/audiology-miners.pdf}\\
\caption{Comparison of the mining time for JEPs and SJEPs, using LCM, CP-tree, and our algorithm.}
\label{fig:vs-cptree-and-lcm}
\end{figure}
Comparison of mining time for minimal emerging patterns is shown in Figure~\ref{fig:pruning-and-ordering}, where ``LB'' uses the pruning rules 1 and 2, and ``NC'' uses the pruning rules 1 and 3.
Growth rate constraint is fixed to $\theta=9$ in the experiments.
On the Audiology dataset, mining could not be finished within 3600 seconds without the combination of LB and dynamic ordering.
On the dense datasets, effectiveness of LB is improved dramatically when it is combined with dynamic ordering.
\subsection{Experiments 2: Performance comparison with existing methods}
In this experiment, we investigate the speed difference in mining jumping emerging patterns between the proposed method and the existing methods (LCM~\cite{Uno_FIMI2004} and CP-tree~\cite{Fan_TKDE2006}).
LCM, proposed by Uno et al., is a state-of-the-art algorithm that won the FIMI 2004 competition on closed frequent itemset mining.
We used LCM version 5.3\footnote{\url{http://research.nii.ac.jp/~uno/codes.htm}}, which can mine JEPs by setting large negative weights to the negative data.
The CP-tree proposed by Fan manages pattern frequencies in a tree structure and achieves high-speed mining by reducing accesses to the database.
We used a C++ implementation of the CP-tree algorithm that mines SJEPs.
The results are shown in Figure~\ref{fig:vs-cptree-and-lcm}.
CP-tree could not complete mining within 3600 seconds on dense datasets.
We can see that LCM can perform JEP mining, which is more expensive than SJEP mining, orders of magnitude faster than the traditional CP-tree algorithm.
On the Audiology dataset, only the SJEP version of our algorithm could be finished within 3600 seconds.
The JEP version of our algorithm sometimes completed orders of magnitude faster than LCM, and the SJEP version was always faster than the others on dense datasets.
\subsection{Experiments 3: Evaluation of classification model using mined patterns}
\begin{table}[ht!]
\caption{Comparison of F-values between the proposed method and existing methods.}
\label{table:f1-score}
\small
\begin{tabularx}{\columnwidth}{cXXXXX}
& Proposed method (with negative items) & Proposed method (without negative items) & Decision trees & Logistic regression & Random forests \\ \hline
banknote-class-1 & \textbf{0.998} & 0.992 & 0.989 & 0.991 & 0.994 \\ \hline
breast-tissue-class-adi & \textbf{1.000} & \textbf{1.000} & 0.935 & 0.931 & 0.971 \\
breast-tissue-class-car & \textbf{0.937} & 0.863 & 0.891 & 0.894 & 0.931 \\
breast-tissue-class-con & \textbf{1.000} & \textbf{1.000} & 0.891 & 0.740 & 0.900 \\
breast-tissue-class-fad & \textbf{0.806} & 0.722 & 0.599 & 0.673 & 0.535 \\
breast-tissue-class-gla & \textbf{0.881} & \textbf{0.881} & 0.771 & 0.700 & 0.727 \\
breast-tissue-class-mas & \textbf{0.832} & 0.770 & 0.523 & 0.482 & 0.474 \\ \hline
glass-class-1 & 0.802 & 0.800 & 0.733 & 0.716 & \textbf{0.830} \\
glass-class-2 & \textbf{0.867} & 0.863 & 0.777 & 0.599 & 0.799 \\
glass-class-3 & \textbf{0.697} & 0.625 & 0.558 & 0.231 & 0.371 \\
glass-class-5 & 0.920 & \textbf{0.960} & 0.777 & 0.658 & 0.865 \\
glass-class-6 & \textbf{1.000} & \textbf{1.000} & 0.960 & 0.920 & \textbf{1.000} \\
glass-class-7 & 0.942 & \textbf{0.966} & 0.900 & 0.915 & 0.915 \\ \hline
iris-class-setosa & \textbf{1.000} & \textbf{1.000} & \textbf{1.000} & \textbf{1.000} & \textbf{1.000} \\
iris-class-versicolor & \textbf{0.962} & 0.916 & 0.948 & 0.730 & 0.949 \\
iris-class-virginica & 0.949 & 0.943 & 0.952 & \textbf{0.971} & 0.952 \\ \hline
wifi-class-1 & 0.993 & 0.993 & 0.989 & 0.989 & \textbf{0.997} \\
wifi-class-2 & \textbf{0.982} & 0.961 & 0.979 & 0.977 & 0.978 \\
wifi-class-3 & 0.974 & 0.943 & 0.953 & 0.598 & \textbf{0.975} \\
wifi-class-4 & \textbf{0.995} & 0.956 & 0.991 & 0.994 & \textbf{0.995} \\ \hline
yeast-class-CYT & 0.632 & 0.604 & 0.604 & 0.606 & \textbf{0.650} \\
yeast-class-ERL. & \textbf{1.000} & \textbf{1.000} & 0.647 & 0.867 & 0.167 \\
yeast-class-EXC & \textbf{0.661} & 0.536 & 0.589 & 0.530 & 0.654 \\
yeast-class-ME1 & \textbf{0.785} & 0.734 & 0.761 & 0.641 & 0.779 \\
yeast-class-ME2 & \textbf{0.591} & 0.420 & 0.485 & 0.430 & 0.483 \\
yeast-class-ME3 & \textbf{0.823} & 0.810 & 0.793 & 0.768 & 0.811 \\
yeast-class-MIT & 0.634 & 0.589 & 0.614 & 0.590 & \textbf{0.645} \\
yeast-class-NUC & 0.630 & 0.545 & 0.605 & 0.590 & \textbf{0.634} \\
yeast-class-POX & \textbf{0.628} & 0.614 & 0.614 & 0.614 & 0.560 \\ \hline
\end{tabularx}
\end{table}
In this experiment, we compare the classification model using patterns mined by the proposed method with existing models.
Our model is a generalized additive linear model learned by the LASSO algorithm, which uses mined patterns with a maximum length of 5 as features.
We performed binary classification on various datasets and compared F-values with existing methods (Logistic Regression, Decision Tree, and Random Forest) using 5-fold cross-validation.
We used data binarized with MDLP for learning our method, and the original real-valued data for the existing methods.
Here we considered two types of binarized data, one that introduces negation and one that does not.
Both our method and the existing methods tuned their hyperparameters with Optuna\footnote{\url{https://optuna.readthedocs.io/en/stable/}}.
We show the used datasets in Table~\ref{tbl:dataset-pred}.
We show the comparison results in Table~\ref{table:f1-score}.
The results show that our model achieves superior or comparable performance on all datasets.
In addition, the following two characteristic observations can be made.
(i) Our model achieves high F-values on the datasets where existing methods perform poorly, for example the breast-tissue-class-mas data and the glass-class-3 data.
(ii) Our model tends to perform better when negation is introduced in the binarized data, for example on the yeast-class-EXC data and the yeast-class-ME2 data.
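To make the modeling step concrete, the sketch below shows how mined patterns can be used as binary features in an L1-regularized (LASSO-type) linear classifier. It is only a schematic stand-in for the model used here (the exact generalized additive model, the MDLP binarization, and the Optuna tuning are omitted), and the toy data, pattern list, and regularization strength are placeholders.
\begin{verbatim}
# Sketch: mined patterns (length <= 5) as binary features for an
# L1-regularized linear classifier.  Data and patterns are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pattern_features(transactions, patterns):
    return np.array([[1 if p <= t else 0 for p in patterns]
                     for t in transactions])

transactions = [frozenset({"a", "b"}), frozenset({"a"}), frozenset({"b", "c"})]
labels = np.array([1, 1, 0])
patterns = [frozenset({"a"}), frozenset({"b", "c"})]   # e.g., mined patterns

X = pattern_features(transactions, patterns)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X, labels)
print(clf.predict(X))
\end{verbatim}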
\section{Introduction}
\label{sec:intro}
Machine learning of various classes of interpretable prediction models over combinatorial features, such as decision trees and rule lists~\cite{Angelino_JMLR2018,Lakkaraju_KDD2016}, has attracted much attention in the last few years from the viewpoint of trustworthy machine learning and knowledge discovery.
Among many classes of combinatorial features, \textit{constrained patterns} such as \textit{contrast} and \textit{emerging patterns} are important in high-dimensional data sets~\cite{Dong_KDD1999,Fan_TKDE2006,Loekito_KDD2006}; these are itemsets that discriminate one class from another by capturing significant differences between two classes.
These classes of patterns are useful for capturing large differences between two data sets, for providing human experts with interpretable explanations, and for constructing highly accurate classifiers~\cite{Li_PAKDD2000}.
Techniques in modern frequent itemset miners, such as LCM~\cite{Uno_FIMI2004}, work also well in finding constrained patterns of a sparse dataset where the frequency drops sharply with the addition of items.
However, for knowledge discovery, we often work with dense databases.
For example, we consider the case where a pattern consists of positive as well as negative items, where a negative item is a special symbol $\bar i$ indicating that the corresponding positive item $i$ does not appear in a transaction.
This is important for interpretability and knowledge discovery because it allows us to describe patterns with fewer combinations of features that would be difficult to express succinctly with only positive items.
We propose a mining algorithm for constrained patterns that works efficiently not only on sparse databases but also on dense databases.
The key technique of our algorithm is to apply dynamic item ordering during pattern search.
In our algorithm, we use several pruning methods based on dynamic item ordering, some of which are very effective for dense databases.
The same idea is used for maximal frequent pattern mining~\cite{Bayardo_SIGMOD1998}, but to the best of our knowledge, it has not been considered for constrained pattern mining.
In order to make dynamic item ordering work efficiently, we also propose a novel database representation, \textit{DRMX} (Dynamically Reducible Binary Matrix), based on dancing links~\cite{Knuth_2000}, which supports the deletion of rows and columns in arbitrary order at any moment, and undoing these deletions in reverse order to restore the previous snapshot.
By experiments on real data sets, we compare our mining algorithm {MiningMCP} with previous, state-of-the-art algorithms in both mining and learning tasks.
After confirming the effectiveness of dynamic ordering in various pruning strategies, we compare the proposed method {MiningMCP} with the state-of-the-art methods LCM~\cite{Uno_FIMI2004} and CP-tree~\cite{Fan_TKDE2006} for mining jumping emerging patterns.
We observed that {MiningMCP} is 100 to 1000 times faster than LCM and CP-trees for almost all dense data sets.
Finally, we conducted binary classification experiments, and observed that the models constructed by our method achieved superior accuracy on all data sets compared with existing learning methods such as logistic regression, decision trees, and random forests, and that the use of negative items was effective in learning some difficult data sets.
This paper is organized as follows. Section~\ref{sec:pre} gives the preliminaries. The details of our method are provided in Section~\ref{sec:alg}. Section~\ref{sec:exp} presents experimental results. Section~\ref{sec:conc} concludes the paper.
\section{Preliminaries}
\label{sec:pre}
\subsection{Labeled databases and generalized itemsets}
Let $I = \{a_1, \dots, a_n\}$ be an alphabet of $n$ items.
A \textbf{labeled database} over $I$ is a pair $D = (D_+, D_-)$, where $D_+, D_- \subseteq 2^I$ are possibly overlapping sets of positive and negative tuples over $I$, respectively.
A tuple in $D$ is also called a data or an example.
As a class of patterns, we consider the class of generalized itemsets defined as follows.
A \textbf{literal} is either an item $x \in I$ or its negation $\neg x$. We refer to $x$ and $\neg x$ as \textbf{positive} and \textbf{negative literals}. We denote the \textbf{set of all negative literals} by $\neg I := \{\neg x \mid x \in I\}$.
We denote by $\mathbb{LD}$ the domain of all possible labeled databases over $I$.
A \textbf{generalized itemset} (a pattern, for short) over $I$ is an expression $X = X_{pos} \cup X_{neg}$, where $X_{pos} = \{x_1, \ldots, x_k\} \subseteq I$ and $X_{neg} = \{\neg x_{k+1}, \ldots, \neg x_{k+m}\} \subseteq \neg I$ are sets of $k$ positive literals and $m$ negative literals, respectively.
Then, the \textbf{size} of $X$ is $|X| = k + m$. Clearly, $X \subseteq I \cup \neg I$. In what follows, we denote by $\mathbb P = 2^{I \cup \neg I}$ the class of generalized itemsets over $I$.
For any tuple $t \in 2^I$, a generalized itemset $X$ \textbf{occurs in} $t$, denoted $X \sqsubseteq t$, if all positive literals and none of negative literals of $X$ are contained in $t$, i.e. $\forall i \in [1..k], x_i \in t$ and $\forall j \in [k+1..k+m], x_j \notin t$. For any tuple $t \in D_+\cup D_-$, if $X \sqsubseteq t$, we say that $t$ is an occurrence of $X$ in $D$. For any set $D$ of tuples, the occurrence list of $X$ in $D$ is the set $Occ_{D}(X) := \{ t \in D \mid X \sqsubseteq t \}$. The positive and negative supports are $Sup_+(X) := |Occ_{D_+}(X)|$ and $Sup_-(X) := |Occ_{D_-}(X)|$, respectively.
In terms of propositional logic, a generalized itemset $X = \{x_1, \ldots, x_k\} \cup \{\neg x_{k+1}, \ldots, \neg x_{k+m}\}$ represents the conjunction $$\widetilde X := (\bigwedge _{i=1}^k x_i) \wedge (\bigwedge _{j=k+1}^{k+m} \neg x_j)$$ of positive and negative literals over $I$. The logical meaning of $X$ is given as follows. For any assignment $t \in 2^I$, we define the associated Boolean assignment $\tilde t: I \to \{0,1\}$ as $\tilde t(x) = 1$ if $x \in t$ and $\tilde t(x) = 0$ otherwise. Then, we can easily show that $X \sqsubseteq t$ if and only if the conjunction $\widetilde X$ is valid on $\tilde t$, that is, $\tilde t \models \widetilde X$.
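A minimal sketch of this occurrence relation (using plain Python sets, rather than the binary-matrix representation used by the miner in Section~\ref{sec:alg}):
\begin{verbatim}
# Occurrence of a generalized itemset X (positive items "pos" plus negated
# items "neg") in a tuple t, and its support in a set of tuples D.
def occurs_in(pos, neg, t):
    return pos <= t and not (neg & t)    # all positives in t, no negatives in t

def support(pos, neg, D):
    return sum(1 for t in D if occurs_in(pos, neg, t))

D_pos = [frozenset({"a", "b"}), frozenset({"a", "c"})]
D_neg = [frozenset({"b", "c"})]
print(support({"a"}, {"c"}, D_pos), support({"a"}, {"c"}, D_neg))   # -> 1 0
\end{verbatim}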
\subsection{Our data mining problem}
Let $\mathbb{LD}$ and $\mathbb P$ be domains of labeled databases and patterns over $I$. A \textbf{pattern constraint} (or constraint)
over $\mathbb{LD}$ and $\mathbb P$ is a mapping
$\mathbb C(\cdot \mid \mathbb P): \mathbb{LD} \to 2^{\mathbb P}$
that assigns a given labeled database $D = (D_+, D_-) \in \mathbb{LD}$ to a subset $\mathbb C(D \mid \mathbb P)\subseteq \mathbb P$ of patterns.
We will simply refer to $\mathbb C = \mathbb C(D\mid \mathbb P)$ as a \textbf{constraint} on $\mathbb P$ if $D$ is clear from context. In the later sections, we will introduce classes of particular constraints including contrast, emerging pattern, minimality, and their composite constraints.
Now, we state our data mining problem considered in this paper. Suppose we fix a constraint $\mathbb C$.
\subsubsection{Problem:}
The constrained pattern mining problem w.r.t. constraint $\mathbb C$
\begin{itemize}
\item \textbf{Inputs:} A universe $I$ of items and a labeled database $D = (D_+, D_-)$ over $I$.
\item \textbf{Task:} Find all $\mathbb C$-interesting generalized patterns $X \in \mathbb P$ such that $X \in \mathbb C(D \mid \mathbb P)$ on the labeled database $D$.
\end{itemize}
We remark that the above formulation includes many of previous itemset mining problems by changing the constraint $\mathbb C$. In the remainder of this paper, we consider the pattern mining problem under the constraint of the form $\mathbb{MIN}(\mathbb{CP}[\sigma_+,\sigma_-]\cap \mathbb{GR}[\theta]\cap \mathbb{C}[f, \eta])$, where $f$ is any convex function such as the chi-square constraint $\mathbb{CHI}[\eta]$.
\subsection{Constraints and scores of pattern}
Let $D = (D_+, D_-)$ be a labeled database. A pattern constraint (or constraint) is a subset $\mathbb C\subseteq \mathbb P$ of patterns. In this paper, we consider the following classes of constraints:
\subsubsection{Constraint 1. Contrast constraint}
For any non-negative integers $\sigma_+ \in [0..|D_+|]$ and $\sigma_- \in [0..|D_-|]$, the constraint $\mathbb{CP}[\sigma_+, \sigma_-]$ is defined as follows: any pattern $X$ belongs to $\mathbb{CP}[\sigma_+, \sigma_-]$ if and only if $Sup_+(X) \geq \sigma_+$ and $Sup_-(X) \leq \sigma_-$.
Members of $\mathbb{CP}[\sigma_+, \sigma_-]$ are called \textbf{contrast patterns}. Members of $\mathbb{CP}[1, 0]$ are called \textbf{jumping emerging patterns}.
\subsubsection{Constraint 2. Growth rate constraint}
For any non-negative real number $\theta \in [0,\infty]$, the constraint $\mathbb{GR}[\theta]$ is defined as follows: a pattern $X$ belongs to $\mathbb{GR}[\theta]$ if and only if $GR(X\mid D_+, D_-) := Sup_+(X) / Sup_-(X) \geq \theta$.
Members of $\mathbb{GR}[\theta]$ are called \textbf{emerging patterns}.
\subsubsection{Constraint 3. Chi-square constraint}
For any non-negative real number $\gamma \in [0,\infty]$, a pattern $X$ belongs to $\mathbb{CHI}[\gamma]$ if and only if $\chi^2(X \mid D_+, D_-) \ge \gamma$, where $\chi^2(X)$ is chi-squared value.
\subsubsection{Constraint 4. Composition of constraint}
The constraint consisting of all patterns satisfying two constraints $\mathbb C_1$ and $\mathbb C_2$ is represented by their intersection $\mathbb C := \mathbb C_1\cap\mathbb C_2$.
\subsubsection{Constraint 5. Minimality constraint}
Let $\mathbb C\subseteq \mathbb P$ be any constraint. The minimal $\mathbb C$-constraint, denoted by $\mathbb{MIN\:C}$, is the set of all minimal members of $\mathbb C$ w.r.t.~set inclusion, that is, any pattern $X$ belongs to $\mathbb{MIN\:C}$ if and only if (i) $X$ belongs to $\mathbb C$, and (ii) no proper subset $Y \subset X$ belongs to $\mathbb C$.
\begin{figure}[t]
\centering
\includegraphics[width=0.55\linewidth]{fig_domain.pdf}
\caption{The sub-regions for $\mathbb{CP}[\sigma_+,\sigma_-]$, $\mathbb{GR}[\theta]$, and their composite constraint $\mathbb R :=\mathbb{CP}[\sigma_+,\sigma_-]\cap \mathbb{GR}[\theta]$, shown as a blue rectangle, a red triangle, and a purple pentagon, respectively.}
\label{fig:domain}
\end{figure}
\subsubsection{Example of a composite constraint}
For instance, $\mathbb{MIN}(\mathbb{CP}[\sigma_+,\sigma_-]\cap \mathbb{GR}[\theta])$ stands for the class of all minimal patterns that satisfy the contrast constraint w.r.t.~$(\sigma_+,\sigma_-)$ and the growth rate constraint w.r.t.~$\theta$.
In Fig.~\ref{fig:domain}, we show the sub-regions for $\mathbb{CP}[\sigma_+,\sigma_-]$, $\mathbb{GR}[\theta]$, and their composite constraint $\mathbb R :=\mathbb{CP}[\sigma_+,\sigma_-]\cap \mathbb{GR}[\theta]$ as a blue rectangle, a red triangle, and their intersection as a purple pentagon.
A minimal pattern is a point within the pentagon $\mathbb R$ that is minimal w.r.t.~set inclusion $\subseteq$.
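To make the composite constraint concrete, the sketch below checks membership in $\mathbb{CP}[\sigma_+,\sigma_-]\cap \mathbb{GR}[\theta]$ from the two support counts and tests minimality by brute force over proper subsets; it illustrates the definitions only, not the mining algorithm, and the toy data and thresholds are placeholders.
\begin{verbatim}
# Checking the composite constraint and minimality by brute force
# (illustration of the definitions only).
from itertools import combinations

def satisfies(sup_pos, sup_neg, sigma_pos, sigma_neg, theta):
    gr = float("inf") if sup_neg == 0 else sup_pos / sup_neg
    return sup_pos >= sigma_pos and sup_neg <= sigma_neg and gr >= theta

def is_minimal(X, in_C):
    return in_C(X) and not any(in_C(frozenset(S))
                               for k in range(len(X))
                               for S in combinations(X, k))

D_pos = [frozenset({"a", "b"}), frozenset({"a", "c"})]
D_neg = [frozenset({"b", "c"})]
in_C = lambda X: satisfies(sum(X <= t for t in D_pos),
                           sum(X <= t for t in D_neg), 1, 0, 2.0)
print(is_minimal(frozenset({"a"}), in_C))   # True: a minimal jumping EP here
\end{verbatim}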
|
1,314,259,995,574 | arxiv | \section{\label{sec:level1}Introduction}
Bright flaring from accreting black holes is seen at all wavelengths, but the mechanism powering high-energy flares is still a topic of major debate. Rapid $\gamma$-ray flares have been observed from active galactic nuclei, in the form of very high-energy ($> 100$ GeV) emission (\citealt{Albert2007,Aharonian2007,Aharonian2009,Aleksic2014S}). The variability timescale of the flares can be shorter than the light-crossing time of the event horizon, constraining the emitting region to be of the order of a Schwarzschild radius. Bright TeV flares are also periodically observed from the supermassive black hole M87$^{*}$, in the center of the Messier 87 galaxy (\citealt{Hess2006,Veritas2010,veritas2012,Magic2021}). The flares show a flux rise and decay timescale of 1-3 days, emitting $\gtrsim 10^{41}$ erg/s (\citealt{Hess2012}), which is non-negligible compared to the total jet power of $10^{42}-10^{44}$ erg/s (e.g., \citealt{Prieto2016}). High-energy $\gamma$-rays originating nearby the horizon can be absorbed by background photons to create electron-positron pairs, preventing their escape. Therefore, it is unclear if there is a mechanism that can produce such flares near the horizon and under which conditions the {radiation} can freely escape.
Furthermore, the black hole in the Galactic Center, Sgr A$^{*}$, shows intriguing infrared and X-ray flares on similarly short dynamical timescales (\citealt{baganoff2001,Eckart2004,Neilsen2015}) originating from near the horizon (\citealt{Gravity2018,Gravity2021}).
Magnetically arrested disk (MAD, \citealt{1974Ap&SS..28...45B,1976Ap&SS..42..401B,narayan2003}) accretion is the most plausible scenario for the accretion flow onto active galactic nuclei showing strong jets (see, e.g., \citealt{EHTVII2021} for M87$^{*}$). Sources fed by stellar winds, like Sgr A$^{*}$, are also capable of producing MADs (\citealt{Ressler_2020}). {General-relativistic magnetohydrodynamics (GRMHD) simulations show that a large amount of poloidal (pointing in the $R$- and $z$-directions) magnetic flux (proportional to the square root of the mass accretion rate) is forced into the black hole by the accreting gas, until the flux becomes dynamically important and strong enough to push the accreting gas away (\citealt{2003ApJ...592.1042I,2008ApJ...677..317I,Tchekhovskoy2011}).} The MAD state is accompanied by large-amplitude fluctuations, caused by quasi-periodic accumulation and escape of the magnetic flux bundles in the vicinity of the black hole (\citealt{2008ApJ...677..317I,Tchekhovskoy2011,dexter2020sgr,Porth2020flares}).
{Recently, extreme resolution two-dimensional (2D) GRMHD simulations showed that escape of magnetic flux bundles from the black hole, resulting in the decay of magnetic flux on the horizon, occurs through {plasmoid-mediated} reconnection (\citealt{Ripperda2020}). The magnetic flux decay is accompanied by the ejection of the accretion disk (\citealt{Proga2003}). This ejection results in the formation of a magnetosphere, consisting of an equatorial plasmoid-unstable} current sheet of oppositely directed magnetic field that separates two highly magnetized jet regions. {Reconnection in the current sheet releases energy that can power a flare and the tension of the reconnected flux can push the gas away and suppress the mass accretion rate.} The jets, which supply the matter in the current sheet, are highly magnetized because their large-scale magnetic field serves as a barrier to ions within the accretion disk. Pair discharges can generate ample electron-positron plasma to fill the magnetospheric region \citep{parfrey2019,Crinquand2020}. The collisional mean free path of particles is much larger than the characteristic length scale of the system. As a result, the magnetospheric electron-positron plasma is collisionless, and can be accelerated in a reconnecting current sheet into a power-law distribution, and subsequently power high-energy flares. In magnetized and collisionless plasma conditions, reconnection occurs in the {plasmoid-mediated} regime {at a universal reconnection rate of $v_{\rm rec} / v_{\rm A} \sim 0.1$, where $v_{\rm rec}$ is the inflow velocity into a current sheet, and $v_{\rm A} \sim c$ is the Alfv\'{e}n speed (\citealt{sironi2014,Guo_2014,Werner_2015}).}
{In {collisional systems as described by GRMHD, the reconnection rate in the plasmoid-mediated regime at high Lundquist numbers (and at sufficiently high resolution to resolve the spatial scales associated to that Lundquist number) converges to a universal value of $v_{\rm rec} / v_{\rm A} \sim 0.01$, becoming independent of the resistivity} (\citealt{bhattacharjee2009,uzdensky2010,ripperda2019,Ripperda2020}) \footnote{{Note that the reconnection rate in the plasmoid-mediated regime in collisionless systems is approximately ten times faster than in collisional systems described by GRMHD (\citealt{sironi2014,Guo_2014,Werner_2015,Bransgrove2021}). At low resolutions, GRMHD simulations show higher reconnection rates, which are however a result of large numerical diffusion instead of plasmoid-mediated reconnection.}}. Resolving plasmoid-mediated reconnection, and hence a converged universal reconnection rate, in global black hole simulations requires resolutions higher than $\sim 2000$ cells in the {$\theta$}-direction to capture thin current sheets liable to the plasmoid instability (\citealt{Ripperda2020,Bransgrove2021}).} The flare time-scale is governed by the flux decay which is directly set by the reconnection rate (\citealt{Bransgrove2021}){, which makes it particularly important to resolve the plasmoid instability in thin current sheets}.
{Our goal here is to understand if a macroscopic reconnecting current sheet can form and power a flare in converged 3D GRMHD simulations, despite the excitation of non-axisymmetric effects like a Rayleigh–Taylor-type instability (RTI) preventing the complete arrest of accretion (\citealt{Tchekhovskoy2011,Papadopoulos2019}).}
{In this Letter we conduct the highest-resolution global 3D GRMHD simulations to-date to show that plasmoid-mediated magnetic reconnection in transient, non-axisymmetric current sheets can power flares from accreting black holes and that the magnetic flux decay on the black hole event horizon is governed by the universal reconnection rate.}
{Throughout the manuscript we use geometrized units with gravitational constant, black-hole mass, and speed of light $G = M = c = 1$; such that length scales are normalized to the gravitational radius $r_{\rm g} = GM/c^2$ and times are given in units of $r_{\rm g}/c$.
We employ spherical Kerr-Schild coordinates, where $r$ is the radial coordinate, $\theta$ and $\phi$ are the poloidal and toroidal angular coordinates, respectively, and $t$ is the temporal coordinate.}
\section{Numerical setup}
{Reconnecting current sheets are plasmoid-unstable for Lundquist numbers (\citealt{bhattacharjee2009})}
\begin{equation}
S=v_{\rm A} w / \eta_{\rm num} \geq S_{\rm crit} = 10^4,
\label{eq:lundquist}
\end{equation}
{assuming the Alfv\'{e}n speed $v_{\rm A} \sim c$, and the length of a current sheet $w \sim r_{\rm g}$. Here, we assume that the numerical resistivity is proportional to the cell size, $\eta_{\rm num} \propto \Delta x^p$, where $p \approx 2$ depends on the details of the second-order-accurate algorithm. Thus, the constraint on $S$ (Eq. \ref{eq:lundquist}) directly determines the required resolution.} In the plasmoid-mediated regime, the reconnection rate converges to the asymptotic $v_{\rm rec} \sim 0.01c$ in GRMHD (\citealt{Ripperda2020,Bransgrove2021}), directly determining the (converged) rate of magnetic flux decay on the horizon. To achieve the resolution required to capture the plasmoid-mediated reconnection {and, hence, achieve long-sought convergence in the reconnection rate,} we employ our GPU-accelerated GRMHD code H-AMR (\citealt{liska2019}). We set the effective numerical resolution to $N_r \times N_\theta \times N_\phi = 5376\times2304\times2304$ (dubbed ``extreme resolution'' from here onward) to {ensure} that we capture thin plasmoid-unstable current sheets (\citealt{Ripperda2020}). To study convergence of the reconnection rate and the rate at which magnetic flux can escape {from the black hole}, we also conduct three {lower resolution} runs at $N_r \times N_\theta \times N_\phi = 2240\times1056\times1024$ (``high resolution''); $580\times288\times256$ (``standard resolution''); and $288\times128\times128$ (``low resolution'').
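To make the implied resolution requirement explicit: writing $\eta_{\rm num} = C\,\Delta x^{\,p}$ with an algorithm-dependent constant $C$ (assumed here to be of order unity in code units), Eq. \ref{eq:lundquist} with $v_{\rm A} \sim c$ and $w \sim r_{\rm g}$ gives
\[
\Delta x \lesssim \left(\frac{c\, r_{\rm g}}{C\, S_{\rm crit}}\right)^{1/p},
\]
so that, for $p \approx 2$, halving the cell size raises the effective Lundquist number by a factor of $\approx 4$. This is only a rough scaling estimate; the cell counts quoted below follow from the convergence studies cited above rather than from this formula.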
{We initialize our simulation to obtain a prograde MAD around a Kerr black hole with dimensionless spin $a=0.9375$, starting from a torus threaded by a single weak poloidal magnetic field loop, defined by the vector potential $A_{\phi} \propto \max\left[{\rho}/{\rho_{\rm max}}\left({r}/{r_{\rm in}}\right)^3\sin^3\theta\exp\left(-{r}/{400}\right)-0.2, 0\right]$, normalized to the gas-to-magnetic-pressure ratio $\beta = 2p/b^2 = 100$.} We replenish gas density $\rho$ in low-density regions to maintain $\sigma_{\rm max}=25$ where the magnetization $\sigma=b^2/(4 \pi \rho c^2)$ is defined using the magnetic field strength $b$ co-moving with the fluid, and fluid-frame rest-mass density $\rho$. We adopt an equation of state for a relativistic ideal gas with an adiabatic index of $\hat{\gamma} = 13/9$, in between a fully relativistic gas $\hat{\gamma}=4/3$ and a fully non-relativistic gas $\hat{\gamma}=5/3$. We employ dimensionless temperature units $T=p/\rho$ with {thermal gas} pressure $p$, where $T=1$ corresponds to $k_{\rm B} T=m_{\rm i} c^2$ with ion mass $m_{\rm i}$ and Boltzmann's constant $k_{\rm B}$ such that $T>1$ indicates relativistic {ion} temperatures.
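For reference, the seed vector potential above can be evaluated directly; the sketch below does so up to the overall normalization that sets $\beta=100$. The inner torus radius $r_{\rm in}$ and the density ratio are placeholder inputs here (the torus solution itself is not reproduced), so only the functional form is meaningful.
\begin{verbatim}
# Sketch: seed vector potential of the initial poloidal field loop
# (functional form only; r_in and rho/rho_max are placeholder inputs).
import numpy as np

def A_phi(r, theta, rho_over_rhomax, r_in=20.0):
    return np.maximum(rho_over_rhomax * (r / r_in)**3
                      * np.sin(theta)**3 * np.exp(-r / 400.0) - 0.2, 0.0)

print(A_phi(40.0, np.pi / 2, 1.0))   # sample evaluation in the equatorial plane
\end{verbatim}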
\section{Reconnection-powered flares}
\begin{figure*}
\centering
\includegraphics[width=0.353\textwidth,trim= 0.85cm 2.3cm 13.4cm 1.3cm, clip=true]{T_2048_xz_40rg_462.pdf}
\includegraphics[width=0.318\textwidth,trim= 2.8cm 2.3cm 13.4cm 1.3cm, clip=true]{beta_invertedmagma9122.pdf}
\includegraphics[width=0.318\textwidth,trim= 2.8cm 2.3cm 13.4cm 1.3cm, clip=true]{rho_29122.pdf}
\includegraphics[width=0.353\textwidth,trim= 0.85cm 0.785cm 13.4cm 1.95cm, clip=true]{T_2048_xz_40rg_478_inset.pdf}
\includegraphics[width=0.318\textwidth,trim= 2.8cm 0.785cm 13.4cm 1.95cm, clip=true]{beta9422_invertedmagma9422_inset.pdf}
\includegraphics[width=0.318\textwidth,trim= 2.8cm 0.785cm 13.4cm 1.95cm, clip=true]{rho_29422.pdf}
\includegraphics[width=0.353\textwidth,trim= 0.85cm 0.785cm 13.4cm 2.15cm, clip=true]{T_2048_xz_40rg_496.pdf}
\includegraphics[width=0.318\textwidth,trim= 2.8cm 0.785cm 13.4cm 2.15cm, clip=true]{beta__2048_xz_40rg_9782.pdf}
\includegraphics[width=0.318\textwidth,trim= 2.8cm 0.785cm 13.4cm 2.15cm, clip=true]{rho_2048_xz_40rg_9782.pdf}
\caption{{Plasmoid-mediated reconnection, which takes place at sufficiently high resolutions in MHD, is seen in a 3D GRMHD simulation for the first time. Resolving the dynamics of X-points and plasmoids in the current sheet can be the key to understanding the source of black hole non-thermal emission, e.g., high-energy flares.}
Dimensionless temperature $T=p/\rho$, plasma-$\beta$, and density $\rho$ (from left to right) in the {meridional plane before (top row), during (middle row) in the inner $10 r_{\rm g}$ and after (bottom row) the large magnetic flux eruption} in the inner $40 r_{\rm g}$. During the {magnetic flux eruption}, the accretion disk is ejected and the broad accretion inflow is reduced to a thin {plasmoid-unstable} current sheet, indicated by {X-points and} magnetic nulls shown by the antiparallel in-plane field lines (in green{, see inset in panel D) and the high $\beta$ (inset panel E)}. The hot ($T \sim \sigma_{\rm max}$) exhaust of the reconnection layer heats the jet sheath. Reconnection transforms the horizontal field in the current sheet to vertical field that is ejected in the form of hot coherent flux tubes (panel G) at low $\beta$ and density (panels H,I).}
\label{fig:panelXZ}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.353\textwidth,trim= 0.85cm 2.3cm 13.4cm 1.3cm, clip=true]{T_2048_xy_40rg_462.pdf}
\includegraphics[width=0.318\textwidth,trim= 2.8cm 2.3cm 13.4cm 1.3cm, clip=true]{beta_2048_xy_40rg_462.pdf}
\includegraphics[width=0.318\textwidth,trim= 2.8cm 2.3cm 13.4cm 1.3cm, clip=true]{Rho_2048_xy_40rg_462.pdf}
\includegraphics[width=0.353\textwidth,trim= 0.85cm 0.785cm 13.4cm 1.95cm, clip=true]{T_2048_xy_40rg_478.pdf}
\includegraphics[width=0.318\textwidth,trim= 2.8cm 0.785cm 13.4cm 1.95cm, clip=true]{beta_2048_xy_40rg_478.pdf}
\includegraphics[width=0.318\textwidth,trim= 2.8cm 0.785cm 13.4cm 1.95cm, clip=true]{Rho_2048_xy_40rg_478.pdf}
\includegraphics[width=0.353\textwidth,trim= 0.85cm 0.785cm 13.4cm 2.15cm, clip=true]{T_2048_xy_40rg_496.pdf}
\includegraphics[width=0.318\textwidth,trim= 2.8cm 0.785cm 13.4cm 2.15cm, clip=true]{beta_2048_xy_40rg_496.pdf}
\includegraphics[width=0.318\textwidth,trim= 2.8cm 0.785cm 13.4cm 2.15cm, clip=true]{rho_2048_xy_40rg_496.pdf}
\caption{{Our extreme resolution simulation reveals small-scale structure and interface instabilities of magnetic flux bundles escaping from the black hole, in an equatorial slice through the system.} Dimensionless temperature $T=p/\rho$, plasma-$\beta$, and density $\rho$ (from left to right) in the equatorial plane before a large {magnetic flux eruption} (top row), during the {magnetic flux eruption} (middle row) in the inner $10 r_{\rm g}$ and after the {magnetic flux eruption} (bottom row) in the inner $40 r_{\rm g}$. Gaps of low $\beta$ and density form during the {pre-eruption} quiescence while many azimuthal RTI modes accrete.
During the {magnetic flux eruption} a single large $T>1$ spiral forms with a gap where the sheet moved out of the equatorial plane. Magnetic flux escapes through the spiral current sheet, while accretion continues over a small angle $\phi<2\pi$ at $x \approx 2 r_{\rm g}$ and $y \approx -1$ to $y \approx -2$. In the bottom row the inner $10 r_{\rm g}$ is in quiescent accretion state, and a hot flux tube that is ejected from the reconnection layer is in orbit at $x \approx 10 r_{\rm g}$ to $x \approx 30 r_{\rm g}$ and $y \approx -10 r_{\rm g}$ to $y \approx 20 r_{\rm g}$. The low $\beta$ flux tube shows clear signatures of instabilities at its boundaries mixing low density plasma into the disk.}
\label{fig:panelXY}
\end{figure*}
{We analyze the flaring mechanism and its properties in the {MAD} after $t \approx 5000 r_{\rm g}/c$ {when} the accretion flow has settled into a quasi-steady state of a constant mass accretion rate and magnetic flux {on the black hole event horizon} (see Figure \ref{fig:mdot} in the Supplemental Material)}.
The accumulation of magnetic flux on the horizon cannot continue beyond the limit in which {the outward magnetic force balances the inward gravitational force}. When the magnetic flux reaches this limit in axisymmetry (2D), accretion is halted completely and a low density magnetosphere with an equatorial current sheet can form transiently (\citealt{Ripperda2020}). In 3D, a large spectrum of RTI modes develops in the turbulent inner edge of the disk, steadily driving accretion.
{The magnetic flux periodically erupts from the black hole into the disk. These eruptions are made possible by near-event-horizon reconnection, which converts the magnetic energy into the energy of emitting particles and can naturally power a flare.}
Figures \ref{fig:panelXZ} (at $\phi=0$, i.e., the meridional plane) and \ref{fig:panelXY} (at $\theta=\pi/2$, i.e., the equatorial plane) show the gas temperature $T=p/\rho$ with magnetic field lines plotted as green lines, the gas-to-magnetic-pressure ratio $\beta=8 \pi p/B^2$, and rest-mass density $\rho$ {around the time of one such flares at $t\sim 9500 r_{\rm g}/c$. Namely, we show the quantities in the quiescent period (i.e., a period of quasi-constant magnetic flux at the horizon) before, during, and after the large magnetic flux eruption, respectively, at $t=9122 r_{\rm g}/c$, $t=9422 r_{\rm g}/c$, and $t=9782 r_{\rm g}/c$ (where we zoom out to show large-scale effects).}
Shortly before and during a flare, accretion only occurs through large{-scale} (i.e., low azimuthal mode-number) spiral RTI modes (see also \citealt{Takasao2019} for a very similar scenario explaining protostellar flares) creating a transient, non-axisymmetric (i.e., over an angle $\phi<2\pi$), magnetized (i.e., low plasma-$\beta$), low-density magnetosphere (top and middle rows in Figures \ref{fig:panelXZ} and \ref{fig:panelXY}) pushing the accretion disk outward and resulting in a drop in mass accretion rate. A macroscopic equatorial current sheet forms in the magnetosphere, extending from the horizon to the disk at $x=r\sin\theta\cos\phi\approx-5 r_{\rm g}$ at $z=r\cos\theta\approx 0$ shown by the antiparallel magnetic field lines ({inset in panel D}, green lines).
Reconnection pinches off the horizontal magnetic field in the sheet, transforming it into vertical ($z$) magnetic field, reminiscent of the 2D results of \cite{Ripperda2020}.
The {flux eruption} originates from the inner magnetosphere where the highly magnetized plasma in the jet directly feeds the current sheet.
The plasma density in the jet is determined by the density floor at $\sigma_{\rm max}=25$ in our simulations, whereas in reality it is much more strongly magnetized ($\sigma \gg \sigma_{\rm max}$) pair plasma. {Reconnection occurs locally in X-points where a field line breaks and reconnects to another field line (see insets in Figures~\ref{fig:panelXZ}D and \ref{fig:panelXZ}E). In these X-points, reconnection heats the plasma up to $T \sim \sigma_{\rm max} = 25$ (left panels) after which it is expelled from the layer at Lorentz factors up to $\gamma \sim \sqrt{\sigma_{\rm max}} = 5$ (\citealt{lyubarsky2005}, see also Supplemental Material for an exploration of different $\sigma_{\rm max}$ in 2D).} The flux is expelled through reconnection into the low-density region in between the large, low-mode-number accreting RTI spirals. {Electrons and positrons accelerated to non-thermal energies through reconnection at the X-points in the macroscopic equatorial current sheet can power high-energy flares that may reach a distant observer during the drop in the mass accretion rate.}
Small plasmoids are visible close to the horizon and a larger hot plasmoid is detected at $x=-3 r_{\rm g}$ (middle row in Figure \ref{fig:panelXZ}) as a result of the merger of smaller escaping plasmoids. The plasmoids that escape the gravitational pull of the black hole interact with the disk and jet sheath resulting in significant heating up to at least $z \gtrsim \pm 40 r_{\rm g}$. The bottom row of Figure \ref{fig:panelXZ} shows a large magnetic flux tube at $x \approx 20-30 r_{\rm g}$: a low density region of strong vertical field (low plasma-$\beta$) heated to medium temperature $T \sim 0.1-1$. {The flux tube forms as a result of the reconnection that converts horizontal magnetic field into vertical field that is ejected from the reconnection layer. Filled with heated plasma, the flux tube can appear as a hot spot. The accumulated vertical magnetic flux in this hot spot} can remain coherent for approximately one orbital time scale {between $10$ and $30 r_{\rm g}$} (bottom row in Figure \ref{fig:panelXY} between $y \approx -20 r_{\rm g}$ and $y\approx 20 r_{\rm g}$), while the inner $10 r_{\rm g}$ is already in the quiescent accretion state at $t=9782 r_{\rm g}/c$. RTIs develop at the boundary of the hot spot, which mix the hot low density plasma into the surrounding accreting gas. The hot spots are expected to be filled with positrons and electrons energized by the reconnection, which in this way can end up in the accretion disk.
After the flaring episode, magnetic flux builds up on the horizon and the quasi-steady-state accretion cycle develops again. Smaller and less hot current sheets where $B^{\phi}$ changes sign also exist in the inner $\sim 20 r_{\rm g}$ of the turbulent accretion disk during the quiescent period, indicated by thin high-$\beta$ layers of anti-parallel field lines (top and bottom rows in Figures \ref{fig:panelXZ} and \ref{fig:panelXY}).
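The reconnection rate quoted below is measured from the $\mathbf{E}\times\mathbf{B}$ inflow speed into the current sheet, normalized by $v_{\rm A}\sim c$ and corrected for the bulk motion of the sheet. The sketch below is a flat-space, ideal-MHD version of such a diagnostic (with $\mathbf{E}=-\mathbf{v}\times\mathbf{B}$ and $c=1$); the general-relativistic corrections applied in the actual analysis are omitted.
\begin{verbatim}
# Sketch of an E x B inflow-speed diagnostic (flat space, ideal MHD, c = 1).
import numpy as np

def exb_drift(v, B):
    E = -np.cross(v, B)               # ideal-MHD electric field
    return np.cross(E, B) / np.dot(B, B)

v = np.array([0.0, 0.0, -0.01])       # plasma drifting into the sheet at 0.01 c
B = np.array([1.0, 0.0, 0.0])         # reconnecting field along x
print(exb_drift(v, B))                # -> [0, 0, -0.01]; v_rec = |v_in| / v_A
\end{verbatim}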
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth, clip=true]{Presentation2.pdf}
\caption{
Volume rendering of the temperature $T=p/\rho$ shows plasmoids and hot current sheets. Extreme resolution allows the current sheets to become thinner and hotter than typically seen in GRMHD simulations.
(Panel A:) During a large flare a relativistically hot $T>1$ spiral current sheet forms. Accretion occurs over a small azimuthal angle $\phi < 2\pi$ in the $T<1$ (white) regions. The green field lines, seeded in the current sheet ($T>1$), remain in the current sheet and are mostly attached to the black hole. Blue field lines are seeded in the disk, where some disk field lines are accreting onto the black hole in the $T<1$ region. (Panel B:) In the quiescent state $T\leq 1$ everywhere, and both green and blue field lines (with the same seeds as in panel A) are in the disk, accreting onto the black hole. The inset (C) shows a zoom into {the inner $r_{\rm g}$ in} the flare state with multiple escaping flux loops (green field lines). In the small black box we highlight an escaping flux tube with vertical field as the result of reconnection (green) and an infalling flux tube (purple). We also show a plasmoid, indicated by the helical field line (green) in the second small black box.}
\label{fig:3D}
\end{figure*}
Figure \ref{fig:3D}{A} visualizes the 3D nature of the hot current sheet by showing the temperature and magnetic field line structure in the inner $10 r_{\rm g}$ during the flare at $t=9422 r_{\rm g}/c$. The current sheet has a relativistic temperature $T>1$, whereas shortly before the flare at $t=9122 r_{\rm g}/c$ ({\ref{fig:3D}B}) there are no structures at $T>1$. During the flare, the (green) field lines in the current sheet {(i.e., seeded in the $T>1$ region in \ref{fig:3D}{A})} have a clear spiral structure and are separated from the more vertical field lines in the disk (blue). During the quiescence before the flare {(Figure \ref{fig:3D}B)} no such distinction is visible and all field lines (green and blue{, which are seeded at the same points as in panel \ref{fig:3D}A}) are part of the disk. {The extreme resolution allows us to capture multiple plasmoids, identified as 3D helical field line structures in the sheet (Figure~\ref{fig:3D}C), during the magnetic flux eruption}. We highlight a typical X-point as the manifestation of reconnection, separating an infalling (purple field line) and an escaping flux tube (green field line) in the hot current sheet. Similar X-points can be detected in {e.g., the inset in Figure \ref{fig:panelXZ}D}.
\begin{figure*}
\centering
\includegraphics[width=0.476\textwidth,trim= 2.45cm 6.25cm 8.5cm 0cm, clip=true]{beta_128_fig4_xz_10rg_737.pdf}
\includegraphics[width=0.51\textwidth,trim= 4.2cm 6.25cm 5.1cm 0cm, clip=true]{beta_256_fig4_xz_10rg_870.pdf}
\includegraphics[width=0.476\textwidth,trim= 2.45cm 4.7cm 8.5cm 1.5cm, clip=true]{beta_1024_fig4_xz_10rg_474.pdf}
\includegraphics[width=0.51\textwidth,trim= 4.2cm 4.7cm 5.1cm 1.5cm, clip=true]{beta_2048_fig4_xz_10rg_370.pdf}
\includegraphics[width=0.498\textwidth, trim= 0.3cm 10.5cm 14.3cm 0.4cm, clip=true]{mdot_phidot_paper.pdf}
\includegraphics[width=0.496\textwidth, trim= 0.3cm 10.5cm 14.5cm 0.4cm, clip=true]{recrate.pdf}
\includegraphics[width=0.498\textwidth, trim= 0.3cm 0.3cm 14.3cm 9.9cm, clip=true]{mdot_phidot_paper.pdf}
\includegraphics[width=0.496\textwidth, trim= 0.3cm 0.3cm 14.5cm 9.9cm, clip=true]{recrate.pdf}
\caption{{The equatorial current sheet that forms during the magnetic flux eruption is unresolved at low and standard resolutions (panels A,B) such that magnetic field lines (green lines) diffuse through the current sheet and do not reconnect, due to the high numerical resistivity. At high and extreme resolutions (C,D), the field lines are antiparallel in the current sheet, and they reconnect in well-defined X-points. Smaller current sheets are resolved in the accretion disk at high and extreme resolutions, potentially heating the plasma through reconnection.} {Panel E shows the} magnetic flux on the horizon for the four numerical resolutions. The extreme and high resolution runs show two and three large flare periods, respectively, indicated by flux decay at a rate $\propto e^{-t/500}$ governed by the reconnection rate (dashed black lines). A mini-flare is indicated by the small flux drop at $t\approx 6800 r_{\rm g}/c$ in the extreme resolution run. The standard and low resolution runs show a faster flux decay $\propto e^{-t/350}$ governed by the enhanced reconnection rate due to an increased numerical resistivity. Flares in the extreme resolution run are accompanied by clear drops in the mass accretion rate {(panel G)}, due to the expulsion of the disk over a large azimuthal angle. {Panel F shows a} {cut through the equatorial current sheet at $x\approx1.5 r_{\rm g}$ during the flare state (indicated by the red dashed line in panels A-D)}, displaying the three components of the magnetic field $B^i$ in minimum variance coordinates. Both the (nearly) radial field $B^L$ and the (nearly) toroidal field $B^M$ reconnect and go through zero. The guide field $B^N$ is (close to) zero. {Panel H shows the} $\mathbf{E} \times \mathbf{B}$ speed flowing into the current sheet. After correcting for the bulk velocity, the reconnection rate $v_{\rm rec}\approx0.01c$, which we confirmed at 10 radial cuts and during several flare periods.
}
\label{fig:recrate}
\end{figure*}
{Figure \ref{fig:recrate}A-D shows zooms into the current sheet during large magnetic flux eruptions for the four numerical resolutions employed. The drop in magnetic flux at low and standard resolutions (panels A,B) is not accompanied by a large drop in mass accretion rate (see panels E,G), due to the large numerical diffusion. The magnetic field diffuses through the thick current sheet and does not reconnect, due to the large numerical resistivity. This results in an artificially high reconnection rate and a large heated area (see Supplemental Material, Figure \ref{fig:panelXZlowres} for more properties of the large magnetic flux eruption at low resolution). In these cases, the current sheet is not plasmoid-unstable. The high resolution flux eruption (panel C) behaves similarly to the extreme resolution result (panel D) from Figure \ref{fig:panelXZ}, indicating that the plasmoid instability is resolved on the grid, and that the reconnection rate is converged to a universal value of $0.01c$ (panel H).}
In Figure \ref{fig:recrate} we {also} analyze the magnetic flux $\dot{\phi}_{\rm BH} := \frac{1}{2}\int_{0}^{2 \pi} \int_{0}^{\pi} |^{*} F^{rt}| \sqrt{-g} \, d\theta d\phi$ on the horizon {(\ref{fig:recrate}E)} and the mass accretion rate $\dot{m} := -\int_{0}^{2 \pi} \int_{0}^{\pi} \rho u^r \sqrt{-g} d\theta d\phi$ through the inner $5 r_{\rm g}$ {(\ref{fig:recrate}G)}, where $g$ is the metric determinant, $u^\mu$ is the fluid 4-velocity, $^* F^{\mu\nu}$ is the dual of the Faraday tensor, and $\rho$ is the fluid-frame rest-mass density. After $\sim 5000 r_{\rm g}/c$ the flow settles into a quasi-steady state that is globally converged for all resolutions. For the extreme resolution run (magenta line in Figure \ref{fig:recrate}E) we observe two major flux decays, which we associate with large flares, at $t\approx 7300 r_{\rm g}/c$ and $t\approx 9300 r_{\rm g}/c$, both lasting for a few $\sim 100 r_{\rm g}/c$. We also observe a small flux decay at $t\approx 6800 r_{\rm g}/c$, associated with a smaller flare, or ``mini-flare''. For all flares, the magnetic flux on the event horizon decays quasi-exponentially with time with a characteristic timescale $\tau\approx 500$~$r_{\rm g}/c$ (indicated by the black dashed lines), implying that the decay is governed by reconnection at a universal rate of $0.01c$, consistent with the decay observed for a split monopole magnetic field on the event horizon (\citealt{Bransgrove2021}). All three events are accompanied by a large drop in mass accretion rate (Figure \ref{fig:recrate}G) that is related to the ejection of the accretion disk such that the accretion is funneled through a small azimuthal angle $\phi < 2\pi$ and nearly halts.
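As an illustration of these diagnostics, the following minimal Python sketch (with illustrative array names; it is not part of the H-AMR pipeline) evaluates the two integrals above on the discrete $(\theta,\phi)$ grid of a single snapshot:
\begin{verbatim}
# Minimal sketch of the horizon diagnostics defined above (illustrative names,
# not the H-AMR code). Arrays are assumed to be sampled on an (r, theta, phi)
# grid with uniform cell widths dth, dphi; gdet = sqrt(-g).
import numpy as np

def horizon_diagnostics(rho, ur, dualFrt, gdet, dth, dphi, ir_hor, ir_5rg):
    # phi_BH = 0.5 * sum |*F^{rt}| sqrt(-g) dtheta dphi at the horizon radius
    phi_bh = 0.5 * np.sum(np.abs(dualFrt[ir_hor]) * gdet[ir_hor]) * dth * dphi
    # mdot = - sum rho u^r sqrt(-g) dtheta dphi, here evaluated at r = 5 r_g
    mdot = -np.sum(rho[ir_5rg] * ur[ir_5rg] * gdet[ir_5rg]) * dth * dphi
    return phi_bh, mdot
\end{verbatim}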
For {the high resolution run (red line, Figure \ref{fig:recrate}E), }similar flare episodes can be observed at $t\approx 7500 r_{\rm g}/c$, $t\approx 8300 r_{\rm g}/c$, and $t\approx 9400 r_{\rm g}/c$, with flux decaying on the same timescale $\tau\approx 500$~$r_{\rm g}/c$. For lower resolutions (blue and green lines) there is a clearer distinction: large flares (e.g., at $t\approx 7300 r_{\rm g}/c$ for low resolution, and at $t\approx 8300 r_{\rm g}/c$ and $8600 r_{\rm g}/c$ for standard resolution) show a faster decay, with $\tau\approx 350$~$r_{\rm g}/c$, implying a faster reconnection rate $> 0.01c$. Mini-flares (e.g., at $t\approx 9300 r_{\rm g}/c$ for low resolution and $t\approx 7500 r_{\rm g}/c$ for high resolution) instead show a flux decay on a timescale of $\tau\approx 500$~$r_{\rm g}/c$, implying a reconnection rate of $\sim 0.01c$. At low and standard resolution{s}, these mini-flares are typically {\it not} accompanied by a clear drop in $\dot{m}_{5r_{\rm g}}$ {(Figure \ref{fig:recrate}G)}, while large flares show a clear drop in $\dot{m}_{5r_{\rm g}}$, implying a large ($\gtrsim 5 r_{\rm g}$) current sheet. {This can be explained by the large numerical diffusion of the thinning current sheet in both the $z$ and $y$ directions, resulting in an overly broad accretion funnel at low and standard resolution (Figure \ref{fig:recrate}A,B). Mini-flares are better captured at lower resolutions than large flares due to the shorter length of the current sheet and the higher effective resolution of the spherical grid at small radii (see Supplemental Material).}
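The decay timescales quoted here can be extracted with a simple least-squares fit of $\ln\dot{\phi}_{\rm BH}$ versus $t$ over a single eruption; a minimal sketch (assuming the time series has already been restricted to the decay interval) is:
\begin{verbatim}
# Fit the quasi-exponential flux decay phi_BH(t) ~ exp(-t/tau) over one eruption
# to extract tau (in r_g/c); t and phi_bh are assumed to be 1D arrays covering
# only the decay interval of a single flare.
import numpy as np

def decay_timescale(t, phi_bh):
    slope, _ = np.polyfit(t, np.log(phi_bh), 1)
    return -1.0 / slope  # ~500 r_g/c (converged) vs ~350 r_g/c (low resolution)
\end{verbatim}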
The reconnection rate can be determined directly by selecting a current sheet during a flare episode and measuring the inflow speed of the plasma into the reconnection layer.
To do so, we first transform the Eulerian electric and magnetic fields into a local inertial frame to apply standard reconnection analysis in flat spacetime (\citealt{Ripperda2020}).
The fields are expressed in minimum variance coordinates (\citealt{Howes_2016}), with $B^L$ projected in the flat frame along the poloidal direction parallel to the current sheet, $B^M$ along the toroidal direction and $B^N$ perpendicular to the current sheet, to determine the upstream geometry, showing a typical Harris-type sheet structure in {Figure \ref{fig:recrate}F}. Both the toroidal and poloidal components switch sign in the sheet, indicating that zero-guide-field reconnection occurs. The inflow speed is determined from the $\mathbf{E}\times\mathbf{B}$-velocity projected along the direction perpendicular to the reconnection layer. In Figure {\ref{fig:recrate}H} we measure the inflow speeds from left and right of the current sheet as $v_{\rm in, left}=(v_{\rm bulk} + v_{\rm rec}) / (1 + v_{\rm bulk} v_{\rm rec}/c^2)$ and $v_{\rm in, right}=(v_{\rm bulk} - v_{\rm rec}) / (1 - v_{\rm bulk} v_{\rm rec}/c^2)$ and solve for $v_{\rm rec}$, where we corrected for the relativistic speed of the bulk flow. We select 10 cuts of the current sheet at different radii and consistently find a reconnection rate of $\sim 0.01c$, indicating a Lundquist number of at least $S = v_{\rm rec}^{-2} = 10^4$. Reconnection thus occurs in the asymptotic {plasmoid-mediated} regime where $S=v_{\rm A} w / \eta_{\rm num} \geq S_{\rm{crit}} = 10^4$ (\citealt{bhattacharjee2009}) for our extreme resolution run, where the length of the sheet $w \gtrsim r_{\rm g}$, Alfv\'{e}n speed $v_{\rm A} \sim c$ and numerical resistivity $\eta_{\rm num}$. In the supplemental material we show the same analysis for the lower resolution simulations, concluding that the {extreme and} high resolution results are in the {plasmoid-mediated} regime, whereas the standard and low {resolution} runs show reconnection rates $>0.01c$, and do not display plasmoids. The enhanced reconnection rate due to larger numerical resistivity at lower resolutions manifests itself as an increased flux decay rate and hence directly affects the flare time scale.
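A minimal sketch of these two steps is given below, with illustrative inputs (magnetic field samples along a cut through the sheet, and the measured $\mathbf{E}\times\mathbf{B}$ inflow speeds on either side); it is not the analysis script used for Figure \ref{fig:recrate}, but implements the same minimum-variance decomposition and the same relativistic removal of the bulk motion:
\begin{verbatim}
# (1) Minimum-variance analysis of B across the sheet to obtain the (L, M, N)
#     basis, and (2) relativistic removal of the bulk motion from the measured
#     inflow speeds on either side of the layer to obtain v_rec.
import numpy as np

def minimum_variance_basis(B_samples):
    """B_samples: array of shape (n, 3) with the magnetic field (in a local
       flat frame) sampled along a cut through the current sheet. Returns unit
       vectors (L, M, N) sorted from maximum to minimum variance direction."""
    cov = np.cov(B_samples.T)               # 3x3 magnetic variance matrix
    w, v = np.linalg.eigh(cov)              # eigenvalues in ascending order
    N, Mdir, L = v[:, 0], v[:, 1], v[:, 2]  # min, intermediate, max variance
    return L, Mdir, N

def reconnection_rate(v_in_left, v_in_right, c=1.0):
    """Invert v_in,left = (v_bulk + v_rec)/(1 + v_bulk v_rec/c^2) and
       v_in,right = (v_bulk - v_rec)/(1 - v_bulk v_rec/c^2) via rapidities."""
    chi_l = np.arctanh(v_in_left / c)
    chi_r = np.arctanh(v_in_right / c)
    v_rec = c * np.tanh(0.5 * (chi_l - chi_r))
    v_bulk = c * np.tanh(0.5 * (chi_l + chi_r))
    return v_rec, v_bulk
\end{verbatim}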
\begin{figure*}
\centering
\includegraphics[width=0.3527\textwidth,trim= 0.85cm 2.3cm 13.4cm 1.3cm, clip=true]{T_2048_xz_40rg_345.pdf}
\includegraphics[width=0.3183\textwidth,trim= 2.8cm 2.3cm 13.4cm 1.3cm, clip=true]{beta_2048_invertedmagma6852.pdf}
\includegraphics[width=0.3183\textwidth,trim= 2.8cm 2.3cm 13.4cm 1.3cm, clip=true]{rho_2048_26852.pdf}
\includegraphics[width=0.354\textwidth,trim= 0.85cm 0.785cm 13.4cm 2.15cm, clip=true]{T_2048_xy_40rg_345.pdf}
\includegraphics[width=0.3177\textwidth,trim= 2.8cm 0.785cm 13.4cm 2.15cm, clip=true]{beta_2048_xy_40rg_345.pdf}
\includegraphics[width=0.3177\textwidth,trim= 2.8cm 0.785cm 13.4cm 2.15cm, clip=true]{Rho_2048_xy_40rg_345.pdf}
\caption{
{Smaller flux eruptions show shorter current sheets, potentially powering mini-flares that are not accompanied by a large-scale evacuation of the accretion disk.}
Meridional (top row) and equatorial (bottom row) cuts of temperature $T=p/\rho$ (left), plasma-$\beta$ (middle) and density $\rho$ (right) during the mini-flare at $t=6852 r_{\rm g}/c$. The magnetic flux is expelled through a smaller ($w \lesssim 3 r_{\rm g}$) current sheet, close to the horizon, in a short time $\ll 100 r_{\rm g}/c$. The accretion disk is not expelled over a large azimuthal angle, yet the flare is accompanied by a significant drop in mass accretion rate {(see Figure \ref{fig:recrate}G)} and clear gaps in the density (F). Multiple small current sheets are visible in the accretion disk at $x\geq 3 r_{\rm g}$ indicated by the high plasma-$\beta$ (B).}
\label{fig:panelminiflare}
\end{figure*}
In Figure \ref{fig:panelminiflare} we show temperature (left column), plasma-$\beta$ (middle column), and rest-mass density (right column) in both the {meridional} (top row) and {equatorial} (bottom row) plane for the mini-flare in the extreme resolution run at $t\approx 6800 r_{\rm g}/c$. In this case, the accretion disk is not ejected far beyond $5 r_{\rm g}$, but a spiral density gap still forms and causes the mass accretion rate to drop significantly. Reconnection occurs in a shorter, $\lesssim 5 r_{\rm g}$ plasmoid-unstable current sheet, very close to the horizon, and this is also the main area that is heated to relativistic temperatures $T>1$. These mini-flares could potentially result in weaker very high energy {flares} with shorter variability time scales (\citealt{Hess2012}).
\section{Radiative properties of the reconnection layer}
To probe the radiation emitted by accelerated particles in the reconnection layer a self-consistent radiative kinetic approach is necessary (\citealt{Hakobyan2019,Crinquand2020,Crinquand2020b}). Here, we use the well-constrained parameters for M87$^{*}$ and Sgr A$^{*}$ to estimate the expected emission properties due to reconnection occurring in the radiative regime.
\subsection{M87$^{*}$ flares powered by radiative reconnection}
In our simulations, the current sheet is fed by plasma in the jet at the floor density with a magnetization $\sigma_{\rm max}=25$. In reality, the reconnection powering the flare close to the event horizon is fed by collisionless pair plasma from the jet with a rate of $v_{\rm rec}/c = 0.1$ (\citealt{Bransgrove2021}) at magnetization $\sigma_{\rm up} = B^2_{\rm up} / (4\pi n m_{\rm e} c^2) = 2U_{\rm B}/ (n m_{\rm e} c^2)$, where $n$ is the number density of electrons with mass $m_{\rm e}$, $B_{\rm up}$ is the magnetic field strength upstream of the current sheet, and $U_{\rm B} = B^2_{\rm up}/8\pi$ is the magnetic energy density\footnote{\cite{Scepi2021} find a typical $\sigma_{\rm up}=100$ in the upstream, which is due to the floor $\sigma_{\rm max}=100$ that they set. However, for realistic funnel densities that are not limited by floors in GRMHD simulations, the magnetization parameter in the upstream, $\sigma_{\rm up}$, can be much higher.}. The plasma particles are impulsively accelerated by non-ideal electric fields at the X-points \citep{sironi2014}. When they encounter plasmoids, they experience strong synchrotron losses. To parametrize the effect of the radiation backreaction, we define the particle Lorentz factor $\gamma_{\rm rad}^{\rm sync}$ for which the radiation drag force is comparable to the force due to the accelerating electric field $E \sim B_{\rm up} v_{\rm rec} / c$ (\citealt{Uzdensky2011a}):
\begin{equation}
2\sigma_{\rm T} U_{\rm B} (\gamma^{\rm sync}_{\rm rad})^2 = v_{\rm rec} e B_{\rm up} / c,
\label{eq:1}
\end{equation}
where $\sigma_{\rm T} = (8\pi/3)r_{\rm e}^2$ is the Thomson cross-section, $r_{\rm e} = e^2 / (m_{\rm e} c^2)$ is the classical electron radius, and $e$ is the electron's charge. We then find $(\gamma_{\rm rad}^{\rm sync})^2 = 3 v_{\rm rec} B_{\rm cl} / (2 c B_{\rm up})$, where $B_{\rm cl} = m_{\rm e}^2 c^4/e^3 \simeq 6 \times 10^{15}$ G is the classical magnetic field. The global magnetic field strength at $5 r_{\rm g}$ is estimated to be $1-30$G (\citealt{EHTVII2021}), resulting in $5-150$G at the horizon, assuming a $1/r$ dependence (\citealt{Ripperda2020}). We can compare this to the magnetic field strength in the jet, feeding the current sheet close to the event horizon of M87$^{*}$ by equating the observed limits on the total jet power $L_{\rm jet}\sim 10^{42} - 10^{44}$ erg/s (\citealt{Prieto2016}) to the Blandford-Znajek jet power $L_{\rm BZ} = \kappa \Omega^2_{\rm BH} \dot{\phi}^2_{\rm BH}/(4\pi c)$, where $\kappa \approx 0.044$ for a parabolic field geometry, $\Omega_{\rm BH} = ac / 2r_{\rm H} \simeq c/2r_{\rm g}$ is the black hole's angular frequency, $M\approx 6\cdot 10^9 M_{\odot}$ for M87$^{*}$, and $r_{\rm H} = r_{\rm g}(1+\sqrt{1-a^2})$ is the horizon radius (\citealt{BZ1977,Tchekhovskoy2011}), resulting in a range $B_{\rm horizon} \sim 20 - 200$ G at the horizon. By normalizing to a fiducial $B_{\rm up} = 100$ G in this range, we then obtain
\begin{equation}
\gamma_{\rm rad}^{\rm sync} \approx 3 \cdot 10^6 \left(\frac{B_{\rm up}}{100 {\rm G}}\right)^{-1/2}
\label{eq:2}
\end{equation}
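As a rough numerical check of the two estimates above (in cgs units), the following sketch inverts the Blandford-Znajek power for the horizon flux and evaluates Eq.~\ref{eq:2}; the conversion between horizon flux and field strength, $\dot{\phi}_{\rm BH} \approx 2\pi r_{\rm H}^2 B_{\rm horizon}$, is an illustrative assumption:
\begin{verbatim}
# Rough numerical check (cgs) of the Blandford-Znajek field estimate and of
# Eq. (2). The relation Phi_BH ~ 2 pi r_H^2 B_horizon is an illustrative
# assumption used only to convert flux to field strength.
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33
e, m_e = 4.803e-10, 9.109e-28
M, a, kappa = 6e9 * Msun, 0.9375, 0.044

r_g = G * M / c**2
r_H = r_g * (1.0 + np.sqrt(1.0 - a**2))
Omega_BH = c / (2.0 * r_g)                   # approximation used in the text
for L_jet in (1e42, 1e44):                   # erg/s, observed jet-power range
    Phi_BH = np.sqrt(4.0 * np.pi * c * L_jet / (kappa * Omega_BH**2))
    B_hor = Phi_BH / (2.0 * np.pi * r_H**2)  # a few tens to a few hundred G
    print(f"L_jet = {L_jet:.0e} erg/s: B_horizon ~ {B_hor:.0f} G")

B_up, v_rec = 100.0, 0.1 * c                 # fiducial upstream field and rate
B_cl = m_e**2 * c**4 / e**3                  # ~6e15 G, classical field
gamma_sync = np.sqrt(1.5 * (v_rec / c) * B_cl / B_up)
print(f"gamma_rad_sync ~ {gamma_sync:.1e}")  # ~3e6, as quoted in Eq. (2)
\end{verbatim}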
The magnetization $\sigma_{\rm up}$ sets the available magnetic energy per particle and, if cooling were negligible, would determine the typical particle Lorentz factor, $\gamma\sim \sigma_{\rm up}$, for the acceleration at X-points (\citealt{sironi2014,Guo_2014,Werner_2015}). We can rewrite $\sigma_{\rm up} = \omega_{\rm B} / (2\Omega_{\rm BH} \lambda)$, where we plugged in the nominal electron gyrofrequency $\omega_{\rm B} = e B_{\rm up} / (m_{\rm e} c)$ and defined the plasma density with respect to the Goldreich-Julian density, $n = \lambda n_{\rm GJ} = \lambda \Omega_{\rm BH} B_{\rm up} / (2\pi c e)$, where $\lambda \lesssim 10^3$ is the multiplicity of the pair cascade in the charge-starved gap in the funnel region (\citealt{chen2019,Crinquand2020}), or of collisions of photons from the disk, if that process is more efficient (\citealt{Moscibrodzka2011}). The typical ratio between the electron gyrofrequency and the angular frequency of M87$^{*}$ is $\omega_{\rm B} / \Omega_{\rm BH} \sim 10^{14} ({M}/{6 \cdot 10^9 M_{\odot}}) ({B_{\rm up}}/{100 {\rm G}})$, such that $\sigma_{\rm up} \sim 10^{14} ({M}/{6 \cdot 10^9 M_{\odot}}) ({B_{\rm up}}/{100 {\rm G}}) / 2\lambda$. For these parameters, $\gamma^{\rm sync}_{\rm rad} \ll \sigma_{\rm up}$ such that leptons impulsively accelerated at X-points are quickly cooled in plasmoids \citep{Hakobyan2019}. Thus, the reconnection occurs in the radiative regime \citep{uzdensky2011}.
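The order-of-magnitude numbers above follow directly from the fiducial M87$^{*}$ values; a short sketch (cgs units, with $\lambda$ the assumed pair multiplicity) is:
\begin{verbatim}
# Gyro-to-BH frequency ratio and upstream magnetization for fiducial M87*
# values (cgs units); lambda is the assumed pair multiplicity.
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33
e, m_e = 4.803e-10, 9.109e-28
M, B_up = 6e9 * Msun, 100.0

omega_B = e * B_up / (m_e * c)           # electron gyrofrequency, ~1.8e9 rad/s
Omega_BH = c / (2.0 * G * M / c**2)      # ~1.7e-5 rad/s (approximation above)
print(f"omega_B / Omega_BH ~ {omega_B / Omega_BH:.1e}")   # ~1e14
for lam in (1.0, 1e3):
    print(f"lambda = {lam:.0e}: sigma_up ~ {omega_B / (2 * Omega_BH * lam):.1e}")
\end{verbatim}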
To understand the radiative efficiency of reconnection, we determine the magnetic {\it compactness} $\ell_{\rm B} = U_{\rm B} \sigma_{\rm T} w / (m_{\rm e} c^2)$ \citep{Beloborodov2017}. Using Eq.~\ref{eq:1} and the $\omega_{\rm B} / \Omega_{\rm BH}$ relation, we can rewrite $\ell_{\rm B} = v_{\rm rec} w \omega_{\rm B} / (c^2 (\gamma^{\rm sync}_{\rm rad})^2)$ and obtain
\begin{equation}
\ell_{\rm B} \sim 1 \left(\frac{w}{1 r_{\rm g}}\right)\left(\frac{M}{6 \cdot 10^9 M_{\odot}}\right)\left(\frac{B_{\rm up}}{100 {\rm G}}\right)^2,
\label{eq:3}
\end{equation}
so $\ell_{\rm B} \sim 1$, suggesting potentially efficient pair production, but negligible annihilation \citep{Beloborodov2017}. In this regime the cooling time of accelerated particles, $c t_{\rm sync} / w \sim 1/(\ell_{\rm B} \gamma)$, is much shorter than the light-crossing time of the current sheet. Inverse Compton (IC) cooling of accelerated particles on the low-energy photon field (luminosity $\sim 10^{41}$ ${\rm erg/s}$ and energy density $U_{\rm rad}^{\rm soft} \sim 0.003$ ${\rm erg}$ ${\rm cm}^{-3}$ in the inner $10 r_{\rm g}$) results in $\gamma_{\rm rad}^{\rm IC} \sim \gamma_{\rm rad}^{\rm sync} \sqrt{U_{\rm B}/U_{\rm rad}^{\rm soft}} \sim 10^9$ \citep{Broderick2015,MWL2021}, which is well above $\gamma_{\rm rad}^{\rm sync}$. The jet's magnetic field reconnects at a rate of $0.1c$ in the collisionless radiative regime, after which all reconnected power is directly radiated, such that the higher energy density of photons produced by accelerated particles, $U_{\rm rad}^{\rm rec} \sim 0.1 U_{\rm B}$ and hence $L_{\rm rad} \sim 0.1 L_{\rm jet}$ (\citealt{Beloborodov2017,Bransgrove2021}), can lead to very efficient IC cooling. The exact result depends on the spectral shape and reduction by Klein-Nishina effects.
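For the fiducial numbers above, the compactness and the IC-limited Lorentz factor follow from a few lines (cgs units; $w = 1\,r_{\rm g}$ and $B_{\rm up} = 100$ G are the fiducial assumptions):
\begin{verbatim}
# Magnetic compactness and IC-limited Lorentz factor for the fiducial M87*
# parameters (cgs units); w = 1 r_g and B_up = 100 G as assumed above.
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33
e, m_e, sigma_T = 4.803e-10, 9.109e-28, 6.652e-25
M, B_up, v_rec = 6e9 * Msun, 100.0, 0.1 * 2.998e10

w = G * M / c**2                              # current-sheet length ~ 1 r_g
U_B = B_up**2 / (8.0 * np.pi)                 # magnetic energy density
ell_B = U_B * sigma_T * w / (m_e * c**2)      # order unity, cf. Eq. (3)

U_soft = 0.003                                # erg/cm^3, soft photon field
gamma_sync = np.sqrt(1.5 * (v_rec / c) * (m_e**2 * c**4 / e**3) / B_up)
gamma_IC = gamma_sync * np.sqrt(U_B / U_soft) # ~1e9 >> gamma_sync
print(f"ell_B ~ {ell_B:.1f}, gamma_rad_IC ~ {gamma_IC:.1e}")
\end{verbatim}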
The peak of the synchrotron radiation spectrum is expected to be at the synchrotron burnoff limit $\mathcal{E}_{\rm ph} \sim (\gamma_{\rm rad}^{\rm sync})^2 \hbar \omega_{\rm B} \sim 200 {\rm MeV}$ (\citealt{Uzdensky2011a}), which is independent of the magnetic field strength. The highest energy photons will be produced by IC scattering. Conservatively, the characteristic photon energy that can be produced is ${\rm max}(\mathcal{E}_{\rm ph}) = m_{\rm e} c^2 \gamma_{\rm rad}^{\rm sync} \sim 0.511 {\rm MeV} \cdot \gamma_{\rm rad}^{\rm sync} \sim$ few {\rm TeV}. Additionally, particles can be accelerated beyond $\gamma > \gamma_{\rm rad}^{\rm sync}$ because synchrotron cooling is suppressed in X-points (\citealt{Uzdensky2011a,Cerutti2014}).
For photons with energy above the electron rest-mass energy $m_{\rm e}c^2=0.5 {\rm MeV}$, $e^{\pm}$ pairs are created if there are enough photon-photon collisions with seed photons with low energy $\mathcal{E_{\rm s}} \sim (m_{\rm e} c^2)^2 / \mathcal{E}_{\rm ph}$. High-energy photons of energy $\mathcal{E}_{\rm ph,TeV}$ produced in the magnetospheric region around the current sheet will interact most efficiently with seed photons of energy $\mathcal{E}_{\rm s} \sim (1 {\rm TeV} / \mathcal{E}_{\rm ph,TeV})$ ${\rm eV}$.
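The photon-energy scales quoted in this and the preceding paragraph follow from simple arithmetic; an illustrative sketch (using the fiducial $\gamma_{\rm rad}^{\rm sync} \approx 3\times 10^6$) is:
\begin{verbatim}
# Simple arithmetic behind the photon-energy scales quoted above, using the
# fiducial gamma_rad_sync ~ 3e6 from Eq. (2).
mec2_eV = 0.511e6
gamma_sync = 3e6

E_IC_max_eV = gamma_sync * mec2_eV    # ~1.5e12 eV, i.e. a few TeV
E_seed_eV = mec2_eV**2 / E_IC_max_eV  # ~0.2 eV pair-production threshold; the
# gamma-gamma cross-section peaks for seed photons a few times above threshold,
# i.e. ~eV target photons, as in the scaling quoted above.
print(f"max IC photon ~ {E_IC_max_eV/1e12:.1f} TeV, "
      f"threshold seed ~ {E_seed_eV:.2f} eV")
\end{verbatim}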
Given the uncertainties about the density of a $1 {\rm eV}$ photon field near the event horizon during the flaring state, the escape of TeV photons from the region is an open question \citep{Levinson2011,MWL2021}. Conservatively, if $\sim 1 \%$ of the power dissipated by reconnection ($U_{\rm rad} \sim 0.1 U_{\rm B}$, i.e., $L_{\rm rad} \sim 0.1 L_{\rm jet} \sim 10^{41} - 10^{43}$ erg/s) is emitted in very high-energy photons, a $\gamma$-ray luminosity of $10^{39} - 10^{41}$ erg/s can be emitted as a flare.
Our {extreme resolution} GRMHD simulation shows transient flaring periods where the mass accretion rate drops (and, thus, luminosity of seed photons) significantly, by a factor $\sim 5-10$, resulting in large low density regions, such that opacity constraints for the escape of $\gamma$-ray photons from the equatorial current sheet are less strict than during a quiescent state.
The decrease of the mass accretion rate and the local soft photon field can also create favorable conditions for the activation of pair discharges on the jet's magnetic field lines and the potential escape of TeV emission from spark gaps, if the opacity becomes prohibitive during the quiescent state (\citealt{Levinson2011,Crinquand2020}).
The flaring state is distinctively different from the quiescent state observed by \cite{EHTpaper1}, implying that observations during a mass accretion rate drop/flare may result in different $230 {\rm GHz}$ images (Chatterjee et al., in prep.).
The magnetic flux decay and mass-accretion drop last for a period of $\sim 100 r_{\rm g}/c$ $\sim$ 1 month for M87$^{*}$, which is longer than the typical observed $\sim 1-3$ day TeV flux rise and decay timescale (\citealt{Hess2012}). However, in a collisionless plasma, the magnetic flux decay period is typically $\sim 3-10$ times shorter due to the faster reconnection rate of $v_{\rm rec} \approx 0.1c$ (\citealt{Bransgrove2021}) compared to $v_{\rm rec} \approx 0.01c$ in GRMHD models \footnote{{Note that the higher reconnection rate in collisionless models is caused by kinetic plasma effects, e.g., gradients of the anisotropic pressure tensor of electrons and positrons in pair plasma (\citealt{Bessho2005}), and is unrelated to the increased reconnection rate due to large numerical diffusion in low resolution GRMHD models.}}, resulting in a flare timescale of $\sim$ few days.
We find that pair production in the current sheet can efficiently mass load the jet with electrons and positrons with energies $\gamma \sim 1-1000$, that can emit synchrotron photons with energies ranging from the radio to optical wavelengths (see Supplemental Material).
\subsection{Sgr A$^{*}$ flares powered by radiative reconnection}
Sgr A$^{*}$ shows daily near-infrared and X-ray flares from the inner $10 r_{\rm g}$, on average every 6 and 12 hours, lasting for 30-80 minutes, respectively (\citealt{baganoff2001,eckart2006,Gravity2018,Witzel2020,Murchikova2021}). The flare periods in our simulation last for $\sim 100 r_{\rm g}/c \sim 30$ minutes, and the subsequent quiescent period for $\sim 2000 r_{\rm g}/c \sim 10$ hours for Sgr A$^{*}$. The resulting hot spot orbits for $\sim 500 r_{\rm g}/c \sim 150$ minutes in the inner $20 r_{\rm g}$ until it diffuses due to mixing instabilities. The magnetic field strength in quiescence is well constrained in the range of $10-50$ G in the inner $10 r_{\rm g}$ for Sgr A$^{*}$ with black hole mass $4 \cdot 10^6 M_{\odot}$ (\citealt{Dodds_Eden2009}). Using Eq. \ref{eq:2}, this results in $\gamma_{\rm rad}^{\rm sync} \approx 9 \cdot 10^6 (B_{\rm up}/10 {\rm G})^{-1/2}$, limiting the energy of accelerated particles by synchrotron cooling for a typical magnetization $\sigma_{\rm up} \sim 10^{10} (M/4 \cdot 10^6 M_{\odot}) (B_{\rm up}/10 {\rm G}) / 2\lambda \gg \gamma_{\rm rad}^{\rm sync}$.
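For reference, the conversion of simulation times (in units of $r_{\rm g}/c$) to physical durations for the two sources considered here is:
\begin{verbatim}
# Convert simulation times (units of r_g/c) to physical durations for M87*
# and Sgr A*.
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33

for name, M in (("M87*", 6e9 * Msun), ("Sgr A*", 4e6 * Msun)):
    t_g = G * M / c**3                        # r_g/c in seconds
    print(f"{name}: r_g/c ~ {t_g:.0f} s; "
          f"100 r_g/c ~ {100 * t_g / 3600:.1f} h; "
          f"2000 r_g/c ~ {2000 * t_g / 3600:.1f} h")
# M87*: 100 r_g/c ~ 1 month; Sgr A*: 100 r_g/c ~ 30 min, 2000 r_g/c ~ 10 h.
\end{verbatim}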
Using Eq. \ref{eq:3}, the compactness is $\ell_{\rm B} \sim 10^{-5} (w/1 r_{\rm g})(M/4 \cdot 10^6 M_{\odot})(B_{\rm up}/10 {\rm G})^2$.
Synchrotron photons emitted by the particles accelerated to the highest energies in the reconnection layer, up to $\gamma_{\rm rad}^{\rm sync} \sim 10^7$, should extend into the hard X-ray range. The energy of particles accumulated in the orbiting hot spot will be constrained by the synchrotron cooling time, which has to be larger than the light-crossing time of the current sheet, $c t_{\rm sync} / w \sim 1/(\ell_{\rm B} \gamma) \geq 1$, or $\gamma \lesssim 1 / \ell_{\rm B} \sim \gamma_{\rm cool}=10^4$ for the hot spot at $\sim 10r_{\rm g}$. These particles are likely to emit in the (near-)infrared range, $(\gamma_{\rm cool})^2 \hbar \omega_{\rm B} \sim 1 {\rm eV}(B/10 {\rm G})$. Thus, reconnection near the event horizon can power a multi-wavelength flare solely by synchrotron emission from reconnection-accelerated particles. Mini-flares are a potentially viable route to produce only near-infrared emission without strong enough X-rays to be detected as flares, as they do not produce a long-lasting extended current sheet, which would be the source of the highest energy particles. The characteristic power of the X-ray emission can be estimated from the total dissipated power in reconnection, $\sim 0.1 L_{\rm BZ}\sim 10^{35} (B_{\rm horizon}/10{\rm G})^2$ erg/s. Thus, reconnection in the magnetospheric current sheet provides enough energy to power the observed X-ray flares from Sgr A$^{*}$ with typical luminosities in a range $10^{34}-10^{35}$ erg/s \citep{Neilsen2015}.
\section{Conclusions}
By conducting extreme resolution 3D GRMHD simulations we have shown that during periods of magnetic flux decay at the horizon, MAD flows form transient and non-axisymmetric magnetospheres that possess special qualities revealed only at such high resolutions. Namely, these eruptions lead to a substantial, order-of-magnitude drop in the mass accretion rate and the formation of a thin equatorial current sheet that extends from the horizon out to $\sim 5-10 r_{g}$ into the disk and separates the two polar jets. This current sheet is filled with the electron-positron plasma from the jets and reconnects in the {plasmoid-mediated} regime. The formation of plasmoids is revealed here for the first time in 3D thanks to the unusually high resolutions achieved in this work, $N_r \times N_\theta \times N_\phi = 5376\times2304\times2304$. Reconnection-heated to relativistic temperatures, the plasma in the current sheet escapes the black hole's gravitational pull through the exhaust of the reconnection layer: this injects magnetic flux tubes filled with the low-density pair plasma into the accretion disk, and hot plasma along the jet-disk boundary. This reconnection-heated plasma can produce a multiwavelength flare.
Hot flux tubes orbit in the accretion disk and can remain coherent for one to a few orbital periods. The time scales of the flare are directly governed by the reconnection rate in the equatorial current sheet. We have shown that this rate \emph{decreases} with increasing numerical resolution until the critical resolution beyond which it reaches the \emph{universal {converged} value} that no longer changes when the resolution is increased any further.
Importantly, only at such high resolutions is the structure of the current sheet -- X-points and plasmoids -- {resolved for the first time with our extreme resolution 3D GRMHD simulations.}
The universal reconnection rate directly sets the magnetic flux decay rate at the horizon. Other studies have related flux decay at the horizon to flares (\citealt{Ball2018b,dexter2020sgr,Chashkina2021,Scepi2021}) or observed orbiting flux tubes in retrograde disks (those rotating in the opposite sense to their black hole; \citealt{Porth2020flares}). However, due to limited numerical resolution they did not capture {plasmoid-mediated} reconnection as the power source and did not identify a direct link between the magnetic flux decay at the event horizon and its origin in reconnection in the equatorial magnetospheric current sheet.
We note that the trigger behind such large flux eruption events is still not understood. Large flares occur when the accretion is governed by large, low azimuthal mode-number spiral RTI modes. It is as yet unclear why the accretion state switches from a large spectrum of RTI modes in quiescence to a single azimuthal spiral RTI mode during the flare.
The reconnection powering the flare is fed by highly magnetized pair plasma that eventually ends up in the hot flux tube buoyantly rising in and mixing with the electron-ion plasma that makes up the accretion disk. Commonly used parametrized relations connecting the temperatures of ions and electrons based on local plasma-$\beta$ or $\sigma$ values in the accretion flow \citep[e.g.,][and references therein]{Moscibrodzka2016, Davelaar2019, EHTpaper5, Chatterjee2020b, dexter2020sgr, Yoon2020} or two-temperature GRMHD approaches (\citealt{Ressler2015,Chael2019}) therefore cannot describe the non-thermal emission from these events which involves reconnection in high-$\sigma$ collisionless pair plasma regime, the transport and cooling of non-thermal lepton distributions, as well as efficient pair production.
We note that while the reconnection rate in the equatorial current sheet is converged in GRMHD at the extremely high numerical resolutions used in this work, it converges to $v_{\rm rec}/v_{\rm A} \sim 0.01$, which is an order of magnitude lower than the converged value of $\sim 0.1$ in kinetic simulations \citep{Bransgrove2021}. This difference comes from GRMHD simulations being unaware of the collisionless plasma microphysics, which is important at scales where reconnection happens, i.e., the electron skin depth. Incorporating non-ideal effects beyond scalar resistivity (e.g., \citealt{ripperda2019b}) into GRMHD simulations, such as electron inertia and anisotropic electron pressure tensor effects in Ohm's law, holds promise of matching the (collisional) GRMHD and collisionless reconnection rates \citep{NG2020}. Radiative kinetic simulations (e.g., \citealt{parfrey2019,Crinquand2020b,Crinquand2020}) are crucial for probing the non-thermal effects and the impact of the higher reconnection rate in collisionless plasma on the flare properties. In upcoming work we will investigate the radiative properties of the flares, and the consequences for the image variability as observed by the Event Horizon Telescope (Chatterjee et al., in prep.). The robust formation of a plasmoid-unstable current sheet close to the event horizon that can heat and accelerate plasma, and eject flux tubes as low-density hot spots into an orbiting disk in our extreme resolution GRMHD simulation, suggests that bright, rapid, high-energy flares powered by magnetic reconnection are a widespread phenomenon that can potentially explain observations of TeV flares from M87$^{*}$ and flaring hot spots from Sgr A$^{*}$.
\section*{Acknowledgements}
We would like to thank Ashley Bransgrove, Alexander Chernoglazov, Luca Comisso, Doosoo Yoon, Hayk Hakobyan, Amir Levinson and Yuri Levin for useful discussions. B.R. and M.L. contributed equally to this work. This research was enabled by support provided by grant no. NSF PHY-1125915 along with an INCITE program award PHY129, using resources from the Oak Ridge Leadership Computing Facility, Summit, which is a US Department of Energy Office of Science User Facility supported under contract DE-AC05-00OR22725, as well as Calcul Quebec (http://www.calculquebec.ca) and Compute Canada (http://www.computecanada.ca). The computational resources and services used in this work were partially provided by facilities supported by the Scientific Computing Core at the Flatiron Institute, a division of the Simons Foundation. This research is part of the Frontera (\citealt{Frontera}) computing project at the Texas Advanced Computing Center (LRAC-AST20008). Frontera is made possible by National Science Foundation award OAC-1818253.
B.R. is supported by a Joint Princeton/Flatiron Postdoctoral Fellowship. M.L. was supported by John Harvard Distinguished Science Fellowship and ITC
Fellowship. K.C. is supported by a Black Hole Initiative Fellowship at Harvard University, which is funded by grants from the Gordon and Betty Moore Foundation, John Templeton Foundation and the Black Hole PIRE program (NSF grant OISE-1743747). The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the Moore or Templeton Foundations. G.M. is supported by a Netherlands Research School for Astronomy (NOVA), Virtual Institute of Accretion (VIA) postdoctoral fellowship. A.P. acknowledges support by the National Science Foundation under Grants No. AST-1910248 and PHY-2010145. Research at the Flatiron Institute is supported by the Simons Foundation. K.C. and S.M. are thankful for support by Dutch Research Council (NWO) VICI award, grant Nr. 639.043.513. A.T. acknowledges support by Northwestern University
and by the National Science Foundation grants AST-1815304, AST-1911080. Z.Y. is supported by a UK Research \& Innovation (UKRI) Stephen Hawking Fellowship.
Shortly before and during a flare, accretion only occurs through large{-scale} (i.e., low azimuthal mode-number) spiral RTI modes (see also \citealt{Takasao2019} for a very similar scenario explaining protostellar flares) creating a transient, non-axisymmetric (i.e., over an angle $\phi<2\pi$), magnetized (i.e., low plasma-$\beta$), low-density magnetosphere (top and middle rows in Figures \ref{fig:panelXZ} and \ref{fig:panelXY}) pushing the accretion disk outward and resulting in a drop in mass accretion rate. A macroscopic equatorial current sheet forms in the magnetosphere, extending from the horizon to the disk at $x=r\sin\theta\cos\phi\approx-5 r_{\rm g}$ at $z=r\cos\theta\approx 0$ shown by the antiparallel magnetic field lines ({inset in panel D}, green lines).
Reconnection pinches off the horizontal magnetic field in the sheet, transforming it into vertical ($z$) magnetic field, reminiscent of the 2D results of \cite{Ripperda2020}.
The {flux eruption} originates from the inner magnetosphere where the highly magnetized plasma in the jet directly feeds the current sheet.
The plasma density in the jet is determined by the density floor at $\sigma_{\rm max}=25$ in our simulations, whereas in reality it is much more strongly magnetized ($\sigma \gg \sigma_{\rm max}$) pair plasma. {Reconnection occurs locally in X-points where a field line breaks and reconnects to other field line (see insets in Figures~\ref{fig:panelXZ}D and \ref{fig:panelXZ}E). In these X-points, reconnection heats the plasma up to $T \sim \sigma_{\rm max} = 25$ (left panels) after which it is expelled from the layer at Lorentz factors up to $\gamma \propto \sqrt{\sigma_{\rm max}} = 5$ (\citealt{lyubarsky2005}, see also Supplemental Material for an exploration of different $\sigma_{\rm max}$ in 2D).} The flux is expelled through reconnection into the low-density region in between the large low-mode-number RTI modes accreting spirals. {Electrons and positrons accelerated to non-thermal energies through reconnection at the X-points in the macroscopic equatorial current sheet can power high-energy flares that may reach a distant observer during the drop in the mass accretion rate.}
Small plasmoids are visible close to the horizon and a larger hot plasmoid is detected at $x=-3 r_{\rm g}$ (middle row in Figure \ref{fig:panelXZ}) as a result of the merger of smaller escaping plasmoids. The plasmoids that escape the gravitational pull of the black hole interact with the disk and jet sheath resulting in significant heating up to at least $z \gtrsim \pm 40 r_{\rm g}$. The bottom row of Figure \ref{fig:panelXZ} shows a large magnetic flux tube at $x \approx 20-30 r_{\rm g}$: a low density region of strong vertical field (low plasma-$\beta$) heated to medium temperature $T \sim 0.1-1$. {The flux tube forms as a result of the reconnection that converts horizontal magnetic field into vertical field that is ejected from the reconnection layer. Filled with heated plasma, the flux tube can appear as a hot spot. The accumulated vertical magnetic flux in this hot spot} can remain coherent for approximately one orbital time scale {between $10$ and $30 r_{\rm g}$} (bottom row in Figure \ref{fig:panelXY} between $y \approx -20 r_{\rm g}$ and $y\approx 20 r_{\rm g}$), while the inner $10 r_{\rm g}$ is already in the quiescent accretion state at $t=9782 r_{\rm g}/c$. RTIs develop at the boundary of the hot spot, which mix the hot low density plasma into the surrounding accreting gas. The hot spots are expected to be filled with positrons and electrons energized by the reconnection, which in this way can end up in the accretion disk.
After the flaring episode, magnetic flux builds up on the horizon and the quasi-steady-state accretion cycle develops again. Smaller and less hot current sheets where $B^{\phi}$ changes sign also exist in the inner $\sim 20 r_{\rm g}$ of the turbulent accretion disk during the quiescent period, indicated by thin high-$\beta$ layers of anti-parallel field lines (top and bottom rows in Figures \ref{fig:panelXZ} and \ref{fig:panelXY}).
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth, clip=true]{Presentation2.pdf}
\caption
Volume rendering of the temperature $T=p/\rho$ shows plasmoids and hot current sheets. Extreme resolution allows the current sheets to become thinner and hotter than typically seen in GRMHD simulations.
(Panel A:) During a large flare a relativistically hot $T>1$ spiral current sheet forms. Accretion occurs over a small azimuthal angle $\phi < 2\pi$ in the $T<1$ (white) regions. The green field lines, seeded in the current sheet ($T>1$), remain in the current sheet and are mostly attached to the black hole. Blue field lines are seeded in the disk, where some disk field lines are accreting onto the black hole in the $T<1$ region. (Panel B:) In the quiescent state $T\leq 1$ everywhere, and both green and blue field lines (with the same seeds as in panel A) are in the disk, accreting onto the black hole. The inset (C) shows a zoom into {the inner $r_{\rm g}$ in} the flare state with multiple escaping flux loops (green field lines). In the small black box we highlight an escaping flux tube with vertical field as the result of reconnection (green) and an infalling flux tube (purple). We also show a plasmoid, indicated by the helical field line (green) in the second small black box.}
\label{fig:3D}
\end{figure*}
Figure \ref{fig:3D}{A} visualizes the 3D nature of the hot current sheet by showing the temperature and magnetic field line structure in the inner $10 r_{\rm g}$ during the flare at $t=9422 r_{\rm g}/c$. The current sheet has a relativistic temperature $T>1$, whereas shortly before the flare at $t=9122 r_{\rm g}/c$ ({\ref{fig:3D}B}) there are no structures at $T>1$. During the flare, the (green) field lines in the current sheet {(i.e., seeded in the $T>1$ region in \ref{fig:3D}{A})} have a clear spiral structure and are separated from the more vertical field lines in the disk (blue). During the quiescence before the flare {(Figure \ref{fig:3D}B)} no such distinction is visible and all field lines (green and blue{, which are seeded at the same points as in panel \ref{fig:3D}A}) are part of the disk. {The extreme resolution allows us to capture multiple plasmoids, identified as 3D helical field line structures in the sheet (Figure~\ref{fig:3D}C), during the magnetic flux eruption}. We highlight a typical X-point as the manifestation of reconnection, separating an infalling (purple field line) and escaping flux tube (green field line) in the hot current sheet. Similar X-points can be detected in {e.g., the inset in Figure \ref{fig:panelXZ}D}.
\begin{figure*}
\centering
\includegraphics[width=0.476\textwidth,trim= 2.45cm 6.25cm 8.5cm 0cm, clip=true]{beta_128_fig4_xz_10rg_737.pdf}
\includegraphics[width=0.51\textwidth,trim= 4.2cm 6.25cm 5.1cm 0cm, clip=true]{beta_256_fig4_xz_10rg_870.pdf}
\includegraphics[width=0.476\textwidth,trim= 2.45cm 4.7cm 8.5cm 1.5cm, clip=true]{beta_1024_fig4_xz_10rg_474.pdf}
\includegraphics[width=0.51\textwidth,trim= 4.2cm 4.7cm 5.1cm 1.5cm, clip=true]{beta_2048_fig4_xz_10rg_370.pdf}
\includegraphics[width=0.498\textwidth, trim= 0.3cm 10.5cm 14.3cm 0.4cm, clip=true]{mdot_phidot_paper.pdf}
\includegraphics[width=0.496\textwidth, trim= 0.3cm 10.5cm 14.5cm 0.4cm, clip=true]{recrate.pdf}
\includegraphics[width=0.498\textwidth, trim= 0.3cm 0.3cm 14.3cm 9.9cm, clip=true]{mdot_phidot_paper.pdf}
\includegraphics[width=0.496\textwidth, trim= 0.3cm 0.3cm 14.5cm 9.9cm, clip=true]{recrate.pdf}
\caption{{The equatorial current sheet that forms during the magnetic flux eruption is unresolved at low and standard resolutions (panels A,B) such that magnetic field lines (green lines) diffuse through the current sheet and do not reconnect, due to the high numerical resistivity. At high and extreme resolutions (C,D), the field lines are antiparallel in the current sheet, and they reconnect in well-defined X-points. Smaller current sheets are resolved in the accretion disk at high and extreme resolutions, potentially heating the plasma through reconnection.} {Panel E shows the} magnetic flux on the horizon for the four numerical resolutions. The extreme and high resolution runs show two and three large flare periods, respectively, indicated by flux decay at a rate $\propto e^{-t/500}$ governed by the reconnection rate (dashed black lines). A mini-flare is indicated by the small flux drop at $t\approx 6800 r_{\rm g}/c$ in the extreme resolution run. The standard and low resolution runs show a faster flux decay $\propto e^{-t/350}$ governed by the enhanced reconnection rate due to an increased numerical resistivity. Flares in the extreme resolution run are accompanied by clear drops in the mass accretion rate {(panel G)}, due to the expulsion of the disk over a large azimuthal angle. {Panel F shows a} {cut through the equatorial current sheet at $x\approx1.5 r_{\rm g}$ during the flare state (indicated by the red dashed line in panels A-D)}, displaying the three components of the magnetic field $B^i$ in minimum variance coordinates. Both the (nearly) radial field $B^L$ and the (nearly) toroidal field $B^M$ reconnect and go through zero. The guide field $B^N$ is (close to) zero. {Panel H shows the} $\mathbf{E} \times \mathbf{B}$ speed flowing into the current sheet. After correcting for the bulk velocity, the reconnection rate $v_{\rm rec}\approx0.01c$, which we confirmed at 10 radial cuts and during several flare periods.
}
\label{fig:recrate}
\end{figure*}
{Figure \ref{fig:recrate}A-D shows zooms into the current sheet during large magnetic flux eruptions for the four numerical resolutions employed. The drop in magnetic flux at low and standard resolutions (panels A,B) is not accompanied by a large drop in mass accretion rate (see panels E,G), due to the large numerical diffusion. The magnetic field diffuses through the thick current sheet and does not reconnect, due to the large numerical resistivity. This results in a too high reconnection rate and a large heated area (see Supplemental Material, Figure \ref{fig:panelXZlowres} for more properties of the large magnetic flux eruption at low resolution). The current sheet is in these cases not plasmoid-unstable. The high resolution flux eruption (panel C) behaves similarly to the extreme resolution result (panel D) from Figure \ref{fig:panelXZ}, indicating that the plasmoid instability is resolved on the grid, and that the reconnection rate is converged to a universal value of $0.01c$ (panel H).}
In Figure \ref{fig:recrate} we {also} analyze the magnetic flux $\dot{\phi}_{\rm BH} := \frac{1}{2}\int_{0}^{2 \pi} \int_{0}^{\pi} |^{*} F^{rt}| \sqrt{-g} \, d\theta d\phi$ on the horizon {(\ref{fig:recrate}E)} and the mass accretion rate $\dot{m} := -\int_{0}^{2 \pi} \int_{0}^{\pi} \rho u^r \sqrt{-g} d\theta d\phi$ through the inner $5 r_{\rm g}$ {(\ref{fig:recrate}G)}, where $g$ is the metric determinant, $u^\mu$ is the fluid 4-velocity, $^* F^{\mu\nu}$ is the dual of the Faraday tensor, and $\rho$ is the fluid-frame rest-mass density. After $\sim 5000 r_{\rm g}/c$ the flow sets into a quasi-steady state which is globally converged for all resolutions. For the extreme resolution run (magenta line {Figure \ref{fig:recrate}E)} we observe two major flux decays, which we associate with large flares, at $t\approx 7300 r_{\rm g}/c$ and $t\approx 9300 r_{\rm g}/c$, both lasting for a few $\sim 100 r_{\rm g}/c$. We also observe a small flux decay at $t\approx 6800 r_{\rm g}/c$, associated with a smaller flare, or ``mini-flare''. For all flares, the magnetic flux on the event horizon decays quasi-exponentially with time with characteristic timescale $\tau\approx 500$~$r_{\rm g}/c$ (indicated by the black dashed lines), implying that the decay is governed by reconnection at a universal rate of $0.01c$, consistent with the decay observed for a split monopole magnetic field on the event horizon (\citealt{Bransgrove2021}). All three events are accompanied by a large drop in mass accretion rate (Figure \ref{fig:recrate}G) that is related to the ejection of the accretion disk such that the accretion is funneled through a small azimuthal angle $\phi < 2\pi$ and nearly halts.
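As an illustrative sketch only (not the analysis pipeline actually used for these runs), the two horizon diagnostics defined above can be evaluated on a single discretized snapshot roughly as follows; all array names and the grid layout are hypothetical placeholders.
\begin{verbatim}
import numpy as np

# Illustrative sketch: evaluate
#   phi_BH = 0.5 * int |*F^{rt}| sqrt(-g) dtheta dphi
#   mdot   = -int rho u^r sqrt(-g) dtheta dphi
# on one radial shell of a snapshot. The 2D arrays (shape Ntheta x Nphi)
# Fdual_rt, rho, ur, sqrtg and the cell sizes dtheta, dphi are placeholders.

def horizon_magnetic_flux(Fdual_rt, sqrtg, dtheta, dphi):
    """Absolute magnetic flux threading the shell, in code units."""
    return 0.5 * np.sum(np.abs(Fdual_rt) * sqrtg) * dtheta * dphi

def mass_accretion_rate(rho, ur, sqrtg, dtheta, dphi):
    """Net rest-mass flux through the shell; positive for inflow (u^r < 0)."""
    return -np.sum(rho * ur * sqrtg) * dtheta * dphi
\end{verbatim}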
For {the high resolution run (red line, Figure \ref{fig:recrate}E), }similar flare episodes can be observed at $t\approx 7500 r_{\rm g}/c$, $t\approx 8300 r_{\rm g}/c$, and $t\approx 9400 r_{\rm g}/c$, with flux decaying on the same timescale $\tau\approx 500$~$r_{\rm g}/c$. For lower resolutions (blue and green line) there is a clearer distinction: large flares show (e.g., at $t\approx 7300 r_{\rm g}/c$ for low resolution, and $t\approx 8300 r_{\rm g}/c$ and $8600 r_{\rm g}/c$ at standard resolution) a faster decay rate $\tau\approx 350$~$r_{\rm g}/c$, implying a faster reconnection rate $> 0.01c$. Mini-flares (e.g., at $t\approx 9300 r_{\rm g}/c$ for low resolution and $t\approx 7500 r_{\rm g}/c$ for high resolution) instead show a flux decay at a rate of $\tau\approx 500$~$r_{\rm g}/c$ implying a reconnection rate of $\sim 0.01c$. At low and standard resolution{s}, these mini-flares are typically {\it not} accompanied by a clear drop in $\dot{m}_{5r_{\rm g}}$ {(Figure \ref{fig:recrate}G)}, while large flares are showing a clear drop in $\dot{m}_{5r_{\rm g}}$ implying a large ($\gtrsim 5 r_{\rm g}$) current sheet. {This can be explained by the large numerical diffusion of the thinning current sheet in both the $z$ and $y$ directions, resulting in a too broad accretion funnel at low and standard resolution (Figure \ref{fig:recrate}A,B). Mini-flares are better captured at lower resolutions than large flares due to the shorter length of the current sheet and the higher effective resolution of the spherical grid at small radii (see Supplemental Material).}
The reconnection rate can be determined directly by selecting a current sheet during a flare episode and measuring the inflow speed of the plasma into the reconnection layer.
To do so, we first transform the Eulerian electric and magnetic fields into a local inertial frame to apply standard reconnection analysis in flat spacetime (\citealt{Ripperda2020}).
The fields are expressed in minimum variance coordinates (\citealt{Howes_2016}), with $B^L$ projected in the flat frame along the poloidal direction parallel to the current sheet, $B^M$ along the toroidal direction and $B^N$ perpendicular to the current sheet, to determine the upstream geometry, showing a typical Harris-type sheet structure in {Figure \ref{fig:recrate}F}. Both the toroidal and poloidal components switch sign in the sheet, indicating that zero-guide-field reconnection occurs. The inflow speed is determined from the $\mathbf{E}\times\mathbf{B}$-velocity projected along the direction perpendicular to the reconnection layer. In Figure {\ref{fig:recrate}H} we measure the inflow speeds from left and right of the current sheet as $v_{\rm in, left}=(v_{\rm bulk} + v_{\rm rec}) / (1 + v_{\rm bulk} v_{\rm rec}/c^2)$ and $v_{\rm in, right}=(v_{\rm bulk} - v_{\rm rec}) / (1 - v_{\rm bulk} v_{\rm rec}/c^2)$ and solve for $v_{\rm rec}$, where we corrected for the relativistic speed of the bulk flow. We select 10 cuts of the current sheet at different radii and consistently find a reconnection rate of $\sim 0.01c$, indicating a Lundquist number of at least $S = v_{\rm rec}^{-2} = 10^4$. Reconnection thus occurs in the asymptotic {plasmoid-mediated} regime where $S=v_{\rm A} w / \eta_{\rm num} \geq S_{\rm{crit}} = 10^4$ (\citealt{bhattacharjee2009}) for our extreme resolution run, where the length of the sheet $w \gtrsim r_{\rm g}$, Alfv\'{e}n speed $v_{\rm A} \sim c$ and numerical resistivity $\eta_{\rm num}$. In the supplemental material we show the same analysis for the lower resolution simulations, concluding that the {extreme and} high resolution results are in the {plasmoid-mediated} regime, whereas the standard and low {resolution} runs show reconnection rates $>0.01c$, and do not display plasmoids. The enhanced reconnection rate due to larger numerical resistivity at lower resolutions manifests itself as an increased flux decay rate and hence directly affects the flare time scale.
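The de-boosting step can be illustrated with a short numerical sketch; the inflow speeds below are illustrative numbers of the right magnitude, not values measured from the simulation.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

# Sketch of the correction for the bulk flow: given the E x B inflow speeds
# measured on the two sides of the layer (in units of c; illustrative values),
# solve the relativistic velocity-addition relations for v_bulk and v_rec.
v_in_left, v_in_right = 0.12, 0.10

def residual(x):
    v_bulk, v_rec = x
    return [(v_bulk + v_rec) / (1.0 + v_bulk * v_rec) - v_in_left,
            (v_bulk - v_rec) / (1.0 - v_bulk * v_rec) - v_in_right]

v_bulk, v_rec = fsolve(residual, x0=[0.1, 0.01])
print(f"v_bulk ~ {v_bulk:.3f} c, v_rec ~ {v_rec:.3f} c")  # v_rec ~ 0.01 c
\end{verbatim}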
\begin{figure*}
\centering
\includegraphics[width=0.3527\textwidth,trim= 0.85cm 2.3cm 13.4cm 1.3cm, clip=true]{T_2048_xz_40rg_345.pdf}
\includegraphics[width=0.3183\textwidth,trim= 2.8cm 2.3cm 13.4cm 1.3cm, clip=true]{beta_2048_invertedmagma6852.pdf}
\includegraphics[width=0.3183\textwidth,trim= 2.8cm 2.3cm 13.4cm 1.3cm, clip=true]{rho_2048_26852.pdf}
\includegraphics[width=0.354\textwidth,trim= 0.85cm 0.785cm 13.4cm 2.15cm, clip=true]{T_2048_xy_40rg_345.pdf}
\includegraphics[width=0.3177\textwidth,trim= 2.8cm 0.785cm 13.4cm 2.15cm, clip=true]{beta_2048_xy_40rg_345.pdf}
\includegraphics[width=0.3177\textwidth,trim= 2.8cm 0.785cm 13.4cm 2.15cm, clip=true]{Rho_2048_xy_40rg_345.pdf}
\caption{
{Smaller flux eruptions show shorter current sheets, potentially powering mini-flares that are not accompanied by a large-scale evacuation of the accretion disk.}
Meridional (top row) and equatorial (bottom row) cuts of temperature $T=p/\rho$ (left), plasma-$\beta$ (middle) and density $\rho$ (right) during the mini-flare at $t=6852 r_{\rm g}/c$. The magnetic flux is expelled through a smaller ($w \lesssim 3 r_{\rm g}$) current sheet, close to the horizon, in a short time $\ll 100 r_{\rm g}/c$. The accretion disk is not expelled over a large azimuthal angle, yet the flare is accompanied by a significant drop in mass accretion rate {(see Figure \ref{fig:recrate}G)} and clear gaps in the density (F). Multiple small current sheets are visible in the accretion disk at $x\geq 3 r_{\rm g}$ indicated by the high plasma-$\beta$ (B).}
\label{fig:panelminiflare}
\end{figure*}
In Figure \ref{fig:panelminiflare} we show temperature (left column), plasma-$\beta$ (middle column), and rest-mass density (right column) in both the {meridional} (top row) and {equatorial} (bottom row) plane for the mini-flare in the extreme resolution run at $t\approx 6800 r_{\rm g}/c$. In this case, the accretion disk is not ejected far beyond $5 r_{\rm g}$, but a spiral density gap still forms, causing the mass accretion rate to drop significantly. Reconnection occurs in a shorter, $\lesssim 5 r_{\rm g}$ plasmoid-unstable current sheet, very close to the horizon, and this is also the main area that is heated to relativistic temperatures $T>1$. These mini-flares could potentially result in smaller very-high-energy {flares} and shorter variability time scales (\citealt{Hess2012}).
\section{Radiative properties of the reconnection layer}
To probe the radiation emitted by accelerated particles in the reconnection layer, a self-consistent radiative kinetic approach is necessary (\citealt{Hakobyan2019,Crinquand2020,Crinquand2020b}). Here, we use the well-constrained parameters for M87$^{*}$ and Sgr A$^{*}$ to estimate the expected emission properties due to reconnection occurring in the radiative regime.
\subsection{M87$^{*}$ flares powered by radiative reconnection}
In our simulations, the current sheet is fed by plasma in the jet at the floor density with a magnetization $\sigma_{\rm max}=25$. In reality, the reconnection powering the flare close to the event horizon is fed by collisionless pair plasma from the jet with a rate of $v_{\rm rec}/c = 0.1$ (\citealt{Bransgrove2021}) at magnetization $\sigma_{\rm up} = B^2_{\rm up} / (4\pi n m_{\rm e} c^2) = 2U_{\rm B}/ (n m_{\rm e} c^2)$, where $n$ is the number density of electrons with mass $m_{\rm e}$, $B_{\rm up}$ is the magnetic field strength upstream of the current sheet, and $U_{\rm B} = B^2_{\rm up}/8\pi$ is the magnetic energy density\footnote{\cite{Scepi2021} find a typical $\sigma_{\rm up}=100$ in the upstream, which is due to the floor $\sigma_{\rm max}=100$ that they set. However, for realistic funnel densities that are not limited by floors in GRMHD simulations, the magnetization parameter in the upstream, $\sigma_{\rm up}$, can be much higher.}. The plasma particles are impulsively accelerated by non-ideal electric fields at the X-points \citep{sironi2014}. When they encounter plasmoids, they experience strong synchrotron losses. To parametrize the effect of the radiation backreaction, we define the particle Lorentz factor $\gamma_{\rm rad}^{\rm sync}$ for which the radiation drag force is comparable to the force due to the accelerating electric field $E \sim B_{\rm up} v_{\rm rec} / c$ (\citealt{Uzdensky2011a}):
\begin{equation}
2\sigma_{\rm T} U_{\rm B} (\gamma^{\rm sync}_{\rm rad})^2 = v_{\rm rec} e B_{\rm up} / c,
\label{eq:1}
\end{equation}
where $\sigma_{\rm T} = (8\pi/3)r_{\rm e}^2$ is the Thomson cross-section, $r_{\rm e} = e^2 / (m_{\rm e} c^2)$ is the classical electron radius, and $e$ is the electron's charge. We then find $(\gamma_{\rm rad}^{\rm sync})^2 = 3 v_{\rm rec} B_{\rm cl} / (2 c B_{\rm up})$, where $B_{\rm cl} = m_{\rm e}^2 c^4/e^3 \simeq 6 \times 10^{15}$ G is the classical magnetic field. The global magnetic field strength at $5 r_{\rm g}$ is estimated to be $1-30$G (\citealt{EHTVII2021}), resulting in $5-150$G at the horizon, assuming a $1/r$ dependence (\citealt{Ripperda2020}). We can compare this to the magnetic field strength in the jet, feeding the current sheet close to the event horizon of M87$^{*}$ by equating the observed limits on the total jet power $L_{\rm jet}\sim 10^{42} - 10^{44}$ erg/s (\citealt{Prieto2016}) to the Blandford-Znajek jet power $L_{\rm BZ} = \kappa \Omega^2_{\rm BH} \dot{\phi}^2_{\rm BH}/(4\pi c)$, where $\kappa \approx 0.044$ for a parabolic field geometry, $\Omega_{\rm BH} = ac / 2r_{\rm H} \simeq c/2r_{\rm g}$ is the black hole's angular frequency, $M\approx 6\cdot 10^9 M_{\odot}$ for M87$^{*}$, and $r_{\rm H} = r_{\rm g}(1+\sqrt{1-a^2})$ is the horizon radius (\citealt{BZ1977,Tchekhovskoy2011}), resulting in a range $B_{\rm horizon} \sim 20 - 200$ G at the horizon. By normalizing to a fiducial $B_{\rm up} = 100$ G in this range, we then obtain
\begin{equation}
\gamma_{\rm rad}^{\rm sync} \approx 3 \cdot 10^6 \left(\frac{B_{\rm up}}{100 {\rm G}}\right)^{-1/2}.
\label{eq:2}
\end{equation}
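The field-strength range quoted above can be reproduced with a short order-of-magnitude script; the spin value and the conversion from flux to horizon field (a quasi-uniform distribution over the horizon, $\dot{\phi}_{\rm BH} \approx 2\pi r_{\rm H}^2 B_{\rm horizon}$) are assumptions made here for illustration, not values taken from the simulation or the observations.
\begin{verbatim}
import numpy as np

# Order-of-magnitude sketch (Gaussian-cgs): horizon field implied by equating
# the observed jet power to the Blandford-Znajek power. The spin a = 0.9375
# and the flux-to-field conversion phi = 2*pi*r_H^2*B are assumptions.
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33
M, a, kappa = 6e9 * Msun, 0.9375, 0.044
r_g = G * M / c**2
r_H = r_g * (1.0 + np.sqrt(1.0 - a**2))
Omega_BH = c / (2.0 * r_g)                      # ~ a c / (2 r_H)

for L_jet in (1e42, 1e44):                      # observed range, erg/s
    phi_BH = np.sqrt(4.0 * np.pi * c * L_jet / (kappa * Omega_BH**2))
    B_hor = phi_BH / (2.0 * np.pi * r_H**2)
    print(f"L_jet = {L_jet:.0e} erg/s -> B_horizon ~ {B_hor:.0f} G")
# -> roughly 20 G and 200 G, bracketing the fiducial B_up = 100 G
\end{verbatim}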
The magnetization $\sigma_{\rm up}$ sets the available magnetic energy per particle and determines the typical particle Lorentz factor, $\gamma\sim \sigma_{\rm up}$, for the acceleration at X-points, if cooling were negligible (\citealt{sironi2014,Guo_2014,Werner_2015}). We can rewrite $\sigma_{\rm up} = \omega_{\rm B} / (2\Omega_{\rm BH} \lambda)$, where we plugged in the nominal electron gyrofrequency $\omega_{\rm B} = e B_{\rm up} / (m_{\rm e} c)$ and defined the plasma density with respect to the Goldreich-Julian density, $n = \lambda n_{\rm GJ} = \lambda \Omega_{\rm BH} B_{\rm up} / (2\pi c e)$, where $\lambda \lesssim 10^3$ is the multiplicity of the pair cascade in the charge-starved gap in the funnel region (\citealt{chen2019,Crinquand2020}), or of collisions of photons from the disk, if that process is more efficient (\citealt{Moscibrodzka2011}). The typical ratio between the electron gyrofrequency and the angular frequency of M87$^{*}$ is $\omega_{\rm B} / \Omega_{\rm BH} \sim 10^{14} ({M}/{6 \cdot 10^9 M_{\odot}}) ({B_{\rm up}}/{100 {\rm G}})$, such that $\sigma_{\rm up} \sim 10^{14} ({M}/{6 \cdot 10^9 M_{\odot}}) ({B_{\rm up}}/{100 {\rm G}}) / 2\lambda$. For these parameters, $\gamma^{\rm sync}_{\rm rad} \ll \sigma_{\rm up}$ such that leptons impulsively accelerated at X-points are quickly cooled in plasmoids \citep{Hakobyan2019}. Thus, the reconnection occurs in the radiative regime \citep{uzdensky2011}.
To understand the radiative efficiency of reconnection, we determine the magnetic {\it compactness} $\ell_{\rm B} = U_{\rm B} \sigma_{\rm T} w / (m_{\rm e} c^2)$ \citep{Beloborodov2017}. Using Eq.~\ref{eq:1} and the $\omega_{\rm B} / \Omega_{BH}$ relation, we can rewrite $\ell_{\rm B} = v_{\rm rec} w \omega_{\rm B} / (c^2 (\gamma^{\rm sync}_{\rm rad})^2)$ and obtain
\begin{equation}
\ell_{\rm B} \sim 1 \left(\frac{w}{1 r_{\rm g}}\right)\left(\frac{M}{6 \cdot 10^9 M_{\odot}}\right)\left(\frac{B_{\rm up}}{100 {\rm G}}\right)^2,
\label{eq:3}
\end{equation}
so $\ell_{\rm B} \sim 1$, suggesting potentially efficient pair production, but negligible annihilation \citep{Beloborodov2017}. In this regime the cooling time of accelerated particles, $c t_{\rm sync} / w \sim 1/(\ell_{\rm B} \gamma)$, is much shorter than the light-crossing time of the current sheet. Inverse Compton (IC) cooling of accelerated particles on the $\sim 10^{41}$ ${\rm erg/s}$ low-energy photons with energy density $U_{\rm rad}^{\rm soft} \sim 0.003$ ${\rm erg}$ ${\rm cm}^{-3}$ in the inner $10 r_{\rm g}$ results in $\gamma_{\rm rad}^{\rm IC} \sim \gamma_{\rm rad}^{\rm sync} \sqrt{U_{\rm B}/U_{\rm rad}^{\rm soft}} \sim 10^9$ \citep{Broderick2015,MWL2021}, which is well above $\gamma_{\rm rad}^{\rm sync}$. The jet's magnetic field reconnects with a rate of $0.1c$ in the collisionless radiative regime, after which all reconnected power is directly radiated such that the higher energy density of photons produced by accelerated particles, $U_{\rm rad}^{\rm rec} \sim 0.1 U_{\rm B}$ and hence $L_{\rm rad} \sim 0.1 L_{\rm jet}$ (\citealt{Beloborodov2017,Bransgrove2021}), can lead to very efficient IC cooling. The exact result depends on the spectral shape and reduction by Klein-Nishina effects.
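Pulling the fiducial numbers of this subsection together, a short illustrative script (added here; it uses only the quoted fiducial values) reproduces the estimates of $\gamma_{\rm rad}^{\rm sync}$, $\sigma_{\rm up}$ and the compactness; the value of $\ell_{\rm B}$ comes out at a few tenths, i.e. of order unity within the factor-of-several uncertainties in $B_{\rm up}$ and $w$.
\begin{verbatim}
import numpy as np

# Order-of-magnitude sketch (Gaussian-cgs) of the M87* estimates in the text.
e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10
G, Msun = 6.674e-8, 1.989e33
B_cl = m_e**2 * c**4 / e**3                      # ~6e15 G
B_up, v_rec, lam = 100.0, 0.1 * c, 1.0e3         # fiducial field, rate, multiplicity
r_g = G * 6e9 * Msun / c**2
Omega_BH = c / (2.0 * r_g)
omega_B = e * B_up / (m_e * c)
sigma_T = (8.0 * np.pi / 3.0) * (e**2 / (m_e * c**2))**2

gamma_rad = np.sqrt(1.5 * (v_rec / c) * B_cl / B_up)             # Eq. (2): ~3e6
sigma_up = omega_B / (2.0 * Omega_BH * lam)                      # ~5e10 for lambda = 1e3
ell_B = (B_up**2 / (8.0 * np.pi)) * sigma_T * r_g / (m_e * c**2) # Eq. (3), w = r_g
print(f"gamma_rad^sync ~ {gamma_rad:.1e}  <<  sigma_up ~ {sigma_up:.1e}")
print(f"ell_B ~ {ell_B:.2f}   (order unity)")
\end{verbatim}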
The peak of the synchrotron radiation spectrum is expected to be at the synchrotron burnoff limit $\mathcal{E}_{\rm ph} \sim (\gamma_{\rm rad}^{\rm sync})^2 \hbar \omega_{\rm B} \sim 200 {\rm MeV}$ (\citealt{Uzdensky2011a}), which is independent of the magnetic field strength. The highest energy photons will be produced by IC scattering. Conservatively, the characteristic photon energy that can be produced is ${\rm max}(\mathcal{E}_{\rm ph}) = m_{\rm e} c^2 \gamma_{\rm rad}^{\rm sync} \sim 0.511 {\rm MeV} \cdot \gamma_{\rm rad}^{\rm sync} \sim$ few {\rm TeV}. Additionally, particles can be accelerated beyond $\gamma > \gamma_{\rm rad}^{\rm sync}$ because synchrotron cooling is suppressed in X-points (\citealt{Uzdensky2011a,Cerutti2014}).
For photons with energy above the electron rest-mass energy $m_{\rm e}c^2=0.5 {\rm MeV}$, $e^{\pm}$ pairs are created if there are enough photon-photon collisions with seed photons with low energy $\mathcal{E_{\rm s}} \sim (m_{\rm e} c^2)^2 / \mathcal{E}_{\rm ph}$. High-energy photons of energy $\mathcal{E}_{\rm ph,TeV}$ produced in the magnetospheric region around the current sheet will interact most efficiently with seed photons of energy $\mathcal{E}_{\rm s} \sim (1 {\rm TeV} / \mathcal{E}_{\rm ph,TeV})$ ${\rm eV}$.
Given the uncertainties about the density of a $1 {\rm eV}$ photon field near the event horizon during the flaring state, the escape of TeV photons from the region is an open question \citep{Levinson2011,MWL2021}. Conservatively, if $\sim 1 \%$ of the reconnection-dissipated power ($U_{\rm rad} \sim 0.1 U_{\rm B}$, $L_{\rm rad} \sim 0.1 L_{\rm jet} \sim 10^{41} - 10^{43}$ erg/s) is emitted in very high-energy photons, a $\gamma$-ray luminosity of $10^{39} - 10^{41}$ erg/s can be emitted as a flare.
Our {extreme resolution} GRMHD simulation shows transient flaring periods where the mass accretion rate drops (and, thus, luminosity of seed photons) significantly, by a factor $\sim 5-10$, resulting in large low density regions, such that opacity constraints for the escape of $\gamma$-ray photons from the equatorial current sheet are less strict than during a quiescent state.
The decrease of the mass accretion rate and the local soft photon field can also create favorable conditions for the activation of pair discharges on the jet's magnetic field lines and the potential escape of TeV emission from spark gaps, if the opacity becomes prohibitive during the quiescent state (\citealt{Levinson2011,Crinquand2020}).
The flaring state is distinctively different from the quiescent state observed by \cite{EHTpaper1}, implying that observations during a mass accretion rate drop/flare may result in different $230 {\rm GHz}$ images (Chatterjee et al., in prep.).
The magnetic flux decay and mass-accretion drop last for a period of $\sim 100 r_{\rm g}/c$ $\sim$ 1 month for M87$^{*}$, which is longer than the typical observed $\sim 1-3$ day TeV flux rise and decay timescale (\citealt{Hess2012}). However, in a collisionless plasma, the magnetic flux decay period is typically $\sim 3-10$ times shorter due to the faster reconnection rate of $v_{\rm rec} \approx 0.1c$ (\citealt{Bransgrove2021}) compared to $v_{\rm rec} \approx 0.01c$ in GRMHD models\footnote{{Note that the higher reconnection rate in collisionless models is caused by kinetic plasma effects, e.g., gradients of the anisotropic pressure tensor of electrons and positrons in pair plasma (\citealt{Bessho2005}), and is unrelated to the increased reconnection rate due to large numerical diffusion in low resolution GRMHD models.}}, resulting in a flare timescale of $\sim$ few days.
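The conversions behind these durations (and those quoted for Sgr~A$^{*}$ below) can be checked with a few lines using only standard constants:
\begin{verbatim}
# Unit conversions behind the quoted flare durations (cgs constants only).
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33
for name, M in (("M87*", 6.0e9 * Msun), ("Sgr A*", 4.0e6 * Msun)):
    t_g = G * M / c**3                        # r_g/c in seconds
    print(f"{name}: r_g/c ~ {t_g:.0f} s, 100 r_g/c ~ {100 * t_g / 86400:.2f} d")
# M87*:   r_g/c ~ 3e4 s, so 100 r_g/c ~ 34 d (~1 month)
# Sgr A*: r_g/c ~ 20 s, so 100 r_g/c ~ 33 min and 2000 r_g/c ~ 11 h
\end{verbatim}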
We find that pair production in the current sheet can efficiently mass load the jet with electrons and positrons with energies $\gamma \sim 1-1000$, that can emit synchrotron photons with energies ranging from the radio to optical wavelengths (see Supplemental Material).
\subsection{Sgr A$^{*}$ flares powered by radiative reconnection}
Sgr A$^{*}$ shows daily near-infrared and X-ray flares from the inner $10 r_{\rm g}$, on average every 6 and 12 hours, lasting for 30-80 minutes, respectively (\citealt{baganoff2001,eckart2006,Gravity2018,Witzel2020,Murchikova2021}). The flare periods in our simulation last for $\sim 100 r_{\rm g}/c \sim 30$ minutes, and the subsequent quiescent period for $\sim 2000 r_{\rm g}/c \sim 10$ hours for Sgr A$^{*}$. The resulting hot spot orbits for $\sim 500 r_{\rm g}/c \sim 150$ minutes in the inner $20 r_{\rm g}$ until it diffuses due to mixing instabilities. The magnetic field strength in quiescence is well constrained in the range of $10-50$ G in the inner $10 r_{\rm g}$ for Sgr A$^{*}$ with black hole mass $4 \cdot 10^6 M_{\odot}$ (\citealt{Dodds_Eden2009}). Using Eq. \ref{eq:2}, this results in $\gamma_{\rm rad}^{\rm sync} \approx 9 \cdot 10^6 (B_{\rm up}/10 {\rm G})^{-1/2}$, limiting the energy of accelerated particles by synchrotron cooling for a typical magnetization $\sigma_{\rm up} \sim 10^{10} (M/4 \cdot 10^6 M_{\odot}) (B_{\rm up}/10 {\rm G}) / 2\lambda \gg \gamma_{\rm rad}^{\rm sync}$.
Using Eq. \ref{eq:3}, the compactness is $\ell_{\rm B} \sim 10^{-5} (w/1 r_{\rm g})(M/4 \cdot 10^6 M_{\odot})(B_{\rm up}/10 {\rm G})^2$.
Synchrotron photons emitted by the particles accelerated to the highest energies in the reconnection layer, up to $\gamma_{rad}^{\rm sync} \sim 10^7$, should extend in the hard X-ray range. The energy of particles accumulated in the orbiting hot spot will be constrained by the synchrotron cooling time which has to be larger than the lightcrossing time of the current sheet, $c t_{\rm sync} / w \sim 1/(\ell_{\rm B} \gamma) \geq 1$, or $\gamma \lesssim 1 / \ell_{\rm B} \sim \gamma_{\rm {cool}}=10^4$ for the hot spot at $\sim 10r_{\rm g}$. These particles are likely to emit in the (near-)infrared range, $(\gamma_{\rm cool})^2 \hbar \omega_{\rm B} \sim 1 {\rm eV}(B/10 {\rm G})$. Thus, reconnection near the event horizon can power a multi-wavelength flare solely by synchrotron emission from reconnection-accelerated particles. Mini-flares are a potentially viable route to produce only near-infrared emission without strong enough X-rays to be detected as flares, as they don't produce a long-lasting extended current sheet, which would be the source of highest energy particles. The characteristic power of the X-ray emission can be estimated from the total dissipated power in reconnection, $\sim 0.1 L_{\rm BZ}\sim 10^{35} (B_{\rm horizon}/10{\rm G})^2$ erg/s. Thus, reconnection in the magnetospheric current sheet provides enough energy to power the observed X-ray flares from Sgr A$^{*}$ with typical luminosities in a range $10^{34}-10^{35}$ erg/s \citep{Neilsen2015}.
\section{Conclusions}
By conducting extreme resolution 3D GRMHD simulations we have shown that during periods of magnetic flux decay at the horizon, MAD flows form transient and non-axisymmetric magnetospheres that possess special qualities revealed only at such high resolutions. Namely, these eruptions lead to a substantial, order-of-magnitude drop in the mass accretion rate and the formation of a thin equatorial current sheet that extends from the horizon out to $\sim 5-10 r_{g}$ into the disk and separates the two polar jets. This current sheet is filled with the electron-positron plasma from the jets and reconnects in the {plasmoid-mediated} regime. The formation of plasmoids is revealed here for the first time in 3D thanks to the unusually high resolutions achieved in this work, $N_r \times N_\theta \times N_\phi = 5376\times2304\times2304$. Reconnection-heated to relativistic temperatures, the plasma in the current sheet escapes the black hole's gravitational pull through the exhaust of the reconnection layer: this injects magnetic flux tubes filled with the low-density pair plasma into the accretion disk, and hot plasma along the jet-disk boundary. This reconnection-heated plasma can produce a multiwavelength flare.
Hot flux tubes orbit in the accretion disk and can remain coherent for one to a few orbital periods. The time scales of the flare are directly governed by the reconnection rate in the equatorial current sheet. We have shown that this rate \emph{decreases} with increasing numerical resolution until the critical resolution beyond which it reaches the \emph{universal {converged} value} that no longer changes when the resolution is increased any further.
Importantly, only at such high resolutions is the structure of the current sheet -- X-points and plasmoids -- {resolved for the first time with our extreme resolution 3D GRMHD simulations.}
The universal reconnection rate directly sets the magnetic flux decay rate at the horizon. Other studies have related flux decay at the horizon to flares (\citealt{Ball2018b,dexter2020sgr,Chashkina2021,Scepi2021}) or observed orbiting flux tubes in retrograde disks (those rotating in the opposite sense to their black hole; \citealt{Porth2020flares}). However, due to limited numerical resolution they did not capture {plasmoid-mediated} reconnection as the power source and did not identify a direct link between the magnetic flux decay at the event horizon and its origin in reconnection in the equatorial magnetospheric current sheet.
We note that the trigger behind such large flux eruption events is still not understood. Large flares occur when the accretion is governed by large, low azimuthal mode-number spiral RTI modes. It is as of yet unclear why the accretion state switches from a large spectrum of RTI modes in quiescence to a single azimuthal spiral RTI mode during the flare.
The reconnection powering the flare is fed by highly magnetized pair plasma that eventually ends up in the hot flux tube buoyantly rising in and mixing with the electron-ion plasma that makes up the accretion disk. Commonly used parametrized relations connecting the temperatures of ions and electrons based on local plasma-$\beta$ or $\sigma$ values in the accretion flow \citep[e.g.,][and references therein]{Moscibrodzka2016, Davelaar2019, EHTpaper5, Chatterjee2020b, dexter2020sgr, Yoon2020} or two-temperature GRMHD approaches (\citealt{Ressler2015,Chael2019}) therefore cannot describe the non-thermal emission from these events which involves reconnection in high-$\sigma$ collisionless pair plasma regime, the transport and cooling of non-thermal lepton distributions, as well as efficient pair production.
We note that while the reconnection rate in the equatorial current sheet is converged in GRMHD at the extremely high numerical resolutions used in this work, it converges to $v_{\rm rec}/v_{\rm A} \sim 0.01$, which is an order of magnitude lower than the converged value of $\sim 0.1$ in kinetic simulations \citep{Bransgrove2021}. This difference comes from GRMHD simulations being unaware of the collisionless plasma microphysics, which is important at the scales where reconnection happens, i.e., the electron skin depth. Incorporating non-ideal effects beyond scalar resistivity (e.g., \citealt{ripperda2019b}) into GRMHD simulations, such as electron inertia and anisotropic electron pressure tensor effects in Ohm's law, holds promise of matching the (collisional) GRMHD and collisionless reconnection rates \citep{NG2020}. Radiative kinetic simulations (e.g., \citealt{parfrey2019,Crinquand2020b,Crinquand2020}) are crucial for probing the non-thermal effects and the impact of the higher reconnection rate in collisionless plasma on the flare properties. In upcoming work we will investigate the radiative properties of the flares, and the consequences for the image variability as observed by the Event Horizon Telescope (Chatterjee et al., in prep.). The robust formation of a plasmoid-unstable current sheet close to the event horizon that can heat and accelerate plasma, and eject flux tubes as low density hot spots into an orbiting disk in our extreme resolution GRMHD simulation, suggests that bright, rapid, high-energy flares powered by magnetic reconnection are a widespread phenomenon that can potentially explain observations of TeV flares from M87$^{*}$ and flaring hot spots from Sgr A$^{*}$.
\section*{Acknowledgements}
We would like to thank Ashley Bransgrove, Alexander Chernoglazov, Luca Comisso, Doosoo Yoon, Hayk Hakobyan, Amir Levinson and Yuri Levin for useful discussions. B.R. and M.L. contributed equally to this work. This research was enabled by support provided by grant no. NSF PHY-1125915 along with a INCITE program award PHY129, using resources from the Oak Ridge Leadership Computing Facility, Summit, which is a US Department of Energy office of Science User Facility supported under contract DE-AC05- 00OR22725, as well as Calcul Quebec (http://www.calculquebec.ca) and Compute Canada (http://www.computecanada.ca). The computational resources and services used in this work were partially provided by facilities supported by the Scientific Computing Core at the Flatiron Institute, a division of the Simons Foundation. This research is part of the Frontera (\citealt{Frontera}) computing project at the Texas Advanced Computing Center (LRAC-AST20008). Frontera is made possible by National Science Foundation award OAC-1818253.
B.R. is supported by a Joint Princeton/Flatiron Postdoctoral Fellowship. M.L. was supported by John Harvard Distinguished Science Fellowship and ITC
Fellowship. K.C. is supported by a Black Hole Initiative Fellowship at Harvard University, which is funded by grants from the Gordon and Betty Moore Foundation, John Templeton Foundation and the Black Hole PIRE program (NSF grant OISE-1743747). The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the Moore or Templeton Foundations. G.M. is supported by a Netherlands Research School for Astronomy (NOVA), Virtual Institute of Accretion (VIA) postdoctoral fellowship. A.P. acknowledges support by the National Science Foundation under Grants No. AST-1910248 and PHY-2010145. Research at the Flatiron Institute is supported by the Simons Foundation. K.C. and S.M. are thankful for support by Dutch Research Council (NWO) VICI award, grant Nr. 639.043.513. A.T. acknowledges support by Northwestern University
and by the National Science Foundation grants AST-1815304, AST-1911080. Z.Y. is supported by a UK Research \& Innovation (UKRI) Stephen Hawking Fellowship.
In its most general form, the Goldstone theorem says that in a translationally-invariant system spontaneous breaking of a continuous symmetry $G$ produces gapless bosonic excitations, i.e. excitations whose energy vanishes as the spatial momentum ${\mathbf q}$ goes to zero. These gapless excitations are called Goldstone bosons. In the relativistic case, one can show that there is one Goldstone boson for every generator of the symmetry which does not annihilate the vacuum. It was noted by Nielsen and Chadha \cite{NC} that if one abandons Lorentz invariance, the situation is more complex because there exist two different types of Goldstone bosons which they referred to as Type I and Type II Goldstones. By definition, a Type I Goldstone boson has energy which scales as an odd power of momentum as ${\mathbf q}{\rightarrow} 0$, while a Type II Goldstone boson has energy which scales as an even power of momentum. Suppose the number of Type I and Type II Goldstones is $n_I$ and $n_{II}$ respectively, and the number of broken symmetry generators is $N$. Then there is an inequality \cite{NC}:
$$
n_I+2n_{II}\geq N.
$$
Later Leutwyler \cite{L} and Nambu \cite{Nambu} pointed out that the presence of Type II Goldstone bosons is associated with a nonzero charge density for a charge which is a commutator of two broken charges. For a review of these and other related works see \cite{Brauner}. More recently Watanabe and Brauner \cite{WB} conjectured that the numbers of Type I and Type II Goldstone bosons satisfy a relation
\begin{equation}\label{WBeq}
n_I+n_{II}=N-\frac12 {\rm rank}\, B
\end{equation}
where $B$ is an $N\times N$ matrix which encodes the commutators of broken charges:
$$
B_{ij}=\lim_{V{\rightarrow}\infty}\frac{-i}{V}[Q_i,Q_j].
$$
Here $V$ is the spatial volume and $Q_i$, $i=1,\ldots,N$, are the broken generators.
Watanabe and Brauner proved that the left-hand side of eq.~(\ref{WBeq}) is greater than or equal to the right-hand side. It is also known that when $B=0$ eq.~(\ref{WBeq}) holds true \cite{Schaferetal}.
Very recently Watanabe and Murayama \cite{WM} proved the Watanabe-Brauner conjecture using the effective action approach. More precisely, they define Type A and Type B Goldstones, which are closely related to Type I and Type II Goldstones, and show that their numbers are given by
\begin{equation}\label{WMeq}
n_A=N-{\rm rank}\, B,\quad n_B=\frac12 {\rm rank}\, B.
\end{equation}
Eq. (\ref{WBeq}) is a consequence of these more precise counting formulas.
In this note we refine the analysis of \cite{WM} and show that apart from true Goldstone bosons the effective action considered in \cite{WM} describes gapped excitations which we call almost-Goldstone bosons. We determine their number and show that if the target-space metric is nondegenerate, it is equal to $\frac12{\rm rank}\, B$, so that the total number of Goldstone and almost-Goldstone bosons is $N$. We explain a mechanism by which two Type A Goldstone bosons may pair up into a Type B Goldstone boson and an almost-Goldstone boson. The number of such pairs is precisely $\frac12{\rm rank}\, B$. This gives a simple and intuitive explanation of the counting rules eq.~(\ref{WMeq}).
In Section 2 we analyze the effective action for the order parameter and compute the number of Goldstone and almost-Goldstone bosons. We follow closely \cite{WM}, but remove some unnecessary assumptions about the form of the action. In Section 3 we give some examples and discuss our results. In particular, we propose that deviations from linearity in the dispersion law of Goldstone bosons at small momenta serve as a signature of a small breaking of time-reversal symmetry.
I would like to thank Ira Rothstein for discussions and Hiroshi Ooguri for drawing my attention to Ref.~\cite{WM}. This work was supported in part by the DOE grant DE-FG02-92ER40701.
\section{Goldstone and almost-Goldstone bosons}
Suppose the symmetry group $G$ (which we assume for now to be internal symmetry, i.e. it does not act on time or spatial coordinates) is spontaneously broken down to a subgroup $H$, by which we mean that there is an order parameter taking values in $G/H$. Our basic assumption is that the low-energy theory can be described by an action for a field $\phi$ taking values in $G/H$. One further assumption is that the action contains terms only of first or second order in time derivatives. Thus it has the form
$$
S=\int dt d^nx \left( \frac12 G_{ij}(\phi) \partial_t\phi^i \partial_t\phi^j+A_i(\phi,\nabla)\partial_t\phi^i-W(\phi,\nabla)\right).
$$
By $A_i(\phi,\nabla)$, etc., we mean some function of $\phi$ and its spatial derivatives. We do not assume rotational invariance. For simplicity we assumed that the term quadratic in time derivatives does not depend on spatial derivatives, although this is not really necessary. The above action is slightly more general than that considered in \cite{WM}, where $A_i$ was assumed to depend on the field $\phi$, but not on its spatial derivatives.
If the target space metric $G_{ij}$ is nondegenerate, we can rewrite the action in the first-order form as follows:
$$
S=\int dt d^nx \left[\left(p_i+A_i(\phi,\nabla)\right)\partial_t\phi^i-\frac12 G^{ij}(\phi)p_i p_j -W(\phi,\nabla)\right],
$$
where $G^{ij}$ is the inverse of $G_{ij}$.
We can now redefine momenta
$$
p_i\mapsto p_i-A_i(\phi,\nabla),
$$
and bring the action to the standard form
$$
S=\int dt d^n x \left[p_i\partial_t\phi^i-H(p,\phi,\nabla)\right]
$$
with the Hamiltonian density
$$
H(p,\phi,\nabla\phi)=\frac12 G^{ij}(p_i-A_i)(p_j-A_j)+W(\phi,\nabla).
$$
Let us examine the physics described by this action. We pick any constant vacuum configuration $\phi^i=\phi^i_0$ and expand the Hamiltonian to quadratic order in the fluctuations. Since the target space $G/H$ is a homogeneous $G$-space, and by assumption the action is $G$-invariant, the physics is independent of the choice of $\phi^i_0$, and without loss of generality we may set it to zero. The quadratic part of the Hamiltonian density has the form
$$
H_0=\frac12 g^{ij}_0 (p_i-B_{ik}(-i\nabla)\phi^k)(p_j-B_{jl}(-i\nabla)\phi^l)+\frac12 \phi^i\Omega^2_{ij}(-i\nabla)\phi^j.
$$
Here $g_0^{ij}=G^{ij}(0)$, $B_{ik}(-i\nabla)$ is a matrix whose entries are polynomials in the spatial derivatives such that $B_{ik}(-i\nabla)\phi^k$ is a linearization of $A_i(\phi,\nabla)$, and $\Omega^2_{ij}(-i\nabla)$ is similarly a matrix of spatial differential operators such that $\frac12 \phi^i\Omega^2_{ij}(-i\nabla)\phi^j$ is the leading (quadratic) part in the expansion of $W(\phi,\nabla)$. Since it is assumed that $\phi^i=0$ is a solution of the equations of motion, there are no linear terms in the expansion of $W$. We also absorbed possible constant terms in the expansion of $A_i(\phi,\nabla)$ into a shift of $p_i$.
Fourier-expanding both $p_i$ and $\phi^i$, we see that the Hamiltonian for Fourier modes with momentum ${\mathbf q}$ is almost identical to the Hamiltonian of an anisotropic $N$-dimensional harmonic oscillator in a magnetic field. The only difference is that the canonical coordinates and momenta are complex and subject to the reality constraint $p_i({\mathbf q})^*=p_i(-{\mathbf q})$ and $\phi^i({\mathbf q})^*=\phi^i(-{\mathbf q})$. We also have the relations $B_{ik}^*({\mathbf q})=B_{ki}(-{\mathbf q})$ and $\Omega^2_{ij}({\mathbf q})=\Omega^2_{ij}(-{\mathbf q})$. In addition, $\Omega^2_{ij}({\mathbf q})$ is a positive-definite Hermitian matrix. One can diagonalize the Hamiltonian in the usual way by introducing creation and annihilation operators. This is facilitated by working in a coordinate system where $g^{ij}_0=\delta^{ij}$ and by shifting
$$
p_i({\mathbf q})\mapsto p_i({\mathbf q})+\frac12 (B_{ij}({\mathbf q})+B_{ji}(-{\mathbf q}))\phi^j({\mathbf q}).
$$
This redefinition does not affect the commutation relations, and its only effect is to replace $B_{ij}({\mathbf q})$ with its ``antisymmetrized'' part $\frac12(B_{ij}({\mathbf q})-B_{ji}(-{\mathbf q}))$. In other words, we may assume that the matrix function $B_{ij}$ satisfies $B_{ij}({\mathbf q})=-B_{ji}(-{\mathbf q})$. Together with the reality condition, this means that $B_{ij}({\mathbf q})$ is an anti-Hermitian matrix.
In terms of the usual creation and annihilation operators the normal-ordered Hamiltonian takes the form
$$
\int d^n{\mathbf q}\ a_i({\mathbf q})^\dagger \left( {\sqrt {\Omega^2({\mathbf q})+B({\mathbf q})^\dagger B({\mathbf q})}}+i B({\mathbf q})\right)_{ij}a_j({\mathbf q}).
$$
Here it is understood that we take the positive square root of the positive Hermitian matrix $\Omega^2+B^\dagger B$.
This Hamiltonian describes $N$ species of bosonic particles. The energies of one-particle excitations with momentum ${\mathbf q}$ are the eigenvalues of the Hermitian matrix
\begin{equation}\label{M}
M({\mathbf q})=iB({\mathbf q})+\sqrt {\Omega^2({\mathbf q})+B({\mathbf q})^\dagger B({\mathbf q})}
\end{equation}
We can now determine the number of gapless excitations arising from the fluctuations of the order parameter, i.e. the number of Goldstone bosons. Consider the limit ${\mathbf q}{\rightarrow} 0$. In this limit the stiffness matrix $\Omega^2({\mathbf q})$ necessarily goes to zero, since otherwise $\phi^i=const$ would not be a solution of the classical equations of motion. The energies of the ${\mathbf q}=0$ excitations are therefore the eigenvalues of the matrix
$$
M(0)=iB(0)+\sqrt {B(0)^\dagger B(0)}
$$
The matrix $B(0)$ is a real skew-symmetric matrix.
We can use an orthogonal transformation to bring it to the standard block-diagonal form
$$
B(0)=\begin{pmatrix} 0 & b_1 & & & & & \\
-b_1 & 0 & & & & \\
& & \hdotsfor{2} & & & \\
& &\hdotsfor{2} & & & \\
& & & & 0 & b_{[\frac{N}{2}]} & \\
& & & & -b_{[\frac{N}{2}]} & 0 & \\
& & & & & & [0]\end{pmatrix}
$$
Here the brackets around zero in the last row indicate that this diagonal element is present only for odd $N$. Therefore the eigenvalues of the matrix $M(0)$ (i.e. the energies of one-particle excitations with zero momentum) are $2|b_1|,\ldots,2|b_{[N/2]}|,0,\ldots,0$, where the number of zeros is $N-[N/2]=[(N+1)/2]$. Thus the number of Goldstone bosons is $N-\frac12{\rm rank}\, B(0)$ \cite{WM}. It can range from $[(N+1)/2]$ to $N$. The remaining $\frac12{\rm rank}\, B(0)$ excitations are gapped. We will refer to the gapped excitations as almost-Goldstone bosons, since they also arise from small fluctuations of the order parameter. However, it should be kept in mind that if the energy gap for almost-Goldstone bosons is comparable to the masses of other excitations which we integrated out to arrive at our effective action, then the effective action is not valid at these energy scales, and the almost-Goldstone bosons will mix with other excitations. Thus the notion of an almost-Goldstone is well-defined only if nonzero eigenvalues of $B(0)$ are much smaller than the UV energy cutoff.
As for true Goldstone bosons, they can be further classified as being of Type A or Type B depending on whether for ${\mathbf q}=0$ their internal-space polarizations are in the kernel of $B(0)$ or not \cite{WM}. The number of Type A Goldstone bosons is $\dim \ker B(0)=N-{\rm rank}\, B(0)$. The number of Type B Goldstone bosons is $\frac12 {\rm rank}\, B(0)$, because each of the Type B Goldstone bosons has an almost-Goldstone partner. The energy gap for the $\ell^{\rm th}$ almost-Goldstone boson is $2|b_\ell|$, where $b_\ell$ is a nonzero eigenvalue of $B(0)$ and $\ell$ runs over $\frac12 {\rm rank}\, B(0)$ values. The total number of independent excitations adds up to $N$, of course.
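As a concrete numerical illustration of this counting (a toy example added here, not part of the general argument above), one can diagonalize $M({\mathbf q})$ for $N=3$ with a single $2\times 2$ block in $B(0)$ and an isotropic stiffness $\Omega^2({\mathbf q})={\mathbf q}^2\,\mathbf{1}$:
\begin{verbatim}
import numpy as np

# Toy example: N = 3, B(0) has one 2x2 block of strength b, Omega^2(q) = q^2 * 1.
# Expected small-q spectrum of M(q) = iB + sqrt(Omega^2 + B^dag B):
#   ~ q              (Type A Goldstone)
#   ~ q^2/(2b)       (Type B Goldstone)
#   ~ 2b + q^2/(2b)  (gapped almost-Goldstone partner)
b = 0.5
B = np.array([[0.0,   b, 0.0],
              [ -b, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

def spectrum(q):
    H = q**2 * np.eye(3) + B.T @ B           # positive Hermitian matrix
    w, V = np.linalg.eigh(H)
    sqrtH = V @ np.diag(np.sqrt(w)) @ V.T
    return np.sort(np.linalg.eigvalsh(1j * B + sqrtH))

for q in (0.0, 0.01, 0.02):
    print(q, spectrum(q))
\end{verbatim}
For ${\mathbf q}=0$ this returns the eigenvalues $\{0,0,2b\}$, i.e. two Goldstone bosons and one gapped mode, in agreement with $N-\frac12{\rm rank}\, B(0)=2$.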
Typically, rotational invariance or spatial parity invariance dictates that the stiffness matrix $\Omega^2({\mathbf q})$ is of order ${\mathbf q}^2$, while the eigenvalues $\pm b_\ell({\mathbf q})$ of the matrix $i B({\mathbf q})$ for small ${\mathbf q}$ behave as $b_\ell({\mathbf q})=b_\ell(0)+O({\mathbf q}^2)$ (some exceptions will be noted below). Barring accidental cancellations, this means that Type B Goldstone bosons have quadratic dispersion law, while Type A Goldstone bosons have linear dispersion law. Thus this classification scheme is consistent with that of \cite{NC}. The effective action approach has the advantage that the counting rules for Type A and Type B Goldstones hold even when the stiffness $\Omega^2({\mathbf q})$ is softer than usual due to accidental cancellations or fine-tuning.
So far we have been assuming that the metric $G_{ij}(\phi)$ is nondegenerate. The general case is not very different. Consider first the opposite extreme, $G_{ij}=0$. In this case we are dealing with an action which is of first order in time derivatives, and for this action to define a sensible theory $B_{ij}({\mathbf q})$ has to be nondegenerate for all ${\mathbf q}$, including ${\mathbf q}=0$. This means, of course, that $N$ has to be even, that $B_{ij}({\mathbf q})$ plays the role of a symplectic structure on the space of fluctuations with momentum ${\mathbf q}$, and that half of the fluctuations $\phi^i({\mathbf q})$ should be regarded as canonically conjugate to the other half \cite{WM}. This immediately implies that there are $N/2$ independent excitations. For each ${\mathbf q}$ the Hamiltonian can be thought of as describing a zero-mass charged particle in a harmonic potential and magnetic field (in $N$ dimensions). The zero-mass limit means that only the lowest Landau level survives.
One can think of this case as arising from the case of nondegenerate $G_{ij}$ in the limit $G_{ij}{\rightarrow} 0$. Equivalently, if one works in the coordinate system where $g_0^{ij}=\delta^{ij}$, one can rescale $B\mapsto \lambda B$, $\Omega^2\mapsto\lambda\Omega^2$ and take the limit $\lambda{\rightarrow}\infty$. In this limit half of the eigenvalues of the matrix $M({\mathbf q})$ (the ones corresponding to the almost-Goldstone bosons) go to infinity, while the other half (corresponding to Type B Goldstone bosons) remain finite. Thus we get $N/2$ Goldstones of Type B and no other excitations. This agrees with the analysis of \cite{WM}.
The most general case is now clear. If $G_{ij}$ is degenerate (i.e. positive semi-definite rather than positive-definite), the matrix $B_{ij}({\mathbf q})$ must be nondegenerate when restricted to the zero subspace $\ker G$ of $G_{ij}$, for the action to describe a sensible theory. This means in particular that $\ker G$ is even-dimensional. Fluctuations in the zero subspace of $G_{ij}$ are pairwise canonically conjugate, therefore the total number of independent one-particle excitations is $N-\frac12 \dim\ker G$. Some of these are gapless (true Goldstone bosons), while the rest are gapped (almost-Goldstone bosons). The count of Goldstone bosons is independent of the form of $G$ and depends only on $B(0)$. Namely, we have $N-\frac12 {\rm rank}\, B(0)$ Goldstone bosons, out of which $\frac12 {\rm rank}\, B(0)$ are Type B and $N-{\rm rank}\, B(0)$ are Type A. The number of almost-Goldstone bosons is
$$
\frac12 ({\rm rank}\, B(0)-\dim\ker G).
$$
\section{Discussion and examples}
The derivation of the Goldstone boson count just presented clarifies the relationship between Type A and Type B Goldstone bosons. Suppose we start with an action which does not contain terms with only a single time derivative. Such an action describes $N$ Type A Goldstone bosons. If we perturb the theory by adding a term with a single time derivative, some Goldstone bosons (namely, the ones which lie in nonzero eigenspaces of $B(0)$) are paired up, and each pair gives rise to a single Type B Goldstone boson and a single almost-Goldstone boson. This is why the number of true Goldstone bosons is now decreased to $N-\frac12{\rm rank}\, B(0)$. This bears some resemblance to the Higgs effect, where Goldstone bosons become longitudinal polarizations of massive gauge bosons. However, there is a crucial distinction (apart from the fact that no gauge interactions are involved in our case): the Higgs effect is a classical phenomenon, while the emergence of the almost-Goldstone bosons is a quantum effect. This is quite clear from the above derivation, which relies on the existence of Landau levels for a particle in a magnetic field.
This viewpoint might suggest that in the nonrelativistic case Type B Goldstones are generic, while Type A Goldstones require fine-tuning. This is not so, however, because terms of first order in time derivatives are odd under time reversal, while the rest of the action is even. We assumed here that the order parameter does not transform under time reversal. If this naive time-reversal transformation is a symmetry of the microscopic theory, we must have $B=0$, and Type B Goldstone bosons are forbidden. If the naive time-reversal is not a symmetry, the system might still possess time-reversal symmetry under which the order parameter transforms nontrivially. But in this case there is no symmetry reason for $B(0)$ to vanish, and Type B Goldstone bosons are generic. Such is the case of the Heisenberg ferromagnet, where the microscopic theory is time-reversal invariant, but the order parameter (magnetization) is odd under time-reversal. Hence we expect that magnons in the ferromagnetic phase (Goldstone bosons arising from $SO(3)$ breaking down to $SO(2)$) are of Type B. On the other hand, in an antiferromagnet the order parameter is even under time-reversal, so magnons are Type A Goldstones.
If the naive time-reversal symmetry is only slightly broken, then the splitting between a Type B Goldstone boson and its partner almost-Goldstone boson is small, and moreover for moderately large momenta the dispersion law for both is essentially linear, i.e. they are approximately of Type A. The deviations from linearity will be observed only for very small momenta, where the dispersion law for the Goldstone boson is $\epsilon({\mathbf q})=K_{\alpha\beta} {\mathbf q}^\alpha{\mathbf q}^\beta+\ldots$, while for the almost-Goldstone boson it is $\epsilon({\mathbf q})=2b(0)+K_{\alpha\beta} {\mathbf q}^\alpha{\mathbf q}^\beta+\ldots$. One can look for such deviations from linearity at small momenta as a signature of a small breaking of time-reversal symmetry.
There may be other symmetry considerations forbidding Type B Goldstone bosons. Consider a phase where $G\times G$ is spontaneously broken down to the diagonal subgroup $G$. Such is the case for the B-phase of superfluid helium-3, for example, where $G=SO(3)$. If $G$ is compact semi-simple, then no Type B Goldstone bosons are allowed. Indeed, the order parameter takes values in $G$, and all the quantities appearing in the effective action must be invariant with respect to both left and right $G$-action. In particular, the 2-form $B_{ij}(0)$ must be invariant with respect to both left and right $G$-action. It is easy to show that the only such 2-form is $0$.
On the other hand, consider a phase where a compact semi-simple $G$ is broken down to nothing. The order parameter again takes values in $G$, but now the effective action must be invariant only with respect to the left $G$-action. There are plenty of left-invariant 2-forms on $G$ (just take the wedge product of any two left-invariant 1-forms), so unless time-reversal considerations forbid terms with only a single time derivative, Type B Goldstones will be generic. (If $G$ is odd-dimensional, there will be at least one Type A Goldstone boson, since an $N\times N$ skew-symmetric matrix cannot have rank $N$ if $N$ is odd.)
To conclude this section, let us comment on the somewhat peculiar case of one spatial dimension. There can be no Goldstone bosons in one spatial dimension, nevertheless one may consider actions of the same form as in higher dimensions, and many of the above considerations still apply. Consider for example an action for a single scalar field $\phi$ of the form
$$
S=\int dt dx (\partial_x\phi\partial_t\phi-\partial_x^m\phi \partial_x^m\phi).
$$
For $m=1$ this action describes a chiral boson. The matrix $B({\mathbf q})=i{\mathbf q}$ is of size $1\times 1$, and so is $\Omega^2({\mathbf q})={\mathbf q}^{2m}$. The metric $G$ vanishes identically, so formally the action describes $\frac12 {\rm rank}\, B({\mathbf q})=1/2$ bosonic degrees of freedom. What this really means is that the action describes bosonic particles which are right-moving (have ${\mathbf q}>0$) and have the dispersion law $\epsilon({\mathbf q})={\mathbf q}^{2m-1}$. In higher dimensions we would reject such an action because the matrix $B(0)$ vanishes identically, so the zero mode of $\phi$ does not have a conjugate momentum. However, in one spatial dimension we typically do not regard the zero mode of $\phi$ as observable: all allowed observables must be invariant under $\phi\mapsto \phi+{\rm const}$. Thus the action gives rise to a sensible theory ``of Type A'', even though the particle it describes cannot be thought of as a Goldstone boson.
We have started our discussion by stating that $G$ is an internal symmetry. However, this was not really necessary: everything we said applies to any continuous symmetry which does not involve the time coordinate. Since we also assumed translational invariance, in practice this means that $G$ may contain both internal symmetries and rotations.
Within a physical theory there are often effects or phenomena studied in idealized form in
order to gain insight into the peculiarities of the theory. In Quantum Mechanics one such
effect is the appearance of topological quantum phases in the wave functions of particles
moving freely in multiply connected space-times, the prototype of this effect being the
Aharonov-Bohm (AB) effect \cite{aha}, the appearance of a phase factor in the wave function
of an electron which moves around a magnetic flux line. A similar effect is the
Aharonov-Casher (AC) effect \cite{aha2} which is obtained from the AB effect by replacing
the flux line and the electron by a charged line and a neutral particle with magnetic
moment, respectively. There have also been studied analogous effects in gravitation
\cite{dow}\cite{ana}\cite{law}\cite{ford}\cite{rez} and in non-Abelian gauge theory
\cite{wu}. Moreover, there have been considered quantum phases associated with higher
multipole moments of charges \cite{chen}.
It seems that topological quantum phases appear generically in theories that allow a
geometric, gauge theoretic formulation. Up to now, however, a unified description is
lacking.
In this letter, a combined formulation of topological quantum phases by means of a general
model is proposed. This model provides a classification and --- to a certain extent ---
also a prediction of topological quantum phases. The basic idea of this letter is the view
of the ``flux line'' that generates the multiple connectedness of space-time as a
topological defect similar to a crystal line defect. We use defects in higher dimensional
space-times in order to describe internal gauge interactions in the framework of higher
dimensional unification. The model is formulated as a gauge theory generalizing gauge
theory models of gravitation in that curvatures of higher order are introduced.
The letter is organized as follows: In section 2, we begin --- as motivation and
illustration of the model --- with a comparison of AB and AC effects in electromagnetism
and gravitation. In section 3, we formulate the model in a general framework and give
some examples. Section 4 contains a summary and some comments.
\section{AB and AC Effect in Electromagnetism and Gra\-vitation}
In the electromagnetic AB effect, the wave function of a charge $q$ experiences a phase
change when the charge moves around a magnetic flux line. The phase factor is given by
\begin{equation}\label{1}
\Lambda^{AB}_{em} = \exp\left( \frac{i}{\hbar}\oint qA_\mu dx^\mu\right)
= \exp\left( \frac{i}{\hbar} q\phi \right),
\end{equation}
where $A_\mu$ ($\mu =0,\ldots ,3$) is the 4-vector potential of the flux line with flux
$\phi$ and the integration is along an arbitrary curve surrounding the flux line. The
field strength $F_{\mu\nu}=2\partial_{[\mu}A_{\nu]}$ is singular on the flux line.
The gravitational AB effect arises when the wave function of a massive particle encircles
a spinning cosmic string. Such a string is conveniently described by a singularity of
torsion \cite{let}. The phase factor is
\begin{equation}\label{2}
\Lambda^{AB}_{gr} = \exp\left( \frac{i}{\hbar}\oint p_a e^a_\mu dx^\mu\right)
= \exp\left( \frac{i}{\hbar} 8\pi GmS \right).
\end{equation}
Here, $p_a$ ($a=0,\ldots ,3$) is the momentum of the particle with mass $m$, $e^a_\mu$
represents the vierbein of the string geometry with spin $S$ per unit length, and $G$ is
the gravitational constant. $e^a_\mu$ plays the role of a potential for the torsion
$T^a_{\mu\nu} = 2\partial_{[\mu} e^a_{\nu]}$. Since there is no curvature present, we can
use a teleparallel formulation of gravitation and choose a gauge in which the Lorentz
connection vanishes identically.
Comparing the two phases (\ref{1}) and (\ref{2}) we see that they have a similar form.
Indeed, we can combine these phases into a single phase if we use a unification of
gravitation and electromagnetism through a 5-dimensional teleparallel gravitation akin to
the Kaluza-Klein model \cite{lee}. However, we do not employ a 5-dimensional metric. On
the manifold $M_4\times S^1$, where $M_4$ is space-time, we introduce a f\"unfbein $E^A_M$
($A,M=0,\ldots ,3,5$) with components
\[
E^a_\mu = e^a_\mu,\quad E^a_5=0,\quad E^5_\mu=A_\mu,\quad E^5_5=1 .
\]
In this case, the 5-th component $T^5_{\mu\nu}$ of the torsion tensor is the field
strength $F_{\mu\nu}$. Since the charge $q$ represents the 5-th component of the
5-momentum $p_A$, the unified AB phase is $\hbar^{-1}\oint\left( p_AE^A_\mu dx^\mu\right)$.
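Written out with the f\"unfbein components given above, this unified phase splits into the two AB phases (\ref{1}) and (\ref{2}) in one line:
\[
\frac{1}{\hbar}\oint p_A E^A_\mu dx^\mu
=\frac{1}{\hbar}\oint p_a e^a_\mu dx^\mu+\frac{1}{\hbar}\oint q A_\mu dx^\mu ,
\]
since $E^a_\mu=e^a_\mu$, $E^5_\mu=A_\mu$ and $p_5=q$.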
We now turn to the AC effect. Instead of the usual electromagnetic AC effect we consider
its dual effect, that is, the scattering of an electric dipole moment from a straight line
of magnetic monopoles, the dipole being polarized along the line \cite{wil}. The phase
factor reads
\begin{equation}\label{3}
\Lambda^{AC}_{em} = \exp\left(\frac{i}{\hbar}\oint\left( {\bf B}\times {\bf d}\right)\cdot
d{\bf r}\right)
= \exp\left(\frac{i}{\hbar} d_z\lambda\right),
\end{equation}
where ${\bf d}$ is the dipole moment, $d_z$ its $z$-component, and ${\bf B}$ the radial
magnetic field of the monopole line which lies on the $z$-axis with magnetic charge
$\lambda$ per unit length.
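As a consistency check of the second equality in (\ref{3}) (a sketch in units where $B_\rho=\lambda/2\pi\rho$ for a line of magnetic charge $\lambda$ per unit length, a convention choice), take a circle of radius $\rho$ around the $z$-axis, traversed in the sense for which the integrand is positive:
\[
\oint\left({\bf B}\times{\bf d}\right)\cdot d{\bf r}
=\frac{\lambda\, d_z}{2\pi\rho}\oint\left(\hat{\boldsymbol\rho}\times\hat{\bf z}\right)\cdot d{\bf r}
=\frac{\lambda\, d_z}{2\pi\rho}\,2\pi\rho=d_z\lambda .
\]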
The counterpart to this effect in gravitation consists in the scattering of a spinning
particle from a massive cosmic string with the spin polarized along the string. The phase
factor is
\begin{equation}\label{4}
\Lambda^{AC}_{gr} = \exp\left( \frac{i}{2\hbar}\oint J_{ab}\omega^{ab}_\mu dx^\mu
\right)
= \exp\left( \frac{i}{\hbar} 8\pi Gs_z M\right) .
\end{equation}
Here, $J_{ab}$ is the spin of the particle, $s_z$ its $z$-component, and $\omega^{ab}_\mu$
the Lorentz connection of the string with mass $M$ per unit length on the $z$-axis.
While these two AC effects are physically analogous, their mathematical formulations are
fundamentally different: The gravitational phase factor (\ref{4}) represents the holonomy
of a locally flat Lorentz connection. The electromagnetic phase factor (\ref{3}), however,
cannot be viewed as the holonomy of a locally flat connection. This discrepancy is
resolved in the following way:
The gravitational AC effect was formulated by means of a linear connection, the cosmic
string representing a curvature singularity. We consider instead a teleparallel
formulation of gravitation in which the curvature is set to zero but a nonvanishing
torsion is allowed. The interference of neutral spin-$\frac{1}{2}$ particles in
gravitational fields with torsion was investigated in \cite{ana2}. In the teleparallel
case, the phase operator is
\begin{equation}\label{5}
\Lambda_{gr} = {\cal P}\exp\left( -\frac{i}{2\hbar}\oint \hat{S}^{ab}e_a^\mu e_b^\nu
T_{\rho\mu\nu} dx^{\rho}\right),
\end{equation}
where $\hat{S}^{ab}$ is the spin operator, $T^\rho{}_{\mu\nu}=e^\rho_aT^a_{\mu\nu}$ the
torsion tensor, and $e^\mu_a$ the inverse of $e^a_\mu$. Roman indices are raised and
lowered with the Minkowski metric $\eta_{ab} =\mbox{diag}(-1,1,1,1)$ or its inverse, greek
indices with the space-time metric defined by $g_{\mu\nu}=e^a_\mu e^b_\nu\eta_{ab}$. A
solution for a massive straight cosmic string in teleparallel gravitation is given by the
vierbein
\begin{equation}\label{6}
e^0 = dt, \quad e^1 = r^{-4GM} dx, \quad e^2 = r^{-4GM} dy, \quad e^3 = dz,
\end{equation}
and the Lorentz connection $\omega^{ab}_\mu=0$ ($r^2=x^2+y^2$). Equation (\ref{6}) is
equivalent to the solution of a massive particle in (2+1)-dimensional teleparallel
gravitation \cite{kaw}. Inserting the solution (\ref{6}) into the phase operator (\ref{5}),
we recover the phase factor (\ref{4}) of the gravitational AC effect, where the only
nonvanishing eigenvalue of $\hat{S}^{ab}$ is $S^{12}=s_z$. Returning to the electromagnetic AC
effect we can write the phase factor (\ref{3}) covariantly as
\begin{equation}\label{7}
\Lambda^{AC}_{em} = \exp\left( \frac{i}{\hbar}\oint d^\mu F_{\mu\nu} dx^\nu\right)
= \exp\left( \frac{i}{\hbar}\oint d^a e_a^\mu T^5_{\mu\nu}dx^\nu\right),
\end{equation}
where we have finally written the field strength $F_{\mu\nu}$ as the 5-th component of the
torsion tensor in the 5-dimensional unification introduced above. Moreover, we have
referred the dipole moment to the vierbein $e^a_\mu$. The formal similarity of expression
(\ref{7}) with expression (\ref{5}) --- together with the interpretation of the field
strength as torsion in a 5-dimensional unification --- suggests that Maxwell's formulation
of electromagnetism represents a teleparallel formulation provided the view of a higher
dimensional unification is adopted. This is the reason why the electromagnetic AC effect is
formulated differently from the gravitational one. Since the gravitational AC effect
admits a formulation involving only curvature, the same should be possible for the
electromagnetic effect. This is indeed the case, as the following considerations show:
Both a massive and a spinning straight cosmic string represent space-time defects
\cite{gal}. These defects can be thought of as being created through global cutting and
pasting processes (Volterra process) in which space-time points are identified by means of
symmetry transformations. The geometries of defects can be described by locally flat
connections associated with the groups of symmetry transformations.
A spinning cosmic string along the spatial $z$-axis is a space-time defect in the
($z,t$)-plane. It can be generated by cutting space-time along a hypersurface bounded by
the ($z,t$)-plane and identifying the borders after a mutual translation in time direction.
In the terminology of the theory of crystal defects, this defect is a screw dislocation
with Burgers vector in time direction.
From a 5-dimensional point of view, also a magnetic flux line can be considered as a
topological defect. In this case, 5-dimensional space-time is cut along a hypersurface
bounded by the flux line and the cut surfaces are identified after a constant mutual
$U(1)$-transformation of the internal $S^1$-space has been performed. If this
transformation is viewed as a translation, the flux line corresponds to a screw
dislocation.
A massive straight cosmic string has its counterpart in crystal physics in a wedge
disclination. Its geometry is generated by identifying points related by a rotation around
the string. Equivalently, it can be thought of as being created through removal of a wedge
from space. The geometry is described in a natural way by a linear connection which has a
curvature singularity on the string. In the teleparallel formulation of the massive
cosmic string, the defect-generating rotation is regarded as local translations
spread out over space. The resulting geometry can be illustrated by a continuous
distribution of dislocations parallel to the string, these being, however, edge dislocations
which are created in the Volterra process through translations perpendicular to the
defect line.
In the comparison of the gravitational AC effect with the electromagnetic one, we have
seen above that both effects have a similar description in a teleparallel formulation.
This suggests that also a line of magnetic charges represents a topological defect, being
associated --- like a wedge disclination --- with a linear transformation. The magnetic
field of a line of monopoles corresponds to a continuous distribution of radially outgoing
flux lines which we have interpreted as screw dislocations. In the same way as the
rotation that generates a wedge disclination can be considered as local translations, the
local $U(1)$-transformations that generate the magnetic field of the monopole line can be
regarded as a single linear transformation. In fact, this transformation is an internal
$U(1)$-transformation linear in the $z$-coordinate if the monopole line is directed along
the $z$-axis. A linear connection which describes a line of monopoles as a curvature
singularity is associated to this linear transformation. The electromagnetic AC phase can
be looked upon as the holonomy of this connection.
It should be remarked that a line of electric charges can also be interpreted as a
disclination if the 4-vector potential of the dual field strength is used, as will be
described below.
To summarize this section, we have shown that the AB and AC effects in electromagnetism
and gravitation can be regarded as being associated with topological space-time defects.
In the following section we will generalize this result.
\section{Classification Model}
Motivated by the considerations in the previous section we will formulate in this section
a mathe\-matical model for the description of topological quantum phases based on the
principle that the ``flux lines'' in the effects represent topological defects.
To this end we consider defects on a ($4+D$)-dimensional manifold $M_4\times G$ where
$G$ is a $D$-dimensional Lie group which defines the internal interaction. A defect on
this manifold will be thought of as being generated in a generalized Volterra process in
the following way: The manifold ${\cal M}\times G$ with the Minkowski space ${\cal M}$ is
cut along a hypersurface. One of the cut faces is displaced by a transformation of
${\cal M}\times G$ and the hypersurfaces obtained are identified, with space added or
removed where necessary. The resulting defect represents the boundary of the hypersurface and
is of dimension $4+D-2$. We limit ourselves to defect topologies that are 2-dimensional in
space-time.
The model employs a particular transformation group of ${\cal M}\times G$ which we denote
by $P^\infty G$. This group consists of Poincar\'e transformations of ${\cal M}$ as well as
internal $G$-transformations that are functions on ${\cal M}$. The vector fields
generating $P^\infty G$\ are given by
\begin{equation}\label{8}
P_a = \partial_a ,\qquad J_{ab}=x_a\partial_b - x_b\partial_a,\qquad
S^{(k)a\cdots d}_\alpha =\underbrace{x^a\cdots x^d}_{{k-{\rm times}}}
v_\alpha ,\quad k=0,1,2,\ldots \quad ,
\end{equation}
where $x^a$ are Cartesian coordinates on ${\cal M}$ and $v_\alpha\; (\alpha=1,\ldots ,D)$
are the generators of $G$ (a basis of left invariant vector fields on $G$). The vector
fields (\ref{8}) satisfy the following commutation relations:
\begin{eqnarray}\label{9}
[J_{ab},J_{cd}]=2\eta_{a[c} J_{d]b}-2\eta_{b[c} J_{d]a},\qquad
[J_{ab},P_c]=2\eta_{c[b} P_{a]},\qquad
[P_a,P_b] = 0, \\[0.4cm]
\label{10}
[J_{ab},S^{(k)cd\cdots f}_\alpha]=2k\,\delta^{(c}_{[a}\, S^{(k)\;\;d\cdots
f)}_{\;\;\;\;b]\alpha} , \qquad
[P_a,S^{(k)bc\cdots f}_\alpha]=k\,\delta^{(b}_a\,S^{(k-1)c\cdots f)}_\alpha ,\\[0.4cm]
\label{11}
[S^{(k)a\cdots c}_\alpha,S^{(l)d\cdots f}_\beta]=c^\gamma_{\alpha\beta}\,
S^{(k+l)a\cdots cd\cdots f}_\gamma ,\quad
k,l=0,1,2,\ldots\quad .
\end{eqnarray}
Here, $c^\gamma_{\alpha\beta}$ are the structure constants of $G$ and round brackets
denote symmetrization. The first three commutators form the Poincar\'e algebra. In the
special case that $G$ is Abelian, the commutators (\ref{11}) vanish and we can define the
finite dimensional group $P^nG$ which is generated by the generators (\ref{8}) where
$S^{(k)a\cdots d}_\alpha=0$ for $k>n$. If we omit the generators $P_a$ and $S^{(0)}_\alpha$
from (\ref{8}), the remaining vector fields generate the subgroup $P^\infty_0G$ of $P^\infty G$, or
the subgroup $P^n_0G$ of $P^nG$ if $G$ is Abelian. Since the generators $S^{(0)}_\alpha$
are constant on ${\cal M}$, we treat them on the same footing as the translations $P_a$.
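As an elementary illustration of the mixed relations in (\ref{10}) (sketched here for Abelian $G$ with a single internal coordinate $\theta$, so that $v=\partial_\theta$), acting on a test function $f(x,\theta)$ one finds
\[
[P_a,S^{(1)b}]f=\partial_a\!\left(x^b\,\partial_\theta f\right)-x^b\,\partial_\theta\partial_a f
=\delta^b_a\,\partial_\theta f=\delta^b_a\,S^{(0)}f ,
\]
which is the $k=1$ case of the second commutator in (\ref{10}); the higher $S^{(k)}$ work analogously.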
Our aim is to describe defects within the framework of differential geometry. If on a
manifold there is given a globally flat connection $\Gamma_0$ with respect to a
transformation group $H$ as structure group, the Volterra process gives rise to a locally
flat connection $\Gamma$ as long as the transformation in the Volterra process is in $H$.
The defect geometry is characterized by nontrivial holonomies of $\Gamma$. In the case at
hand, we therefore seek locally flat connections with the group $P^\infty G$\ as structure group.
We will follow the procedure of gauge theories of gravitation in that we consider Cartan
connections \cite{hehl}. These arise from $P^\infty G$\ connections through a symmetry breaking
$P^\infty G$\ $\to P^\infty_0G$ and have the advantage that the translational part can be related
to a basis of cotangent space. Assume that on a principal fibre bundle $P$ over $M_4\times
G$ with structure group $P^\infty_0G$ a Cartan connection with connection form $\omega$
taking values in the Lie algebra of $P^\infty G$\ is given. By means of a section $s$ of
$P$ we can define a gauge connection 1-form $A=s^*\omega$ on $M_4\times G$ as the pull-back
of $\omega$. We can decompose $A$ in the following way:
\begin{equation}\label{12}
A=e^aP_a +\sigma^{(0)\alpha}S^{(0)}_\alpha +\frac{1}{2}\omega^{ab}J_{ab} +
\sigma^{(1)\alpha}_aS^{(1)a}_\alpha +
\sigma^{(2)\alpha}_{ab}S^{(2)ab}_\alpha +\cdots\quad ,
\end{equation}
where the 1-forms $e^a$ and $\sigma^{(0)\alpha}$ are a basis of the cotangent space at
each point of $M_4\times G$. The field strength $F=dA + A\wedge A$ is written as
\begin{equation}\label{13}
F=T^aP_a+K^{(0)\alpha}S^{(0)}_\alpha +\frac{1}{2}R^{ab}J_{ab}+K^{(1)\alpha}_a
S^{(1)a}_\alpha +K^{(2)\alpha}_{ab}S^{(2)ab}_\alpha +\cdots .
\end{equation}
Here, $T^a$ and $ K^{(0)\alpha}$ are torsion tensors, $R^{ab}$ and $ K^{(1)\alpha}_a$
curvature tensors, and $K^{(k)\alpha}_{a\cdots d}$ for $k>1$ will be referred to as
curvature tensors of higher order.
Locally flat defect connections are characterized by $F=0$ on the manifold $M_4\times
G\setminus\Sigma$ where $\Sigma$ is the subspace of the defect. These field equations read
in components:
\begin{equation}\label{14}
T^a=de^a +\omega^a{}_b\wedge e^b =0,\qquad R^{ab} =d\omega^{ab} +
\omega^a{}_c\wedge\omega^{cb} =0 ,
\end{equation}
\vspace{-0.7cm}
\begin{eqnarray}\nonumber
K^{(k)\alpha}_{ab\cdots de\cdots g} & = & d\sigma^{(k)\alpha}_{a\cdots g}
+\frac{1}{2} c^\alpha_{\beta\gamma}\sum_{l=0}^k\sigma^{(l)\beta}_{(a\cdots d}\wedge
\sigma^{(k-l)\gamma}_{e\cdots g)}\\[0.4cm]
\label{15}
& & {}- k\;\omega^h{}_{(a}\wedge\sigma^{(k)\alpha}_{b\cdots g)h} -
(k+1)\;\sigma^{(k+1)\alpha}_{a\cdots gh}\wedge e^h =0,\quad k=0,1,2,\ldots\quad .
\end{eqnarray}
Given a solution to these equations, we can compute the holonomy
\[
\Lambda (*,C)={\cal P}\exp\left( -\oint_C A\right)
\]
along a closed curve $C$ in $M_4\times G \setminus\Sigma$ with base point $*$. $\Lambda$
is invariant under deformations of $C$ as long as the base point is held fixed.
We are now in a position to formulate the model for the classification of topological
quantum phases: Given a 1-parameter subgroup of the group $P^\infty G$\ generated by the vector
field $v$ on $M_4\times G$, we associate to it a quantum mechanical operator $\hat{v}=i
\hbar v$ which is interpreted as the charge operator of the quantum mechanical system that
encircles the ``flux line'' in the interference experiment. In the Volterra process, the
1-parameter subgroup generates a defect, the gauge field of which can be determined with
the help of the field equations (\ref{14},\ref{15}) where $F$ has a $v$-valued singularity
concentrated on $\Sigma$. The holonomy of the gauge connection is interpreted as a phase
operator acting on the wave function of the quantum mechanical system. We require that
the wave function is an eigenfunction of the operator $\hat{v}$. The phase operator then
becomes a phase factor which gives the topological quantum phase.
With each of the generators (\ref{8}) there is associated a charge. The hierarchy of the
generators $S_\alpha^{(k)a\ldots d}$ corresponds to the hierarchy of multipole moments of
the charge $S_\alpha^{(0)}$. Given a topological quantum phase, we can find the
1-parameter group that characterizes the quantum mechanical system that interferes as well
as the ``flux line'', which represents a topological defect whose strength is given by the
group parameter. On the other hand, choosing a 1-parameter subgroup of $P^\infty G$\ with a given $G$,
a new quantum phase can be determined. In this case, it is, however, not ensured that
this phase is realized in nature since the model is purely topological and does not take
into account the real interactions. For example, whether the quantum mechanical systems
experience classical forces cannot be predicted from the model.
We will give a few examples of the model:
(1) Let $G$ be the trivial group $I$. In this case, $P^\infty I$ is the Poincar\'e group
and the field equations reduce to the equations (\ref{14}). The defects that can be
generated by means of Poincar\'e transformations in the Volterra process are space-time
dislocations and disclinations as explained in section 2. The associated charges are mass
and spin, leading to the gravitational AB and AC effect, respectively.
(2) We consider $G=U(1)\times U(1)$ corresponding to electromagnetism with magnetic and
electric flux. We limit ourselves to the group $P^2(U(1)\times U(1))$ and require further
that the Lorentz connection is flat and torsion-free, choosing $\omega^a{}_b =0$ and $e^a=dx^a$.
The field equations (\ref{14},\ref{15}) then reduce to
\begin{eqnarray}\nonumber
K^{(0)\alpha} =d\sigma^{(0)\alpha} -\sigma^{(1)\alpha}_a\wedge dx^a =0,\\[0.5cm]
\nonumber
K^{(1)\alpha}_a =d\sigma^{(1)\alpha}_a -2\sigma^{(2)\alpha}_{ab}\wedge dx^b =0,
\\[0.5cm]\label{20}
K^{(2)\alpha}_{ab} =d\sigma^{(2)\alpha}_{ab} =0,\qquad \alpha =1,2.
\end{eqnarray}
(2a) In the case of the AB effect the quantum mechanical system is an electrically or
magnetically charged particle. Thus, the associated 1-parameter subgroup of $P^2(U(1)
\times U(1))$ consists of internal $U(1)$ transformations constant on $\cal M$. Only the
torsion $K^{(0)\alpha}$ is nonvanishing and singular on the flux line, where $\alpha=1$
corresponds to magnetic flux and $\alpha=2$ to electric flux. These flux lines are screw
dislocations on the space ${\cal M}\times S^1\times S^1$. Choosing $\sigma^{(1)\alpha}_a
=\sigma^{(2)\alpha}_{ab}=0$, the holonomy of the connection $\sigma^{(0)\alpha}$, which
represents the magnetic ($\alpha=1$) or electric ($\alpha=2$) 4-vector potential, gives
the AB phase factor.
(2b) For the AC effect the interfering system is a dipole moment. The associated
1-parameter subgroups of $P^2(U(1)\times U(1))$ are $U(1)$ transformations which are
linear on $\cal M$. The corresponding defects are disclinations characterized by
singularities of the curvature $K^{(1)\alpha}_a$ with $\alpha = 1$ for electric dipoles
and $\alpha = 2$ for magnetic ones. The field equations (\ref{20}) are solved by
\begin{equation}\label{21}
\sigma^{(2)\alpha}_{ab} =0,\qquad \sigma^{(1)\alpha}_a =\frac{k^\alpha_a}{2\pi}
d\varphi ,\qquad\sigma^{(0)\alpha} =d\theta^\alpha -\frac{k^\alpha_a}{2\pi}
x^a d\varphi ,
\end{equation}
where the defect lies along the $z$-axis and $\varphi$ is the azimuthal angle. $k^\alpha_a$
is the defect (or group) parameter and $\theta^\alpha$ are (angular) coordinates on
$S^1\times S^1$. In the case that only $k^\alpha_3$ is nonvanishing, the holonomy of the
connection (\ref{21}) gives the AC phase factor. Alternatively, we can use a teleparallel
formulation setting $K^{(1)\alpha}_a = 0$. Then, with $\sigma^{(1)\alpha}_a = 0$,
$\sigma^{(0)\alpha}$ in equation (\ref{21}) gives a nonvanishing torsion $K^{(0)\alpha}$
which is the field strength ($\alpha =1$) or the dual field strength ($\alpha =2$). If
only $k^\alpha_3$ is nonvanishing, $K^{(0)1}$ is the field strength of a homogeneous
magnetic line charge and $K^{(0)2}$ that of an electric one.
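One can verify directly that (\ref{21}) solves (\ref{20}) away from the defect: using $dd\varphi=0$ for $r\neq 0$,
\[
d\sigma^{(0)\alpha}=-\frac{k^\alpha_a}{2\pi}\,dx^a\wedge d\varphi
=\sigma^{(1)\alpha}_a\wedge dx^a ,\qquad
d\sigma^{(1)\alpha}_a=\frac{k^\alpha_a}{2\pi}\,dd\varphi =0 ,
\]
so all three equations in (\ref{20}) are satisfied for $r\neq 0$. On the $z$-axis, $dd\varphi=2\pi\,\delta^{2}(x,y)\,dx\wedge dy$, so the curvature $K^{(1)\alpha}_a$ acquires a $\delta$-function singularity concentrated on the defect line, in accordance with the characterization given above.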
(2c) In the case that the curvature of second order $K^{(2)\alpha}_{ab}$ is singular on
the defect, we obtain a topological quantum phase for quadrupole moments. The ``flux
line'' corresponds to a defect which results in the Volterra process from a $U(1)$
transformation quadratic on $\cal M$. Again, a teleparallel formulation is possible where
we have a nonvanishing $K^{(0)\alpha}$.
\section{Conclusion}
In this letter, we have proposed a model that allows a classification of topological
quantum phases in that an element of a transformation group of a higher dimensional
space-time is associated to each quantum phase. Our starting point was the principle that
topological quantum phases arise in the scattering from space-time defects. The phase
factors are then given by the holonomies of the defect geometries. The model moreover
provides an explanation of why some quantum phases in electromagnetism are usually not
described by holonomies of locally flat connections: From the viewpoint of a higher
dimensional unification, Maxwell's theory possesses the nature of a teleparallel theory.
We close with a few remarks:
(1) The gauge fields of higher order we have introduced do not seem to be suitable for a
formulation of a dynamics of gauge fields in the general case. They are only used here to
show that the "flux lines" in the topological quantum phases represent space-time defects.
Kaluza-Klein theory gives a formulation of electromagnetism in terms of linear connections,
the corresponding symmetries are, however, broken.
(2) A crucial property of the model is the combination of external and internal
transformations in the gauge group. As a result, the field equations (\ref{14},\ref{15})
of different order are coupled. This has the consequence that a defect described by a
curvature of a given order can be represented as a pair (dipole) of defects of one order
higher.
(3) At least in the case that $G$ is Abelian, there is a close relation between principal
$P^\infty G$-bundles and bundles of frames of higher order over $M_4\times G$ \cite{kob}.
(4) There exists another attempt to formulate the electromagnetic AC effect by means of a
holonomy of a connection \cite{ana3,oh}, using the fact that a neutral particle with
magnetic moment in an electromagnetic field is equivalent to an isospin particle in an
$SU(2)$ gauge field \cite{gol,froe}. However, the $SU(2)$ connection is not flat and
cannot be interpreted as describing a topological defect. Moreover, the $SU(2)$ symmetry
originates from the spin of the particle.
\section*{Acknowledgement}
I would like to thank Th.~Filk and H.~M.~Sauer for useful discussions. This work was
supported by the Deutsche Forschungsgemeinschaft (DFG) through Grant No.\ Ho 841/9-2.
|
1,314,259,995,577 | arxiv | \section{Introduction} \label{sec:intro}
Elongated filaments of gas and dust are ubiquitous in molecular clouds \citep[e.g.][]{molinari2010}. These clouds are stellar nurseries and the filaments they host may play an important role in star formation, with the majority of star-forming cores lying along filaments ``like beads on a string" \citep{andre2014}. \\
Filaments represent velocity coherent over-densities of gas and dust, and have aspect ratios greater than at least three \citep{panopoulou2014}. They can be identified from a 2D astronomical image, for instance a column density map, using skeleton-based filament identification algorithms such as \textsc{filfinder} \citep{koch2015}. For a given input image and set of input parameters these return a filament skeleton. The skeleton is a one pixel wide representation of the filamentary structure in the original image, tracing the main path of the filament and its branches. Clumps and cores are also over-dense compared to their surroundings, and are distinguished from filaments by smaller aspect ratios of $\sim$2 \citep{tachihara2000}. Clumps are inhomogeneously dense velocity coherent regions from which a system of stars may form. A core is a dense velocity coherent region that may form a single star or binary star. Cores are usually found grouped into clumps.\\
In the study of these filaments, one useful measurement is that of their orientation. Filament orientation is used in the construction of radial profiles used to derive filament width. Filament orientation can also be compared with that of the magnetic field. Magnetic fields are believed to have a dynamically important role in filament formation and stability. In several theories of cloud structure formation matter is channelled along the field lines, allowing filaments to form through gravitational contraction \citep{nakamura2008}. In this scenario dense filaments would be aligned perpendicular to the field and less dense filaments would be aligned parallel \citep{li2008}. \citet{goldsmith2008} and \citet{planck2016} find observational evidence for this scenario. \\
Filament orientation can be measured from the filament skeleton \citep{koch2015}. The intensity changes at the edge of the skeleton, and this intensity gradient has an associated direction (see~\autoref{fig:grad_eg}). Here we propose a new method to derive the filament orientation, exploiting this fact. This is achieved through the use of the Sobel filter, described in~\autoref{sec:fil_orient}, and some additional post-processing steps discussed in~\autoref{sec:post}. This method, which we call the `Sobel-gradient method', returns a quantitative and local map of filament orientation for any filament skeleton, including those with complex interconnected structures\footnote{The associated \textsc{python} code and documentation will be available on \textsc{github} in the near future. In the meantime please contact the author for an early release.}. The map reveals how the orientation changes as the skeleton curves on a local scale. We explore the uncertainties associated with the method in~\autoref{sec:test_suite}. Applications for this method are suggested in~\autoref{sec:applications}. \\
\begin{figure}[t]
\centering
\caption{\textbf{Gradient vector for an ideal edge.} A grey line bounds the image, but is not part of this example. The gradient vector is perpendicular to the ideal edge (transition from black to white) and points towards the higher intensity (`lighter') values (where black=0, and white=255). \label{fig:grad_eg}}
\includegraphics[width=0.25\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{Fig_1.pdf}
\end{figure}
\subsection{Motivation}
There are two main existing quantitative approaches to measuring filament orientation, the first being a map based analysis (e.g. \citealt{schisano2014} Hessian matrix method), and the second being a skeleton based analysis (e.g. \citealt{koch2015} \textsc{filfinder} algorithm). Prior to their introduction, the predominantly utilized method for measuring filament and field relative orientation was a qualitative, global, visual comparison \citep[e.g.][]{goldsmith2008, busquet2013, palmeirim2013}, and this approach is still used in more recent works \citep[e.g.][]{kusune2016}.\\
The \citet{schisano2014} method uses the Hessian matrix to identify filaments and measure their orientation from a 2D astronomical map such as a Herschel\footnote{Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.} \citep{pilbratt2010} dust column density map. However \citet{schisano2014} state that ``for very complex features where filaments are organized in web-like structures, the cross-spine profile fitting often fails to converge". In our related project studying the filamentary structure of the South- (SR) and Centre-ridges (CR) of the Vela C Molecular Cloud (C.-E. Green et al. 2017, in preparation), we tried this method to derive the orientation of the filaments shown in~\autoref{fig:sobel} panel (a). Indeed, the method failed to converge for this complex interconnecting data-set, motivating our search for an alternate method. \\
The \citet{koch2015} filament identification algorithm \textsc{filfinder} has an inbuilt skeleton-based filament orientation calculator that uses a `line-based' approach, the Rolling-Hough Transform (RHT). In this type of approach test lines with different angles are fit to groups of pixels along the filament skeleton, finding the best fit line and thus the associated angle of the skeleton segments. \citet{koch2015} define filament orientation to be the weighted directional mean of the distribution of angles for the skeleton returned by the RHT. \textsc{filfinder} thus returns to the user a single orientation value over long filament segments (e.g. $\sim$40 pixels, which corresponds to 1.2\,pc in our example Vela C SR data in~\autoref{fig:sobel} panel (a)), whereas our more complex filaments curve on a smaller scale of $\sim$5\,pixels (0.15\,pc). This definition of filament orientation is therefore not compatible with our goal of a quantitative, local, ``position-by-position" filament orientation.\\
This motivated our search for an alternate, fully automated filament orientation measurement method that:
\begin{enumerate}
\item returns a quantitative, local, ``position-by-position" measurement of filament orientation,
\item can be applied to complex interconnecting filaments (such as the SR shown in~\autoref{fig:sobel} panel (a)), as well as simpler, more linear filaments.
\end{enumerate}
As previous map based approaches such as \citet{schisano2014}'s Hessian matrix method have not worked for these more complicated `looped' (in the 2D image) filaments we focussed our search on a skeleton based approach that would provide a measurement on a smaller scale than the RHT method built into \textsc{filfinder}. This led us to develop the new Sobel-gradient method we propose here, exploiting the image intensity gradient to arrive at a map of filament orientation. \\
\section{Filament orientation from the image intensity gradient} \label{sec:fil_orient}
Filament skeletons are generally output by filament identification algorithms in Flexible Image Transport System (FITS\footnote{\href{http://fits.gsfc.nasa.gov/fits\textunderscore primer.html}{http://fits.gsfc.nasa.gov/fits\textunderscore primer.html}}) file format where the pixels `on' the skeleton have a value of one, and those `off' the skeleton have a value of zero. These can be trivially converted to a greyscale image matrix, where the `on' skeleton pixels are white, with a value of 255 and the `off' skeleton pixels are black, with a value of zero. We use the \textsc{python scipy ndimage} implementation of the Sobel filter where it is necessary to use this convention. White could also be represented as a value of one, black as a value of zero and grey shades as decimal values in between, if a different implementation was used. As the skeleton is a binary image, the image intensity gradient only exists at the edges of the filament skeleton where the intensity changes. This intensity gradient has a magnitude and a direction. An example of the image intensity gradient vector for an idealised case is shown in~\autoref{fig:grad_eg}. With some minor adjustments the skeleton orientation can be derived from the intensity gradient direction. \\
\subsection{Intensity gradient direction}
\label{sec:grad_direc}
To calculate the direction of the intensity gradient, we need the first $x$ and $y$ derivatives ($G_{x}$ and $G_{y}$ respectively) of the skeleton image matrix, $I$. The direction, $\Theta$, of the gradient is calculated as:
\begin{equation}
\label{eqn:grad_direc}
\Theta=\tan^{-1}(G_{y}/G_{x})
\end{equation}
The Sobel filter is commonly used in computer vision to estimate these derivatives \citep{gao2010}. It is already built into \textsc{matlab} and the \textsc{python scipy ndimage} library so this method can be quickly and easily implemented. The Sobel filter itself is computationally inexpensive. Its speed is a major advantage because orientation measurements are generally repeated for the multiple different skeletons produced by different combinations of input parameters to the filament identification algorithm. In some ways this approach is similar to the Histogram of Relative Orientations (HRO) method of \citet{soler2013}, which also uses Gaussian derivatives (of which the Sobel filter is one of the simplest types) to measure the orientation of molecular cloud structure. Our approach differs in that we aim to find the orientation only of strictly defined and identified filaments, by measuring the orientation of the one pixel wide filament skeleton. This is in contrast to the HRO method, which makes no structure definitions, and involves finding the orientation of all structures of all scales within a column density map.\\
\subsubsection{The Sobel filter}
\label{sec:sobel}
The Sobel filter is a discrete differential operator consisting of two 3$\times$3 matrices of coefficients. When convolved with the image matrix, $I$, two new image matrices are created, representing estimates of $G_{x}$ and $G_{y}$ as follows (where $\ast$ represents the convolution operation) \citep{gao2010}:
\begin{equation}
G_{x} =
\left[\begin{array}{ccc} -1 & 0 & +1\\ -2 & 0 & +2\\ -1 & 0 & +1 \\ \end{array}\right]
\ast I
\label{eq:gx}
\end{equation}
\begin{equation}
G_{y} =
\left[\begin{array}{ccc} +1 & +2 & +1\\ 0 & 0 & 0\\ -1 & -2 & -1 \\ \end{array}\right]
\ast I
\label{eq:gy}
\end{equation}
Angular measurements have a reference point and a direction of increase. For the image gradient returned by the Sobel filter, these are the horizontal and the anticlockwise direction. Therefore in that convention the gradient angle needs to be rotated by 90$^{\circ}$ to give the angle of the edge. However in the astronomical convention the reference point is North (often vertically upwards in astronomical images), with an anticlockwise direction of increase. We are operating in the domain of [90, -90] so we therefore only need to perform a simple sign reversal to arrive at an estimate of the orientation of the skeleton edge\footnote{North is vertically upwards for the Vela C data presented in this work. If this is not the case for the users dataset this step would require the relevant rotation and sign adjustment to account for that.} when working in the astronomical convention. \\
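For concreteness, a minimal sketch of this calculation in \textsc{python}, using the \textsc{scipy ndimage} Sobel implementation mentioned above, could look as follows. This is illustrative only, not the authors' code: the sign of the final angle depends on how the image rows map onto North in the FITS file, as noted above.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def gradient_direction(skel):
    # skel: 2D array, 255 on the skeleton and 0 elsewhere (greyscale convention above)
    img = skel.astype(float)
    gx = ndimage.sobel(img, axis=1)           # first derivative along x (columns)
    gy = ndimage.sobel(img, axis=0)           # first derivative along y (rows)
    theta = np.degrees(np.arctan2(gy, gx))    # gradient direction Theta = arctan(Gy/Gx)
    theta[theta > 90.0] -= 180.0              # fold into (-90, 90], since orientation
    theta[theta <= -90.0] += 180.0            # is defined modulo 180 degrees
    return -theta                             # sign reversal for the astronomical convention
\end{verbatim}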
\section{Deriving the skeleton orientation}
\label{sec:post}
Throughout this work we will use, for the purpose of illustration, filament skeletons identified with \textsc{filfinder} from \citet{fissel2016} Herschel dust column density images of the SR and CR of Vela C. For the SR these are shown in~\autoref{fig:sobel} panels (a) and (b) respectively. In the SR and CR data one pixel corresponds to 0.03\,pc. The skeletons were selected as belonging to the group of optimum skeletons, most similar to, and therefore the best representation of, the original column density image, using the mean structural similarity index as a goodness-of-fit measure as described in \citet{green2017}. Together the selected skeletons from the SR and CR comprise a representative data set, containing interconnected `loops' and curvature on the small scale along with some more linear segments. They were selected as they contained the largest number of `difficult' features for the algorithm to tackle. The SR skeleton selected was produced by \textsc{filfinder} input parameters of: skeleton threshold (skeleton length cutoff) of 10 pixels (0.3\,pc, corresponding to an aspect ratio of 3 \citep{panopoulou2014}, given an assumed width of 0.1\,pc \citep{arzoumanian2011}), branch threshold (branch length cutoff) of 3 pixels (0.09\,pc), global threshold (noise threshold) of 69\%, flattening threshold (threshold for arctan flattening which removes impact of compact sources like clumps in masking step) of 60\%. The CR skeleton was produced by a skeleton threshold of 10 pixels (0.3\,pc), a branch threshold of 5 pixels (0.15\,pc), a global threshold of 74\%, and a flattening threshold of 96\%. \\
Before deriving the filament skeleton orientation we first automatically remove junction points\footnote{Junction points are locations where filaments meet, i.e. locations where an on-skeleton pixel has more than two on-skeleton pixel neighbours. We define neighbours as the eight pixel positions surrounding a central pixel enclosed within a 3$\times$3 window.} from the skeleton. Junction points belong to all of the intersecting filaments involved and therefore have an undefined orientation. Removing these breaks up the skeleton into many components as shown in~\autoref{fig:sobel} panel (c), which in computer vision are called connected components. The image gradient is defined at the pixels immediately surrounding the skeleton, therefore the gradients may overlap and overwrite each other at the new endpoints created by deleting the junctions since they are so close together. To avoid this issue we automatically locate and label the connected components\footnote{Connected components labelling algorithms exist in many computing languages. They can be labelled automatically using e.g. \textsc{python's} \textsc{ConnectedComponentsWithStats} function from the \textsc{OpenCV} library.} and repeat the Sobel-gradient method described in the following on each component separately, collecting the final orientation maps of each component into a `master map', the final filament skeleton orientation map.\\
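Junction removal and component labelling can be sketched in a few lines (again illustrative; the \textsc{OpenCV} routine mentioned above serves the same purpose as the \textsc{scipy} labelling used here):
\begin{verbatim}
import numpy as np
from scipy import ndimage

def split_at_junctions(skel):
    # skel: boolean 2D array, True on the skeleton
    skel = skel.astype(bool)
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0                                             # the 8 neighbours
    nneigh = ndimage.convolve(skel.astype(int), kernel, mode='constant')
    junctions = skel & (nneigh > 2)                              # >2 neighbours = junction
    cleaned = skel & ~junctions                                  # delete junction pixels
    labels, ncomp = ndimage.label(cleaned, structure=np.ones((3, 3)))  # 8-connectivity
    return cleaned, labels, ncomp
\end{verbatim}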
To derive the skeleton orientation of each component we calculate the $x$ and $y$ image derivatives using the Sobel filter (Equations \ref{eq:gx} and \ref{eq:gy} repectively), and then calculate the skeleton image gradient using~\autoref{eqn:grad_direc}. The sign of the skeleton image gradient is then reversed for consistency with astronomical conventions. This process is illustrated in~\autoref{fig:sobel} panels (d), (e), and (f). \\
The sign reversed gradient direction map in~\autoref{fig:sobel} panel (f) gives the orientation of the edge of the skeleton. To move from this to a map of the orientation of the skeleton's path we perform some simple post processing steps, which are demonstrated in~\autoref{fig:post}. These correct minor, partially cosmetic issues that arise as a direct consequence of the nature of the Sobel filter image gradient approach. In these steps we 1) correct the branch ends, 2) infill the centre pixels, 3) smooth the map, and 4) select the orientation values at the positions along the original skeleton to save into the master map. These steps are illustrated in~\autoref{fig:post}. \\
\begin{figure*}[t]
\centering
\caption{\textbf{Deriving intensity gradient direction for a filament skeleton with the Sobel filter.} Panel (a) shows the image input to \textsc{filfinder}, a Herschel dust column density map of the Vela C South-ridge of \citet{fissel2016}. Panel (b) is the skeleton output by \textsc{filfinder} for the South-ridge, and panel (c) is that skeleton with its junctions removed. These three panels have had their colors inverted for easier viewing. For the purposes of illustration, panels (d), (e) and (f) show quantities that were calculated separately for each connected component of the skeleton (see discussion in text), but have been plotted together. The Sobel filter is applied to the each connected component of the skeleton, producing the $x$ and $y$ derivatives ($G_{x}$ and $G_{y}$) shown in panels (d) and (e) respectively. The direction of the image intensity gradient is then calcuated for each component using~\autoref{eqn:grad_direc}, and is plotted in panel (f). The sign of that map is reversed for consistency with astronomical conventions. The grey does not form part of the colourmaps in these panels. It shows the `Not-a-Number' (NAN) background, as white forms part of the colourmap used. The $x$ and $y$ axes are plotted in pixel coordinates where the lower left is the origin. These images have been zoomed to the region $x$=55-226, $y$=65-265, for easier viewing. \label{fig:sobel}}
\includegraphics[width=0.83\textwidth, clip=true, trim=1cm 0.5cm 1cm 0.5cm]{Fig_2.pdf}
\end{figure*}
\begin{figure*}[t]
\centering
\caption{\textbf{Post-processing to move from intensity gradient direction to skeleton orientation.} For the purposes of illustration, all panels show quantities that were calculated separately for each connected component of the skeleton (see discussion in text), but have been plotted together. Panel (a) illustrates the two issues that need to be resolved to move from a map of intensity gradient direction to that of skeleton orientation. This image is an annotated version of that shown in~\autoref{fig:sobel} panel (f). The pixels at filament ends are deleted and replaced with the circular vector average of the values of the nearest unaffected pixels in the filament, giving panel (b). Then the blank central pixels are infilled with the circular vector average of their neighbours in panel (c). Circular vector averaging is performed smoothing the map, resulting in panel (d), a map of the filament skeleton orientation. Panel (d) is the `master map' to which the skeleton orientation for each connected component is saved. The grey shows the `Not-a-Number' (NAN) background. The $x$ and $y$ axes are plotted in pixel coordinates where the lower left is the origin. These images have been zoomed to the region $x$=55-226, $y$=65-265, for easier viewing. \label{fig:post}}
\includegraphics[width=0.99\textwidth, clip=true, trim=0.5cm 4cm 0.5cm 3.5cm]{Fig_3.pdf}
\end{figure*}
Firstly, at the ends of the connected components there are pixels with angle values that are roughly orthogonal to the rest of the component. This is because of the additional exposed pixel edges at the component ends. We automatically detect and delete\footnote{There are a set number of patterns of off- and on-skeleton pixels that can occur around a component end that we test against to detect them.} the handful of affected pixels at component ends in the map. We then give them the value of the circular vector average of the closest\footnote{The closest pixels are the pixels within a 3$\times$3 window centred on the `bad' pixel closest to the rest of the filament, whose value was removed in the previous step.} unaffected values in the branch, resolving the issue as shown in~\autoref{fig:post} panel (b). Wherever angles undergo averaging we use circular vector averaging. This ensures angles are averaged correctly, accounting for the fact that they are a circular quantity that wraps back around such that e.g. 90$^{\circ}$ and -90$^{\circ}$ are equal and represent the horizontal in the astronomical definition if North is vertical.\\
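The circular vector average itself can be implemented in the standard way for axial data (angles defined modulo $180^{\circ}$): double the angles, average the corresponding unit vectors, and halve the result. A sketch:
\begin{verbatim}
import numpy as np

def circular_vector_average(angles_deg):
    # angles_deg: orientation angles in degrees, defined modulo 180 (e.g. in [-90, 90])
    doubled = np.radians(2.0 * np.asarray(angles_deg, dtype=float))
    mean = 0.5 * np.arctan2(np.nanmean(np.sin(doubled)),
                            np.nanmean(np.cos(doubled)))
    return np.degrees(mean)                   # result lies in (-90, 90]
\end{verbatim}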
Secondly, the intensity gradient only exists at the edge of the skeleton, thus leaving a partially blank centre\footnote{The centre is not entirely blank. Some of these centre pixels are `colored in' with a gradient direction value, but that value corresponds to the pixel `next door'. This occurs in skeleton sections that are not horizontal, vertical or diagonal due to the nature of the convolution of the Sobel filter with the original skeleton image.}. We automatically infill the pixels along the positions of the original skeleton with the circular vector average of their neighbouring pixels within a 5$\times$5 pixel window (also including the centre pixel in the average if it is not blank). The result is shown in ~\autoref{fig:post} panel (c). \\
Finally, we smooth the map. A 5$\times$5 pixel window is passed over the image and we consider only pixel positions lying on the original skeleton which we centre in the window. The circular vector average of the pixels inside the window is calculated and is saved to the corresponding pixel position of the central pixel in a new map. This results in a smoothed orientation map for that connected component. The smoothed map for each connected component is saved into the master map. After repeating the process for each component we arrive at a map of the filament skeleton's orientation as illustrated in~\autoref{fig:post} panel (d). \\
These post-processing steps take us from the map of the gradient direction to a quantitative skeleton orientation map that reveals how the orientation changes as the skeleton curves on a local scale. For the first time we present the filament orientation for the SR and CR of Vela C on this small scale in~\autoref{fig:example_orient}. \\
\begin{figure*}[t]
\centering
\caption{\textbf{Orientation maps.} The Sobel-gradient orientation maps for the Vela C South- and Centre-ridges. The grey shows the `Not-a-Number' (NAN) background. The $x$ and $y$ axes are plotted in pixel coordinates where the lower left is the origin. The South-ridge image in panel (a) has been zoomed to the region $x$=55-226, $y$=65-265, for easier viewing. \label{fig:example_orient}}
\includegraphics[width=0.7\textwidth, clip=true, trim=7.3cm 8cm 7.3cm 7.5cm]{Fig_4.pdf}
\end{figure*}
\section{Constraining uncertainties}
\label{sec:test_suite}
We validate the Sobel-gradient method against the known analytic case of the circle. We generate circles with radii of 7 to 500 pixels and calculate the theoretical orientation of their tangents at each point around them. We then apply the Sobel-gradient method to them and compare the orientations at each point around the circle. Circles drawn digitally with radii smaller than 7 pixels are essentially squares with the middle pixel along each side pushed out by one position. We therefore only consider radii larger than this. \\
For each radius we calculate the difference between the theoretical tangent orientation and Sobel-gradient orientation at each point around the circle, and then find the maximum, mean and standard deviation of those differences. These are plotted against their corresponding radii in~\autoref{fig:circle_test}. The average of the maximum differences at each radius was 7.8$^{\circ}$, the average of the mean was 2.1$^{\circ}$, and the average of the standard deviations was 1.6$^{\circ}$. In calculating these values we include only the circles with radii of 57 pixels or greater, as these have 360 pixels (and therefore 360 unique angles) around them. Circles digitally generated with radii less than this are still strongly affected by the `squaring effect' present at small radii. This means their theoretical orientation deviates greatly from the orientation that is actually drawn. They are thus not a reliable or accurate point of validation.\\
The maximum differences are dominated by digitisation errors, which we estimate to be up to $\sim$5$^{\circ}$. The average of the standard deviations of the differences between the theoretical and Sobel-gradient orientation for each radii thus provides a more appropriate estimate of the uncertainty associated with the Sobel-gradient method. Consequently we estimate the uncertainty of the Sobel-gradient method to be $\sim$2$^{\circ}$ based on this circle test analysis.\\
Obviously circles have no start or endpoints, have no branches, and do not wiggle back and forth, changing their direction as real filaments do. To further gauge the uncertainty associated with the Sobel-gradient method in a realistic scenario we compare the Sobel-gradient orientation maps of two filamentary regions, the Vela C SR and CR, to those measured manually\footnote{Manual orientation measurements were made with a protractor on enlarged skeletons printed on paper. The estimated uncertainty of the manual measurement method is $\sim$5$^{\circ}$. This is taken as a maximum estimate, most individual manual measurements had uncertainties much smaller than this. This value includes the uncertainty of the protractor measurement, the uncertainty of decomposing the skeleton into sections, and the $\pm$1\,pixel uncertainty in the skeleton (E. Koch, 2017, private communication).}. \\
A difference map was calculated for each region between the Sobel-gradient and manually measured orientation maps, and these are shown in~\autoref{fig:difference_map}. The histograms of the difference maps for both regions are shown in~\autoref{fig:difference_hist}. The majority of orientations differed by less than one degree. The maximum difference for the SR was 7.1$^{\circ}$, the mean difference was 1.9$^{\circ}$, and the standard deviation was 1.8$^{\circ}$, while that for the CR were 7.2$^{\circ}$, 1.2$^{\circ}$, and 1.3$^{\circ}$ respectively. \\
When measuring orientation manually, the non-linear skeletal path was essentially decomposed into small linear sections. The Sobel-gradient method does not decompose the path in this way, rather having a smooth transition along the filament that better reflects the curvature of the filament's path. The human-defined section does not always align perfectly with the corresponding section in the Sobel map; sometimes they are shifted off each other by 1-2\,pixels. The larger orientation differences in the difference map mostly occur at locations where these shifts exist. This indicates that the larger difference values in the distribution in~\autoref{fig:difference_hist} are likely caused by this effect. This issue is unavoidable in constraining the uncertainty on the Sobel-gradient method in a realistic scenario: there is currently no other method on this scale besides manual measurement to provide a comparison.\\
The maximum of the difference maps of 7$^{\circ}$ is therefore a poor measure of the actual uncertainty of the Sobel-gradient method. It is more appropriate to use the standard deviation of the combined difference distribution of the SR and CR of $\sim$2$^{\circ}$ to estimate this uncertainty. This value is in agreement with that obtained from the circle test analysis. We therefore conservatively estimate the uncertainty of the Sobel-gradient method to be $\sim$2$^{\circ}$. This is acceptable considering that for Vela C BLASTPol data of \citet{fissel2016} the average uncertainty of the magnetic field maps is $\sim$2$^{\circ}$, reaching up to $\sim$16$^{\circ}$ in places.\\
The Sobel-gradient method is slightly more accurate than manual measurement, but its strength of course is that it is significantly faster. To measure the orientation of one skeleton (such as those presented here) on this small scale manually takes most of a working day. When the additional time to then input the manual orientation measurement into a FITS file is taken into account, the manual process takes about one to two working days per skeleton. In studies of filaments, the filament and field relative orientation and radial profile measurements that involve filament orientation are often repeated for hundreds of skeletons (all corresponding to the same input image, but to different combinations of input parameters to the filament identification algorithm). Consequently, a fast and accurate orientation measurement method is essential. The Sobel-gradient method allows automation of the filament orientation measurement and is therefore crucial in this era of `big data'. \\
\begin{figure}[t]
\centering
\caption{\textbf{Circle test orientation differences.} The maximum, mean, and standard deviation (STD) of the differences between the Sobel-gradient and theoretical tangent orientation maps for circles of different radii. \label{fig:circle_test}}
\includegraphics[width=0.45\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{Fig_5.pdf}
\end{figure}
\begin{figure*}[t]
\centering
\caption{\textbf{Manual orientation difference maps.} The difference between the Sobel-gradient and manual orientation map for the Vela C South- and Centre-ridges. The $x$ and $y$ axes are plotted in pixel coordinates where the lower left is the origin. The South-ridge image in panel (a) has been zoomed to the region $x$=55-226, $y$=65-265, for easier viewing. \label{fig:difference_map}}
\includegraphics[width=0.7\textwidth, clip=true, trim=7.3cm 8cm 7.3cm 7.5cm]{Fig_6.pdf}
\end{figure*}
\begin{figure}[t]
\centering
\caption{\textbf{Manual orientation difference histogram.} The histograms of the difference between the Sobel-gradient and manual orientation maps for the Vela C South- (SR) and Centre-ridges (CR). Each bin is labelled with its corresponding count for each region. \label{fig:difference_hist}}
\includegraphics[width=0.45\textwidth, clip=true, trim=0cm 0cm 0cm 0cm]{Fig_7.pdf}
\end{figure}
\section{Applications of the Sobel-gradient method}
\label{sec:applications}
The Sobel-gradient method described is a technique to derive the orientation of filaments from their skeletons. This measurement has a number of astrophysical applications. One of the most significant is its use to calculate the relative orientation between magnetic fields and filaments, which provides clues on the role of magnetic fields in the formation and stability of filaments. We perform this calculation and present the results for the filaments of Vela C in Green et al. 2017 (in preparation). There are a number of other potential applications of the method including: to investigate relations between filament orientation and filament column density, mass, spatial width or molecular linewidth.
\section{Summary} \label{sec:summary}
We have described a fully automated method to derive the orientation of a filament skeleton from the direction of the image intensity gradient that is suitable for complex, `looping' filamentary structures. We call this the `Sobel-gradient method'. It allows a local measurement of filament orientation that reflects the often rapid changes in orientation as a filament curves. This means that the filament orientation calculated from the intensity gradient can be directly compared to a map of the magnetic field, giving a quantitative, local measure of relative orientation as opposed to the qualitative, global and `by-eye' technique that is the current predominantly adopted method. It also has a number of other applications in investigating relationships involving filament orientation, such as that between filament orientation and column density. We have found this method to have a high degree of accuracy, with an uncertainty of $\sim$2$^{\circ}$. This computer vision technique provides the significant advantage that it can be easily automated, saving a significant amount of time compared to manual measurement, which is imperative in this era of `big data'. It also has broader applications and can be applied to any image containing lines or edges to find their orientation.\\
\acknowledgements
\noindent \textbf{Acknowledgements} \\
The authors would like to thank the referee Erik Rosolowsky and the anonymous statistical editor for their helpful comments which improved this work. The authors are also grateful to Eric Koch for helpful discussions surrounding the \textsc{filfinder} filament skeletons of this work. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. C.-E.G gratefully acknowledges the support of the Layne Beachley Foundation. L.M.F. and G.N. acknowledge support from NASA grant NNX13AE50G and from the Center for Interdisciplinary Exploration and Research in Astrophysics. L.M.F. was supported in part by an NSERC Postdoctoral Fellowship. LMF is a Jansky Fellow of the National Radio Astronomy Observatory (NRAO). NRAO is a facility of the National Science Foundation (NSF) operated under cooperative agreement by Associated Universities, Inc. This research made use of NASA's Astrophysics Data System. \\
\software{This project utilised
\textsc{astropy} (\href{http://www.astropy.org}{http://www.astropy.org}) \citep{astropy2013}, \textsc{aplpy} (\href{http://aplpy.github.com}{http://aplpy.github.com}), \textsc{filfinder} \citep{koch2015}, Karma visualisation tools \citep{gooch1996}, and \textsc{scipy} (\href{http://www.scipy.org/}{http://www.scipy.org/}).
}
\section{Introduction}
\indent
Currently there are three pieces of evidence which
suggest that neutrinos have non-zero mass differences and
mixings. These are: (i) the observations of solar neutrinos,
(ii) the anomaly in the $\nu_{\mu}/\nu_e$ ratio in
atmospheric neutrinos at low energies and (iii) the possible
$\overline{\nu}_{\mu}-\overline{\nu}_{e}$ conversion
seen in the LSND experiment.
With the conventional interpretation of these effects
as being due to neutrino oscillations, the solar
neutrino anomaly needs a $\delta m^2$ $(\nu_e-\nu_x)$
of either about $10^{-5}-10^{-6} \; eV^2$ (MSW) or about
$10^{-10} \; eV^2$ (long wavelength vacuum oscillations), the
atmospheric neutrino anomaly
calls for a $\delta m^2$ $(\nu_e-\nu_{\mu})$ or
$(\nu_{\mu}-\nu_{\tau})$ of $10^{-2}-10^{-3}\; eV^2$ and
the LSND effect needs a $\delta m^2$ $(\nu_e-\nu_{\mu})$
in the neighborhood of $1-2 \; eV^2$. For these three
independent $\delta m^2$'s at least one more neutrino
state (beyond the three flavors) is necessary \cite{Bilenky}.
In this letter we explore the possibility
that all the neutrino anomalies may yet be accounted for
with just three flavors of neutrinos. We assume only two
distinct values of $\delta m^2$'s. One value of
$\delta m^2$ is selected to explain the low energy
atmospheric data, while the second value is
selected with the LSND effect in mind.
Specifically we choose the following
spectrum of $\delta m^2$'s:
\begin{equation}
\delta m^2_{31} \sim \delta m^2_{32} \sim (1-2)eV^2
\end{equation}
\begin{equation}
\delta m^2_{21} \sim10^{-2}\; eV^2
\end{equation}
We then seek to determine if, with this spectrum of
$\delta m^2$'s, an explanation of the LSND, solar, atmospheric
neutrino data can be found by appropriate choice of
neutrino mixing angles.
We begin by calculating the neutrino survival and
transition probabilities. In general, these are given by
\begin{equation}
P_{\alpha\beta} =|\sum_{i} U_{\beta i} \exp(-iE_i t)U^*_{\alpha i}|^2
\label{genprob}
\end{equation}
Here the $U_{\alpha i}$ are elements of the matrix $U$ describing the mixing
between the flavor eigenstates ($\nu_{\alpha}$) and the mass eigenstates
($\nu_i$); that is $\nu_{\alpha}=\sum_iU_{\alpha i}\nu_i$. For
now we ignore possible CP violation; then $U$ is real, and Eq (\ref{genprob})
may be written as
\begin{equation}
P_{\alpha\beta}=\sum_i(U_{\beta i})^2(U_{\alpha i})^2
+2\sum_{i > j}U_{\beta i}U_{\beta j}U_{\alpha i} U_{\alpha j}
\times \cos \left(\frac{\delta m_{ij}^2 L}{2 E}\right)
\label{bprob}
\end{equation}
where $\delta m_{ij}^2 = m_i^2-m_j^2$ and $L$ is the distance between
the neutrino source and detection.
We present below an explicit form of the $3\times 3$ matrix $U$
\begin{equation}
U = \left( \begin{array}{ccc}
C_{12}C_{13} & C_{13}S_{12}&S_{13} \\
- C_{23}S_{12}-S_{23}S_{13}C_{12} &
C_{23}C_{12}-S_{23}S_{13}S_{12}& S_{23}C_{13}\\
S_{23}S_{12}- C_{23}S_{13}C_{12} & -S_{23}C_{12}-
C_{23}S_{13}S_{12} & C_{23}C_{13}\\
\end{array} \right)
\label{mat}
\end{equation}
where $C_{12}=\cos \theta_{12}$, $S_{12} = \sin \theta_{12}$, {\em etc.}.
The explicit form of the transition probabilities
depends on the spectrum of the $\delta m^2$'s.
For the choice of $\delta m^2$'s considered here, all of the oscillating
terms in Eq (\ref{bprob}) average to zero for the energies and path lengths
relevant to both low energy atmospheric and solar neutrinos. Hence, for our model, the
form of the transition and survival probabilities relevant to
solar and atmospheric neutrinos are:
\begin{equation}
P_{ee} =\sum_i (U_{ei})^4
\label{pee}
\end{equation}
\begin{equation}
P_{\mu\mu} =\sum_i (U_{\mu i})^4
\label{pmm}
\end{equation}
\begin{equation}
P_{e \mu}=P_{\mu e} =\sum_i (U_{ei}U_{\mu i})^2
\label{pme}
\end{equation}
Note that the above expressions are functions of the mixing angles only, and
are independent of the neutrino energy.
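For orientation, substituting the first row of the matrix in Eq (\ref{mat}) into Eq (\ref{pee}) gives the explicit form
\begin{equation}
P_{ee} = C_{13}^4\left(C_{12}^4+S_{12}^4\right) + S_{13}^4 .
\end{equation}
Since $C_{12}^4+S_{12}^4\ge 1/2$ (with equality at $\theta_{12}=45^{\circ}$), values of $P_{ee}$ well below $0.5$ can only be reached if $\theta_{13}$ is sizeable; this observation will be useful in the discussion of the solar data below.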
\section{Solar Neutrinos}
\indent
The four currently operating solar neutrino experiments
report the following results:
\bigskip
\begin{tabular}{lcl}
\hline
{\em Experiment} &{\em Results} & \\ \hline
Homestake\cite{cl} & $2.56 \pm 0.16\pm 0.14$ & SNU \\ \hline
Kamioka \cite{kam}& $2.80 \pm 0.19 \pm 0.33$
& $\times 10^{10}m^{-2}s^{-1}$ \\ \hline
SAGE \cite{sage} & $72 \pm 12 \pm 7$ & SNU\\ \hline
Gallex \cite{gallex} & $70 \pm 7 $ & SNU\\ \hline
\end{tabular}
\bigskip
Recently measurements of the reaction
$\gamma +{ ^{8}B} \rightarrow { ^{7}Be} +p$ have been made\cite{reaction}. These
suggest that the cross-section for the inverse reaction
$^{7}Be +p \rightarrow {^{8}B} +\gamma$ at energies relevant
for the solar core may be somewhat smaller than the
value used in the Standard Solar Model (SSM) calculations of Bahcall et al. Hence it is possible
that the flux of $^8B$ neutrinos is somewhat smaller than the
SSM, while the other neutrino fluxes are unaffected.
We allow for this possibility by defining $f_{B}$ as
\begin{equation}
f_{B} = {\Phi_{B} \over \Phi^{BP}_{B}}
\end{equation}
where $\Phi^{BP}_{B}$ is the $^{8}B$ neutrino flux predicted
in the SSM of Bahcall and Pinsonneault, which
incorporates helium diffusion\cite{BP}.
\footnote{Solar models which incorporate helium diffusion yield values for
the depth of the convective zone and primordial helium abundance which are
in excellent agreement with helioseismological data\cite{BP2}.}
Thus the parameter $f_{B}$ describes the deviation of
the actual $^{8}B$ neutrino flux from the SSM value.
We can now proceed to describe the expected counting
rates at the various solar neutrino experiments
in terms of $f_{B}$ and $P_{ee}$ the solar $\nu_e$ survival
probability.
With a threshold energy of 7.5 MeV the Kamiokande water Cerenkov detector is
sensitive only to $^{8}B$ neutrinos. The expected flux is given by:
\begin{equation}
R(\rm{KII}) =(P_{ee}+ {\alpha}
(1-P_{ee}))f_{B} \times 5.69\times 10^{10} m^{-2} s^{-1}
\end{equation}
where $5.69 \times 10^{10} m^{-2}s^{-1}$ is the SSM prediction and
$\alpha$ (approximately 0.16) is the ratio of the $\nu_{\mu (\tau)}-e$ to
$\nu_{e}-e$ scattering cross sections integrated over the $^{8}B$ neutrino
spectrum. Note that we
have ignored the possibility of oscillation into sterile flavors and
assumed that solar neutrinos not interacting as $\nu_e$'s interact
as either $\nu_{\mu}$'s or $\nu_{\tau}$'s with probability $1-P_{ee}$.
The expected counting rate in the Homestake $^{37}Cl$ experiment is given by
\begin{equation}
R(^{37}Cl) = (6.2f_{B} + 1.8)\times P_{ee} \ \rm {SNU}
\end{equation}
where 6.2 SNU's is the expected contribution from $^{8}B$
neutrinos and 1.8 SNU's is the contribution from all other solar
neutrino fluxes. Similarly, the expected counting
rate in the $^{71}Ga$ experiments SAGE and Gallex is given by
\begin{equation}
R(^{71}Ga) = (13.8f_{B} + 117.6)\times P_{ee} \ \rm {SNU}
\end{equation}
where 13.8 SNU's is the expected contribution from $^{8}B$ neutrinos and 117.6 SNU's is
the contribution from all other solar neutrino fluxes.
The results of a chi-squared analysis are shown in Fig. 1.
We find that there is a solution at the 90\% C.L. when
the electron neutrino survival probability is in
the range $0.4< P_{ee} <0.55$ and the $^{8}B$ neutrino
flux is in the range $0.55< f_B <0.8$.
\footnote{It was first suggested by Acker {\it et al.}\cite{A1} that
solar neutrinos might be accounted for by the same $\delta m^2$ and
mixing as atmospheric neutrinos and hence should show an energy
independent suppression. A search for an energy independent fit
with varying $f_B$ was made in Ref. \cite{KP}.}
This is consistent with the variation in $^{8}B$ neutrino flux found in an
analysis of solar models\cite{BP2}.
The results in Fig. 1 can be interpreted in terms of the neutrino
mixing matrix $U$. From Eq( \ref{pee}) the $\nu_e$
survival probability
$P_{ee}$ is a function of $\theta_{12}$ and $\theta_{13}$ only.
Each allowed value of $f_B$ in Fig. 1 corresponds to an allowed range of
$P_{ee}$ and hence to an allowed range of $\theta_{12}$ and
$\theta_{13}$. In Fig. 2 we present a plot of the
allowed values (90\% C.L.) of
$\sin(\theta_{12})$ and $\sin(\theta_{13})$ for
$f_B = 0.8$ and $f_B = 0.65$. At $f_B = 0.8$,
$P_{ee}$ is required to be $\sim 0.43$; this can be realized only in three
flavor mixing. Hence, as shown in Fig. 2, $S_{12}$ and $S_{13}$
must both be nonzero. At $f_B = 0.65$, $P_{ee}$
can be greater than 0.5, which can be accomplished
in effective two flavor mixing. As shown in the figure, there
are allowed regions with $\sin(\theta_{12})$ or $\sin(\theta_{13})$
equal to zero,
corresponding to pure $\nu_e - \nu_{\tau}$ or $\nu_e - \nu_{\mu}$
mixing respectively.
\section{LSND}
\indent
The Liquid Scintillator Neutrino Detector (LSND) experiment
at Los Alamos has reported the possible appearance of
$\overline{\nu}_{e}$ in an initial beam of
$\overline{\nu}_{\mu}$'s \cite{LSND}. These results have been interpreted
as evidence of neutrino oscillations and the preferred range of
$\delta m^2$
and $\sin^2(2\theta)$ in a two-flavor mixing scenario has been given. For
definiteness, we choose $\sin^2(2\theta)\sim 1.2\times 10^{-3}$ and
$\delta m^2 \sim 2 \; eV^2$, which lie in this range \cite{Babu}.
In the range of $L/E$ covered by the LSND set-up,
$\delta m_{12}^2L/4E \sim 0$ and
$\delta m_{31}^2L/4E \sim \delta m_{32}^2L/4E$. Then
the $\overline{\nu}_{\mu} \rightarrow \overline{\nu}_{e}$
conversion probability is given by
\begin{equation}
P_{ \overline{\mu} \overline{e}} = 4 U_{\mu 3}^2 U_{ e 3}^2
\sin^2(\delta m_{31}^2L/4E)
\end{equation}
Thus the three-flavor interpretation of the LSND result is obtained by
letting
\begin{equation}
\sin^2(2\theta_{LSND}) \rightarrow 4|U_{\mu 3}|^2|U_{e 3}|^2
\end{equation}
This may be expressed as a constraint on the three flavor mixing angles. Using
\begin{eqnarray}
|U_{e 3}|^2 =\sin^2(\theta_{13})
\nonumber\\
|U_{\mu 3}|^2 =\cos^2(\theta_{13})\sin^2(\theta_{23})
\end{eqnarray}
we obtain
\begin{equation}
\sin^2(\theta_{23}) = {\sin^2(2\theta_{LSND})\over
4 \sin^2(\theta_{13}) \cos^2(\theta_{13})}
\end{equation}
Choosing
$\delta m_{31}^2 \sim \delta m_{32}^2$ to be near
$2 \; eV^2$,
the LSND results then give $\sin^2(2\theta_{LSND})\sim 1.2\times 10^{-3}$,
and we have
\begin{equation}
\sin^2(\theta_{23}) = {1.2\times 10^{-3} \over 4 \sin^2(\theta_{13})\cos^2(\theta_{13})}
\label{lsnd}
\end{equation}
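As a numerical illustration, for $\sin(\theta_{13})=0.14$ Eq (\ref{lsnd}) gives $\sin^2(\theta_{23})=1.2\times 10^{-3}/(4\times0.0196\times0.9804)\simeq 0.016$, i.e. $\sin(\theta_{23})\simeq 0.125$; this is the value that reappears in the combined solution below (cf. Eq (\ref{Ub})).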
In order to determine the range of validity of this result,
constraints from reactor and accelerator experiments must
be taken into consideration. For this range of $\delta m^2$, reactor
experiments give the bound\cite{ue3}:
\begin{equation}
|U_{e3}|^2 \le 0.02
\end{equation}
and accelerator experiments give the bound\cite{um3}:
\begin{equation}
|U_{\mu 3}|^2 \le 0.018.
\end{equation}
As $|U_{\mu 3}|$ is related to $|U_{e 3}|$ through
Eq (\ref{lsnd}) these two upper limits can be combined to form
bounds on the allowed values of $|U_{e3}|$ and $|U_{\mu 3}|$.
We have:
\begin{equation}
0.129 \le |U_{e3}| \le 0.141
\label{limits}
\end{equation}
and
\begin{equation}
0.123 \le |U_{\mu 3}| \le 0.134
\end{equation}
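Explicitly, these limits follow because Eq (\ref{lsnd}) fixes the product of the two elements,
\begin{equation}
|U_{e3}|^2|U_{\mu 3}|^2=\sin^2(\theta_{13})\cos^2(\theta_{13})\sin^2(\theta_{23})
={1.2\times 10^{-3}\over 4}=3\times 10^{-4},
\end{equation}
i.e. $|U_{e3}||U_{\mu 3}|\simeq 0.0173$. The reactor bound $|U_{e3}|\le\sqrt{0.02}\simeq0.141$ then forces $|U_{\mu 3}|\gtrsim 0.123$, while the accelerator bound $|U_{\mu 3}|\le\sqrt{0.018}\simeq0.134$ forces $|U_{e3}|\gtrsim 0.129$, reproducing the intervals quoted above.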
Hence the requirement that the LSND results be consistent with
existing bounds on neutrino mixing leads to rather stringent limits on the
allowed values of $|U_{e3}|$ and $|U_{\mu 3}|$.
We will find these constraints particularly useful when
interpreting the atmospheric neutrino data.
\section{Atmospheric Neutrinos}
Experimentally measured atmospheric neutrino fluxes are often described
in terms of an (observed to predicted) 'ratio of ratios' $R$, where
\begin{equation}
R={(\nu_{\mu}/\nu_e)_{\rm{observed}}\over
(\nu_{\mu}/\nu_e)_{\rm{Monte Carlo}}}
\end{equation}
The final results \cite{kam,klast} from Kamiokande for the low
energy atmospheric neutrino $\nu_{\mu}/\nu_e$ ratio
place $R$ at $0.62 \pm 0.06\pm 0.06$. The results from IMB \cite{IMB}
are in excellent agreement with these results. Results from non-water-Cerenkov
detectors vary somewhat: Soudan \cite{Soudan} finds an
R of $0.72 \pm 0.19 \; { ^{+0.05}_{-0.07}}$
whereas the results from Nusex \cite{Nussex}
and Frejus \cite{Frejus} are consistent with an R of unity, although
with smaller statistics than the two large water-Cerenkov detectors.
It has been known for some time that this low energy atmospheric
neutrino anomaly can be explained by neutrino oscillations.
For $\delta m^2$ in the range
$(4\times 10^{-3}-2\times 10^{-2}) \; eV^2$ it has been shown \cite{afit}
that this anomaly can be explained by $\nu_{\mu}-\nu_{\tau}$
oscillations for
\begin{equation}
0.6 \le \sin^2(2\theta_{\mu-\tau}) \le 1.0
\label{mutauosc}
\end{equation}
or by $\nu_{\mu}-\nu_{e}$ oscillations for
\begin{equation}
0.5 \le \sin^2(2\theta_{\mu-e}) \le 1.0.
\label{emuosc}
\end{equation}
Expressing these bounds in terms of the $U_{\alpha i}$ we have:
\begin{equation}
0.3 \le (P_{\mu \tau}=\sum_i (U_{\mu i}U_{\tau i})^2) \le 0.5
\label{pmto}
\end{equation}
for $\nu_{\mu}-\nu_{\tau}$ oscillations and
\begin{equation}
0.25 \le (P_{\mu e}=\sum_i (U_{\mu i}U_{e i})^2) \le 0.5
\end{equation}
for $\nu_{\mu}-\nu_{e}$ oscillations.
In the narrow range of $|U_{e3}|$ values permitted by
LSND, reactor and accelerator data (Eq \ref{limits})
$P_{\mu \tau}$ is less than 0.05 and thus inconsistent with Eq (\ref{pmto});
while
$P_{\mu e}$ can take on values up to 0.48. Hence, in this
region, the atmospheric neutrino anomaly must be explained
almost exclusively by $\nu_{\mu}-\nu_{e}$ mixing.
With $\theta_{23}$ constrained in terms of
$\theta_{13}$ by Eq (\ref{lsnd})
and $\sin(\theta_{13})$ bound by Eq (\ref{limits}),
we find that $ 0.25 \le P_{\mu e} \le 0.5$ if
$\sin(\theta_{12})$ is in the range:
\begin{equation}
0.38 \le \sin(\theta_{12}) \le 0.92
\label{atmosc}
\end{equation}
Thus we can express this explanation of the low energy
atmospheric neutrino anomaly consistent with
the LSND, reactor, accelerator data as the region of the
$\sin(\theta_{12})-\sin(\theta_{13})$ plane bounded by
Eq (\ref{lsnd}) and Eq (\ref{limits}).
\section{A Combined Solution to the Solar, Atmospheric
and LSND Data}
It is now a straightforward matter to identify
simultaneous solutions to the Solar, Atmospheric
and LSND neutrino data. As the energy
independent solution to the solar neutrino problem,
shown in Fig. 2, and the combined LSND and
atmospheric neutrino solution
are both expressed as regions of the
$\sin(\theta_{12})-\sin(\theta_{13})$ plane, any
intersection between the allowed regions represents
the desired solution.
Fig. 3 presents a plot of
the intersecting regions of the Solar
neutrino and Atmospheric-LSND solutions.
Fig. 3a assumes the $^8B$ solar neutrino
flux, $f_{B}$, is at 80\% of its SSM value,
Fig. 3b assumes $f_{B}$
is at 70\% of its SSM value and
Fig. 3c assumes $f_{B}$
is at 65\% of its SSM value.
There is no intersection in Fig. 3a,
a narrow region of overlap in
Fig. 3b, broadening somewhat in Fig.
3c as the $^8B$ neutrino flux $f_B$ is
allowed to decrease to 0.65 of its SSM value.
It should be noted that the
selection of any region of the
$\sin(\theta_{12})-\sin(\theta_{13})$ plane
determines the complete
set of mixing angles, and hence
the neutrino mixing matrix $U$,
as $\sin(\theta_{23})$ is
fixed by Eq (\ref{lsnd}). Specifically the intersection
region of Fig. 3b corresponds to
$\sin(\theta_{12})$ $\sim$ 0.707,
$\sin(\theta_{13})$ $\sim$ 0.140 and
$\sin(\theta_{23})$ $\sim$ 0.125.
Using Eq (\ref{mat}) we present below the
explicit form of the $3\times 3$
mixing matrix $U$ corresponding to
the solution region of Fig. 3b:
\begin{equation}
U= \left( \begin{array}{ccc}
.700 & .700 & .140 \\
-.714 & .689 & .124 \\
-.010 & -.187 & .982 \\
\end{array} \right)
\label{Ub}
\end{equation}
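As a quick consistency check, the matrix of Eq (\ref{Ub}) reproduces the inputs used above. From Eqs (\ref{pee}) and (\ref{pme}),
\begin{equation}
P_{ee}=2(0.700)^4+(0.140)^4\simeq0.48,~~
P_{\mu e}=(0.700\times0.714)^2+(0.700\times0.689)^2+(0.140\times0.124)^2\simeq0.48,
\end{equation}
which lie in the solar and atmospheric ranges found above, while $4(U_{e3}U_{\mu 3})^2=4(0.140\times0.124)^2\simeq1.2\times10^{-3}$, the LSND value.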
While for the solution region corresponding to Fig. 3c we
find that, in addition to Eq (\ref{Ub}) above,
the following range of matrix values are allowed:
\begin{equation}
\left( \begin{array}{ccc}
.630 & .764 & .140 \\
-.776 & .619 & .124 \\
-.010 & -.187 & .982 \\
\end{array} \right)
\leftrightarrow
\left( \begin{array}{ccc}
.764 & .630 & .140 \\
-.645 & .754 & .124 \\
-.028 & -.185 & .982 \\
\end{array} \right).
\label{Uc}
\end{equation}
\indent
\section{Implications}
\indent
{\bf (i)} Both Super-Kamiokande \cite{superk} and SNO \cite{SNO} should
see {\bf NO} spectrum
distortion in either $\nu-e$ scattering or the
$\nu_e D$ charged current. The suppression in
$\nu-e$ scattering should be in the range 0.38 - 0.40 of SSM
at all energies
and in $\nu_e D$, charged current suppression should be about
$f_B P_{ee} \sim 0.32-0.34$.
{\bf (ii)} Borexino \cite{borexino} should observe the
$^{7}Be$ line at a rate of
$\left[ P_{ee}+\beta (1-P_{ee})\right]$ where $\beta$ is the ratio of
$\nu_{\mu (\tau)}-e$ to $\nu_{e}-e$ scattering cross sections. We thus expect
0.56 to 0.58 of the SSM rate, and an identical suppression should hold for
the pep line.
{\bf (iii)} The atmospheric $\nu_\mu/\nu_e$ anomaly
should be confirmed by Super-Kamiokande. Zenith angle dependence
of multi-GeV neutrinos should confirm
the tentative evidence seen in Kamiokande \cite{afit}. But most important is our
prediction \cite{A1} that $\nu_\mu- \nu_e$ oscillations should be confirmed by
observation of excess high energy
e-like upcoming shower events (above and beyond $\nu_\mu$ neutral
current events).
{\bf (iv)} Future reactor experiments such as
CHOOZ \cite{chooz} and Palo Verde \cite {PV} which will be sensitive to
$\delta m^2$ down to $10^{-3} \; eV^2$ should see a
$\overline{\nu}_{e}$ survival probability of
$P_{ee}\sim 0.48-0.5$.
{\bf (v)} Long baseline experiments
(such as MINOS \cite{minos}, CERN-LNGS \cite{cern} and KEK-PS E362
\cite{kek}) which will probe
$\delta m^2$ down to $10^{-3} \; eV^2$ should see
$\nu_{\mu}-\nu_{\tau}$ conversion
with $P_{\mu \tau} =\sum_i (U_{\mu i}U_{\tau i})^2
\sim 0.028-0.035$ accompanied by
$\nu_{\mu}-\nu_{e}$ conversion at $P_{\mu e} \sim 0.46-0.48$.
{\bf (vi)} Short baseline experiments such as CHORUS \cite{nomad},
NOMAD \cite{nomad} and COSMOS \cite{cosmos} will
probe $\nu_{\mu}-\nu_{\tau}$ conversion for $\delta m^2 \ge 0.1 \; eV^2$.
We predict, at $\delta m^2 = 1-2 \; eV^2$, an effective
$\sin^2(2\theta)$ of $4(U_{\mu 3}U_{\tau 3})^2$ which is 0.06.
{\bf (vii)} Large mixings in some ranges of $\delta m^2$
lead to strong conversion of $\overline{\nu}_{\mu}$ to
$\overline{\nu}_{e}$ due to the MSW effect in
supernova, leading to a harder energy spectrum of the
emerging $\overline{\nu}_e$'s. This can lead to
potential conflict with observation of neutrinos
from SN1987A. For the $\delta m^2$ in our scenario this
is not a problem \cite{SN}.
{\bf (viii)} The neutrino mass spectrum implied by
our scenario is:
\begin{eqnarray}
m_1 \sim m_0
\nonumber\\
m_2 \sim m_0 + \epsilon
\nonumber\\
m_3 \sim \sqrt{m_{2}^2 + 2 eV^2}
\end{eqnarray}
where $\delta m_{12}^2 \sim (2m_0\epsilon + \epsilon^2)
\sim 10^{-2} \; eV^2$. There are two limiting cases of
interest assuming that the largest mass is in the $eV$ range.
One is the hierarchical limit, in which
$m_0$ is negligible. Then $m_1 \ll m_2 \sim 0.1 eV$
and $m_3 \sim 1.4 eV$. The other is the nearly
degenerate limit, in which
\begin{eqnarray}
m_1 \sim 1 eV
\nonumber\\
m_2 \sim (1 + \epsilon) eV
\nonumber\\
m_3 \sim 1.73 eV
\end{eqnarray}
with $\epsilon \sim {1\over 2}(10^{-2})eV$. Then,
the sum of the neutrino masses is
$\sum_i m_i \sim 4 eV$. In this case, the
Cosmological density parameter associated with
neutrinos $\Omega_{\nu} =
0.011 h^{-2}\sum_i m_{i} = 0.044 h^{-2} \approx 0.2$ (for h of about 0.5) and
the amount of neutrino dark matter component along with cold dark matter
makes for a viable and testable scenario for mixed dark matter \cite{DM}.
{\bf (ix)} When the neutrinos are Majorana particles, the effective
mass \newline
$<m_{\nu_e}>$ relevant in neutrino-less
double $\beta$-decay analysis is
\begin{equation}
<m_{\nu_e}> = \sum_i U^2_{ei}m_i
\end{equation}
We find that in the case of the hierarchical spectrum
$<m_{\nu_e}> \sim 0.1 eV$ whereas in the degenerate
case $<m_{\nu_e}> \sim 1 eV$ (this could be somewhat
smaller when CP phases are taken into account). It is interesting that these
values are in the range of what the double beta decay experiments can probe
now and in the near future \cite{cp}.
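For illustration, with the mixing matrix of Eq (\ref{Ub}) and no cancellation from CP phases, the hierarchical case gives $<m_{\nu_e}>\simeq (0.700)^2(0.1)+(0.140)^2(1.4)\simeq 0.08\; eV$, while the nearly degenerate case gives $<m_{\nu_e}>\simeq (0.49+0.49)(1)+(0.02)(1.73)\simeq 1\; eV$, consistent with the estimates just quoted.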
{\bf (x)} When the mixing matrix is allowed to have a CP violating phase,
the CP violating neutrino flavor conversion probability differences are
given by \cite{meff}
\begin{eqnarray}
\Delta P = P_{\mu \tau} - P_{\bar{\mu} \bar{\tau}} = P_{\bar{\mu} \bar{e}}
- P_{\mu e}
\nonumber\\
= -4 J_{CP}^\nu \left [ \sin D_{12} + \sin D_{23} + \sin D_{31} \right ]
\end{eqnarray}
where
\begin{eqnarray}
J_{CP}^\nu = Im \left [ U_{\mu 2} U_{\tau 2}^* U_{\mu 3}U_{\tau 3}^* \right ]
\nonumber\\
= |U_{\mu 2}||U_{\tau 2}||U_{\mu 3}||U_{\tau 3}|\sin \phi,
\end{eqnarray}
and
\begin{equation}
D_{ij} = \delta m^2_{i j} L/2E
\end{equation}
with $\phi$ being the phase in the mixing matrix. With the
matrix of Eq. (\ref{Uc}), $4 J_{CP}^\nu \leq 0.07$; and $\left[ \sin D_{12 }+ \sin D_{23} +
\sin D_{31}\right]
\approx \sin D_{12}$ is $\sim -1$ for
$L/E =730$km/10GeV (relevant for MINOS) and also for $L/E=250$km/3GeV
(relevant for E362).
Hence, $\Delta P$ can be as large as 0.07. (For these parameters, matter
effects are negligible \cite{meff}).
We conclude by stressing that our proposal to account for both solar and
atmospheric neutrino anomalies by the same mass and mixing can be confirmed
or ruled out in the very near future.
\section*{Acknowledgements}
\bigskip
We thank V. Barger, A. Joshipura, L. Kofman, J.G. Learned, S. Parke,
R. S. Raghavan, H. Sugawara, X. Tata and T.J. Weiler for
valuable discussions and encouragement.
This research was supported in part by the U.S.
Department of Energy grant \#DE-FG-03-94ER40833.
\bigskip
\vfill \eject
\noindent
\section{Introduction}
Let $\F_{q}$ be the Galois field with $q$ elements.
Let $\PG(N,q)$ be the $N$-dimensio\-nal projective space over $\F_q$. An $n$-arc in $\PG(N,q)$, with $n\ge N + 1\ge3$, is a set of $n$ points such that no $N +1$ points belong to the same hyperplane of $\PG(N,q)$, see \cite{BallLavrauw} and the references therein. For an introduction to projective geometry over finite fields see \cite{Hirs_PGFF,HirsStor-2001,HirsThas-2015}.
In $\PG(N,q)$, $2\le N\le q-2$, a normal rational curve is a $(q+1)$-arc projectively equivalent to the arc
$\{(t^N,t^{N-1},\ldots,t^2,t,1):t\in \F_q\}\cup \{(1,0,\ldots,0)\}$. In $\PG(3,q)$, the normal rational curve is called a \emph{twisted cubic} \cite{Hirs_PG3q,HirsThas-2015}.
The twisted cubic has many interesting properties and is connected with distinct combinatorial and applied problems, see e.g. \cite{BDMP-TwCub,BlokPelSzo,BonPolvTwCub,BrHirsTwCub,CLPolvT_Spr,CasseGlynn82,CasseGlynn84,CosHirsStTwCub,DMP_RSCoset,DMP_PlLineInc,DMP_OrbLine,
GiulVincTwCub,GulLav,Hirs_PG3q,HirsStor-2001,HirsThas-2015,LunarPolv,ZanZuan2010} and the references therein. In particular, using properties of the twisted cubic, spreads in $\PG(3,q)$ are studied \cite{BrHirsTwCub,CLPolvT_Spr,LunarPolv}, optimal multiple covering codes are constructed \cite{BDMP-TwCub}, the weight distributions of cosets and their leaders for the Reed-Solomon codes are obtained \cite{BlokPelSzo,DMP_RSCoset}, the three-level secret sharing schemes are considered \cite{GiulVincTwCub}.
In investigations of the twisted cubic, an important direction is to determine the matrices of the incidences between points, planes, and lines partitioned into orbits under the group $G_q$ fixing the cubic. The orbits of planes and points are known and described in detail \cite{Hirs_PG3q}. The \emph{point-plane} incidence matrix of $\PG(3,q)$ for all $q\ge2$ is given in \cite{BDMP-TwCub} where the numbers of distinct planes through distinct points and, conversely, the numbers of distinct points lying in distinct planes are obtained. (By ``distinct planes" we mean ``planes from distinct orbits", and similarly for points and lines.)
For plane-line and point-line incidence matrices a description of line orbits is needed. In \cite{Hirs_PG3q}, the lines in $\PG(3,q)$ are partitioned into classes, each of which is a union of line orbits under $G_q$; see Section 2.2. Apart from one class (which is denoted by $\OO_6$), the number and the structure of the orbits forming those unions are independently considered by distinct methods in \cite[Sections 3, 8]{DMP_OrbLine} (for all $q\ge2$), \cite[Section 7]{BlokPelSzo} (for all $q\ge23$), and \cite{GulLav} (for finite fields of characteristic $> 3$); see also the references therein.
The classification of the line orbits in the class $\OO_6$ is an open problem.
The results on line orbits from \cite{BlokPelSzo,DMP_OrbLine,GulLav} are in accordance with each other. The representation and description of the orbits in these papers are distinct; in particular, in \cite{DMP_OrbLine}, the orbits are given in a form which is convenient for the investigations in \cite{DMP_PlLineInc}.
More precisely, using the representation of the line orbits in \cite{DMP_OrbLine}, the \emph{plane-line} incidence matrix of $\PG(3,q)$ is given in \cite{DMP_PlLineInc}
where, apart from $\OO_6$, for all $q\ge2$, the numbers of distinct planes through distinct lines and, vice versa, the numbers of distinct lines lying in distinct planes are obtained. For $\OO_6$, the corresponding average values are calculated.
In \cite{GulLav}, apart from $\OO_6$, for odd $q\not\equiv0\pmod3$ the numbers of distinct planes through distinct lines (called ``the plane orbit distribution of a line") and the numbers of distinct points lying on distinct lines
(called ``the point orbit distribution of a line") are obtained.
For finite fields of characteristic $> 3$, the results of \cite{GulLav} on ``the plane orbit distribution of a line" are in accordance with those from \cite{DMP_PlLineInc} on plane-line incidence matrix.
The results of \cite{GulLav} on ``the point orbit distribution of a line" are an important step towards the point-line incidence matrix. However, these results are obtained only for odd $q\not\equiv0\pmod3$ and
the computation of the numbers of distinct lines through distinct points has been left open.
In this paper, we obtain the \emph{point-line} incidence matrix for all $q\ge2$, leaving open the questions related to $\OO_6$.
We consider the structure of the point-line incidence matrix with respect to $G_q$.
We use the partitions of planes and lines into orbits and unions of orbits under the group $G_q$, as described in \cite{DMP_OrbLine,Hirs_PG3q}. We investigate the structure of the submatrices containing the incidences between an orbit of points and a union of line orbits.
For the unions consisting of two or three line orbits, the original submatrices are split into new ones, in which the incidences are also considered.
For each submatrix (apart from the ones related to $\OO_6$), the numbers of distinct points lying on distinct lines and, conversely, the numbers of distinct lines through distinct points are obtained.
This corresponds to the numbers of ones in columns and rows of the submatrices.
The results noted are obtained for all $q\ge2$ including even $q$ and $q\equiv0\pmod3$. Thus, the gaps of \cite {GulLav} in the point-line incidence matrix are filled.
For $\OO_6$, some average and cumulative values are calculated.
Many submatrices considered are configurations in the sense of \cite
{GroppConfig}, see Definition~\ref{def2_config} in Section \ref{subsec_incid}. Such configurations are useful in several distinct areas, in particular, to construct bipartite graph codes without the so-called 4-cycles, see e.g. \cite{BargZem,DGMP_BipGraph,HohJust} and the references therein.
The paper is organized as follows. Section \ref{sec_prelimin} contains preliminaries. In Section~\ref{sec_mainres}, the main results of this paper are summarized. Some useful relations are given in Section~\ref{sec:useful}. The numbers of distinct points lying on distinct lines and, vice versa, the numbers of distinct lines through distinct points are obtained in Sections \ref{sec:results_q_ne0} (for even and odd $q\not\equiv0\pmod3$) and \ref{sec:results_q=0}
(for $q\equiv0\pmod3$). Some general results are given in Section \ref{sec:gen res}.
\section{Preliminaries}\label{sec_prelimin}
Throughout the paper, we consider orbits of lines and points under $G_q$ apart from Theorem \ref{th3:q=2 3 4} in Section \ref{sec_mainres}.
\subsection{Twisted cubic}\label{subset_twis_cub}
In this subsection, including Theorem \ref{th2_Hirs}, we summarize some results from \cite{Hirs_PG3q} useful in this paper.
The space $\PG(N,q)$ contains $\theta_{N,q}$ points and hyperplanes, and $\beta_{N,q}$ lines;
\begin{align}\label{eq1_theta_lambda}
\theta_{N,q}=\frac{q^{N+1}-1}{q-1} ,~\beta_{N,q}=\frac{(q^{N+1}-1)(q^{N+1}-q)}{(q^2-1)(q^2-q)}\,.
\end{align}
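For instance, for $q=5$ these formulas give $\theta_{3,5}=156$ points (and as many planes) and $\beta_{3,5}=806$ lines in $\PG(3,5)$.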
Let $\boldsymbol{\pi}(c_0,c_1,c_2,c_3)$ be the plane of $\PG(3,q)$ with equation
\begin{align}\label{eq2_plane}
c_0x_0+c_1x_1+c_2x_2+c_3x_3=0,~c_i\in\F_q.
\end{align}
We denote $\F_{q}^*=\F_{q}\setminus\{0\}$, $\F_q^+=\F_q\cup\{\infty\}$. Let $\Pf(x_0,x_1,x_2,x_3)\in\PG(3,q)$ be a point with homogeneous coordinates $x_i\in\F_{q}$.
Let $P(t)$ be a point with
\begin{align}\label{eq2:P(t)}
t\in\F_q^+;~ P(t)=\Pf(t^3,t^2,t,1)\text{ if }t\in\F_q;~~P(\infty)=\Pf(1,0,0,0).
\end{align}
Let $\C\subset\PG(3,q)$ be the \emph{twisted cubic} in the canonical form
\begin{align}\label{eq2_cubic}
&\C=\{P_1,P_2,\ldots,P_{q+1}\}=\{P(t)\,|\,t\in\F_q^+\},
\end{align}
where $P_1,\ldots,P_{q+1}$ are points no four of which are coplanar.
The \emph{osculating plane} $\pi_\T{osc}(t)$ at the point $P(t)\in\C$ has the form
\begin{align}\label{eq2_osc_plane}
&\pi_\T{osc}(t)=\boldsymbol{\pi}(1,-3t,3t^2,-t^3)\T{ if }t\in\F_q; ~\pi_\T{osc}(\infty)=\boldsymbol{\pi}(0,0,0,1).
\end{align}
The $q+1$ osculating planes form the \emph{osculating developable} $\mathrm{\Gamma}$ to $\C$.
For $q\equiv0\pmod3$, the osculating developable is a \emph{pencil of planes}.
\begin{definition}
\begin{description}
\item[(i)]
A \emph{chord} of $\C$ is a line through a pair of real points of $\C$ or a pair of complex conjugate points. If the real points coincide with each other, the chord is a \emph{tangent} to $\C$; if they are distinct, we have a \emph{real chord}. For a pair of complex conjugate points, we have an \emph{imaginary chord}.
\item[(ii)] An \emph{axis} of $\mathrm{\Gamma}$ is a line of $\PG(3,q)$ which is the intersection of a pair of real planes or complex conjugate planes of $\mathrm{\Gamma}$. If the real planes coincide with each other, the axis is a \emph{tangent} to $\C$; if they are distinct it is a \emph{real axis}. For complex conjugate planes, we have an \emph{imaginary axis}.
\end{description}
\end{definition}
The null polarity $\A$ \cite[Sections 2.1.5, 5.3]{Hirs_PGFF}, \cite[Theorem 21.1.2]{Hirs_PG3q} is given by
\begin{align}\label{eq2_null_pol}
&\Pf(x_0,x_1,x_2,x_3)\A=\boldsymbol{\pi}(x_3,-3x_2,3x_1,-x_0),~q\not\equiv0\pmod3.
\end{align}
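For example, applying \eqref{eq2_null_pol} to the point $P(t)$ of \eqref{eq2:P(t)} gives $P(t)\A=\boldsymbol{\pi}(1,-3t,3t^2,-t^3)=\pi_\T{osc}(t)$, cf. \eqref{eq2_osc_plane}; thus the null polarity $\A$ sends every point of $\C$ to the osculating plane at that point.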
\textbf{Notation 1}
~We consider $q\equiv\xi\pmod3$, $\xi\in\{-1,0,1\}$. Values depending on $\xi$ are
noted by remarks or by superscripts ``$(\xi)$''. The remarks and superscripts ``$(\xi)$'' are not used
if a value is the same for all $q$ or a property holds for all $q$, or it is not relevant, or it is clear by the context. If a value is the same for $\xi=-1,1$, then one may use the superscript ``$\ne0$''. Also, in superscripts, instead of ``$\bullet$'', one can write ``$\mathrm{ev}$'' for even $q$ or ``$\mathrm{od}$'' for odd $q$. If a value is the same for even and odd $q$, we may omit ``$\bullet$''.
The following notation is used.
\begin{align*}
&G_q && \T{the group of projectivities in } \PG(3,q) \T{ fixing }\C;\db \\
&\mathbf{Z}_n&&\T{cyclic group of order }n;\db \\
&\mathbf{S}_n&&\T{symmetric group of degree }n;\db \\
&A^{tr}&&\T{the transposed matrix of }A;\db \\
&\#S&&\T{the cardinality of a set }S;\db\\
&\overline{AB}&&\T{the line through the points $A$ and }B;\db\\
&\triangleq&&\T{the sign ``equality by definition"}.\db\\
&&&\T{\textbf{Types $\pi$ of planes:}}\db\\
&\mathrm{\Gamma}\T{-plane} &&\T{an osculating plane of }\mathrm{\Gamma};\db \\
&d_\C\T{-plane}&&\T{a plane containing \emph{exactly} $d$ distinct points of }\C,~d=0,2,3;\db \\
&\overline{1_\C}\T{-plane}&&\T{a plane not in $\mathrm{\Gamma}$ containing \emph{exactly} 1 point of }\C;\db \\
&\Pk&&\T{the list of possible types $\pi$ of planes},~\Pk\triangleq\{\mathrm{\Gamma},2_\C,3_\C,\overline{1_\C},0_\C\};\db\\
&\pi\T{-plane}&&\T{a plane of type }\pi\in\Pk; \db\\
&\N_\pi&&\T{the orbit of $\pi$-planes under }G_q,~\pi\in\Pk.\db\\
&&&\T{\textbf{Types $\pk$ of points with respect to the twisted cubic $\C$:}}\db\\
&\C\T{-point}&&\T{a point of }\C;\db\\
&\mu_\mathrm{\Gamma}\T{-point}&&\T{a point off $\C$ lying on \emph{exactly} $\mu$ distinct osculating planes;}\db\\
&\Tr\T{-point}&&\T{a point off $\C$ on a tangent to $\C$ for }\xi\ne0;\db\\
&\TO\T{-point}&&\T{a point off $\C$ on a tangent and one osculating plane for }\xi=0;\db\\
&\RC\T{-point}&&\T{a point off $\C$ on a real chord;}\db\\
&\IC\T{-point}&&\T{a point on an imaginary chord (it always is off $\C$);}\\
&\Mk^{(\xi)}&&\T{the list of possible types $\pk$ of points},\db\\
&&&\Mk^{(\ne0)}\triangleq\{\C,0_\mathrm{\Gamma},1_\mathrm{\Gamma},3_\mathrm{\Gamma},\Tr,\RC,\IC\},\db\\
&&&\Mk^{(0)}\triangleq\{\C,(q+1)_\mathrm{\Gamma},\TO,\RC,\IC\};\db\\
&\M_\pk^{(\xi)}&&\T{the orbit of $\pk$-points under }G_q,~\pk\in\Mk^{(\xi)}.\db\\
&&&\T{\textbf{Types $\lambda$ of lines with respect to the twisted cubic $\C$:}}\db\\
&\RC\T{-line}&&\T{a real chord of $\C$;}\db \\
&\RA\T{-line}&&\T{a real axis of $\mathrm{\Gamma}$ for }\xi\ne0;\db \\
&\Tr\T{-line}&&\T{a tangent to $\C$};\db \\
&\IC\T{-line}&&\T{an imaginary chord of $\C$;}\db \\
&\IA\T{-line}&&\T{an imaginary axis of $\mathrm{\Gamma}$ for }\xi\ne0;\db \\
&\UG\T{-line}&&\T{a non-tangent unisecant in a $\mathrm{\Gamma}$-plane;}\db \\
&\UnG\T{-line}&&\T{a unisecant not in a $\mathrm{\Gamma}$-plane (it is always non-tangent);}\db \\
&\EG\T{-line}&&\T{an external line in a $\mathrm{\Gamma}$-plane (it cannot be a chord);}\db \\
&\EnG\T{-line}&&\T{an external line, other than a chord, not in a $\mathrm{\Gamma}$-plane;}\db \\
&\Ar\T{-line}&&\T{the axis of the pencil of $\mathrm{\Gamma}$-planes for }\xi=0;\db\\
&\EA\T{-line}&&\T{an external line meeting the axis of $\mathrm{\Gamma}$ for }\xi=0;\db\\
&\Lk^{(\xi)}&&\T{the list of possible types $\lambda$ of lines},\db\\
&&&\Lk^{(\ne0)}\triangleq\{\RC,\RA,\Tr,\IC,\IA,\UG,\UnG,\EG,\EnG\}\T{ for }\xi\ne0,\db\\
&&&\Lk^{(0)}\triangleq\{\RC,\Tr,\IC,\UG,\UnG, \EnG,\Ar,\EA\}\T{ for }\xi=0;\db\\
&\lambda\T{-line}&&\T{a line of type }\lambda\in\Lk^{(\xi)};\db\\
&&&\textbf{Orbits of lines.\,Plane-line incidence matrix.}\, \pi\in\Pk,\lambda\in\Lk^{(\xi)}\db\\
&L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}&&\T{the total number of orbits of $\lambda$-lines};\db\\
&\OO_\lambda&&\T{the union (class) of all $L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}$ orbits of $\lambda$-lines};\\
&\OO_{\lambda_j}&&\T{the $j$-th orbit of the class }\OO_\lambda,~ j=1,\ldots,L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet},~\OO_\lambda=\bigcup_{j=1}^{L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}}\OO_{\lambda_j};\db\\
&\OO_{i_j}&&\T{the $j$-th orbit of the class }\OO_i;\db\\
&\lambda_j\T{-lines}&&\lambda\T{-lines forming the $j$-th orbit $\OO_{\lambda_j}$ of the class }\OO_{\lambda};\db\\
&\mathrm{\Lambda}_{\lambda_j,\pi}^{(\xi)\bullet}&&\T{the number of lines from an orbit $\OO_{\lambda_j}$ in a $\pi$-plane};\db\\
&\mathrm{\Lambda}_{\lambda,\pi}^{(\xi)\bullet}&&\T{the total number of $\lambda$-lines in a $\pi$-plane};\db\\
&\mathrm{\Pi}_{\pi,\lambda_j}^{(\xi)\bullet} &&\T{the exact number of $\pi$-planes through a line of an orbit $\OO_{\lambda_j}$};\db\\
&\mathrm{\Pi}_{\pi,\lambda}^{(\xi)\bullet}&&\T{the average number of $\pi$-planes through a $\lambda$-line over all the}\db\\
&&&\T{$\lambda$-lines; if the class $\OO_\lambda$ consists of \emph{a single orbit} then }\mathrm{\Pi}_{\pi,\lambda}^{(\xi)}\db\\
&&&\T{is \emph{the exact number} of $\pi$-planes through each $\lambda$-line};\db\\
&\I^{\mathrm{\Pi}\mathrm{\Lambda}}&&\T{the $\beta_{3,q}\times\theta_{3,q}$ plane-line incidence matrix of }\PG(3,q);\db\\
&\I_{\pi,\lambda}^{\mathrm{\Pi}\mathrm{\Lambda}}&&\T{the $\#\OO_\lambda\times\#\N_\pi$ submatrix of $\I^{\mathrm{\Pi}\mathrm{\Lambda}}$ with incidences between }\db\\
&&&\T{$\pi$-planes and $\lambda$-lines}; \db\\
&\I_{\pi,\lambda_j}^{\mathrm{\Pi}\mathrm{\Lambda}}&&\T{the $\#\OO_{\lambda_j}\times\#\N_\pi$ submatrix of $\I^{\mathrm{\Pi}\mathrm{\Lambda}}_{\pi,\lambda}$ with incidences between }\db\\
&&&\T{$\pi$-planes and $\lambda_j$-lines}.
\end{align*}
\begin{theorem}\label{th2_Hirs}
\emph{\cite[Chapter 21]{Hirs_PG3q}} The following properties of the twisted cubic $\C$ of \eqref{eq2_cubic} hold:
\begin{align}
&\T{(i)}\T{ The group $G_q$ acts triply transitively on }\C.\dbn\\
& \T{Also, }G_q\cong PGL(2,q)~\T{for }q\ge5;\dbn \\
&\phantom{\T{Also, }} G_4\cong\mathbf{S}_5\cong P\mathrm{\Gamma} L(2,4)\cong\mathbf{Z}_2PGL(2,4);~ G_3\cong\mathbf{S}_4\mathbf{Z}_2^3;~
G_2\cong\mathbf{S}_3\mathbf{Z}_2^3.\dbn\\
&\T{ The matrix $\MM$ corresponding to a projectivity of $G_q$ has the general form}\dbn\\
& \label{eq2_M} \mathbf{M}=\left[
\begin{array}{cccc}
a^3&a^2c&ac^2&c^3\\
3a^2b&a^2d+2abc&bc^2+2acd&3c^2d\\
3ab^2&b^2c+2abd&ad^2+2bcd&3cd^2\\
b^3&b^2d&bd^2&d^3
\end{array}
\right],~a,b,c,d\in\F_q,~ ad-bc\ne0.
\end{align}
(ii) (a) Under $G_q$, $q\ge5$, there are the following five orbits $\N_j$ of planes:
\begin{align}\label{eq2_plane orbit_gen}
&\N_1=\N_\mathrm{\Gamma}=\{\mathrm{\Gamma}\T{-planes}\},~~~\#\N_1=\#\N_\mathrm{\Gamma}=q+1;\db\\
&\N_{2}=\N_{2_\C}=\{2_\C\T{-planes}\},~\#\N_2=\#\N_{2_\C}=q^2+q;\dbn\\
&\N_{3}=\N_{3_\C}=\{3_\C\T{-planes}\},~\#\N_3=\#\N_{3_\C}=(q^3-q)/6;\dbn\\
&\N_{4}=\N_{\overline{1_\C}}=\{\overline{1_\C}\T{-planes}\},~\#\N_4=\#\N_{\overline{1_\C}}=(q^3-q)/2;\dbn\\
& \N_{5}=\N_{0_\C}=\{0_\C\T{-planes}\},~\#\N_5=\#\N_{0_\C}=(q^3-q)/3.\nt
\end{align}
(b) For $q\not\equiv0\pmod 3$, the five orbits $\M_j^{(\ne0)}$ of points are as follows:
\begin{align}\label{eq2_point_orbits_gen}
&\M_1^{(\ne0)}=\M_\C^{(\ne0)}=\{\C\T{-points}\},~\M_2^{(\ne0)}=\M_\Tr^{(\ne0)}=\{\Tr\T{-points}\},\db\\
&\M_3^{(\ne0)}=\M_{3_\mathrm{\Gamma}}^{(\ne0)}=\{3_\mathrm{\Gamma}\T{-points}\},~\M_4^{(\ne0)}=\M_{1_\mathrm{\Gamma}}^{(\ne0)}=\{1_\mathrm{\Gamma}\T{-points}\},\dbn\\
&~\M_5^{(\ne0)}=\M_{0_\mathrm{\Gamma}}^{(\ne0)}=\{0_\mathrm{\Gamma}\T{-points}\};~\#\M_j^{(\ne0)}=\#\N_j,~j=1,\ldots,5.\dbn\\
\label{eq2_=1_orbit_point}
&\T{For } q\equiv1\pmod 3,~ \M_{3_\mathrm{\Gamma}}^{(1)}\cup\M_{0_\mathrm{\Gamma}}^{(1)}=\{\RC\T{-points}\}, ~ \M_{1_\mathrm{\Gamma}}^{(1)}=\{\IC\T{-points}\};\db\\
&\T{for } q\equiv-1\pmod 3,~\M_{3_\mathrm{\Gamma}}^{(-1)}\cup\M_{0_\mathrm{\Gamma}}^{(-1)}=\{\IC\T{-points}\},~
\M_{1_\mathrm{\Gamma}}^{(-1)}=\{\RC\T{-points}\}.\nt
\end{align}
(c) For $q\equiv0\pmod 3$, the five orbits $\M_j^{(0)}$ of points are as follows:
\begin{align}\label{eq2_=0_orbit_point}
&\M_1^{(0)}=\M_\C^{(0)}=\{\C\T{-points}\},~\M_2^{(0)}=\M_{(q+1)_\mathrm{\Gamma}}^{(0)}=\{(q+1)_\mathrm{\Gamma}\T{-points}\},\db\\
&~\#\M_1^{(0)}=\#\M_\C^{(0)}=\#\M_2^{(0)}=\#\M_{(q+1)_\mathrm{\Gamma}}^{(0)}=q+1;\dbn\\
&~\M_3^{(0)}=\M_\TO^{(0)}=\{\TO\T{-points}\},~\#\M_3^{(0)}=\#\M_\TO^{(0)}=q^2-1;\dbn\\
&\M_4^{(0)}=\M_\RC^{(0)}=\{\RC\T{-points}\},~M_5^{(0)}=\M_\IC^{(0)}=\{\IC\T{-points}\},\dbn\\
&\#\M_4^{(0)}=\#\M_\RC^{(0)}=\#\M_5^{(0)}=\#\M_\IC^{(0)}=(q^3-q)/2.\nt
\end{align}
(iii) Let $q\not\equiv0\pmod3$. The null polarity $\A$ \eqref{eq2_null_pol} interchanges $\C$ and $\mathrm{\Gamma}$ and their corresponding chords and axes. We have
\begin{align}\label{eq2:MiU=Ni}
& \M_j^{(\ne0)}\A=\N_j,~j=1,\ldots,5; ~\M_\C^{(\ne0)}\A=\N_\mathrm{\Gamma},~\M_\Tr^{(\ne0)}\A=\N_{2_\C},\db\\
&\M_{3_\mathrm{\Gamma}}^{(\ne0)}\A=\N_{3_\C},~\M_{1_\mathrm{\Gamma}}^{(\ne0)}\A=\N_{\overline{1_\C}},~
\M_{0_\mathrm{\Gamma}}^{(\ne0)}\A=\N_{0_\C}.\nt
\end{align}
(iv) For all $q$, no two chords of $\C$ meet off $\C$.
Every point off $\C$ lies on exactly one chord of $\C$.
(v) Let $q\not\equiv0\pmod3$. No two axes of $\mathrm{\Gamma}$ meet unless they lie in the same plane of $\mathrm{\Gamma}$.
Every plane not in $\mathrm{\Gamma}$ contains exactly one axis of $\mathrm{\Gamma}$.
\end{theorem}
\subsection{Orbits of lines under the stabilizer group $G_q$ of the twisted cubic}
\begin{theorem} \label{th2:MAGMA}
\emph{\cite[Section 8]{DMP_OrbLine}}
Let $q\equiv\xi\pmod3$, $\xi\in\{1,-1,0\}$.
\begin{description}
\item[(i)] Let $5\le q\le 37$ and $q=64$. Then
$\mathbf{(a)}$ For the total number $L_{\EnG\mathrm{\Sigma}}^{(\xi)\bullet}$ of orbits of $\EnG$-lines we have
\begin{align}\label{eq2:L_EnG}
&L_{\EnG\mathrm{\Sigma}}^{(\xi)\od}=2q-3+\xi,~L_{\EnG\mathrm{\Sigma}}^{(\xi)\ev}=2q-2+\xi.
\end{align}
$\mathbf{(b)}$ The total number of line orbits in $\PG(3,q)$ is $2q+7+\xi$.
\item[(ii)] Let $q$ be odd, $5\le q\le 37$.
Then under $G_q$, for $\EnG$-lines, there are
\begin{align*}
& (q-\xi)/3&&\T{ orbits of length }&q^3-q,\db\\
& q-1&&\T{ orbits of length }&(q^3-q)/2,\db\\
& n_q^{(\xi)} &&\T{ orbits of length }& (q^3-q)/4,
\end{align*}
where $n_q^{(1)}=(2q-11)/3,~n_q^{(-1)}=(2q-10)/3,~
n_q^{(0)}=(2q-6)/3$.\\
In addition, for $q\in\{7,13,19,25,31,37\}$ where $q\equiv1\pmod3$, there are
one orbit of length $(q^3-q)/12$ and two orbits of length $(q^3-q)/3$.
\item[(iii)] Let $q=8,16,32,64$. Then under $G_q$, for $\EnG$-lines, there are
$2+\xi$ orbits of length $(q^3-q)/(2+\xi)$ and $2q-4$ orbits of length $(q^3-q)/2$.
\end{description}
\end{theorem}
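As a quick consistency check of Theorem \ref{th2:MAGMA}(ii), for odd $q\equiv1\pmod3$ the numbers of orbits listed there sum to $\frac{q-1}{3}+(q-1)+\frac{2q-11}{3}+1+2=2q-2=L_{\EnG\mathrm{\Sigma}}^{(1)\od}$, in agreement with \eqref{eq2:L_EnG}, while the corresponding orbit lengths sum to $(q^2-q)(q^2-1)=\#\OO_6$; analogous checks hold in the other cases.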
\begin{conjecture} \label{conj2:orbEnG} \cite{DMP_OrbLine}
The results of Theorem \ref{th2:MAGMA} hold for all $q\ge5$ with the corresponding parity and $\xi$ value.
\end{conjecture}
For odd $q\not\equiv0\pmod3$, the conjecture on \eqref{eq2:L_EnG} is given also in \cite{GulLav}.
The unions (classes) of line orbits are considered in \cite[Chapter 21]{Hirs_PG3q}; they are called $\OO_i$ and $\OO'_i=\OO_i\A$. In \cite{DMP_OrbLine} (for all $q\ge2$), \cite{BlokPelSzo} (for all $q\ge23$), and \cite{GulLav} (for odd $q\not\equiv0\pmod3$), these classes (apart from $\OO_6$) are investigated; the sizes and the structures of the orbits forming each class are obtained.
Theorem~\ref{th2:orbLine} and Table \ref{tab1} summarize some results from \cite{BlokPelSzo,DMP_OrbLine,GulLav,Hirs_PG3q} useful in this paper.
\begin{table}[h]
\caption{Unions (classes) $\OO_i$ and $\OO'_i=\OO_i\A$ of line orbits under $G_q$ in $\PG(3,q)$, $q\equiv\xi\pmod3$, $q\ge5$.
$\OO_i=\OO'_i, ~i=2,4,6$. $L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}$ is the total number of orbits in the class $\OO_\lambda$. $\#\OO_{\lambda_j}$ is the size of the $j$-th orbit of a class $\OO_\lambda$ consisting of 2 or 3 orbits}
\label{tab1}
\centering
\begin{tabular}{llcccccl}\hline\noalign{\smallskip}
&&content&&&&&$\#\OO_{\lambda_1}$\\
$\OO_i$&&of the&size of&&&&$\#\OO_{\lambda_2}$\\
$\OO'_i$&$\OO_\lambda$&class&the class&$\xi$&$L_{\lambda\mathrm{\Sigma}}^{(\xi)\od}$&$L_{\lambda\mathrm{\Sigma}}^{(\xi)\ev}$&$\#\OO_{\lambda_3}$
\\\noalign{\smallskip}\hline\noalign{\smallskip}
$\OO_1$&$\OO_\RC$&$\RC$-lines&$(q^2+q)/2$&any&$1$&$1$&\\
$\OO'_1$&$\OO_\RA$&$\RA$-lines&$(q^2+q)/2$&$\ne0$&$1$&$1$&\\
$\OO_2$&$\OO_\Tr$&$\Tr$-lines&$q+1$&any&$1$&$1$&\\
$\OO_3$&$\OO_\IC$&$\IC$-lines&$(q^2-q)/2$&any&$1$&$1$&\\
$\OO'_3$&$\OO_\IA$&$\IA$-lines&$(q^2-q)/2$&$\ne0$&$1$&$1$&\\
$\OO_4$&$\OO_\UG$&$\UG$-lines&$q^2+q$&any&$1$&$2$&$q+1$\\
&&&&&&&$q^2-1$\\
$\OO_5$&$\OO_\UnG$&$\UnG$-lines&$q^3-q$&any&$2$&$1$&$(q^3-q)/2$\\
&&&&&&&$(q^3-q)/2$\\
$\OO'_5$&$\OO_\EG$&$\EG$-lines&$q^3-q$&$\ne0$&$2$&$1$&$(q^3-q)/2$\\
&&&&&&&$(q^3-q)/2$\\
$\OO_6$&$\OO_\EnG$&$\EnG$-lines&$(q^2-q)(q^2-1)$&any&$L_{\EnG\mathrm{\Sigma}}^{(\xi)\od}$&$L_{\EnG\mathrm{\Sigma}}^{(\xi)\ev}$&\\
$\OO_7$&$\OO_\Ar$&$\Ar$-line&$1$&$0$&$1$&--&\\
$\OO_8$&$\OO_\EA$&$\EA$-lines&$(q+1)(q^2-1)$&$0$&$3$&--&$q^3-q$\\
&&&&&&&$(q^2-1)/2$\\
&&&&&&&$(q^2-1)/2$\\\noalign{\smallskip}\hline
\end{tabular}
\end{table}
In the last column of Table \ref{tab1}, the sizes of the orbits $\OO_{\lambda_j}$ of a class $\OO_\lambda$ consisting of 2 or 3 orbits are given from top to bottom,
e.g. for $\OO_\EA$ we have $\OO_{\EA_1}=q^3-q$, $\OO_{\EA_2}=(q^2-1)/2$, $\OO_{\EA_3}=(q^2-1)/2$.
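As a small check, these sizes sum to the size of the class: $(q^3-q)+2\cdot\frac{1}{2}(q^2-1)=(q+1)(q^2-1)=\#\OO_\EA$, in agreement with Table \ref{tab1}.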
\begin{theorem}\label{th2:orbLine} \emph{\cite{BlokPelSzo,DMP_OrbLine,GulLav,Hirs_PG3q}} Let $q\ge5$. The lines of $\PG(3,q)$ can be partitioned into classes called $\OO_i$ and $\OO'_i$, each of which is a union of orbits under $G_q$. The classification of the unions \emph{(}classes\emph{)} of line orbits is given in Table \emph{\ref{tab1}}.
If $q\not\equiv0\pmod3$ we have
\begin{align}
&\OO'_i=\OO_i\A,~\#\OO'_i=\#\OO_i,~i=1,\ldots,6;~\OO_i=\OO'_i, ~i=2,4,6;\label{eq2:O'=OU}\\
&\OO_\RA=\OO_\RC\A,~\OO_\IA=\OO_\IC\A,~\OO_\EG=\OO_\UnG\A,~\OO_\lambda=\OO_\lambda\A,\,\lambda\in\{\Tr,\UG,\EnG\}.\nt
\end{align}
\end{theorem}
In \cite[Theorem 3.2]{DMP_OrbLine}, the cases when Table \ref{tab1} holds for $q=2,3,4$ are noted.
\begin{theorem}\label{th2_null_pol} \emph{\cite[Theorem 4.3]{DMP_OrbLine}}
Let $q\not\equiv0\pmod 3$. Let $\LL$ be an orbit of lines under~$G_q$. Then $\LL\mathfrak{A}$ also is an orbit of lines under~$G_q$.
\end{theorem}
In \cite{GulLav}, the line orbits are denoted by $\LL_i$ and $\LL^\bot_i=\LL_i\A$, $i=1,\ldots,10$. We give the correspondence between $\LL_i$ and the notations of this paper (in \cite{DMP_OrbLine,DMP_PlLineInc} the notations are the same as in this paper).
\begin{align}\label{eq2:L_Lav}
&\LL_1=\OO'_1=\OO_\RA,~\LL_2=\OO_2=\OO'_2=\OO_\Tr,~\LL_3=\OO_4=\OO'_4=\OO_\UG,\db\\
&\LL_4=\OO'_{5_2}=\OO_{\EG_2},~\LL_5=\OO'_{5_1}=\OO_{\EG_1},~\LL_6=\LL^\bot_1=\OO_1=\OO_\RC,\dbn\\
&\LL_7=\LL^\bot_4=\OO_{5_2}=\OO_{\UnG_2},~
\LL_8=\LL^\bot_5=\OO_{5_1}=\OO_{\UnG_1},~\LL_9=\OO_3=\OO_\IC,\dbn\\
&\LL_{10}=\LL^\bot_9=\OO'_3=\OO_\IA.\nt
\end{align}
\subsection{The plane-line incidence matrix of $\PG(3,q)$}\label{subsec_incid}
The $\beta_{3,q}\times\theta_{3,q}$ plane-line incidence matrix $\I^{\mathrm{\Pi}\mathrm{\Lambda}}$ of $\PG(3,q)$ is considered in \cite{DMP_PlLineInc,GulLav}.
In \cite{DMP_PlLineInc}, all $q\ge2$ are considered, including even $q$ and $q\equiv0\pmod3$,
see \cite[Section 3]{DMP_PlLineInc}, where the results of the paper are summarized.
In $\I^{\mathrm{\Pi}\mathrm{\Lambda}}$, columns correspond to planes, rows correspond to lines, and there is an entry ``1'' if the corresponding line lies in the corresponding plane. In \cite{DMP_PlLineInc}, $\I^{\mathrm{\Pi}\mathrm{\Lambda}}$ is partitioned into $\#\OO_\lambda\times\#\N_\pi$ submatrices $\I_{\pi,\lambda}^{\mathrm{\Pi}\mathrm{\Lambda}}$, $\lambda\in\Lk^{(\xi)}$, $\pi\in\Pk$. If $L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}>1$, see Table~\ref{tab1}, then $\I_{\pi,\lambda}^{\mathrm{\Pi}\mathrm{\Lambda}}$ splits into $L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}$ submatrices
$\I_{\pi,\lambda_j}^{\mathrm{\Pi}\mathrm{\Lambda}}$, $j=1,\ldots,L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}$.
The values of $\mathrm{\Pi}_{\pi,\lambda}^{(\xi)\bullet}$, $\mathrm{\Lambda}_{\lambda,\pi}^{(\xi)\bullet}$ for all $L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}$ and $\mathrm{\Pi}_{\pi,\lambda_j}^{(\xi)\bullet}$, $\mathrm{\Lambda}_{\lambda_j,\pi}^{(\xi)\bullet}$ for $L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}=2,3$ are obtained in \cite{DMP_PlLineInc}
and collected in \cite[Tables 1, 2]{DMP_PlLineInc}. For the class $\OO_6$, the values of $\mathrm{\Pi}_{\pi,\lambda}^{(\xi)\bullet}$ are averaged over all $\EnG$-lines.
In \cite{GulLav}, for odd $q\not\equiv0\pmod3$, the ten orbits $\LL_i$, see \eqref{eq2:L_Lav}, are considered and the corresponding values $\mathrm{\Pi}_{\pi,\lambda}^{(\xi)\od}$ are obtained (they are denoted by $OD_2(\ell)$ and are called ``the plane orbit distribution of a line $\ell$"). These results of \cite{GulLav} are in accordance with the ones of \cite{DMP_PlLineInc}.
\subsection{The point-line incidence matrix of $\PG(3,q)$}\label{subsec_incid}
\textbf{Notation 2}
According to Notation 1, let $\pk$ and $\lambda$ be the type of a point and of a line and let $\Mk^{(\xi)}$ and $\Lk^{(\xi)}$ be the lists of the possible types. By default, $\pk\in\Mk^{(\xi)}$, $\lambda\in\Lk^{(\xi)}$. A $\pk$-point is a point of the orbit $\M_\pk$; a $\lambda$-line is a line of the class $\OO_\lambda$.
In addition to Notation 1, the following notation is used:
\begin{align*}
&\Lb_{\lambda_j,\pk}^{(\xi)\bullet}&&\T{the number of lines from an orbit $\OO_{\lambda_j}$ through a $\pk$-point};\db\\
&\Lb_{\lambda,\pk}^{(\xi)\bullet}&&\T{the total number of $\lambda$-lines through a $\pk$-point};\db\\
&\Pb_{\pk,\lambda_j}^{(\xi)\bullet} &&\T{the number of $\pk$-points on a line of an orbit }\OO_{\lambda_j};\db\\
&\Pb_{\pk,\lambda}^{(\xi)\bullet}&&\T{the average number of $\pk$-points on a $\lambda$-line over all the $\lambda$-lines};\db\\
&&&\T{if the class $\OO_\lambda$ consists of \emph{a single orbit} then }\Pb_{\pk,\lambda}^{(\xi)}\T{ is \emph{the exact number}}\db\\
&&&\T{of $\pk$-points on each $\lambda$-line};\db\\
&\I^{\Pb\Lb}&&\T{the $\beta_{3,q}\times\theta_{3,q}$ point-line incidence matrix of }\PG(3,q);\db\\
&\I_{\pk,\lambda}^{\Pb\Lb}&&\T{the $\#\OO_\lambda\times\#\M_\pk$ submatrix of $\I^{\Pb\Lb}$ with incidences between }\db\\
&&&\T{$\pk$-points and $\lambda$-lines}; \db\\
&\I_{\pk,\lambda_j}^{\Pb\Lb}&&\T{the $\#\OO_{\lambda_j}\times\#\M_\pk$ submatrix of $\I_{\pk,\lambda}^{\Pb\Lb}$ with incidences between }\db\\
&&&\T{$\pk$-points and $\lambda_j$-lines.}
\end{align*}
In $\I^{\Pb\Lb}$, columns correspond to points, rows correspond to lines, and there is an entry ``1'' if the corresponding point lies on the corresponding line. Every column and every row of $\I^{\Pb\Lb}$ contains $\theta_{2,q}$ and $\theta_{1,q}$ ones, respectively, as in $\PG(3,q)$, there are $\theta_{2,q}$ lines through every point and $\theta_{1,q}$ points in every line. Thus, $\I^{\Pb\Lb}$ is a tactical configuration \cite[Chapter 2.3]{Hirs_PGFF}, \cite[Chapter 7, Section~2]{Lidl_Nied}.
Moreover, $\I^{\Pb\Lb}$ gives a 2-$(\theta_{3,q},\theta_{1,q},1)$ design~\cite{HandbCombDes2v_k_lamb} since there is exactly one line through any two points.
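As a check, the standard counting identity $b=v(v-1)\lambda/(k(k-1))$ for a 2-$(v,k,\lambda)$ design, with $v=\theta_{3,q}$, $k=\theta_{1,q}=q+1$, $\lambda=1$, gives $b=\theta_{3,q}(\theta_{3,q}-1)/((q+1)q)=(q^4-1)(q^4-q)/((q-1)^2(q+1)q)=\beta_{3,q}$, in agreement with \eqref{eq1_theta_lambda}.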
\begin{definition}\label{def2_config}\cite{GroppConfig}
A configuration $(v_r,b_k)$ is an incidence structure of $v$ points and $b$ lines such that
each line contains $k$ points, each point lies on $r$ lines, and
two different points are connected by at most one line. If $v = b$ and, hence, $r = k$, the configuration is symmetric, denoted by $v_k$.
\end{definition}
\noindent For an introduction to configurations see \cite{DFGMP_SymConf,GroppConfig} and the references therein.
The transposition $(\I^{\Pb\Lb})^{tr}$ gives the $\theta_{3,q}\times\beta_{3,q}$ line-point incidence matrix. It can be viewed as a $(v_r,b_k)$ configuration with $v=\beta_{3,q}$, $b=\theta_{3,q}$, $r=\theta_{1,q}$, $k=\theta_{2,q}$, as there is at most one point as the intersection of two different lines.
In \cite{GulLav}, for odd $q\not\equiv0\pmod3$, the ten orbits $\LL_i$, see \eqref{eq2:L_Lav}, are considered and the corresponding values $\Pb_{\pk,\lambda}^{(\xi)\od}$, $\Pb_{\pk,\lambda_j}^{(\xi)\od}$ are obtained (they are denoted by $OD_0(\ell)$ and are called ``the point orbit distribution of a line $\ell$").
These results of \cite{GulLav} are also obtained in this paper, by a different method. In the process we also obtain the values of $\Pb_{\pk,\lambda}^{(\xi)\ev}$, $\Pb_{\pk,\lambda_j}^{(\xi)\ev}$ for even $q\not\equiv0\pmod3$ and the values of $\Lb_{\lambda,\pk}^{(\xi)\bullet}$, $\Lb_{\lambda_j,\pk}^{(\xi)\bullet}$ for all odd and even $q\not\equiv0\pmod3$, see the first two tables of Section \ref{sec_mainres} and Section~\ref{sec:results_q_ne0}.
Moreover, in this paper we also obtain the values $\Pb_{\pk,\lambda}^{(0)\od}$, $\Pb_{\pk,\lambda_j}^{(0)\od}$, $\Lb_{\lambda,\pk}^{(0)\od}$, and $\Lb_{\lambda_j,\pk}^{(0)\od}$ for $q\equiv0\pmod3$, see the last two tables of Section \ref{sec_mainres} and Section \ref{sec:results_q=0}.
As we mentioned above, for the class $\OO_6$, in this paper only average and cumulative results are obtained.
\section{The main results}\label{sec_mainres}
\begin{remark}
We call $\Pb_{\pk,\lambda}^{(\xi)\bullet}$ \emph{the average number} of $\pk$-points on a $\lambda$-line over all the $\lambda$-lines. If the class $\OO_\lambda$ of $\lambda$-lines consists of a single orbit, i.e. $L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}=1$, then $\Pb_{\pk,\lambda}^{(\xi)\bullet}$ is \emph{the exact number} of $\pk$-points on each $\lambda$-line, see Lemma \ref{lemma4_line&point}. The situation is always clear from the context.
If $L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}=1$ then $\Pb_{\pk,\lambda}^{(\xi)\bullet}$ is certainly an integer. If the $\lambda$-lines form two or more orbits, i.e. $L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}\ge2$, then $\Pb_{\pk,\lambda}^{(\xi)\bullet}$ may or may not be an integer.
On the other hand, regardless of the number of orbits in $\OO_\lambda$, for all pairs $(\pk,\lambda)$, we always have the same total number of $\lambda$-lines through each $\pk$-point, i.e. $\Lb_{\lambda,\pk}^{(\xi)\bullet}$ is always an integer, again see Lemma \ref{lemma4_line&point}.
\end{remark}
From now on, we consider $q\ge5$, apart from Theorem \ref{th3:q=2 3 4}. Theorem~\ref{th3:q=2 3 4} is obtained by an exhaustive computer search using the computer algebra system Magma \cite{Magma}.
Tables \ref{tab2}--\ref{tab5} and Theorem \ref{th3_main_res} summarize the results of Sections \ref{sec:useful}--\ref{sec:gen res} for $q\ge5$.
\begin{table}[htbp]
\caption{Values $\Pb_{\pk,\lambda}^{(\xi)}$ (top entry) and $\Lb_{\lambda,\pk}^{(\xi)}$ (bottom entry) for submatrices $\I_{\pk,\lambda}^{\Pb\Lb}$ of the point-line incidence matrix of $\PG(3,q)$, $q\equiv\xi\pmod3$, $\xi\in\{1,-1\}$, $q\ge5$, $\pk\in\Mk^{(\ne0)}$, $\lambda\in\Lk^{(\ne0)}$. The superscript~$(\xi)$ is $(\ne0)$ if a value is the same for all~$q\not\equiv0\pmod3$}
\label{tab2}
\centering
\begin{tabular}{lccccccc}\hline
&&&$\M_1^{(\ne0)}$&$\M_2^{(\ne0)}$&$\M_3^{(\ne0)}$&$\M_4^{(\ne0)}$&$\M_5^{(\ne0)}$\\
&&&$\C\T{-}$&$\Tr\T{-}$&$3_\mathrm{\Gamma}\T{-}$&$1_\mathrm{\Gamma}\T{-}$&$0_\mathrm{\Gamma}\T{-}$\\
$\OO_j$&$\lambda\T{-lines}$&$\Pb_{\pk,\lambda}^{(\xi)}$&points&points&points&points&points\\
$\OO'_j$&$\#\OO_\lambda$&$\Lb^{(\xi)}_{\lambda,\pk}$&$q+1$&$q^2+q$&$\frac{1}{6}(q^3-q)$&$ \frac{1}{2}(q^3-q)$&$\frac{1}{3}(q^3-q)$\\\hline
$\OO_1$&$\RC\T{-lines}$&$\Pb_{\pk,\RC}^{(1)}$&$2$&$0$&$\frac{1}{3}(q-1)$&$0$&$\frac{2}{3}(q-1)$\\
&$\frac{1}{2}(q^2+q)$&$\Lb^{(1)}_{\RC,\pk}$&$q$&$0$&$ 1$&$ 0$&$ 1$\\\hline
$\OO_1$&$\RC\T{-lines}$&$\Pb_{\pk,\RC}^{(-1)}$&$2$&$0$&$0$&$ q-1$&$0$\\
&$\frac{1}{2}(q^2+q)$&$\Lb^{(-1)}_{\RC,\pk}$&$q$&$0$&$ 0$&$ 1$&$ 0$\\\hline
$\OO'_1$&$\RA\T{-lines}$&$\Pb_{\pk,\RA}^{(\ne0)}$&$0$&$2$&$q-1$&$0$&$0$\\
$$&$\frac{1}{2}(q^2+q)$&$\Lb_{\RA,\pk}^{(\ne0)}$&$0$&$1$&$3$&$0$&$0$\\\hline
$\OO_2$&$\Tr\T{-lines}$&$\Pb_{\pk,\Tr}^{(\ne0)}$&$1$&$q$&$0$&$0$&$0$\\
$\OO'_2$&$q+1$&$\Lb_{\Tr,\pk}^{(\ne0)}$&$1$&$1$&$0$&$0$&$0$\\\hline
$\OO_3$&$\IC\T{-lines}$&$\Pb^{(1)}_{\pk,\IC}$&$0$&$0$&$ 0$&$ q+1$&$ 0$\\
&$\frac{1}{2}(q^2-q)$&$\Lb^{(1)}_{\IC,\pk}$&$0$&$0$&$ 0$&$ 1$&$ 0$\\\hline
$\OO_3$&$\IC\T{-lines}$&$\Pb^{(-1)}_{\pk,\IC}$&$0$&$0$&$\frac{1}{3}(q+1)$&$ 0$&$\frac{2}{3}(q+1)$\\
$$&$\frac{1}{2}(q^2-q)$&$\Lb^{(-1)}_{\IC,\pk}$&$0$&$0$&$ 1$&$ 0$&$ 1$\\\hline
$\OO'_3$&$\IA\T{-lines}$&$\Pb_{\pk,\IA}^{(\ne0)}$&$0$&$0$&$0$&$q+1$&$0$\\
&$\frac{1}{2}(q^2-q)$&$\Lb_{\IA,\pk}^{(\ne0)}$&$0$&$0$&$0$&$1$&$0$\\\hline
$\OO_4$&$\UG\T{-lines}$&$\Pb_{\pk,\UG}^{(\ne0)}$&$1$&$1$&$\frac{1}{2}(q-1)$&$\frac{1}{2}(q-1)$&$0$\\
$\OO'_4$&$q^2+q$&$\Lb_{\UG,\pk}^{(\ne0)}$&$q$&$1$&$3$&$1$&$0$\\\hline
$\OO_5$&$\UnG\T{-lines}$&$\Pb^{(1)}_{\pk,\UnG}$&$1$&$1$&$ \frac{1}{6}(q-4)$&$ \frac{1}{2}q$&$\frac{1}{3} (q-1)$\\
&$q^3-q$&$\Lb^{(1)}_{\UnG,\pk}$&$q^2-q$&$q-1$&$ q-4$&$ q$&$ q-1$\\\hline
$\OO_5$&$\UnG\T{-lines}$&$\Pb^{(-1)}_{\pk,\UnG}$&$1$&$1$&$\frac{1}{6}(q-2)$&$ \frac{1}{2}(q-2)$&$\frac{1}{3}(q+1)$\\
&$q^3-q$&$\Lb^{(-1)}_{\UnG,\pk}$&$q^2-q$&$q-1$&$ q-2$&$ q-2$&$ q+1$\\\hline
$\OO'_5$&$\EG\T{-lines}$&$\Pb_{\pk,\EG}^{(\ne0)}$&$0$&$2$&$ \frac{1}{2}(q-2)$&$ \frac{1}{2}q$&$0$\\
&$q^3-q$&$\Lb_{\EG,\pk}^{(\ne0)}$&$0$&$2(q-1)$&$ 3(q-2)$&$ q$&$0$\\\hline
$\OO_6$&$\EnG\T{-lines}$&$\Pb^{(1)}_{\pk,\EnG}$&$0$&$1$&$ \frac{q^2-3q+4}{6(q-1)}$&$ \frac{(q+1)(q-2)}{2(q-1)}$&$ \frac{q^2+1}{3(q-1)}$\\
$\OO'_6$&$(q^2-q)\cdot$&$\Lb^{(1)}_{\EnG,\pk}$&$0$&$(q-1)^2$&$q^2- $&$(q+1)\cdot$&$q^2+1$\\
&$(q^2-1)$&&&&$3q+4$&$(q-2)$&\\\hline
$\OO_6$&$\EnG\T{-lines}$&$\Pb^{(-1)}_{\pk,\EnG}$&$0$&$1$&$ \frac{1}{6}(q-2)$&$ \frac{1}{2}q$&$ \frac{1}{3}(q+1)$\\
$\OO'_6$&$(q^2-q)\cdot$&$\Lb^{(-1)}_{\EnG,\pk}$&$0$&$(q-1)^2$&$(q-1)\cdot$&$ q^2-q$&$ q^2-1$\\
&$(q^2-1)$&&&&$(q-2)$&&\\\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{Values $\Pb_{\pk,\lambda_j}^{(\xi)\bullet}$ (top entry) and
$\Lb_{\lambda_j,\pk}^{(\xi)\bullet}$ (bottom entry) for submatrices $\I_{\pk,\lambda_j}^{\Pb\Lb}$ of the point-line incidence matrix of $\PG(3,q), q\ge5,~\pk\in\Mk^{(\ne0)}$; $j=1,2$; $\lambda=\UG$ with even $q\not\equiv0\pmod3$ ($\UG_1$- and $\UG_2$-lines);
$\lambda=\UnG$ with odd $q\equiv\xi\pmod3$, $\xi\in\{1,-1\}$ ($\UnG_1$- and $\UnG_2$-lines for $\xi=1$ and $\xi=-1$);
$\lambda=\EG$ with odd $q\not\equiv0\pmod3$ ($\EG_1$- and $\EG_2$-lines)}
\label{tab3}
\centering
\begin{tabular}{lccccccc}\hline
&&&$\M_1^{(\ne0)}$&$\M_2^{(\ne0)}$&$\M_3^{(\ne0)}$&$\M_4^{(\ne0)}$&$\M_5^{(\ne0)}$\\
&&&$\C\T{-}$&$\Tr\T{-}$&$3_\mathrm{\Gamma}\T{-}$&$1_\mathrm{\Gamma}\T{-}$&$0_\mathrm{\Gamma}\T{-}$\\
$\OO_{i_j}$&$\lambda_j\T{-lines}$&$\Pb_{\pk,\lambda_j}^{(\xi)}$&$\T{points}$&$\T{points}$&$\T{points}$&$\T{points}$&$\T{points}$\\
$\OO'_{i_j}$&$\#\OO_{\lambda_j}$&$\Lb^{(\xi)}_{\lambda_j,\pk}$&$q+1$&$q^2+q$&$\frac{q^3-q}{6}$&$ \frac{q^3-q}{2}$&$\frac{q^3-q}{3}\vphantom{H_{H_H}}$\\\hline
$\OO_{4_1}$&$\UG_1\T{-lines}$&$\Pb_{\pk,\UG_1}^{(\ne0)\mathrm{ev}}$&$1$&$q$&$ 0$&$ 0$&$0$\\
&$q+1$&$\Lb_{\UG_1,\pk}^{(\ne0)\mathrm{ev}}$&$1$&$1$&$ 0$&$ 0$&$0$\\\hline
$\OO_{4_2}$&$\UG_2\T{-lines}$&$\Pb_{\pk,\UG_2}^{(\ne0)\mathrm{ev}}$&$1$&$0$&$ \frac{1}{2}q$&$\frac{1}{2}q$&$0$\\
&$q^2-1$&$\Lb_{\UG_2,\pk}^{(\ne0)\mathrm{ev}}$&$q-1$&$0$&$ 3$&$ 1$&$0$\\\hline
$\OO_{5_1}$&$\UnG_1\T{-}$&$\Pb^{(1)\mathrm{od}}_{\pk,\UnG_1}$&$1$&$0$&$ \frac{1}{6}(q-1)$&$ \frac{1}{2}(q+1)$&$ \frac{1}{3}(q-1)$\\
$\xi=$&lines&&&&&&\\
$1$&$\frac{1}{2}(q^3-q)$&$\Lb^{(1)\mathrm{od}}_{\UnG_1,\pk}$&$\frac{1}{2}(q^2-q)$&$0
$&$\frac{1}{2}(q-1)$&$\frac{1}{2}(q+1)$&$\frac{1}{2}(q-1)$\\\hline
$\OO_{5_2}$&$\UnG_2\T{-}$&$\Pb^{(1)\mathrm{od}}_{\pk,\UnG_2}$&$1$&$2$&$\frac{1}{6}(q-7)$&$\frac{1}{2}(q-1)$&$\frac{1}{3}(q-1)$\\
$\xi=$&lines&&&&&&\\
$1$&$\frac{1}{2}(q^3-q)$&$\Lb^{(1)\mathrm{od}}_{\UnG_2,\pk}$&$\frac{1}{2}(q^2-q)$&$q-1
$&$\frac{1}{2}(q-7)$&$\frac{1}{2}(q-1)$&$\frac{1}{2}(q-1)$\\\hline
$\OO_{5_1}$&$\UnG_1\T{-}$&$\Pb^{(-1)\mathrm{od}}_{\pk,\UnG_1}$&$1$&$0$&$ \frac{1}{6}(q+1)$&$ \frac{1}{2}(q-1)$&$ \frac{1}{3}(q+1)$\\
$\xi=$&lines&&&&&&\\
$-1$&$\frac{1}{2}(q^3-q)$&$\Lb^{(-1)\mathrm{od}}_{\UnG_1,\pk}$&$\frac{1}{2}(q^2-q)$&$0
$&$\frac{1}{2}(q+1)$&$\frac{1}{2}(q-1)$&$\frac{1}{2}(q+1)$\\\hline
$\OO_{5_2}$&$\UnG_2\T{-}$&$\Pb^{(-1)\mathrm{od}}_{\pk,\UnG_2}$&$1$&$2$&$\frac{1}{6}(q-5)$&$\frac{1}{2}(q-3)$&$ \frac{1}{3}(q+1)$\\
$\xi=$&lines&&&&&&\\
$-1$&$\frac{1}{2}(q^3-q)$&$\Lb^{(-1)\mathrm{od}}_{\UnG_2,\pk}$&$\frac{1}{2}(q^2-q)$&$q-1
$&$\frac{1}{2}(q-5)$&$\frac{1}{2}(q-3)$&$ \frac{1}{2}(q+1)$\\\hline
$\OO'_{5_1}$&$\EG_1\T{-lines}$&$\Pb_{\pk,\EG{_1}}^{(\ne0)\mathrm{od}}$&$0$&$1$&$ \frac{1}{2}(q-1)$&$ \frac{1}{2}(q+1)$&$0$\\
&$\frac{1}{2}(q^3-q)$&$\Lb_{\EG{_1},\pk}^{(\ne0)\mathrm{od}}$&$0$&$\frac{1}{2}(q-1)$&$ \frac{3}{2}(q-1)$&$ \frac{1}{2}(q+1)$&$0$\\\hline
$\OO'_{5_2}$&$\EG_2\T{-lines}$&$\Pb_{\pk,\EG{_2}}^{(\ne0)\mathrm{od}}$&$0$&$3$&$ \frac{1}{2}(q-3)$&$\frac{1}{2}( q-1)$&$0$\\
&$\frac{1}{2}(q^3-q)$&$\Lb_{\EG{_2},\pk}^{(\ne0)\mathrm{od}}$&$0$&$\frac{3}{2}(q-1)$&$ \frac{3}{2}(q-3)$&$ \frac{1}{2}(q-1)$&$0$\\\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{Values $\Pb_{\pk,\lambda}^{(0)}$ (top entry) and $\Lb_{\lambda,\pk}^{(0)}$ (bottom entry) for submatrices $\I_{\pk,\lambda}^{\Pb\Lb}$ of the point-line incidence matrix $\I^{\Pb\Lb}$ of $\PG(3,q)$, $q\equiv0\pmod3$, $q\ge9$, $\lambda\in\Lk^{(0)}$, $\pk\in\Mk^{(0)}$}
\label{tab4}
\centering
\begin{tabular}{cccccccc}\hline
&&&$\M_1^{(0)}$&$\M_2^{(0)}$&$\M_3^{(0)}$&$\M_4^{(0)}$&$\M_5^{(0)}$\\
&&&$\C\T{-}$&$(q+1)_\mathrm{\Gamma}\T{-}$&$\TO\T{-}$&$\RC\T{-}$&$\IC\T{-}$\\
&$\lambda\T{-lines}$&$\Pb_{\pk,\lambda}^{(0)}$&$\,\T{points}$&$\T{points}$&$\T{points}$&$\T{points}$&$\T{points}$\\
$\OO_j$&$\#\OO_\lambda$&$\Lb^{(0)}_{\lambda,\pk}$&$q+1$&$
q+1$&$q^2-1$&$ \frac{1}{2}(q^3-q)$&$\frac{1}{2}(q^3-q)$\\\hline
$\OO_1$&$\RC\T{-lines}$&$\Pb^{(0)}_{\pk,\RC}$&$2$&$0$&$0$&$q-1$&$0$\\
&$\frac{1}{2}(q^2+q)$&$\Lb^{(0)}_{\RC,\pk}$&$q$&$0$&$0$&$1$&$0$\\\hline
$\OO_2$&$\Tr\T{-lines}$&$\Pb^{(0)}_{\pk,\Tr}$&$1$&$1$&$q-1$&$0$&$0$\\
&$q+1$&$\Lb^{(0)}_{\Tr,\pk}$&$1$&$1$&$1$&$0$&$0$\\\hline
$\OO_3$&$\IC\T{-lines}$&$\Pb^{(0)}_{\pk,\IC}$&$0$&$0$&$0$&$0$&$q+1$\\
&$\frac{1}{2}(q^2-q)$&$\Lb^{(0)}_{\IC,\pk}$&$0$&$0$&$0$&$0$&$1$\\\hline
$\OO_4$&$\UG\T{-lines}$&$\Pb^{(0)}_{\pk,\UG}$&$1$&$1$&$0$&$\frac{1}{2}(q-1)$&$\frac{1}{2}(q-1)$\\
&$q^2+q$&$\Lb^{(0)}_{\UG,\pk}$&$q$&$q$&$0$&$1$&$1$\\\hline
$\OO_5$&$\UnG\T{-lines}$&$\Pb^{(0)}_{\pk,\UnG}$&$1$&$0$&$1$&$\frac{1}{2}(q-2)$&$\frac{1}{2}q$\\
&$q^3-q$&$\Lb^{(0)}_{\UnG,\pk}$&$q^2-q$&$0$&$q$&$q-2$&$q$\\\hline
$\OO_6$&$\EnG\T{-lines}$&$\Pb^{(0)}_{\pk,\EnG}$&$0$&$0$&$1$&$\frac{q^2-q+1}{2(q-1)}$&$\frac{q^2-q-1}{2(q-1)}$\\
&$(q^2-q)\cdot$&$\Lb^{(0)}_{\EnG,\pk}$&$0$&$0$&$q^2-q$&$q^2-q+1$&$q^2-q-1$\\
&$(q^2-1)$&&&&&\\\hline
$\OO_7$&$\Ar\T{-lines}$&$\Pb_{\pk,\Ar}^{(0)}$&$0$&$q+1$&$0$&$0$&$0$\\
&$1$&$\Lb^{(0)}_{\Ar,\pk}$&$0$&$1$&$0$&$0$&$0$\\\hline
$\OO_8$&$\EA\T{-lines}$&$\Pb_{\pk,\EA}^{(0)}$&$0$&$1$&$\frac{q}{q+1}$&$\frac{q^2}{2(q+1)}$&$\frac{q^2}{2(q+1)}$\\
&$(q+1)\cdot$&$\Lb^{(0)}_{\EA,\pk}$&$0$&$q^2-1$&$q$&$q$&$q$\\
&$(q^2-1)$&&&&\\\hline
\end{tabular}
\end{table}
\begin{table}[h]
\caption{Values $\Pb_{\pk,\lambda_j}^{(0)}$ (top entry) and $\Lb_{\lambda_j,\pk}^{(0)}$ (bottom entry) for submatrices $\I_{\pk,\lambda_j}^{\Pb\Lb}$ of the point-line incidence matrix $\I^{\Pb\Lb}$ of $\PG(3,q)$, $q\equiv0\pmod3$, $q\ge9$, $\pk\in\Mk^{(0)}$, $\lambda\in\{\UnG,\EA\}$, $j=1,2$ if $\lambda=\UnG$, $j=1,2,3$ if $\lambda=\EA $}
\label{tab5}
\centering
\begin{tabular}{cccccccc}\hline
&&&$\M_1^{(0)}$&$\M_2^{(0)}$&$\M_3^{(0)}$&$\M_4^{(0)}$&$\M_5^{(0)}$\\
&&&$\C\T{-}$&$(q+1)_\mathrm{\Gamma}\T{-}$&$\TO\T{-}$&$\RC\T{-}$&$\IC\T{-}$\\
&$\lambda_j\T{-lines}$&$\Pb_{\pk,\lambda}^{(0)}$&$\,\T{points}$&$\T{points}$&$\T{points}$&$\T{points}$&$\T{points}$\\
$\OO_{i_j}$&$\#\OO_{\lambda_j}$&$\Lb^{(0)}_{\lambda,\pk}$&$q+1$&$
q+1$&$q^2-1$&$\frac{q^3-q}{2}$&$\frac{q^3-q}{2}$\\\hline
$\OO_{5_1}$&$\UnG_1\T{-}$&$\Pb^{(0)}_{\pk,\UnG_1}$&$1$&$0$&$0$&$\frac{1}{2}(q-1)$&$\frac{1}{2}(q+1)$\\
$\xi=$&lines&&&&&&\\
$0$&$\frac{1}{2}(q^3-q)$&$\Lb^{(0)}_{\UnG_1,\pk}$&$\frac{1}{2}(q^2-q)$&$0$&$0$&$\frac{1}{2}(q-1)$&$\frac{1}{2}(q+1)$\\\hline
$\OO_{5_2}$&$\UnG_2\T{-}$&$\Pb^{(0)}_{\pk,\UnG_2}$&$1$&$0$&$2$&$\frac{1}{2}(q-3)$&$\frac{1}{2}(q-1)$\\
$\xi=$&lines&&&&&&\\
$0$&$\frac{1}{2}(q^3-q)$&$\Lb^{(0)}_{\UnG_2,\pk}$&$\frac{1}{2}(q^2-q)$&$0$&$q$&$\frac{1}{2}(q-3)$&$\frac{1}{2}(q-1)$\\\hline
$\OO_{8_1}$&$\EA_1\T{-lines}$&$\Pb_{\pk,\EA_1}^{(0)}$&$0$&$1$&$1$&$\frac{1}{2}(q-1)$&$\frac{1}{2}(q-1)$\\
&$q^3-q$&$\Lb^{(0)}_{\EA_1,\pk}$&$0$&$q^2-q$&$q$&$q-1$&$q-1$\\\hline
$\OO_{8_2}$&$\EA_2\T{-lines}$&$\Pb_{\pk,\EA_2}^{(0)}$&$0$&$1$&$0$&$q$&$0$\\
&$\frac{1}{2}(q^2-1)$&$\Lb^{(0)}_{\EA_2,\pk}$&$0$&$\frac{1}{2}(q-1)$&$0$&$1$&$0$\\\hline
$\OO_{8_3}$&$\EA_3\T{-lines}$&$\Pb_{\pk,\EA_3}^{(0)}$&$0$&$1$&$0$&$0$&$q$\\
&$\frac{1}{2}(q^2-1)$&$\Lb^{(0)}_{\EA_3,\pk}$&$0$&$\frac{1}{2}(q-1)$&$0$&$0$&$1$\\\hline
\end{tabular}
\end{table}
For the point-line incidence matrix $\I^{\Pb\Lb}$ of $\PG(3,q)$,
$q\equiv\xi\pmod3$, Tables~\ref{tab2} (for $q\not\equiv0\pmod3$) and \ref{tab4} (for $q\equiv0\pmod3$) show the values $\Pb_{\pk,\lambda}^{(\xi)}$ (top entry) and $\Lb_{\lambda,\pk}^{(\xi)}$ (bottom entry) for each pair $(\pk,\lambda)$, $\pk\in\Mk^{(\xi)}$, $\lambda\in\Lk^{(\xi)}$, where $\Pb_{\pk,\lambda}^{(\xi)}$ is the exact (if $L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}=1$) or average (if $L_{\lambda\mathrm{\Sigma}}^{(\xi)\bullet}\ge2$) number of $\pk$-points on every $\lambda$-line, whereas $\Lb_{\lambda,\pk}^{(\xi)}$ always is the exact number of $\lambda$-lines through every $\pk$-point. In other words, $\Pb_{\pk,\lambda}^{(\xi)}$ is the exact or average number of ones in every row of the submatrix $\I^{\Pb\Lb}_{\pk,\lambda}$ of $\I^{\Pb\Lb}$, whereas $\Lb_{\lambda,\pk}^{(\xi)}$ always is the exact number of ones in every column of $\I^{\Pb\Lb}_{\pk,\lambda}$. In Table \ref{tab2}, the superscript $(\xi)$ is $(\ne0)$ for $\lambda\in\{\RA,\Tr,\IA,\UG,\EG\}$ where the values $\Pb_{\pk,\lambda}^{(\xi)}$, $\Lb_{\lambda,\pk}^{(\xi)}$ are the same for all~$q\not\equiv0\pmod3$.
The total number of orbits of $\lambda$-lines is given in Table~\ref{tab1}.
In Table \ref{tab3}, the values $\Pb_{\pk,\lambda_j}^{(\xi)\bullet}$ and $\Lb_{\lambda_j,\pk}^{(\xi)\bullet}$ are given for the following cases:
$q\ge5,~\pk\in\Mk^{(\ne0)}$; $\lambda=\UG$ with even $q\not\equiv0\pmod3$ ($\UG_1$- and $\UG_2$-lines);
$\lambda=\UnG$ with odd $q\equiv\xi\pmod3$, $\xi\in\{1,-1\}$ ($\UnG_1$- and $\UnG_2$-lines for $\xi=1$ and $\xi=-1$);
$\lambda=\EG$ with odd $q\not\equiv0\pmod3$ ($\EG_1$- and $\EG_2$-lines).
In Table \ref{tab5}, the values $\Pb_{\pk,\lambda_j}^{(0)}$ and $\Lb_{\lambda_j,\pk}^{(0)}$ are given for the following cases:
$q\equiv0\pmod3$, $q\ge9$, $\pk\in\Mk^{(0)}$, $\lambda\in\{\UnG,\EA\}$, $j=1,2$ if $\lambda=\UnG$, $j=1,2,3$ if $\lambda=\EA $.
\begin{theorem}\label{th3_main_res}
Let $q\ge5$, $q\equiv\xi\pmod3$. Let notations be as in Section $\ref{sec_prelimin}$ and Notations~$1, 2$. The following holds:
\begin{description}
\item[(i)] In $\PG(3,q)$, for the submatrices $\I^{\Pb\Lb}_{\pk,\lambda}$ of the point-line incidence matrix $\I^{\Pb\Lb}$, the values $\Pb_{\pk,\lambda}^{(\xi)}$ (i.e. the exact or average number of $\pk$-points on a $\lambda$-line) and $\Lb_{\lambda,\pk}^{(\xi)}$ (i.e. the exact number of $\lambda$-lines through a $\pk$-point) are given in Table $\ref{tab2}$ (for $\xi\ne0$) and Table $\ref{tab4}$ (for $\xi=0$).
For the submatrices $\I^{\Pb\Lb}_{\pk,\lambda_j}$ corresponding to each of two orbits of the classes $\OO_4=\OO_\UG$, $\OO_5=\OO_\UnG $, and $\OO'_5=\OO_\EG$, the values $\Pb_{\pk,\lambda_j}^{(\xi)\bullet}$, $\Lb_{\lambda_j,\pk}^{(\xi)\bullet}$ are given in Table $\ref{tab3}$ (for $\xi\ne0$). For the submatrices $\I^{\Pb\Lb}_{\pk,\lambda_j}$ corresponding to each of two orbits of the class $\OO_5=\OO_\UnG $ and to each of three orbits of the class $\OO_8=\OO_\EA$, the values $\Pb_{\pk,\lambda_j}^{(0)}$, $\Lb_{\lambda_j,\pk}^{(0)}$ are given in Table $\ref{tab5}$ (for $\xi=0$).
\item[(ii)] Let a class $\OO_\lambda$ consist of a single
orbit according to Table $\ref{tab1}$. Then, in Tables $\ref{tab2}$ and $\ref{tab4}$, the values of $\Pb_{\pk,\lambda}^{(\xi)}$, $\pk\in\Mk^{(\xi)}$, are the \emph{exact numbers} of $\pk$-points on every $\lambda$-line.
\item[(iii)] Let $q\equiv1\pmod3$. Let $V^{(1)}=\{\OO_1=\OO_\RC,\OO_2=\OO_\Tr,\OO'_3=\OO_\IA\}$. Then, cf. Theorem $\ref{th2_Hirs}$(iv), no two lines of $V^{(1)}$ meet off $\C$. Every point off $\C$ lies on exactly one line of~$V^{(1)}$.
\item[(iv)]
Let $q\equiv0\pmod3$. Let $\W^{(0)}=\{\OO_2=\OO_\Tr,\OO_4=\OO_\UG\}$. Let $\mathbb{M}=\C\cup\Ar$-line be the union of the twisted cubic and the $\Ar$-line. Then
no two lines of $\W^{(0)}$ meet off $\mathbb{M}$.
Every point off $\mathbb{M}$ lies on exactly one line of~$\W^{(0)}$, cf. Theorems $\ref{th2_Hirs}$(iv) and $\ref{th3_main_res}$(iii).
\item[(v)] Let $\pk\in\Mk^{(\xi)}$. Let a class $\OO_\lambda$ consist of a single orbit.
Then the submatrix $\I^{\Pb\Lb}_{\pk,\lambda}$ of $\I^{\Pb\Lb}$ is
a $(v_r,b_k)$ configuration of Definition \emph{\ref{def2_config}} with $v=\#\M_\pk$, $b=\#\OO_\lambda$, $r=\Lb_{\lambda,\pk}^{(\xi)}$, $k=\Pb_{\pk,\lambda}^{(\xi)}$. Also, up to rearrangement of rows and columns, the submatrices $\I^{\Pb\Lb}_{\pk,\lambda}$ with $\Lb_{\lambda,\pk}^{(\xi)}=1$ can be viewed as a concatenation of $\Pb_{\pk,\lambda}^{(\xi)}$ identity matrices of order $\#\OO_\lambda$. The same holds for the submatrices~$\I^{\Pb\Lb}_{\pk,\lambda_j}$.
\item[(vi)] Let $(\lambda,\pk)\in\{(\UG,\C), (\UnG,\C)\}$ if $\xi\ne0$, and
$(\lambda,\pk)\in\{(\UnG,\C),$ $ (\EA,(q+1)_\mathrm{\Gamma})\}$ if $\xi=0$. Then, independently of the number of orbits in the class $\OO_\lambda$, we have exactly one $\pk$-point on every $\lambda$-line. Up to rearrangement of rows and columns, the submatrices $\I^{\Pb\Lb}_{\pk,\lambda}$ can be viewed as a vertical concatenation of\/ $\Lb_{\lambda,\pk}^{(\xi)}$ identity matrices of order $\#\M_\pk$.
\end{description}
\end{theorem}
\begin{theorem}\label{th3:q=2 3 4}
Let the types of lines and points be as in Tables $\ref{tab1}-\ref{tab5}$.
\begin{description}
\item[(i)] Let $q=2$. The group $G_2\cong\mathbf{S}_3\mathbf{Z}_2^3$ contains $8$ subgroups isomorphic to $PGL(2,2)$ divided into two conjugacy classes. For one of these subgroups, the matrices corresponding to the projectivities of the subgroup assume the form described by \eqref{eq2_M}. For line and point orbits under this subgroup (and only under it) the point-line incidence matrix has the form of Tables $\ref{tab2}$ and~$\ref{tab3}$ for even $q\equiv-1\pmod3$ and also Table $\ref{tab1}$ holds.
\item[(ii)] Let $q=3$. The group $G_3\cong\mathbf{S}_4\mathbf{Z}_2^3$ contains $24$ subgroups isomorphic to $PGL(2,3)$ divided into four conjugacy classes. For one of these subgroups, the matrices corresponding to the projectivities of the subgroup assume the form described by \eqref{eq2_M}. For line and point orbits under this subgroup (and only under it) the point-line incidence matrix has the form of Tables $\ref{tab4}$ and~$\ref{tab5}$ for $q\equiv0\pmod3$ and also Table $\ref{tab1}$ holds.
\item[(iii)] Let $q=4$. The group $G_4\cong\mathbf{S}_5\cong P\mathrm{\Gamma} L(2,4)$ contains one subgroup isomorphic to $PGL(2,4)$. The matrices corresponding to the projectivities of this subgroup assume the form described by \eqref{eq2_M} and for line and point orbits under this subgroup the point-line incidence matrix has the form of Tables $\ref{tab2}$ and $\ref{tab3}$ for even $q\equiv1\pmod3$ and also Table $\ref{tab1}$ holds.
\item[(iv)] For line orbits under the subgroups of $G_q$ noted in the points (i)--(iii) of this theorem, Theorem $\ref{th2:MAGMA}$ holds also if $q=2,3,4$.
\end{description}
\end{theorem}
\section{Some useful relations}\label{sec:useful}
In this section, we omit the superscripts ``$(\xi)$'', ``od'', and ``ev'' as they are the same for all terms in a formula; in particular, we use $\Lk$ and $L_{\lambda\mathrm{\Sigma}}$ instead of $\Lk^{(\xi)}$ and $L_{\lambda\mathrm{\Sigma}}^{(\xi)\od}$, $L_{\lambda\mathrm{\Sigma}}^{(\xi)\ev}$. In the rest of the paper, when the relations of this section are applied, we add the superscripts whenever the context requires them.
\begin{lemma}\label{lemma4_line&point}
The following holds:
\begin{description}
\item[(i)]
The number $\Lb_{\lambda_j,\pk}$ of lines from an orbit $\OO_{\lambda_j}$ through a point of an orbit $\M_\pk$ is the same for all points of~$\M_\pk$.
\item[(ii)]
The total number $\Lb_{\lambda,\pk}$ of lines from an orbit union $\OO_\lambda$ through a point of an orbit $\M_\pk$ is the same for all points of~$\M_\pk$. We have
\begin{align}\label{eq4_class_Lambda}
& \Lb_{\lambda,\pk}=\sum_{j=1}^{L_{\lambda\mathrm{\Sigma}}} \Lb_{\lambda_j,\pk}.
\end{align}
\item[(iii)] The number $\Pb_{\pk,\lambda_j}$ of points from an orbit $\M_\pk$ on a line of an orbit $\OO_{\lambda_j}$ is the same for all lines of $\OO_{\lambda_j}$.
\item[(iv)] The average
number $\Pb_{\pk,\lambda}$ of points from an orbit $\M_\pk$ on a line of a union $\OO_\lambda$ over all lines of $\OO_\lambda$ satisfies the following relations:
\begin{align}\label{eq4:Pi_aver}
&\Lb_{\lambda,\pk}\cdot\# \M_\pk=\Pb_{\pk,\lambda}\cdot\#\OO_\lambda;\db\\
&\Pb_{\pk,\lambda}=\frac{1}{\#\OO_{\lambda}}\sum\limits_{j=1}^{L_{\lambda\mathrm{\Sigma}}}\left(\Pb_{\pk,\lambda_j}\cdot\#\OO_{\lambda_j}\right).\label{eq4:Pi_aver2}
\end{align}
\item[(v)]
If $L_{\lambda\mathrm{\Sigma}}=1$, then $\OO_\lambda$ is an orbit and the
number of points from $\M_\pk$ on a line of $\OO_\lambda$ is
the same for all the lines of $\OO_\lambda$.
In this case, $\Pb_{\pk,\lambda}$ is certainly an integer.
If $\Pb_{\pk,\lambda}$ is not an integer then the class $\OO_\lambda$ contains more than one orbit, i.e. $L_{\lambda\mathrm{\Sigma}}\ge2$.
\end{description}
\end{lemma}
\begin{proof}
\begin{description}
\item[(i)]
Consider points $\pk_1$ and $\pk_2$ of~$\M_\pk$. Denote by $\ell$ a line of $\OO_{\lambda_j}$. Let $S(\pk_1)$ and $S(\pk_2)$ be the subsets of $\OO_{\lambda_j}$ such that $S(\pk_1)=\{\ell\in\OO_{\lambda_j}|\pk_1\in\ell\}$, $S(\pk_2)=\{\ell\in\OO_{\lambda_j}|\pk_2\in\ell\}$. There exists $\varphi\in G_q$ such that $\pk_2=\pk_1\varphi$. Clearly, $\varphi$ embeds $S(\pk_1)$ in $S(\pk_2)$, i.e. $S(\pk_1)\varphi\subseteq S(\pk_2)$ and $\#S(\pk_1)\le\#S(\pk_2)$. In the same way, $\varphi^{-1}$ embeds $S(\pk_2)$ in $S(\pk_1)$, i.e. $\#S(\pk_2)\le\#S(\pk_1)$. Thus, $\#S(\pk_2)=\#S(\pk_1)$.
\item[(ii)] For a fixed $\lambda$, the orbits $\OO_{\lambda_j}$ are pairwise disjoint; hence the assertion and \eqref{eq4_class_Lambda} follow from the case (i) by summing over $j$.
\item[(iii)] The assertion can be proved similarly to the case (i).
\item[(iv)] The cardinality $C_1$ of the multiset consisting of the lines of $\OO_\lambda$ through all the points of $\M_\pk$ is equal to $\Lb_{\lambda,\pk}\cdot\# \M_\pk$. The cardinality $C_2$ of the multiset consisting of the points of $\M_\pk$ on all the lines of $\OO_\lambda$ is $\Pb_{\pk,\lambda}\cdot\#\OO_\lambda$. Every $C_i$ is the number of ones in the incidence submatrix $\I_{\pk,\lambda}^{\Pb\Lb}$ of $\I^{\Pb\Lb}$. Thus, $C_1=C_2$.
The assertion \eqref{eq4:Pi_aver2} holds as $\OO_\lambda$ is \emph{partitioned} into $L_{\lambda\mathrm{\Sigma}}$ orbits $\OO_{\lambda_j}$.
\item[(v)] The assertion follows from the case (iii). \qedhere
\end{description}
\end{proof}
\begin{corollary}\label{cor4_=0}
If $\Pb_{\pk,\lambda}=0$ then $\Lb_{\lambda,\pk}=0$ and vice versa.
\end{corollary}
\begin{proof}
The assertions follow from \eqref{eq4:Pi_aver}.
\end{proof}
\begin{theorem}\label{th4_linepoint}
Let the lines of $\PG(3,q)$ be partitioned under $G_q$ into $\#\Lk$ classes $\OO_\lambda$ where every class is a union of orbits of $\lambda$-lines, $\lambda\in\Lk$. Also, let $\PG(3,q)$ be partitioned under $G_q$ by $\#\Mk$ orbits $\M_\pk$ of $\pk$-points, $\pk\in\Mk$.
The following holds:
\begin{align}
&\sum_{\pk\in\Mk} \Pb_{\pk,\lambda}=q+1,~\lambda\T{ is fixed};\label{eq4_points_in_line_sum}\db\\
&\sum_{\lambda\in\Lk}\Lb_{\lambda,\pk}=\beta_{2,q}=q^2+q+1,~\pk\T{ is fixed}.\label{eq4_lines_through point_sum}
\end{align}
\end{theorem}
\begin{proof}
Relations \eqref{eq4_points_in_line_sum} and \eqref{eq4_lines_through point_sum} hold as the lines and the points of $\PG(3,q)$ are \emph{partitioned} under $G_q$ into the unions of line orbits and into the point orbits, respectively. In total, in $\PG(3,q)$, there are $q+1$ points on every line and $\beta_{2,q}$ lines through every point.
\end{proof}
\begin{corollary}\label{cor4_obtainPbLb}
The following holds:
\begin{align}\label{eq4_obtainPb}
&\Pb_{\pk,\lambda}=\frac{\Lb_{\lambda,\pk}\cdot\#\M_\pk}{\#\OO_\lambda},~\Pb_{\pk,\lambda_j}=\frac{\Lb_{\lambda_j,\pk}\cdot\#\M_\pk}{\#\OO_{\lambda_j}};\db\\
&\Lb_{\lambda,\pk}=\frac{\Pb_{\pk,\lambda}\cdot\#\OO_\lambda}{\#\M_\pk},~
\Lb_{\lambda_j,\pk}=\frac{\Pb_{\pk,\lambda_j}\cdot\#\OO_{\lambda_j}}{\#\M_\pk};\db\label{eq4_obtainLb}\\
&\Pb_{\pk^*,\lambda}= q+1-\sum_{\pk\in\Mk\setminus\{\pk^*\}} \Pb_{\pk,\lambda},~\lambda\T{ is fixed},~\pk^*\in\Mk;\label{eq4_obtainPb2}\db\\
&\Pb_{\pk^*,\lambda_j}= q+1-\sum_{\pk\in\Mk\setminus\{\pk^*\}} \Pb_{\pk,\lambda_j},~\lambda_j\T{ is fixed},~\pk^*\in\Mk;\label{eq4_obtainPb3}\db\\
&\Lb_{\lambda^*,\pk}= q^2+q+1-\sum_{\lambda\in\Lk\setminus\{\lambda^*\}}\Lb_{\lambda,\pk},~\pk\T{ is fixed},~\lambda^*\in\Lk.\label{eq4_obtainLb2}
\end{align}
\end{corollary}
\begin{proof}
The assertions directly follow from \eqref{eq4:Pi_aver}, \eqref{eq4_points_in_line_sum}, \eqref{eq4_lines_through point_sum}.
\end{proof}
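As an illustration (not needed for the proofs), the relations \eqref{eq4:Pi_aver} and \eqref{eq4_points_in_line_sum} can be checked mechanically against the tables. The following Python sketch, in which the orbit sizes and the entries are assumed transcriptions of the $\RC$-row of Table \ref{tab2} for $q\equiv1\pmod3$, verifies both relations symbolically in $q$.
\begin{verbatim}
# Sanity check of the double-counting relation Lb*#M = Pb*#O and of the
# row-sum relation sum_pk Pb_{pk,lambda} = q+1 for the RC-row above
# (the lists are assumed transcriptions of the table entries).
import sympy as sp

q = sp.symbols('q', positive=True)

M = [q + 1, q**2 + q, (q**3 - q)/6, (q**3 - q)/2, (q**3 - q)/3]  # #M_pk
O_RC = (q**2 + q)/2                                              # #O_RC
Pb = [2, 0, (q - 1)/3, 0, 2*(q - 1)/3]   # Pb_{pk,RC}, top entries
Lb = [q, 0, 1, 0, 1]                     # Lb_{RC,pk}, bottom entries

# Double counting: Lb_{RC,pk} * #M_pk = Pb_{pk,RC} * #O_RC for every pk.
assert all(sp.simplify(l*m - p*O_RC) == 0 for l, m, p in zip(Lb, M, Pb))
# The Pb-entries of a line row sum to q + 1.
assert sp.simplify(sum(Pb) - (q + 1)) == 0
print("RC-row: both counting relations hold identically in q.")
\end{verbatim}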
\begin{remark} \label{observation4:EA}
Let $q\equiv0\pmod3$. By Section \ref{sec_prelimin}, $\mathrm{\Gamma}$-planes form a pencil with the $\Ar$-line as the axis. Only lines lying in a
$\mathrm{\Gamma}$-plane can intersect the axis.
By definition, an $\EA$-line necessarily intersects the $\Ar$-line; therefore an $\EA$-line always lies in a
$\mathrm{\Gamma}$-plane and intersects all the other lines belonging to this plane, including the only tangent. Also, in \cite[Tables 1, 2, Theorem 3.3(iv), Corollary 7.2]{DMP_PlLineInc} it is proved that there is exactly one $\mathrm{\Gamma}$-plane through every $\EA$-line and that, in every $\mathrm{\Gamma}$-plane, there are $q^2-1$ $\EA$-lines, of which $q^2-q$ belong to the orbit $\OO_{\EA_1}$ while the remaining $q-1$ are equally divided between the orbits $\OO_{\EA_2}$, $\OO_{\EA_3}$.
In total, in every $\mathrm{\Gamma}$-plane, we have $q^2-1$ intersections of $\EA$-lines with the $\Ar$-line.
In addition, by definition, every $\mathrm{\Gamma}$-plane contains a tangent and $q$ $\UG$-lines intersecting the $\Ar$-line in distinct points. Thus, in every $\mathrm{\Gamma}$-plane, through a $(q+1)_\mathrm{\Gamma}$-point (i.e. a point of the $\Ar$-line) we have a unisecant, the $\Ar$-line, and $q-1$ $\EA$-lines.
\end{remark}
\begin{remark}\label{observation4:IC}
By \cite[Table 1, Theorem 3.3(vi)]{DMP_PlLineInc}, all $q+1$ planes through an imaginary chord are $\overline{1_\C}$-planes forming a pencil. The $\binom{q}{2}(q+1)$-orbit of all $\overline{1_\C}$-planes can be partitioned into $\binom{q}{2}$ pencils of planes having an imaginary chord as the axis. Only lines lying in a $\overline{1_\C}$-plane can intersect an $\IC$-line. If, on average, there are $\mathrm{\Pi}_{\overline{1_\C},\lambda}$ $\overline{1_\C}$-planes through a $\lambda$-line ($\lambda\ne\IC$) then every $\lambda$-line intersects, on average, $\mathrm{\Pi}_{\overline{1_\C},\lambda}$ $\IC$-lines and contains, on average, $\mathrm{\Pi}_{\overline{1_\C},\lambda}$ $\IC$-points. So,
\begin{align}\label{eq4:IC_P}
\Pb_{\IC,\lambda}=\mathrm{\Pi}_{\overline{1_\C},\lambda},~\lambda\in\Lk,
\end{align}
whence, by \eqref{eq4_obtainLb}, Theorem~\ref{th2_Hirs}(ii)(a)(c), and \cite[equation (4.9)]{DMP_PlLineInc}, we have
\begin{align}\label{eq4:IC_L}
\Lb_{\lambda,\IC}=\frac{\Pb_{\IC,\lambda}\cdot\#\OO_\lambda}{\#\M_\IC}=
\frac{\mathrm{\Pi}_{\overline{1_\C},\lambda}\cdot\#\OO_\lambda}{\#\N_{\overline{1_\C}}}=\mathrm{\Lambda}_{\lambda,\overline{1_\C}},~\lambda\in\Lk.
\end{align}
Similarly, for the $j$-th orbit $\OO_{\lambda_j}$ we have
\begin{align}\label{eq4:IC_lambdaj}
\Pb_{\IC,\lambda_j}=\mathrm{\Pi}_{\overline{1_\C},\lambda_j},~\Lb_{\lambda_j,\IC}=\mathrm{\Lambda}_{\lambda_j,\overline{1_\C}},
~j=1,\ldots,L_{\lambda\mathrm{\Sigma}},~\lambda\in\Lk.
\end{align}
The values of $\mathrm{\Pi}_{\overline{1_\C},\lambda}$, $\mathrm{\Pi}_{\overline{1_\C},\lambda_j}$, $\mathrm{\Lambda}_{\lambda,\overline{1_\C}}$, and $\mathrm{\Lambda}_{\lambda_j,\overline{1_\C}}$ can be taken from \cite[Tables~1,~2]{DMP_PlLineInc}.
\end{remark}
\begin{theorem}\label{th4:NiU=Mi}
Let $q\not\equiv0\pmod3$. Let $\pi\in\Pk$, $\pk\in\Mk^{(\ne0)}$, and $\{\lambda_a,\lambda_b\}\subset\Lk^{(\ne0)}$ be such that $\M_\pk^{(\ne0)}\A=\N_\pi$ and $\OO_{\lambda_a}=\OO_{\lambda_b}\A$. Then
\begin{align}\label{eq4::NiU=Mi}
\M_\pk^{(\ne0)}=\N_\pi\A,~~ \OO_{\lambda_a}\A=\OO_{\lambda_b}.
\end{align}
\end{theorem}
\begin{proof}
By definition, see \cite[Sections 2.1.5, 5.3]{Hirs_PGFF}, a polarity is involutory, i.e. $\A^2 =\mathfrak{J}$, where $\mathfrak{J}$ is the identity. Therefore,
$\A^{-1}=\A$, and applying $\A$ to both sides of the hypotheses $\M_\pk^{(\ne0)}\A=\N_\pi$ and $\OO_{\lambda_a}=\OO_{\lambda_b}\A$ gives \eqref{eq4::NiU=Mi}.
\end{proof}
\begin{corollary}\label{cor4:NiU=Mi_all}
Let $q\not\equiv0\pmod3$. The following holds:
\begin{align}\label{eq4:NiU=Mi}
& \M_j^{(\ne0)}=\N_j\A, ~\#\M_j^{(\ne0)}=\#\N_j,~j=1,\ldots,5;~\M_\C^{(\ne0)}=\N_\mathrm{\Gamma}\A, \db\\
&\M_\Tr^{(\ne0)}=\N_{2_\C}\A,~\M_{3_\mathrm{\Gamma}}^{(\ne0)}=\N_{3_\C}\A,\,\M_{1_\mathrm{\Gamma}}^{(\ne0)}=\N_{\overline{1_\C}}\A,~
\M_{0_\mathrm{\Gamma}}^{(\ne0)}=\N_{0_\C}\A;\dbn\\
&\OO'_i\A=\OO_i,~\#\OO'_i=\#\OO_i,~i=1,\ldots,6,~q\not\equiv0\pmod3;\db\label{eq4:O'iU=Oi}\\
&\OO_\RA\A=\OO_\RC,~\OO_\IA\A=\OO_\IC,~\OO_\EG\A=\OO_\UnG; ~\OO_\lambda\A=\OO_\lambda,~\lambda\in\{\Tr,\UG,\EnG\}.\nt
\end{align}
\end{corollary}
\begin{proof}
We use \eqref{eq2:MiU=Ni}, Table \ref{tab1}, and Theorems \ref{th2_Hirs}(ii)(iii), \ref{th2:orbLine}, \ref{th4:NiU=Mi}.
\end{proof}
\section{The numbers of $\lambda$-lines through $\pk$-points and of $\pk$-points on $\lambda$-lines, $q\not\equiv0\pmod3$}\label{sec:results_q_ne0}
Recall that we consider $q\ge5$; also, $q\equiv\xi\pmod3$.
\textbf{Notation 3}
~In addition to Notations 1 and 2 we denote the following:
\begin{align*}
&\pi(\pk)\in\Pk &&\T{the plane type such that }\M_\pk^{(\ne0)}\A=\N_{\pi(\pk)},~\pk\in\Mk^{(\ne0)},~\xi\ne0;\db\\
&\lambda(\widetilde{\lambda})\in\Lk^{(\ne0)} &&\T{the line type such that }\OO_{\lambda(\widetilde{\lambda})}=\OO_{\widetilde{\lambda}}\A,~\widetilde{\lambda}\in\Lk^{(\ne0)},~\xi\ne0;\db\\
&\lambda_j(\widetilde{\lambda}_j)&&\T{the line type of the $j$-th orbit of the class $\OO_{\lambda(\widetilde{\lambda})}$ correspon-}\db\\
&&&\T{ding to the $j$-th orbit of the class $\OO_{\widetilde{\lambda}}$ so that }\OO_{\lambda_j(\widetilde{\lambda}_j)}=\OO_{\widetilde{\lambda}_j}\A.\nt
\end{align*}
\begin{theorem}\label{th5:pi(pk)lambda(widehatlambda)}
Let $q\not\equiv0\pmod3$. The following holds:
\begin{align}\label{eq5_pi(pk)}
&\pi(\C)=\mathrm{\Gamma},~\pi(\Tr)=2_\C,~\pi(3_\mathrm{\Gamma})=3_\C,~\pi(1_\mathrm{\Gamma})=\overline{1_\C},~\pi(0_\mathrm{\Gamma})=0_\C;\db\\
&\lambda(\RC)=\RA,~\lambda(\RA )=\RC,~\lambda(\Tr)=\Tr,~\lambda(\IC)=\IA,~\lambda(\IA)=\IC,~\dbn\\
&\lambda(\UG)=\UG,~\lambda(\UnG)=\EG,~\lambda(\EG)=\UnG,~\lambda(\EnG)=\EnG;\dbn\\
&\lambda_j(\UG_j)=\UG_j,~\lambda_j(\UnG_j)=\EG_j,~\lambda_j(\EG_j)=\UnG_j,~ j=1,2. \nt
\end{align}
\end{theorem}
\begin{proof}
The assertions directly follow from \eqref{eq2:MiU=Ni}, \eqref{eq2:O'=OU}, \eqref{eq4::NiU=Mi}--\eqref{eq4:O'iU=Oi}, Theorems \ref{th2_Hirs}(iii), \ref{th2:orbLine}, \ref{th4:NiU=Mi}, and Corollary \ref{cor4:NiU=Mi_all}.
\end{proof}
\begin{theorem}\label{th5:Pi-->Pb}
Let $q\not\equiv0\pmod3$. Let $\pk\in\Mk^{(\ne0)}$, $\widetilde{\lambda}\in\Lk^{(\ne0)}$. Then
\begin{align*}
& \Pb_{\pk,\widetilde{\lambda}}^{(\xi)}=\mathrm{\Pi}_{\pi(\pk),\lambda(\widetilde{\lambda})}^{(\xi)},~\Lb_{\widetilde{\lambda},\pk}^{(\xi)}=
\mathrm{\Lambda}_{\lambda(\widetilde{\lambda}),\pi(\pk)}^{(\xi)};\,
\Pb_{\pk,\widetilde{\lambda}_j}^{(\xi)}=\mathrm{\Pi}_{\pi(\pk),\lambda_j(\widetilde{\lambda}_j)}^{(\xi)},\,\Lb_{\widetilde{\lambda}_j,\pk}^{(\xi)}=
\mathrm{\Lambda}_{\lambda_j(\widetilde{\lambda}_j),\pi(\pk)}^{(\xi)}.
\end{align*}
\end{theorem}
\begin{proof}
We have $\M_\pk^{(\ne0)}\A=\N_{\pi(\pk)}$,~$\OO_{\lambda(\widetilde{\lambda})}=\OO_{\widetilde{\lambda}}\A$. By Theorem \ref{th4:NiU=Mi}, $\M_\pk^{(\ne0)}=\N_{\pi(\pk)}\A$, $\OO_{\lambda(\widetilde{\lambda})}\A=\OO_{\widetilde{\lambda}}$. Since the correlation $\A$ maps incident pairs to incident pairs, the incidences between $\pi(\pk)$-planes and $\lambda(\widetilde{\lambda})$-lines correspond to incidences between $\pk$-points and $\widetilde{\lambda}$-lines. The same holds for the orbits $\OO_{\lambda_j}$.
\end{proof}
\begin{corollary}\label{cor5:for tables}
Let $q\not\equiv0\pmod3$. Let $\pk\in\Mk^{(\ne0)}$. Let $\pi(\pk)\in\Pk$ be as in \eqref{eq5_pi(pk)}. For $\xi=1,-1$, the following holds:
\begin{align*}
&\Pb_{\pk,\RC}^{(\xi)}=\mathrm{\Pi}_{\pi(\pk),\RA}^{(\xi)},~\Lb_{\RC,\pk}^{(\xi)}=\mathrm{\Lambda}_{\RA,\pi(\pk)}^{(\xi)},~
\Pb_{\pk,\RA}^{(\ne0)}=\mathrm{\Pi}_{\pi(\pk),\RC},~\Lb_{\RA,\pk}^{(\ne0)}=\mathrm{\Lambda}_{\RC,\pi(\pk)};\dbn\\
&\Pb_{\pk,\IC}^{(\xi)}=\mathrm{\Pi}_{\pi(\pk),\IA}^{(\xi)},~\Lb_{\IC,\pk}^{(\xi)}=\mathrm{\Lambda}_{\IA,\pi(\pk)}^{(\xi)},
~\Pb_{\pk,\IA}^{(\ne0)}=\mathrm{\Pi}_{\pi(\pk),\IC},~\Lb_{\IA,\pk}^{(\ne0)}=\mathrm{\Lambda}_{\IC,\pi(\pk)};\dbn\\
&\Pb_{\pk,\UnG}^{(\xi)}=\mathrm{\Pi}_{\pi(\pk),\EG}^{(\xi)},\,\Lb_{\UnG,\pk}^{(\xi)}=\mathrm{\Lambda}_{\EG,\pi(\pk)}^{(\xi)},
~\Pb_{\pk,\EG}^{(\ne0)}=\mathrm{\Pi}_{\pi(\pk),\UnG},\,\Lb_{\EG,\pk}^{(\ne0)}=\mathrm{\Lambda}_{\UnG,\pi(\pk)};\dbn\\
&\Pb_{\pk,\lambda}^{(\xi)}=\mathrm{\Pi}_{\pi(\pk),\lambda}^{(\xi)},~\Lb_{\lambda,\pk}^{(\xi)}=
\mathrm{\Lambda}_{\lambda,\pi(\pk)}^{(\xi)},~\lambda\in\{\Tr,\UG,\EnG\}.\dbn\\
&\Pb_{\pk,\UG_j}^{(\ne0)}=\mathrm{\Pi}_{\pi(\pk),\UG_j},~\Lb_{\UG_j,\pk}^{(\ne0)}=\mathrm{\Lambda}_{\UG_j,\pi(\pk)},~ \Pb_{\pk,\UnG_j}^{(\xi)}=\mathrm{\Pi}_{\pi(\pk),\EG_j}^{(\xi)},\db\\
&\Lb_{\UnG_j,\pk}^{(\xi)}=\mathrm{\Lambda}_{\EG_j,\pi(\pk)}^{(\xi)},~
\Pb_{\pk,\EG_j}^{(\ne0)}=\mathrm{\Pi}_{\pi(\pk),\UnG_j},~\Lb_{\EG_j,\pk}^{(\ne0)}=\mathrm{\Lambda}_{\UnG_j,\pi(\pk)},~ j=1,2.
\end{align*}
\end{corollary}
\begin{proof}
We use Theorems \ref{th5:pi(pk)lambda(widehatlambda)} and \ref{th5:Pi-->Pb}.
\end{proof}
Now we are able to form Tables \ref{tab2} and \ref{tab3} using Corollary \ref{cor5:for tables}, and the values of $\mathrm{\Pi}_{\pi,\lambda}^{(\xi)}$ and
$\mathrm{\Lambda}_{\lambda,\pi}^{(\xi)}$ from \cite[Tables 1, 2]{DMP_PlLineInc}.
\section{The numbers of $\lambda$-lines through $\pk$-points and of $\pk$-points on $\lambda$-lines, $q\equiv0\pmod3$}\label{sec:results_q=0}
In this section, we consider $q\equiv0\pmod3$.
The values of $\#\M_\pk$, $\#\OO_\lambda$, $\#\OO_{\lambda_j}$, needed for \eqref{eq4_obtainPb}, \eqref{eq4_obtainLb}, are taken from \eqref{eq2_point_orbits_gen}--\eqref{eq2_=0_orbit_point} and Table~\ref{tab1}. When we use \eqref{eq4_obtainPb2}--\eqref{eq4_obtainLb2}, the values $\Pb_{\pk,\lambda}$, $\Pb_{\pk,\lambda_j}$, and $\Lb_{\lambda,\pk}$, obtained above, are summed~up.
Note that if some of the relations \eqref{eq4_class_Lambda}--\eqref{eq4:IC_lambdaj} are not used directly in proofs then they can be used to check the results.
\begin{theorem}\label{th6:RC}
For $\RC$-lines the following holds:
\begin{align*}
&\Pb_{\C,\RC}^{(0)}=2,~\Pb_{\RC,\RC}^{(0)}=q-1,~\Lb_{\RC,\C}^{(0)}=q,~\Lb_{\RC,\RC}^{(0)}=1,~\db\\
& \Pb_{\pk,\RC}^{(0)}=\Lb_{\RC,\pk}^{(0)}=0,~
\pk\in\{(q+1)_\mathrm{\Gamma},\TO,\IC\}.
\end{align*}
\end{theorem}
\begin{proof}
By definition, a real chord contains exactly two $\C$-points; the other $q-1$ points of the chord are $\RC$-points. So, $\Pb_{\C,\RC}^{(0)}=2,~\Pb_{\RC,\RC}^{(0)}=q-1$. Also, by definition, there are $q$ $\RC$-lines through every $\C$-point, i.e. $\Lb_{\RC,\C}^{(0)}=q$.
By Theorem \ref{th2_Hirs}(iv), two chords do not intersect each other off $\C$; therefore, $\Lb_{\RC,\RC}^{(0)}=1$, $\Pb_{\pk,\RC}^{(0)}=\Lb_{\RC,\pk}^{(0)}=0,~
\pk\in\{\TO,\IC\}$.
Finally, by Remark \ref{observation4:EA}, only lines lying in a
$\mathrm{\Gamma}$-plane can intersect the $\Ar$-line. As an $\RC$-line contains two $\C$-points, it cannot lie in a $\mathrm{\Gamma}$-plane, whence $\Pb_{(q+1)_\mathrm{\Gamma},\RC}^{(0)}=\Lb_{\RC,(q+1)_\mathrm{\Gamma}}^{(0)}=0$.
\end{proof}
\begin{theorem}\label{th6:T}
For $\Tr$-lines the following holds:
\begin{align*}
&\Pb_{\C,\Tr}^{(0)}=\Pb_{(q+1)_\mathrm{\Gamma},\Tr}^{(0)}=\Lb_{\Tr,\C}^{(0)}=\Lb_{\Tr,(q+1)_\mathrm{\Gamma}}^{(0)}=\Lb_{\Tr,\TO}^{(0)}=1,~\Pb_{\TO,\Tr}^{(0)}=q-1,\db\\
&\Pb_{\pk,\Tr}^{(0)}= \Lb_{\Tr,\pk}^{(0)}=0,~\pk\in\{\RC,\IC\}.
\end{align*}
\end{theorem}
\begin{proof}
By definition, a tangent contains exactly one $\C$-point and there is one tangent through every $\C$-point. Thus, $\Pb_{\C,\Tr}^{(0)}=\Lb_{\Tr,\C}^{(0)}=1$. Also, a tangent lies in a $\mathrm{\Gamma}$-plane and, hence, intersects the $\Ar$-line. This implies
$\Pb_{(q+1)_\mathrm{\Gamma},\Tr}^{(0)}=1$ whence, by~\eqref{eq4_obtainLb}, $\Lb_{\Tr,(q+1)_\mathrm{\Gamma}}^{(0)}=1$. The remaining $q-1$ points of the tangent are $\TO$-points, i.e. $\Pb_{\TO,\Tr}^{(0)}=q-1$.
By Theorem \ref{th2_Hirs}(iv), two chords do not intersect each other off $\C$; therefore, $\Lb_{\Tr,\TO}^{(0)}=1$,
$\Pb_{\RC,\Tr}^{(0)}=\Pb_{\IC,\Tr}^{(0)}= \Lb_{\Tr,\RC}^{(0)}=\Lb_{\Tr,\IC}^{(0)}=0$.
\end{proof}
\begin{theorem}\label{th6:IC}
For $\IC$-lines the following holds:
\begin{align*}
&\Pb_{\IC,\IC}^{(0)}=q+1,~\Lb_{\IC,\IC}^{(0)}=1,~\Pb_{\pk,\IC}^{(0)}=\Lb_{\IC,\pk}^{(0)}=0,~\pk\in\{\C,(q+1)_\mathrm{\Gamma},\TO,\RC\}.
\end{align*}
\end{theorem}
\begin{proof}
By definition, all points of an $\IC$-line are $\IC$-points; this implies $\Pb_{\IC,\IC}^{(0)}=q+1$, $\Pb_{\pk,\IC}^{(0)}=\Lb_{\IC,\pk}^{(0)}=0,~\pk\in\{\C,(q+1)_\mathrm{\Gamma},\TO,\RC\}$.
By Theorem \ref{th2_Hirs}(iv), two $\IC$-lines do not intersect each other; therefore, $\Lb_{\IC,\IC}^{(0)}=1$.
\end{proof}
\begin{theorem}\label{th6:UG}
For $\UG$-lines the following holds:
\begin{align*}
&\Pb_{\C,\UG}^{(0)}=\Pb_{(q+1)_\mathrm{\Gamma},\UG}^{(0)}=1,~\Lb_{\UG,\C}^{(0)}=\Lb_{\UG,(q+1)_\mathrm{\Gamma}}^{(0)}=q,~\Pb_{\TO,\UG}^{(0)}=\Lb_{\UG,\TO}^{(0)}=0,\db\\
&\Lb_{\UG,\RC}^{(0)}=\Lb_{\UG,\IC}^{(0)}=1,~\Pb_{\RC,\UG}^{(0)}=\Pb_{\IC,\UG}^{(0)}=\frac{1}{2}(q-1).
\end{align*}
\end{theorem}
\begin{proof}
By definition, a $\UG$-line contains exactly one $\C$-point and there are $q$ $\UG$-lines thro\-ugh every $\C$-point, i.e. $\Pb_{\C,\UG}^{(0)}=1$, $\Lb_{\UG,\C}^{(0)}=q$. A $\UG$-line lies in a $\mathrm{\Gamma}$-plane and, hence, intersects the $\Ar$-line. This implies $\Pb_{(q+1)_\mathrm{\Gamma},\UG}^{(0)}=1$ whence, by \eqref{eq4_obtainLb}, $\Lb_{\UG,(q+1)_\mathrm{\Gamma}}^{(0)}=q$.
By definition, $\UG$-lines and $\Tr$-lines lie in $\mathrm{\Gamma}$-planes. If a $\UG$-line and a $\Tr$-line belong to the same $\mathrm{\Gamma}$-plane, then their common point is a $\C$-point. Otherwise they are skew. So, a $\UG$-line cannot intersect a $\Tr$-line off $\C$. As all $\TO$-points are off $\C$, we have $\Pb_{\TO,\UG}^{(0)}=\Lb_{\UG,\TO}^{(0)}=0$.
By \cite[Table 2, Theorem 5.13(ii)]{BDMP-TwCub}, there is exactly one $\mathrm{\Gamma}$-plane, say $\pi_P$, through an $\RC$-point $P$. Let $Q\in\pi_P$ be the contact point of $\C$ and $\pi_P$. By Theorem \ref{th2_Hirs}(iv), the line $\overline{PQ}$ cannot be either a real chord or a tangent, hence $\overline{PQ}$ is a $\UG$-line. Thus, $\Lb_{\UG,\RC}^{(0)}=1$. Similarly, it can be shown that $\Lb_{\UG,\IC}^{(0)}=1$. Now, by \eqref{eq4_obtainPb}, we obtain $\Pb_{\RC,\UG}^{(0)}=\Pb_{\IC,\UG}^{(0)}=(q-1)/2$.
\end{proof}
\begin{theorem}\label{th6:A}
For the $\Ar$-line the following holds:
\begin{align*}
&\Pb_{(q+1)_\mathrm{\Gamma},\Ar}^{(0)}=q+1,~\Lb_{\Ar,(q+1)_\mathrm{\Gamma}}^{(0)}=1,~\Pb_{\pk,\Ar}^{(0)}=\Lb_{\Ar,\pk}^{(0)}=0,~\pk\in\{\C,\TO,\RC,\IC\}.
\end{align*}
\end{theorem}
\begin{proof}
By definition, all points of the $\Ar$-line are $(q+1)_\mathrm{\Gamma}$-points and there is one $\Ar$-line through every $(q+1)_\mathrm{\Gamma}$-point; this implies $\Pb_{(q+1)_\mathrm{\Gamma},\Ar}^{(0)}=q+1$, $\Lb_{\Ar,(q+1)_\mathrm{\Gamma}}^{(0)}=1$, and $\Pb_{\pk,\Ar}^{(0)}=\Lb_{\Ar,\pk}^{(0)}=0$, $\pk\ne(q+1)_\mathrm{\Gamma}$.
\end{proof}
\begin{theorem}\label{th6:UnGa}
The following holds:
\begin{align*}
&\Pb_{\C,\UnG}^{(0)}=\Pb_{\C,\UnG_v}^{(0)}=1,~\Lb_{\UnG,\C}^{(0)}=q^2-q,~\Lb_{\UnG_v,\C}^{(0)}=\frac{1}{2}(q^2-q),~v=1,2;\db\\
&\Pb_{\C,\lambda}^{(0)}= \Lb_{\lambda,\C}^{(0)}=0,~\lambda\in\{\EnG,\EA,\EA_j\},~j=1,2,3.
\end{align*}
\end{theorem}
\begin{proof}
By definition, a $\UnG$-line contains exactly one $\C$-point, i.e. $\Pb_{\C,\UnG}^{(0)}=\Pb_{\C,\UnG_v}^{(0)}=1$, whence, by \eqref{eq4_obtainLb}, $\Lb_{\UnG,\C}^{(0)}=q^2-q$, $\Lb_{\UnG_v,\C}^{(0)}=(q^2-q)/2$. Also, by definition, $\Pb_{\C,\lambda}^{(0)}= 0$, $\lambda\in\{\EnG,\EA,\EA_j\}$,
whence, by Corollary \ref{cor4_=0}, $\Lb_{\lambda,\C}^{(0)}=0$.
\end{proof}
\begin{theorem}\label{th6:(q+1)_Gamma}
A $\UnG$- and an $\EnG$-line cannot intersect the $\Ar$-line, whereas an $\EA$-line necessarily intersects it. The following holds:
\begin{align*}
&\Pb_{(q+1)_\mathrm{\Gamma},\lambda}^{(0)}=\Lb_{\lambda,(q+1)_\mathrm{\Gamma}}^{(0)}=0,~\lambda\in\{\UnG,\UnG_v,\EnG\},~v=1,2;\db\\
& \Pb_{(q+1)_\mathrm{\Gamma},\lambda}^{(0)}=1,~\lambda\in\{\EA,\EA_j\},~j=1,2,3;\db\\
&\Lb_{\EA,(q+1)_\mathrm{\Gamma}}^{(0)}=q^2-1, ~\Lb_{\EA_1,(q+1)_\mathrm{\Gamma}}^{(0)}=q^2-q,~\Lb_{\EA_j,(q+1)_\mathrm{\Gamma}}^{(0)}=\frac{1}{2}(q-1),~j=2,3;
\end{align*}
where $\Pb_{(q+1)_\mathrm{\Gamma},\EA}^{(0)}=1$ is the \emph{exact} number of $(q+1)_\mathrm{\Gamma}$-points on an $\EA$-line.
\end{theorem}
\begin{proof}
By definition, $\UnG$- and $\EnG$-lines do not lie in any $\mathrm{\Gamma}$-plane; this implies, due to Remark \ref{observation4:EA}, $ \Pb_{(q+1)_\mathrm{\Gamma},\lambda}^{(0)}=\Lb_{\lambda,(q+1)_\mathrm{\Gamma}}^{(0)}=0,~\lambda\in\{\UnG,\UnG_v,\EnG\}$. Also, by definition, an $\EA$-line necessarily intersects the $\Ar$-line, which gives $\Pb_{(q+1)_\mathrm{\Gamma},\lambda}^{(0)}=1,~\lambda\in\{\EA,\EA_j\}$, as the exact value. Now, by \eqref{eq4_obtainLb}, we obtain $\Lb_{\EA,(q+1)_\mathrm{\Gamma}}^{(0)}$ and $\Lb_{\EA_j,(q+1)_\mathrm{\Gamma}}^{(0)}$.
\end{proof}
\begin{theorem}\label{th6:UnG}
For $\UnG$-lines the following holds:
\begin{align*}
&\Lb_{\UnG,\TO}^{(0)}=\Lb_{\UnG,\IC}^{(0)}=q,~\Pb_{\TO,\UnG}^{(0)}=1,~\Pb_{\IC,\UnG}^{(0)}=\frac{1}{2}q,\db\\
&\Lb_{\UnG,\RC}^{(0)}=q-2,\,\Pb_{\RC,\UnG}^{(0)}=
\frac{1}{2}(q-2).
\end{align*}
\end{theorem}
\begin{proof}
Let $\TT$ be a tangent to $\C$ at a point $P$. Let $B\in\TT$ be a $\TO$-point. Let $\ell$ be a line through $B$ and one of the $q$ points of $\C\setminus \{P\}$. By Theorem~\ref{th2_Hirs}(iv), $\ell$ can be neither a real chord nor a tangent, hence it is a non-tangent unisecant. By \cite[Table 2, Theorem 5.13(ii)]{BDMP-TwCub}, there is exactly one $\mathrm{\Gamma}$-plane, say $\pi_B$, through the $\TO$-point $B$. Obviously, $\TT\in\pi_B$ and $P$ is the contact point of $\C$ and $\pi_B$.
Thus, $\ell$ does not lie in a $\mathrm{\Gamma}$-plane, i.e. $\ell$ is a $\UnG$-line and we have $\#\C\setminus\{P\}$ $\UnG$-lines through every $\TO$-point. So, $\Lb_{\UnG,\TO}^{(0)}=q$, whence, by~\eqref{eq4_obtainPb}, $\Pb_{\TO,\UnG}^{(0)}=1$.
Let $\overline{PQ}$ be a real chord through $\C$-points $P$ and $Q$. Let $B\in\overline{PQ}$ be an $\RC$-point. By \cite[Table 2, Theorem 5.13(ii)]{BDMP-TwCub}, there is exactly one $\mathrm{\Gamma}$-plane, say $\pi_B$, through the $\RC$-point~$B$. Let $R\in\pi_B$ be the contact point of $\C$ and $\pi_B$. Let $\ell$ be a line through $B$ and one of the $q-2$ points of $\C\setminus\{P,Q,R\}$. By Theorem~\ref{th2_Hirs}(iv), $\ell$ can be neither a real chord nor a tangent; also, $\ell\notin\pi_B$. Thus, $\ell$ is a $\UnG$-line and we have $\#\C\setminus\{P,Q,R\}$ $\UnG$-lines through every $\RC$-point. So, $\Lb_{\UnG,\RC}^{(0)}=q-2$, whence, by~\eqref{eq4_obtainPb}, $\Pb_{\RC,\UnG}^{(0)}=(q-2)/2$.
Finally, let $\mathcal{IC}$ be an imaginary chord and $B\in\mathcal{IC}$ be an $\IC$-point. By \cite[Table~2]{BDMP-TwCub}, there is exactly one $\mathrm{\Gamma}$-plane, say $\pi_B$, through $B$. Let $R\in\pi_B$ be the contact point of $\C$ and $\pi_B$. Similarly to above, all the $q$ lines through $B$ and a point of $\C\setminus\{R\}$ are $\UnG$-lines; this gives $\Lb_{\UnG,\IC}^{(0)}=q$, whence, by~\eqref{eq4_obtainPb}, $\Pb_{\IC,\UnG}^{(0)}=q/2$.
\end{proof}
\begin{theorem}\label{th6:TO}
For $\TO$-points the following holds:
\begin{align*}
&\Pb_{\TO,\EA}^{(0)}=\frac{q}{q+1},~\Lb_{\EA,\TO}^{(0)}=\Lb_{\EA_1,\TO}^{(0)}=q,~\Lb_{\EnG,\TO}^{(0)}=q^2-q,\\
&\Pb_{\TO,\EA_1}^{(0)}= \Pb_{\TO,\EnG}^{(0)}=1,~\Pb_{\TO,\EA_j}^{(0)}= \Lb_{\EA_j,\TO}^{(0)}=0,~j=2,3.
\end{align*}
\end{theorem}
\begin{proof}
By Remark \ref{observation4:EA}, in every $\mathrm{\Gamma}$-plane, $q-1$ $\EA$-lines intersect the only tangent at its common point with the $\Ar$-line while the remaining $q^2-q$ ones intersect the tangent in $\TO$-points. Thus, in total, there are $(q^2-q)\cdot\#\N_\mathrm{\Gamma}=(q^2-q)(q+1)$ $\TO$-points on all $(q+1)(q^2-1)$ $\EA$-lines. The average number is $\Pb_{\TO,\EA}^{(0)}=q/(q+1)$, whence, by \eqref{eq4_obtainLb}, $\Lb_{\EA,\TO}^{(0)}=q$.
An $\EA$-line intersects exactly one tangent either in its common point with the $\Ar$-line or in a $\TO$-point. Therefore, $\Pb_{\TO,\EA_j}^{(0)}\in\{0,1\}$. If, for $j=2,3$, we put $\Pb_{\TO,\EA_j}^{(0)}=1$ then, by \eqref{eq4_obtainLb}, we obtain $\Lb_{\EA_j,\TO}^{(0)}=1/2$, which is not an integer, a contradiction.
So, $\Pb_{\TO,\EA_j}^{(0)}= \Lb_{\EA_j,\TO}^{(0)}=0$, $j=2,3$, whence, by \eqref{eq4:Pi_aver2}, $\Pb_{\TO,\EA_1}^{(0)}= 1$ and, by \eqref{eq4_obtainLb}, $\Lb_{\EA_1,\TO}^{(0)}=q$.
Finally, by \eqref{eq4_obtainLb2}, $\Lb_{\EnG,\TO}^{(0)}=q^2-q$, whence, by \eqref{eq4_obtainPb}, $\Pb_{\TO,\EnG}^{(0)}=1$.
\end{proof}
\begin{remark}\label{observation6}
By Remark \ref{observation4:EA} and Theorem \ref{th6:TO}, it can be seen that in every $\mathrm{\Gamma}$-plane, the $q-1$ $\EA$-lines from the orbits $\OO_{\EA_2}$ and $\OO_{\EA_3}$
intersect the only tangent of this plane at its common point with the $\Ar$-line (which is not a $\TO$-point). At the same time, the $q^2-q$ $\EA$-lines from the orbit $\OO_{\EA_1}$ intersect the tangent in $\TO$-points.
\end{remark}
\begin{theorem}\label{th6:IC}
For $\IC$-points the following holds:
\begin{align*}
& \Pb_{\IC,\EnG}^{(0)}=\frac{q^2-q-1}{2(q-1)},~\Lb_{\EnG,\IC}^{(0)}=q^2-q-1;~
\Pb_{\IC,\EA}^{(0)}=\frac{q^2}{2(q+1)},~\Lb_{\EA,\IC}^{(0)}=q;\db\\
&\Pb_{\IC,\EA_1}^{(0)}=\frac{1}{2}(q-1),~\Lb_{\EA_1,\IC}^{(0)}=q-1,~
\Pb_{\IC,\EA_2}^{(0)}=\Lb_{\EA_2,\IC}^{(0)}=0,\db\\
&\Pb_{\IC,\EA_3}^{(0)}=q,~\Lb_{\EA_3,\IC}^{(0)}=1.
\end{align*}
\end{theorem}
\begin{proof}
The assertions follow from Remark \ref{observation4:IC} with \eqref{eq4:IC_P}--\eqref{eq4:IC_lambdaj} and \cite[Tables~1,~2]{DMP_PlLineInc}.
\end{proof}
\begin{theorem}\label{th6:RC_RC}
For $\RC$-points the following holds:
\begin{align*}
& \Pb_{\RC,\EnG}^{(0)}=\frac{q^2-q+1}{2(q-1)},~\Lb_{\EnG,\RC}^{(0)}=q^2-q+1;\db\\
&\Pb_{\RC,\EA}^{(0)}=\frac{q^2}{2(q+1)},~\Lb_{\EA,\RC}^{(0)}=q ;~\Pb_{\RC,\EA_1}^{(0)}=\frac{1}{2}(q-1),~\Lb_{\EA_1,\RC}^{(0)}=q-1,\db\\ &\Pb_{\RC,\EA_2}^{(0)}=q,~\Lb_{\EA_2,\RC}^{(0)}=1,~\Pb_{\RC,\EA_3}^{(0)}=\Lb_{\EA_3,\IC}^{(0)}=0.
\end{align*}
\end{theorem}
\begin{proof}
The values of $\Pb_{\RC,\lambda}^{(0)}$, $\Pb_{\RC,\lambda_j}^{(0)}$ are obtained by \eqref{eq4_obtainPb2}, \eqref{eq4_obtainPb3}. Then we obtain
$\Lb_{\lambda,\RC}^{(0)}$, $\Lb_{\lambda_j,\RC}^{(0)}$ by \eqref{eq4_obtainLb}.
\end{proof}
\begin{theorem}
For $\UnG_v$-lines, $v=1,2$, the following holds:
\begin{align*}
&\Pb_{\TO,\UnG_1}^{(0)}=\Lb_{\UnG_1,\TO}^{(0)}=0,~\Pb_{\TO,\UnG_2}^{(0)}=2,~\Lb_{\UnG_2,\TO}^{(0)}=q,\db\\
&\Pb_{\IC,\UnG_1}^{(0)}=\Lb_{\UnG_1,\IC}^{(0)}=\frac{1}{2}(q+1),~\Pb_{\IC,\UnG_2}^{(0)}=\Lb_{\UnG_2,\IC}^{(0)}=\frac{1}{2}(q-1),\db\\
&\Pb_{\RC,\UnG_1}^{(0)}=\Lb_{\UnG_1,\RC}^{(0)}=\frac{1}{2}(q-1),~
\Pb_{\RC,\UnG_2}^{(0)}=\Lb_{\UnG_2,\RC}^{(0)}=\frac{1}{2}(q-3).
\end{align*}
\end{theorem}
\begin{proof}
By Theorem \ref{th6:UnG}, $\Pb_{\TO,\UnG}^{(0)}=1$, whence $\Pb_{\TO,\UnG_1}^{(0)}+\Pb_{\TO,\UnG_2}^{(0)}=2$, by \eqref{eq4:Pi_aver2}. If, for $v=1,2$, we put $\Pb_{\TO,\UnG_v}^{(0)}=1$, then, by \eqref{eq4_obtainLb}, we obtain $\Lb_{\UnG_v,\TO}^{(0)}=q/2$, which is not an integer, a contradiction. So, $\Pb_{\TO,\UnG_v}^{(0)}\in\{0,2\}$.
By \cite[Table 1]{DMP_PlLineInc}, through a $\Tr$-line we have one $\mathrm{\Gamma}$-plane and $q$ $2_\C$-planes; also, every $2_\C$-plane contains one $\Tr$-line. Therefore, through a $\Tr$-line and a $\UnG$-line meeting the $\Tr$-line in a $\TO$-point we have a $2_\C$-plane. By \cite[Table~2]{DMP_PlLineInc}, there are one and three $2_\C$-planes through a $\UnG_1$- and a $\UnG_2$-line, respectively. Therefore, a $\UnG_1$- and a $\UnG_2$-line intersect one and three $\Tr$-lines, respectively. This means that $\Pb_{\TO,\UnG_1}^{(0)}=0$, $\Pb_{\TO,\UnG_2}^{(0)}=2$, and also every $\UnG_v$-line intersects one $\Tr$-line at a $\C$-point. Now, by~\eqref{eq4_obtainLb}, we obtain $\Lb_{\UnG_1,\TO}^{(0)}=0,~\Lb_{\UnG_2,\TO}^{(0)}=q$.
By \cite[Table 2]{DMP_PlLineInc}, $\mathrm{\Pi}_{\overline{1_\C},\UnG_1}=\mathrm{\Lambda}_{\UnG_1,\overline{1_\C}}=(q+1)/2, \mathrm{\Pi}_{\overline{1_\C},\UnG_2}=\mathrm{\Lambda}_{\UnG_2,\overline{1_\C}}=(q-1)/2$ whence, by \eqref{eq4:IC_lambdaj}, $\Pb_{\IC,\UnG_1}^{(0)}=\Lb_{\UnG_1,\IC}^{(0)}=(q+1)/2$, $\Pb_{\IC,\UnG_2}^{(0)}=\Lb_{\UnG_2,\IC}^{(0)}=(q-1)/2$.
Finally, every $\RC$-point lies on exactly one $\RC$-line, see Theorem \ref{th6:RC}. By \cite[Table 1]{DMP_PlLineInc}, through an $\RC$-line there are two $2_\C$-planes and $q-1$ $3_\C$-planes.
A $\UnG$-line lying in a $2_\C$-plane intersects the $\RC$-line of this plane at a $\C$-point. A $\UnG$-line lying in a $3_\C$-plane intersects two $\RC$-lines of this plane at a $\C$-point and one $\RC$-line at an $\RC$-point. Therefore, $\Pb_{\RC,\UnG_v}^{(0)}=\mathrm{\Pi}_{3_\C,\UnG_v}^{(0)}$. By \cite[Table~2]{DMP_PlLineInc}, $\mathrm{\Pi}_{3_\C,\UnG_1}^{(0)}=(q-1)/2,~\mathrm{\Pi}_{3_\C,\UnG_2}^{(0)}=(q-3)/2$, whence, together with \eqref{eq4_obtainLb}, we obtain the remaining assertions.
\end{proof}
Now we form Tables \ref{tab4} and \ref{tab5} using the results of this section.
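As a further illustrative cross-check, the completed tables can be tested against Theorem \ref{th4_linepoint}. The following Python sketch, whose entries are assumed transcriptions of the $\TO$-column and of the $\UnG$-row of Table \ref{tab4}, verifies the sum relations \eqref{eq4_lines_through point_sum} and \eqref{eq4_points_in_line_sum} symbolically.
\begin{verbatim}
# Check that the Lb-entries of the TO-column sum to q^2+q+1 and that the
# Pb-entries of the UnG-row sum to q+1 (assumed transcription of the table).
import sympy as sp

q = sp.symbols('q', positive=True)

# Lb_{lambda,TO} for lambda = RC, T, IC, UG, UnG, EnG, A, EA.
Lb_TO = [0, 1, 0, 0, q, q**2 - q, 0, q]
assert sp.simplify(sum(Lb_TO) - (q**2 + q + 1)) == 0

# Pb_{pk,UnG} for pk = C, (q+1)_Gamma, TO, RC, IC.
Pb_UnG = [1, 0, 1, (q - 2)/2, q/2]
assert sp.simplify(sum(Pb_UnG) - (q + 1)) == 0

print("TO-column and UnG-row pass the sum checks.")
\end{verbatim}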
\section{Some general results }\label{sec:gen res}
\begin{theorem}
Let $\pk\in\Mk^{(\xi)}$. Let a class $\OO_\lambda$ consist of a single orbit.
Then the submatrix $\I^{\Pb\Lb}_{\pk,\lambda}$ of $\I^{\Pb\Lb}$ is
a $(v_r,b_k)$ configuration of Definition \emph{\ref{def2_config}} with $v=\#\M_\pk$, $b=\#\OO_\lambda$, $r=\Lb_{\lambda,\pk}^{(\xi)}$, $k=\Pb_{\pk,\lambda}^{(\xi)}$. Also, up to rearrangement of rows and columns, the submatrices $\I^{\Pb\Lb}_{\pk,\lambda}$ with $\Lb_{\lambda,\pk}^{(\xi)}=1$ can be viewed as a concatenation of $\Pb_{\pk,\lambda}^{(\xi)}$ identity matrices of order $\#\OO_\lambda$. The same holds for the submatrices $\I^{\Pb\Lb}_{\pk,\lambda_j}$.
\end{theorem}
\begin{proof}
As the class $\OO_\lambda$ is an orbit, $\I^{\Pb\Lb}_{\pk,\lambda}$ contains $\Pb_{\pk,\lambda}$ (resp. $\Lb_{\lambda,\pk}$) ones in every row (resp. column), see Lemma~\ref{lemma4_line&point}. In $\PG(3,q)$, two lines are either skew or intersect at a point. Therefore, any two points of $\M_\pk$ are connected by at most one line of $\OO_\lambda$. If $\Lb_{\lambda,\pk}^{(\xi)}=1$, then $\I^{\Pb\Lb}_{\pk,\lambda}$ contains $\Pb_{\pk,\lambda}^{(\xi)}$ (resp.\ 1) ones in every row (resp. column).
\end{proof}
\begin{theorem}
Let $(\lambda,\pk)\in\{(\UG,\C), (\UnG,\C)\}$ if $q\not\equiv0\pmod3$, and
$(\lambda,\pk)\in\{(\UnG,\C), (\EA,(q+1)_\mathrm{\Gamma})\}$ if $q\equiv0\pmod3$. Then, independently of the number of orbits in the class $\OO_\lambda$, we have exactly one $\pk$-point on every $\lambda$-line. Up to rearrangement of rows and columns, the submatrices $\I^{\Pb\Lb}_{\pk,\lambda}$ can be viewed as a vertical concatenation of\/ $\Lb_{\lambda,\pk}^{(\xi)}$ identity matrices of order $\#\M_\pk$.
\end{theorem}
\begin{proof}
By Tables \ref{tab2}--\ref{tab5}, in the considered submatrices $\I^{\Pb\Lb}_{\pk,\lambda}$ there is exactly one unit in every row.
\end{proof}
\begin{theorem}\label{th7:one line}
Let $q\equiv1\pmod3$. Let $V^{(1)}=\{\OO_1=\OO_\RC,\OO_2=\OO_\Tr,\OO'_3=\OO_\IA\}$. Then, cf. Theorem $\ref{th2_Hirs}$(iv), no two lines of $V^{(1)}$ meet off $\C$. Every point off $\C$ lies on exactly one line of~$V^{(1)}$.
\end{theorem}
\begin{proof}
The assertions follow from Table \ref{tab2}. Also, they can be proved using Theorem~\ref{th2_Hirs}(ii)(iv) with \eqref{eq2_=1_orbit_point} and \cite[Theorem 1]{LunarPolv}.
\end{proof}
\begin{theorem}
Let $q\equiv0\pmod3$. Let $\W^{(0)}=\{\OO_2=\OO_\Tr,\OO_4=\OO_\UG\}$. Let $\mathbb{M}=\C\cup\Ar$-line be the union of the twisted cubic and the $\Ar$-line. Then
no two lines of $\W^{(0)}$ meet off $\mathbb{M}$.
Every point off $\mathbb{M}$ lies on exactly one line of~$\W^{(0)}$, cf. Theorems $\ref{th2_Hirs}$(iv) and \emph{\ref{th7:one line}}.
\end{theorem}
\begin{proof}
The assertions follow from Table \ref{tab4}. Note also that for $q\equiv0\pmod3$, the $\mathrm{\Gamma}$-planes form a pencil with the $\Ar$-line as axis and, in turn, in each plane $\pi_\T{osc}(t)$, the $q$ $\UG$-lines and the tangent form the pencil of lines through the point $P(t)$.
\end{proof}
\section*{Acknowledgments}
The research of S. Marcugini and F. Pambianco was supported in part by the Italian
National Group for Algebraic and Geometric Structures and their Applications (GNSAGA -
INDAM) (Contract No. U-UFMBAZ-2019-000160, 11.02.2019) and by the University of Perugia
(Project No. 98751: Strutture Geometriche, Combinatoria e loro Applicazioni, Base Research
Fund 2017-2019).
\section{Introduction}
\label{sec:introduction}
Geometrical inequalities play an important role in General Relativity, in
particular for vacuum black holes, where the geometrical aspects of the theory
appear in their pure form. A geometrical inequality in General Relativity
relates quantities that have both a physical interpretation and a geometrical
definition. The most relevant example is the positive mass theorem. The mass
of the spacetime measures the total amount of energy and hence it should be
positive from the physical point of view. Also, the mass $m$ in General
Relativity is represented by a pure geometrical quantity on a Riemannian
manifold \cite{Arnowitt62} \cite{Bartnik86}. From the geometrical mass
definition, without the physical picture, it would be very hard to conjecture
that this quantity should be positive. On the other hand, the highly non
trivial proof of this inequality \cite{Schoen79b}\cite{Schoen81}\cite{Witten81}
reveals the subtle way in which Einstein equations describe the gravitational
field.
For black holes, the first example of geometrical inequality is the Penrose
inequality (see the recent review article \cite{Mars:2009cj} and references
therein) which relates the
area of the horizon $A$ (the `size' of the black holes) with the total mass of
the spacetime
\begin{equation}
\label{eq:65}
\sqrt{\frac{A}{16\pi}}\leq m.
\end{equation}
Another example is the inequality between mass
and the angular momentum $J$ for axially symmetric black holes
\begin{equation}
\label{eq:4}
\sqrt{|J|}\leq m.
\end{equation}
See \cite{Dain06c}\cite{Dain05e} \cite{Chrusciel:2007ak}, and also the
generalization which includes charge presented in \cite{Chrusciel:2009ki}
\cite{Costa:2009hn}. Inequalities \eqref{eq:65} and \eqref{eq:4} are closely
related to the weak cosmic censorship conjecture. They can be interpreted as
indirect but relevant indications of the validity of this conjecture. As in
the case of the positivity of the mass, these inequalities have been discovered
first by physical arguments, and then proved afterwards (under appropriate and
restricted assumptions, see the references mentioned above) as a rigorous
consequence of Einstein equations. The proofs also provide new insight into
the mechanisms of Einstein equations. As an example (which is connected with
the subject of this article), we mention that the proof of \eqref{eq:4}
involves a variational characterization of the extreme Kerr black hole as an
absolute minimum of the mass.
The total mass is a global quantity. On the other hand the area $A$ and the
angular momentum in axial symmetry $J$, involved in inequalities \eqref{eq:65}
and \eqref{eq:4} respectively, are quasi-local quantities. Namely, they carry
information on a bounded region of the spacetime, in contrast with a local
quantity, like a tensor field, which depends on a single point of the spacetime.
Inequalities \eqref{eq:65} and \eqref{eq:4} relate global quantities with
quasi-local ones. It is well known that the energy of the gravitational field
cannot be represented by a local quantity. The best one can hope for is to obtain
an expression for the total energy of a bounded region of the spacetime. These
are the so-called quasi-local mass definitions (see the review article
\cite{Szabados04} and references therein). For some of the quasi-local mass
proposals there also exist positivity proofs and hence we obtain a
quasi-local geometrical inequality. One would expect that for a black hole
pure quasi-local inequalities are also valid. The relevance of this kind of
inequalities is that they provide a much finer control on the dynamics of a black
hole than the global versions. The main purpose of this article is to study
one example of such quasi-local inequality for vacuum black holes and to give
non trivial evidence of its validity.
The area of the horizon is a well defined quasi-local quantity for a generic
black hole. In general, the quasi-local angular momentum is difficult to define
(see \cite{Szabados04}), but in the case of axial symmetry there exists a well
defined notion, namely the Komar angular momentum. This is the angular
momentum used in inequality \eqref{eq:4}. That is, for generic (i.e. not necessarily
stationary) axially symmetric black holes we have two well defined quasi-local
physical quantities, the horizon area $A$ and the angular momentum $J$. In
terms of $A$ and $J$, the Christodoulou \cite{Christodoulou70} mass of the
black hole is defined as follows
\begin{equation}
\label{eq:32}
m_{bh}=\sqrt{\frac{A}{16\pi}+ \frac{4\pi J^2}{A}}.
\end{equation}
For the Kerr black hole this formula gives precisely the total mass of the
black hole which is equal to the total mass of the spacetime. In general, for
a horizon with only one connected component one would expect that $m_{bh}$ is
less than the total mass of the spacetime with equality only for the Kerr black
hole. This would give a generalization of the Penrose inequality including angular
momentum (in fact, a strong generalization) in the spirit of \cite{Jang79}
(see the discussion in \cite{Mars:2009cj}). For the case of many black holes
the negative interaction energy between the black holes should also be
considered and hence a simple inequality with respect to the total mass is not
expected (see \cite{Weinstein05} for a discussion of the analog of this
inequality with charges). We will not discuss this point further, since we
are interested in this article only in quasi-local inequalities. We mention it
because it plays a role in the physical interpretation of the quasi-local mass
$m_{bh}$.
The formula \eqref{eq:32} trivially satisfies the inequality
\begin{equation}
\label{eq:33}
\sqrt{|J|}\leq m_{bh}.
\end{equation}
This is, of course, just because the Kerr black hole satisfies this bound.
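Equivalently, at the algebraic level, inequality \eqref{eq:33} follows from \eqref{eq:32} by the arithmetic--geometric mean inequality,
\begin{equation*}
m_{bh}^2=\frac{A}{16\pi}+ \frac{4\pi J^2}{A}\geq
2\sqrt{\frac{A}{16\pi}\cdot\frac{4\pi J^2}{A}}=|J|,
\end{equation*}
with equality precisely when $A=8\pi |J|$.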
Hence, if we accept \eqref{eq:32} as the correct formula for the quasi-local
mass of an axially symmetric black hole, then \eqref{eq:33} provides the,
rather trivial, quasi-local version of \eqref{eq:4}. The real question is, does
the formula \eqref{eq:32} represent the quasi-local mass of a non-stationary
black hole? Let us analyze the behavior of the mass \eqref{eq:32} from a physical point
of view.
Let us assume that for a generic axially symmetric black hole the quantity
$m_{bh}$ gives a measure of the quasi-local mass of the black hole. Consider the
evolution of $m_{bh}$. By the area theorem, we know that the horizon area will
increase. If we assume axial symmetry, then the angular momentum will be
conserved at the quasi-local level (we are assuming pure vacuum). On physical
grounds, one would expect that in this situation the quasi-local mass of the
black hole should increase with the area, since there is no mechanism at the
classical level to extract mass from the black hole. Indeed, the only way
to extract mass from a black hole is by extracting angular momentum through a
Penrose process. But angular momentum transfer is forbidden in pure vacuum
axial symmetry. Then, one would expect that both the area $A$ and the
quasi-local mass $m_{bh}$ should monotonically increase with time.
Let us take a time derivative of $m_{bh}$ (denoted by a dot). From the formula
\eqref{eq:32} we obtain
\begin{equation}
\label{eq:34}
\dot m_{bh} = \frac{\dot A}{32\pi m_{bh}} \left(1-\left(\frac{8\pi
J}{A}\right)^2 \right),
\end{equation}
where we have used that the angular momentum $J$ is conserved. Since, by the
area theorem, we have
\begin{equation}
\label{eq:9}
\dot A \geq 0,
\end{equation}
the time derivative of $m_{bh}$ will be positive (and hence the mass $m_{bh}$ will
increase with the area) if and only if the following inequality is satisfied
\begin{equation}
\label{eq:5}
8\pi |J|\leq A.
\end{equation}
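For completeness, the elementary computation behind \eqref{eq:34} is the following: squaring \eqref{eq:32} and taking a time derivative with $\dot J=0$ gives
\begin{equation*}
2 m_{bh}\dot m_{bh}=\frac{\dot A}{16\pi}-\frac{4\pi J^2\dot A}{A^2}
=\frac{\dot A}{16\pi}\left(1-\left(\frac{8\pi J}{A}\right)^2 \right),
\end{equation*}
which is equivalent to \eqref{eq:34}.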
Then, it is natural to conjecture that \eqref{eq:5} should be satisfied for any
horizon in an axially symmetric, asymptotically flat initial data set. If there are
initial data that violate \eqref{eq:5} then in the evolution the area will
increase but the mass $m_{bh}$ will decrease. This will indicate that the quantity
$m_{bh}$ does not have the desired physical meaning. Also, a rigidity statement is
expected. Namely, the equality in \eqref{eq:5} is reached only by the extreme
Kerr black hole.
If inequality \eqref{eq:5} is true, then we have a non trivial monotonic
quantity $m_{bh}$ (in addition to the black hole area)
\begin{equation}
\label{eq:10}
\dot m_{bh} \geq 0.
\end{equation}
It is important to emphasize that the physical arguments presented above in
support of \eqref{eq:5} are certainly weaker in comparison with the ones behind
the inequalities \eqref{eq:65} and \eqref{eq:4}. A counter example to any of
these inequalities will prove that the standard picture of the gravitational
collapse is wrong. On the other hand, a counter example to \eqref{eq:5} will
just prove that the quasi-local mass \eqref{eq:32} is not appropriate to
describe the evolution of a non-stationary black hole. One can
imagine other expressions for quasi-local mass, maybe more involved, in axial
symmetry. Reversing the argument, a proof of \eqref{eq:5}
will certainly prove that the mass \eqref{eq:32} has physical meaning for
non-stationary black holes as a natural quasi-local mass. Also, the inequality
\eqref{eq:5} provides a non trivial control of the size of a black hole valid at
any time.
If the rigidity statement also holds, this inequality will provide a remarkable
quasi-local measure of how far the data are from the extreme black hole data.
This provides an `extremality criterion' in the spirit of \cite{Booth:2007wu},
although restricted only to axial symmetry. In the article \cite{Dain:2007pk}
it has been conjectured that, within axial symmetry, proving the stability
of a nearly extreme black hole is perhaps simpler than for a Schwarzschild black
hole. It is possible that this quasi-local extremality criterion will have relevant
applications in this context.
Let us also point out that the inequality \eqref{eq:5} is related to the
surface gravity density (or temperature) of a black hole. The surface gravity
density $\kappa$ of the Kerr black hole can be written in terms of the
quasi-local quantities $A$ and $J$ as follows
\begin{equation}
\label{eq:14}
\kappa=\frac{1}{4\sqrt{\frac{A}{16\pi}+\frac{4\pi J^2}{A}}}
\left(1-\left(\frac{8\pi J}{A}\right)^2 \right).
\end{equation}
From this equation, we see that for the Kerr black hole $\kappa$ is positive
because inequality (\ref{eq:5}) holds. Hence, if inequality (\ref{eq:5}) holds
for generic, non-stationary axially symmetric black holes we can define the
same expression for $\kappa$ for this class of black holes.
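As a simple consistency check of \eqref{eq:14}: setting $J=0$ and using the
Schwarzschild value $A=16\pi m^2$ we obtain
\begin{equation*}
\kappa=\frac{1}{4\sqrt{A/16\pi}}=\frac{1}{4m},
\end{equation*}
the standard Schwarzschild surface gravity, while in the extreme case $8\pi|J|=A$ the
last factor in \eqref{eq:14} vanishes and hence $\kappa=0$, as expected for a zero
temperature black hole.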
All the previous arguments lead to the following conjecture
\begin{conjecture}
\label{c:1}
Consider an asymptotically flat, vacuum, complete axially symmetric initial
data set for the Einstein equations. Then the following inequality holds
\begin{equation}
\label{eq:16}
8\pi |J|\leq A,
\end{equation}
where $A$ and $J$ are the area and angular momentum of a connected component of
the apparent horizon.
\end{conjecture}
Note that in the previous discussion we have considered the area $A$ of the
event horizon (since we have used the area theorem). As is usual for geometrical
inequalities, in order to make a useful statement we need to replace the event
horizon with a quasi-local quantity. In our case the most appropriate quantity appears
to be the apparent horizon on the initial data. A generalization of the area
theorem also holds (under appropriate assumptions) for apparent horizons (see
the review article \cite{lrr-2004-10} and references therein).
Let us mention some support for this inequality. This inequality has been
proved for stationary black holes surrounded by matter in \cite{hennig08}
\cite{Hennig:2008zy} (also with charge). Although this case is only loosely
related to the conjecture (since the conjecture applies to non-stationary
spacetimes in vacuum), it is highly non-trivial and it certainly suggests the
validity of the conjecture. It is also important to note that there is a
counterexample to this inequality in the non-asymptotically flat case
\cite{Booth:2007wu}. That is, the assumption that the horizon belongs to
asymptotically flat data is essential. This counterexample points out that
although the inequality involves only quasi-local quantities, it is not purely
quasi-local, in the sense that a global assumption must be made on the initial
data (namely, asymptotic flatness).
The purpose of this article is to present non-trivial evidence for the
validity of conjecture \ref{c:1}. The main part of this evidence is a formula
that relates in a remarkable way the variations of the area and the variations
of an appropriately defined mass functional on extreme throat initial data.
This kind of data (described in section \ref{sec:extr-cylindr-init}) isolates
the cylindrical structure of extreme black holes and hence represents a
natural source of counterexamples to the inequality \eqref{eq:16}, as we will
see. The very existence of this formula suggests that the inequality
\eqref{eq:16} should hold, at least in a relevant family of initial
conditions. Using this result we also prove the inequality for the non-trivial
family of spinning Bowen-York initial data.
The plan of the article is the following. In section
\ref{sec:extr-cylindr-init} we describe extreme throat initial data. In section
\ref{sec:main-result} we present our main result, given by theorem
\ref{t:main}. The proof of this result is naturally divided into two steps,
described in sections \ref{sec:mass-funct-extr} and \ref{sec:vari-area-extr}. In
section \ref{sec:mass-funct-extr} we present an appropriate mass functional for
extreme throat initial data. We calculate the first and second variations of this
functional evaluated at the extreme Kerr cylindrical initial data. These results
are analogous to the ones described for asymptotically flat axially symmetric
initial data described in \cite{Dain05c} \cite{Dain05d}. Section
\ref{sec:vari-area-extr} constitutes the most important part of the article. In
this section we show the relation between the variations of the mass
functional and the variations of the area. This relation was, in our opinion,
completely unexpected a priori. In section \ref{sec:aplic-spinn-bowen} we apply
this result to prove the inequality \eqref{eq:16} on the spinning Bowen-York
black hole family. We discuss the relevant open problems in section
\ref{sec:final-comments}. Finally, we conclude with an appendix in which we collect
some properties of the extreme Kerr throat initial data.
\section{Extreme throat initial data set}
\label{sec:extr-cylindr-init}
In order to present our results we need to discuss first extreme throat
initial data. The definition of this kind of initial data is motivated by the
behavior of the Kerr black hole initial data in the extreme limit. Let us
briefly review this behavior (for more details, see for example, section 2
in \cite{Dain:2010uh}).
Consider the Kerr black hole with mass $m$ and angular momentum $J$. We define
the following parameter $\mu$ (which has units of mass) in terms of $m$ and $J$
\begin{equation}
\label{eq:mu}
\mu=\sqrt{ m^2 -|J|}.
\end{equation}
The extreme Kerr black hole corresponds to $\mu = 0$. For the Schwarzschild
black hole we have $\mu=m$.
In the standard Boyer-Lindquist coordinates for the Kerr black hole, take a
slice $t=constant$. Let us denote by $S$ the 3-dimensional manifold defined by
that slice. The topology of this surface is $S=S^2\times \mathbb{R}$.
The triple $(S,h_{ij}, K_{ij})$, where $ h_{ij}$ is the induced intrinsic
metric on $S$ and $ K_{ij}$ is the second fundamental form of $S$, constitutes
an initial data set for Einstein equations. That is, they are solutions of the
constraint equations
\begin{align}
\label{const1}
D_j K^{ij} - D^i K= 0,\\
\label{const2}
R - K_{ij} K^{ij}+ K^2=0,
\end{align}
where $ {D}$ and $ R$ are the Levi-Civita connection and the Ricci scalar
associated with ${h}_{ij}$, and $ K = K_{ij} h^{ij}$. In these equations the
indices are moved with the metric $ h_{ij}$ and its inverse $ h^{ij}$.
For $\mu >0$ these data have the geometry of two asymptotically flat ends. In
the extreme limit $\mu =0$ the geometry changes. One of the asymptotic ends is
asymptotically flat but the other is cylindrical.
Let us take a closer look at the structure of the cylindrical end.
In coordinates ($r,\theta,\phi$), the induced metric on $S$ has the form
\begin{equation}\label{thcero}
h_{ij}=\Phi^4\tilde h_{ij},
\end{equation}
where the conformal metric $\tilde h_{ij}$ is defined by
\begin{equation}
\label{eq:42}
\tilde h=e^{2q}(dr^2+r^2d\theta^2)+r^2\sin^2\theta d\phi^2,
\end{equation}
and the functions $\Phi$ and $q$ are given by equations \eqref{ficero} in appendix
\ref{sec:extr-kerr-cylindr}. The extrinsic curvature is given by
\begin{equation}
\label{tkcero}
K_{ij}=\frac{2}{\eta} S_{(i} \eta_{j)}, \quad
S_i=\frac{1}{\eta}\epsilon_{ijk}\eta^j\partial^k\omega,
\end{equation}
where $\eta^i$ is the axial Killing vector
\begin{equation}
\label{eq:7}
\eta^i=\frac{\partial}{\partial \phi},
\end{equation}
the square of its norm $\eta$ is given by \eqref{eq:59bb}, $\epsilon_{ijk}$
denotes the volume element with respect to the metric $h_{ij}$ and $\omega$ is
given by \eqref{wcero}. The advantage of this particular form of writing
$K_{ij}$ is that it is easy to check from \eqref{tkcero} that $K_{ij}$
satisfies the momentum constraint \eqref{const1} (see, for example, the
appendix in \cite{Dain99b} and \cite{Dain:2010uh}). In particular, we have
that $K_{ij}$ is trace-free, namely
\begin{equation}
\label{eq:62}
K=0.
\end{equation}
That is, these initial data are maximal surfaces.
In these coordinates, the asymptotically flat end of the metric \eqref{thcero}
corresponds to the limit $r\to \infty$ and the cylindrical end corresponds to
the limit $r\to 0$. The radial coordinate $r$ is a good coordinate in the
asymptotically flat end, since the metric and the extrinsic curvature are
manifestly asymptotically flat with respect to these coordinates: they have
the standard decay to the flat metric.
On the other hand, in the limit $r \to 0$ the conformal factor $\Phi$ blows
up. This is, however, just a coordinate problem. To see this, define $s=-\ln r$,
then the cylindrical end corresponds to $s\to \infty$, and the metric has the
form
\begin{equation}
\label{eq:2}
h^0=(\sqrt{r}\Phi)^4\left(e^{2q}(ds^2+d\theta^2)+\sin^2\theta
d\phi^2 \right).
\end{equation}
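Indeed, since $ds=-dr/r$ we have $dr^2+r^2d\theta^2=r^2(ds^2+d\theta^2)$, so the
metric \eqref{thcero}, with the conformal metric \eqref{eq:42}, takes the form
\begin{equation*}
\Phi^4 r^2\left(e^{2q}(ds^2+d\theta^2)+\sin^2\theta\, d\phi^2\right),
\end{equation*}
which is \eqref{eq:2} because $(\sqrt{r}\Phi)^4=r^2\Phi^4$.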
The functions $\sqrt{r}\Phi$ and $q$ are smooth and uniformly bounded in
the whole range $-\infty < s< \infty$.
The metric
\eqref{eq:2} and the second fundamental form \eqref{tkcero} have a well defined
limit $s\to \infty$ as initial data. For the metric $h^0$ in the limit $s\to
\infty$ we obtain
\begin{equation}
\label{eq:55}
h=\varphi_0^4(e^{2q_0}(ds^2+d\theta^2)+\sin^2\theta d\phi^2),
\end{equation}
where $\varphi_0$ and $q_0$ are defined by the limits
\eqref{eq:56}--\eqref{eq:57}. The extrinsic curvature $K^{ij}$ has the form
\eqref{tkcero} where $\omega$ is replaced by its limiting value $\omega_0$
defined by \eqref{eq:58} and all the other quantities are computed with respect
to the metric \eqref{eq:55}. These are in fact solutions of the constraint
equations \eqref{const1}--\eqref{const2} on $S^2\times \mathbb{R}$.
We call this initial data set the extreme Kerr throat initial data set.
Let us make a summary. The extreme Kerr throat initial data set is constructed
out of the Kerr black hole initial data by two limits.
The first one is the extreme limit
\begin{equation}
\label{eq:3}
\mu \to 0.
\end{equation}
In this limit the geometry of the Kerr black hole initial data changes from two
asymptotically flat ends to one asymptotically flat and one cylindrical. The
second limit is
\begin{equation}
\label{eq:15}
s\to \infty.
\end{equation}
This limit isolates the cylindrical structure of the extreme Kerr initial data
cutting off the asymptotically flat end.
This procedure of taking the extreme limit can be performed for more generic data
(see \cite{Dain:2008yu}, \cite{gabach09}), and the behavior is identical,
although, of course, one ends up with a different extreme throat initial data set.
We isolate the properties of generic extreme throat initial data in the following
definition. Consider the following Riemannian metric
\begin{equation}
\label{eq:1}
h=\varphi^4(e^{2q}(ds^2+d\theta^2)+\sin^2\theta d\phi^2),
\end{equation}
where the functions $\varphi$ and $q$ depend only on $\theta$.
We assume that $\varphi$ and $q$ satisfy the following equation
\begin{equation}
\label{eq:2b}
\Delta_0\varphi -\frac{1}{4}(1-\partial^2_\theta
q)\varphi=-\frac{|\partial_\theta \omega|^2}{16\sin^4\theta \varphi^7},
\end{equation}
where $\Delta_0$ is the Laplace operator in $S^2$ with respect to the standard
metric acting on axially symmetric functions, namely
\begin{equation}
\label{eq:17}
\Delta_0\varphi = \frac{1}{\sin\theta} \partial_\theta
\left(\sin\theta \partial_\theta \varphi \right).
\end{equation}
Finally, for convenience, we define out of $\varphi$ two additional functions,
$\sigma$ and $\eta$, as follows
\begin{equation}
\label{eq:40}
\varphi^4=e^\sigma,\quad \eta=\sin^2\theta \varphi^4.
\end{equation}
The function $\eta$ is the square of the norm of the Killing vector (\ref{eq:7})
with respect to the metric \eqref{eq:1}.
With these ingredients, we can formulate the following definition.
\begin{definition}
Consider a set of functions $(\sigma,\omega, q)$ (depending only on $\theta$)
that satisfy equation \eqref{eq:2b} on $S^2$ and such that $q$ vanishes at
the poles $\theta=0,\pi$. Then, an extreme throat initial data set is a
triple $(S, h_{ij}, K_{ij})$ where $S=\mathbb{R}\times S^2$, $h_{ij}$ is
given by \eqref{eq:1} and $K_{ij}$ is constructed from $\omega$ by the
formula \eqref{tkcero}. In equation \eqref{tkcero}, the volume element
$\epsilon_{ijk}$ is calculated with respect to the metric \eqref{eq:1}, the
vector $\eta^i$ is given by (\ref{eq:7}) and the indices are moved with
the metric \eqref{eq:1}.
\end{definition}
From the definition it follows that the data satisfy the constraint equations
(\ref{const1})--(\ref{const2}), since equation \eqref{eq:2b} is just the
Lichnerowicz equation for the conformal factor $\varphi$ with respect to the
conformal metric
\begin{equation}
\label{eq:19}
\tilde h =e^{2q}(ds^2+d\theta^2)+\sin^2\theta d\phi^2.
\end{equation}
Note that the Ricci scalar of $\tilde h$ is given by
\begin{equation}
\label{eq:68}
\tilde R= 2e^{-2q}(1-\partial^2_\theta q).
\end{equation}
For a discussion on the Lichnerowicz equation and the conformal method see, for
example, the review article \cite{Bartnik04b}.
The vector $\eta^i$ is a Killing vector of the metric $h_{ij}$
\begin{equation}
\label{eq:12}
\pounds_\eta h_{ij}=0,
\end{equation}
where $\pounds$ denotes Lie derivative. Moreover, the
requirement that $q$ vanishes at the poles arises from the regularity of the
metric at the axis, namely the condition
\begin{equation}
\label{eq:8}
\lim_{\theta\to 0,\pi} \frac{\partial_i\eta \partial^i \eta}{4\eta}=1.
\end{equation}
Hence, the metric $h_{ij}$ is axially symmetric (see \cite{Dain06c}
\cite{Chrusciel:2007dd} for a discussion of axial symmetry on initial data and
also \cite{stephani03} for a general discussion of axial symmetry in General
Relativity).
From the definition of $K_{ij}$ it is also clear that $\eta^i$ is a symmetry of
$K_{ij}$, namely
\begin{equation}
\label{eq:11}
\pounds_\eta K_{ij}=0.
\end{equation}
In our definition, we have made for simplicity the assumption that $\eta^i$ is
hypersurface orthogonal (with respect to the metric $h_{ij}$). We expect that
all the results obtained in this article are also valid without this
assumption, but this analysis remains to be done.
The metric $h_{ij}$ has another symmetry, namely the vector $\xi^i$ defined by
\begin{equation}
\label{eq:13}
\xi^i=\frac{\partial }{\partial s}.
\end{equation}
It is straightforward to check that $\xi^i$ is also a symmetry of $K_{ij}$
\begin{equation}
\label{eq:66}
\pounds_\xi K_{ij}=0.
\end{equation}
Also, the vectors $\eta^i$ and $\xi^i$ commute.
Riemannian metrics of the form \eqref{eq:1} are generically called
cylindrically symmetric. In addition, we have equations (\ref{eq:11}) and
(\ref{eq:66}) and hence at first sight it looks appropriate to call the whole
initial data set cylindrically symmetric. However this terminology is
misleading for the following reason: in general, the spacetime originating from
this kind of data will not be cylindrically symmetric. Recall that a spacetime
is cylindrically symmetric if it admits two spacelike commuting Killing vectors
(see \cite{stephani03}). Since the problem of cylindrically symmetric
spacetimes has been frequently analyzed in the literature, it is important to
discuss this point in detail.
The vectors $\eta^i$ and $\xi^i$ are Killing vectors of $h_{ij}$ and we also
have equations (\ref{eq:11})--(\ref{eq:66}), hence it follows from the results
of \cite{Moncrief75} that the development of this class of initial data will be
a spacetime with, at least, two Killing vectors. The projections of the
spacetime Killing vectors to the initial surface are given by $\eta^i$ and
$\xi^i$ (see \cite{beig97} for a discussion about the relation of spacetime
symmetries and symmetries on the initial data).
These data constitute initial data for the axially symmetric vacuum Einstein
equations (see, for example, \cite{Dain:2008xr} \cite{dain10}), hence it
follows that the spacetime will be axially symmetric. In particular, the
spacetime Killing vector $\eta^\mu$ corresponding to $\eta^i$ will be spacelike
outside the axis.
However, although the spacetime will have another symmetry it will not be
cylindrically symmetric, because the extra symmetry will not be, in general,
spacelike. The behavior of the spacetime Killing vector $\xi^\mu$ originating
from the initial data symmetry $\xi^i$ is clearly illustrated in the following
explicit example, which is also interesting in itself.
Consider the following 4-dimensional metric in coordinates $(t,s,\theta,\phi)$
\begin{multline}
\label{eq:74}
g= \frac{(1+\cos^2\theta)}{2}
\left[-\frac{e^{-2s}}{r^2_0} dt^2 +r^2_0(ds^2+d\theta^2) \right]+\\
\eta_0 \left(d\phi
+ \frac{e^{-s}}{r^2_0} dt \right)^2,
\end{multline}
where $r^2_0=2|J|$ and $\eta_0$ is given by (\ref{eq:6}).
This metric was introduced in \cite{Bardeen:1999px} as the
extreme Kerr throat geometry. It characterizes the spacetime geometry near the
horizon of the extreme Kerr black hole.
It can be easily verified that the extreme Kerr throat initial data given by
equations (\ref{eq:55}) and (\ref{tkcero}) are the initial data of the metric
(\ref{eq:74}) in a surface $t=constant$. The spacetime Killing vectors of the
metric $g$ which correspond to the initial data Killing vectors $\eta^i$ and
$\xi^i$ are given by
\begin{equation}
\label{eq:75}
\eta^\mu=\frac{\partial }{\partial \phi}, \quad \xi^\mu =t\frac{\partial
}{\partial t}- \frac{\partial }{\partial s}.
\end{equation}
The metric has also two more Killing vectors (see \cite{Bardeen:1999px})
\begin{equation}
\label{eq:76}
\xi_1^\mu = \frac{\partial }{\partial t},\quad \xi_2^\mu=
\left(\frac{e^{-2s}}{2} +\frac{t^2}{2} \right) \frac{\partial
}{\partial t}- t\frac{\partial }{\partial s}-
e^{-s}\frac{\partial }{\partial \phi}.
\end{equation}
We see that the Killing vector $\xi^\mu$ is not spacelike everywhere. In
particular, the metric $g$ is not cylindrically symmetric.
Finally, let us mention that there are two important physical quantities
defined on an extreme throat initial data set. First, the angular momentum, given by
\begin{equation}
\label{eq:41}
J=\frac{1}{8} \left(\omega(\pi)-\omega(0)\right).
\end{equation}
This formula follows from the expression of the angular momentum for standard
asymptotically flat
axially symmetric initial data (see, for example, \cite{Dain06c}).
Second, the area of the cylinder
\begin{equation}
\label{eq:26}
A =2\pi \int_0^\pi e^{\sigma+q} \sin\theta \, d\theta.
\end{equation}
\section{Main result}
\label{sec:main-result}
The extreme limit procedure \eqref{eq:3} and \eqref{eq:15} that leads to the
extreme throat initial data for the Kerr black hole discussed in the previous
section \ref{sec:extr-cylindr-init} has an additional, remarkable property.
The area of the extreme cylinder (with value $A=8\pi|J|$) is smaller than the
minimal surface area of any non-extreme Kerr black hole initial data (recall
that the angular momentum $J$ is kept fixed). In fact, the area of the minimal
surface is a monotonically decreasing function with respect to $\mu$. This can
be, of course, trivially verified since for the Kerr black hole we have the
explicit expression for $A$ in terms of $\mu$.
As we have pointed out, this extreme limit can be performed for other classes of
initial data, like the Bowen-York black hole initial data discussed in section
\ref{sec:aplic-spinn-bowen}. It is conceivable (but it certainly remains to be
shown) that such a procedure exists for general black hole initial data
in axial symmetry, or at least for a relevant family of initial data. Let us
assume that this is the case. That is, let us assume that for an initial data set
with a horizon of area $A_1$ we can perform the limit procedure to obtain an
extreme throat initial data set of area $A$, with $A\leq A_1$. Then, if
inequality \eqref{eq:16} is true, it should also hold for the extreme
throat initial data. Our main result indicates that this is precisely the case.
This result is summarized in the following theorem.
\begin{theorem}
\label{t:main}
Let us consider families of extreme throat initial data with fixed angular
momentum $J$. Then, the area on these families satisfies the
following properties:
\begin{itemize}
\item The first variation of the area is zero evaluated on the extreme Kerr
throat initial data.
\item The second variation of the area is positive evaluated on the extreme Kerr
throat initial data.
\end{itemize}
\end{theorem}
This theorem strongly suggests that the area is an absolute minimum for extreme
Kerr throat initial data among all the extreme throat initial data with the
same angular momentum. Since for extreme Kerr we have $A=8\pi |J|$, the
inequality \eqref{eq:16} is satisfied for general extreme throat initial data.
In order to prove that, we can follow lines similar to those in \cite{Dain05d} to
prove that it is a local minimum and to \cite{Dain06c} \cite{Costa:2009hn}
\cite{Chrusciel:2009ki} \cite{Chrusciel:2007ak} to prove that it is in fact a
global minimum. It appears that the same analysis will go through without major
difficulties. This however should be checked, and it will be done in a
subsequent work.
Theorem \ref{t:main} also gives strong evidence in favor of inequality
\eqref{eq:16}. Namely, if this inequality were false, there would be no reason to
expect that it holds on extreme throat initial data. As has been pointed out
above, this theorem also suggests a strategy to prove the conjecture: given an
initial data set with an apparent horizon, construct a limit procedure analogous to
\eqref{eq:3} and \eqref{eq:15} in such a way that i) in the limit an extreme
throat initial data set is obtained and ii) the area of the extreme throat
initial data is less than or equal to the area of the horizon. In fact, in
section \ref{sec:aplic-spinn-bowen} we construct this limit procedure for the
spinning Bowen-York family of initial data.
The proof of theorem \ref{t:main} is naturally divided into two parts, presented
in sections \ref{sec:mass-funct-extr} and \ref{sec:vari-area-extr}
respectively.
\section{The mass functional for extreme throat initial data}
\label{sec:mass-funct-extr}
An extreme throat initial data set is stationary if the following equations are
satisfied
\begin{align}
\label{eq:18}
\Delta_0\sigma-2 & =-\frac{|\partial_\theta \omega|^2}{\eta^2}\\
\label{eq:20}
\Delta_0\omega & =2\frac{\partial_\theta\omega\partial_\theta\eta}{\eta}.
\end{align}
The fact that these equations for an extreme throat initial data define
stationary solutions can be deduced from the standard stationary axially
symmetric equations. However, for our present purpose, the only property of
equations \eqref{eq:18}--\eqref{eq:20} that we will use is that the extreme
Kerr throat initial data (defined by \eqref{eq:35b}--\eqref{eq:58}) are a
solution of them. This can be easily checked explicitly.
Equation \eqref{eq:20} can be written in divergence form as follows
\begin{equation}
\label{eq:21}
\partial_\theta \left(\sin\theta \frac{\partial_\theta \omega}{\eta^2}
\right)=0.
\end{equation}
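Indeed, expanding the derivative and using the definition \eqref{eq:17} of $\Delta_0$,
\begin{equation*}
\partial_\theta \left(\sin\theta \frac{\partial_\theta \omega}{\eta^2}\right)
=\frac{\sin\theta}{\eta^2}\left(\Delta_0\omega
-2\frac{\partial_\theta\omega\,\partial_\theta\eta}{\eta}\right),
\end{equation*}
so \eqref{eq:21} is equivalent to \eqref{eq:20} away from the axis.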
The stationary equations can be written in a natural form
as equations on the unit sphere $S^2$ with the standard metric. Namely, let
$D_A$ be the covariant derivative with respect to the standard metric in
$S^2$. Then, equations \eqref{eq:18}--\eqref{eq:20} are written as
\begin{align}
\label{eq:43}
D_A D^A\sigma -2 & =-\frac{D_A\omega D^A\omega}{\eta^2},\\
D_A\left(\frac{D^A\omega}{\eta^2}\right) & =0. \label{eq:43f}
\end{align}
These expressions were defined for axially symmetric functions, but they
also make sense for functions which depend on $\phi$. In fact, in all the
results that follow we will not use the assumption that the functions are
axially symmetric (this is very similar to what happens in the study of
the inequality \eqref{eq:4} discussed in \cite{Dain06c}).
We define the following functional
\begin{equation}
\label{eq:22}
\mathcal{M}= \int_0^\pi\left( |\partial_\theta \sigma|^2 +4\sigma + \frac{|\partial_\theta
\omega|^2}{\eta^2}\right) \sin\theta \, d\theta.
\end{equation}
On the unit sphere, using the notation of equations
\eqref{eq:43}--\eqref{eq:43f} this functional is written as
\begin{equation}
\label{eq:44}
\mathcal{M}=\frac{1}{2\pi} \int_{S^2}\left( |D \sigma|^2 +4\sigma + \frac{|D
\omega|^2}{\eta^2}\right) \, dS,
\end{equation}
where $dS=\sin\theta\, d\theta d\phi$ is the volume element of the standard
metric in $S^2$. This functional is the obvious translation of the mass
functional used in \cite{Dain05c} adapted to this kind of initial
data.
Let us make some general comments regarding the functional $\mathcal{M}$ which are not
directly relevant for the present article but they can have interesting future
applications. It is very likely that for non-stationary initial data the
functional $\mathcal{M}$ represents a lower bound for another mass functional $\mathcal{M}'$
which includes the time dependent terms. This is what happens with the
functional considered in \cite{Dain05c}. When the complete spacetime is
considered (and not just the initial data), this new functional is precisely
the total energy (the ADM mass) of axially symmetric, asymptotically flat
spacetimes, and it is conserved (see \cite{Dain:2008xr}). In the present case,
the mass functional $\mathcal{M}'$ will describe the total energy of the class of
spacetimes discussed in section \ref{sec:extr-cylindr-init}. Namely, axially
symmetric spacetimes which have another Killing vector. These spacetimes are
not asymptotically flat. An analogous situation occurs for cylindrically symmetric
spacetimes, for which the total energy can be defined (see
\cite{Ashtekar:1996cd} and references therein). We emphasize however that the
situation here is more complicated since the extra Killing vector is not
spacelike everywhere. It would be very interesting to explore this issue and
construct explicitly the functional $\mathcal{M}'$.
Relevant for our present purpose are the following two important properties of
the mass functional \eqref{eq:22}. We will prove them in lemma \ref{l:1}.
The first one is that the stationary equations are the Euler-Lagrange equations
of this functional. That is, the extreme Kerr throat initial data are critical
points of this functional. The second property is that the second variation of
this functional evaluated at the extreme Kerr throat initial data is positive.
That suggests that the extreme Kerr throat initial data set is in fact a minimum of
this functional. These properties can be expected from the analysis developed
in \cite{Dain05c} and \cite{Dain05d}, since the functional $\mathcal{M}$ is the natural
generalization of the mass functional used in these articles adapted to
cylindrical initial data.
Before proving that lemma it is important to make the connection between the
functional $\mathcal{M}$ and the energy of harmonic maps between $S^2$ and
$\mathbb{H}^2$. Namely, consider the functional
\begin{equation}
\label{eq:47}
\tilde \mathcal{M}_\Omega= \frac{1}{2\pi} \int_\Omega \frac{|\partial \eta|^2+|\partial
\omega|^2}{\eta^2} \, dS,
\end{equation}
defined on some domain $\Omega\subset S^2$, such that $\Omega$ does not include
the poles. Integrating by parts and using the
identity
\begin{equation}
\label{eq:48}
\Delta_0(\log(\sin\theta))=-1,
\end{equation}
we obtain the following relation between $\mathcal{M}_\Omega$ and $\tilde \mathcal{M}_\Omega$
\begin{multline}
\label{eq:49}
\tilde\mathcal{M}_\Omega= \mathcal{M}_\Omega+4 \int_\Omega \log\sin\theta\, dS+ \\
\oint_{\partial
\Omega} (4\sigma + \log\sin\theta) \frac{\partial \log\sin\theta}{\partial
n}\, ds,
\end{multline}
where $n$ denotes the exterior normal to $\Omega$, $ds$ is the surface element
on the boundary $\partial \Omega$ and we have used the obvious notation
$\mathcal{M}_\Omega$ to denote the mass functional \eqref{eq:44} defined over the
domain $\Omega$. The difference between $\mathcal{M}_\Omega$ and $\tilde \mathcal{M}_\Omega$ is the boundary
integral plus the second term, which is just a numerical constant. Note that if
we integrate over $S^2$ this constant term is finite
\begin{equation}
\label{eq:50}
\int_\Omega \log\sin\theta\, dS=2\log2-2.
\end{equation}
The boundary terms, however, diverge at the poles.
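Note also that the identity \eqref{eq:48} used above follows from a direct
computation:
\begin{equation*}
\Delta_0(\log(\sin\theta))=\frac{1}{\sin\theta}\partial_\theta
\left(\sin\theta\,\frac{\cos\theta}{\sin\theta}\right)
=\frac{\partial_\theta \cos\theta}{\sin\theta}=-1.
\end{equation*}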
In an analogous way to what was described in \cite{Dain06c}, the functional
$\tilde \mathcal{M}_\Omega$ defines an energy for maps $(\eta,\omega):S^2\to \mathbb{H}^2$, where
$\mathbb{H}^2$ denotes the hyperbolic plane $\{(\eta, \omega ) : \eta > 0\}$,
equipped with the negative constant curvature metric
\begin{equation}
\label{eq:51}
ds^2=\frac{d\eta^2+d\omega^2}{\eta^2}.
\end{equation}
The solutions of the Euler-Lagrange equations for the energy $\tilde \mathcal{M}_\Omega$ are called
harmonic maps from $S^2$ to $\mathbb{H}^2$. Since $\mathcal{M}_\Omega$ and $\tilde \mathcal{M}_\Omega$ differ only by a constant
and boundary terms, they have the same Euler-Lagrange equations.
We present in the following lemma the main result of this section.
\begin{lemma}
\label{l:1}
Let us consider families of extreme throat initial data with fixed angular
momentum $J$. Then, the mass functional $\mathcal{M}$ on these families satisfies the
following properties:
\begin{itemize}
\item The first variation of $\mathcal{M}$ is zero evaluated on the extreme Kerr
throat initial
data.
\item The second variation of $\mathcal{M}$ is positive evaluated on the extreme Kerr
throat initial data.
\end{itemize}
\end{lemma}
\begin{proof}
The proof follows lines very similar to the one presented in \cite{Dain05d}. The only
difference is the presence of an extra term in $\mathcal{M}$, the one containing
$\sigma$. But this term, since it is linear, makes no contribution to the
second variation which is the delicate part of the proof.
To define the variations, let us consider the real-valued function
$\iota(\epsilon)$
defined by
\begin{equation}
\label{eq:19b}
\iota(\epsilon)= \mathcal{M}(\sigma(\epsilon),\omega(\epsilon)),
\end{equation}
where
\begin{equation}
\label{eq:35}
\sigma(\epsilon)=\sigma_0+\epsilon\bar \sigma,\quad \omega(\epsilon)=\omega_0+
\epsilon\bar \omega.
\end{equation}
We assume that $\bar \omega$ vanishes at the poles
$\theta=0,\pi$. This boundary condition keeps the angular momentum fixed under
the variations. In an analogous way we define
\begin{equation}
\label{eq:33b}
\eta(\epsilon)=\sin^2\theta e^{\sigma(\epsilon)}.
\end{equation}
The first derivative of $\iota(\epsilon)$ with respect to $\epsilon$ is
given by
\begin{multline}
\label{eq:43b}
\iota'(\epsilon) = \frac{1}{\pi}\int_{S^2}
\left\{D_A \sigma D^A \bar \sigma + 2\bar \sigma+ \right. \\
\left. +\left ( D_A \omega D^A \bar \omega -\bar \sigma |D \omega|^2
\right)\eta^{-2}\right\} dS,
\end{multline}
where a prime denotes the derivative with respect to $\epsilon$ and the $\epsilon$
dependence in the right-hand side of \eqref{eq:43b} is encoded in the functions
$\sigma(\epsilon),\omega(\epsilon),\eta(\epsilon)$ defined by
\eqref{eq:35}--\eqref{eq:33b}. If we evaluate at $\epsilon=0$, integrate by
parts and use the condition that $\bar \omega$ vanishes at the poles, we obtain
the Euler-Lagrange equations \eqref{eq:43}--\eqref{eq:43f}. Since extreme Kerr
is a solution of these equations, the first item in the Lemma is proved.
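In more detail, the integration by parts reads
\begin{multline*}
\iota'(0)=-\frac{1}{\pi}\int_{S^2}
\left\{\bar \sigma\left(D_A D^A\sigma_0 -2+\frac{|D \omega_0|^2}{\eta_0^2}\right)\right.\\
\left.+\bar \omega\, D_A\left(\frac{D^A \omega_0}{\eta_0^2}\right)\right\}\, dS,
\end{multline*}
and requiring this to vanish for all $\bar \sigma$ and $\bar \omega$ yields precisely
\eqref{eq:43}--\eqref{eq:43f}.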
The second derivative of $\iota$ is given by
\begin{multline}
\label{eq:44b}
\iota''(\epsilon)= \frac{1}{\pi}\int_{S^2}
\left\{| D \bar \sigma |^2 + \right.\\
\left. +\left(2\bar \sigma^2|D \omega|^2
- 4\bar \sigma D_A \omega D^A \bar \omega + |D \bar \omega|^2
\right)\eta^{-2}\right\} dS.
\end{multline}
From equation \eqref{eq:44b}, it is far from obvious that the second variation
evaluated at the critical point $\epsilon=0$ is positive definite. In order to
prove that, the key ingredient is the following remarkable identity proved by
Carter \cite{Carter71}. In terms of our variables it has the following form
\begin{equation}
\label{eq:9b}
F + \bar \sigma G'_{\sigma}+\bar \omega G'_{\omega}+2\bar \sigma\bar \omega G_{\omega} - \eta^{-2}\bar \omega^2 G_{\sigma} =H
\end{equation}
where
\begin{align}
\label{eq:200}
G_\sigma(\epsilon) & = \Delta \sigma + \eta^{-2} |D \omega|^2 -2,\\
G_\omega(\epsilon) &= D_A( \eta^{-2}D^A \omega), \label{eq:200b}
\end{align}
the derivatives with respect to $\epsilon$ of these functions are given by
\begin{align}
\label{eq:101}
G'_\sigma(\epsilon) & =\Delta \bar \sigma + \left( 2D_A \bar \omega D^A \omega -2\bar \sigma |D
\omega|^2
\right)\eta^{-2},\\
G'_\omega(\epsilon) &= D_A \left(\eta^{-2} \left(D^A \bar \omega-2\bar \sigma D^A \omega\right)
\right), \label{eq:102}
\end{align}
the positive definite function $F$ is given by
\begin{multline}
\label{eq:26b}
F(\epsilon)=\left(D\bar \sigma+ \bar \omega \eta^{-2} D \omega \right )^2 +
\left(D( \bar \omega \eta^{-1})- \eta^{-1} \bar \sigma D \omega \right)^2\\
+\left( \eta^{-1} \bar \sigma D \omega - \bar \omega \eta^{-2} D \eta \right)^2,
\end{multline}
and the divergence term $H$ is given by
\begin{equation}
\label{eq:52}
H= D_A \left(\bar \sigma D^A \bar \sigma +\bar \omega \eta^{-1} D^A \left( \bar \omega \eta^{-1}
\right)\right).
\end{equation}
The identity \eqref{eq:9b} is valid for arbitrary functions
$\sigma,\omega,\bar \sigma,\bar \omega$ and it is straightforward to check, although the
computations are lengthy.
Note that using \eqref{eq:101}--\eqref{eq:102} and integrating by parts we
obtain
\begin{equation}
\label{eq:11b}
-\int_{S^2} \left( \bar \sigma G'_{\sigma}(\epsilon)+\bar \omega G'_{\omega}(\epsilon) \right)
dS =\pi \iota''(\epsilon).
\end{equation}
We integrate the identity \eqref{eq:9b} over $S^2$. The integral of the divergence term $H$
vanishes (here we again use the boundary condition). We use \eqref{eq:11b} to
obtain
\begin{equation}
\label{eq:53}
\iota''(\epsilon)= \frac{1}{\pi}\left(\int_{S^2}F \, dS+ \int_{S^2} \left( 2\bar \sigma\bar \omega G_{\omega}(\epsilon) -
\eta^{-2}\bar \omega^2 G_{\sigma}(\epsilon)\right)\, dS\right).
\end{equation}
If we evaluate at $\epsilon=0$ the last integral vanishes, and hence we get the
final result
\begin{equation}
\label{eq:54}
\iota''(0)= \frac{1}{\pi}\int_{S^2}F \, dS \geq 0.
\end{equation}
\end{proof}
The mass functional $\mathcal{M}$ evaluated at extreme Kerr gives the value
\eqref{eq:39} which in particular is not equal to the total mass $m$ of
extreme Kerr. This is to be expected since there
is no obvious relation between $\mathcal{M}$ and the total mass of the associated
initial data with an asymptotically flat end and a cylindrical
end. However, the value of $\mathcal{M}$ at extreme Kerr suggests the following definition
\begin{equation}
\label{eq:45}
m=Ce^\frac{{\mathcal{M}}}{16}, \quad C= e^{-\frac{\ln(2)}{2}-\frac{1}{2}}.
\end{equation}
We have normalized this quantity in such a way that it gives the mass for extreme
Kerr. It is also trivially positive (note that $\mathcal{M}$ is not, due to the
extra term $\sigma$, which has no sign). More importantly, the first variation of
$m$ and the second variation of $m$ are given by
\begin{equation}
\label{eq:46}
m' =2^{-4} \mathcal{M}' m, \quad m'' = 2^{-4}\left( \mathcal{M}'' +2^{-4}( \mathcal{M}')^2\right)m.
\end{equation}
And hence the functional $m$ has the same critical points as $\mathcal{M}$, and its
second variation at the critical points is also positive. These properties make the
functional $m$ attractive, but we will not make use of it in the following. For
the purpose of the proof of theorem \ref{t:main} only the functional $\mathcal{M}$ is
used.
\section{Variation of the area for extreme throat initial data}
\label{sec:vari-area-extr}
The results from the previous section are somewhat to be expected, since they are
the analog of the variational formulation presented in \cite{Dain05c} and
\cite{Dain05d}. The remarkable new ingredient is the relation of this mass
functional with the area. This is the subject of this section and it
constitutes the most relevant part of this article.
Consider the formula for the area of an extreme throat initial data set given
by \eqref{eq:26}. The first and second variations of the area are given by
\begin{equation}
\label{eq:27}
A' = \int_{S^2}(\sigma'+ q') e^{(\sigma+q)} \, dS,
\end{equation}
and
\begin{equation}
\label{eq:28}
A'' = \int_{S^2}((\sigma'+ q')^2 +( \sigma'' + q''))
e^{(\sigma+q)} \, dS.
\end{equation}
In order to relate these equations with the mass functional we proceed as
follows. We first write the Hamiltonian constraint \eqref{eq:2b} in terms of
$\sigma$ using the relation \eqref{eq:40}
\begin{equation}
\label{eq:69}
4\Delta_0\sigma +|\partial_\theta \sigma|^2 + \frac{|\partial_\theta
\omega|^2}{\eta^2} -4(1-\partial^2_\theta q)=0.
\end{equation}
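For completeness we sketch this computation: with $\varphi=e^{\sigma/4}$ one has
\begin{equation*}
\Delta_0\varphi=\frac{\varphi}{4}\left(\Delta_0\sigma+\frac{1}{4}|\partial_\theta\sigma|^2\right),
\qquad \sin^4\theta\,\varphi^8=\eta^2,
\end{equation*}
and inserting these into \eqref{eq:2b} and multiplying by $16\varphi^{-1}$ gives
\eqref{eq:69}.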
We integrate this equation over $S^2$. The first term gives zero. We write the second
and third terms in terms of the mass functional \eqref{eq:44}, namely
\begin{equation}
\label{eq:70}
\int_{S^2}|\partial_\theta \sigma|^2 + \frac{|\partial_\theta
\omega|^2}{\eta^2} \, dS= 2\pi \mathcal{M}- 4\int_{S^2}\sigma \, dS.
\end{equation}
For the last term, we integrate by parts the term with $\partial^2_\theta q$, namely
\begin{align}
\label{eq:71}
\int_{S^2} \partial^2_\theta q \, dS &= 2\pi \int_{0}^\pi \partial^2_\theta q
\sin\theta \, d\theta\\
&= 2\pi \int_{0}^\pi\left(\partial_\theta (\partial_\theta q
\sin\theta)- \partial_\theta q \cos \theta\right) \, d\theta \label{eq:71a} \\
&= -2\pi \int_{0}^\pi \partial_\theta q \cos \theta \, d\theta \label{eq:71b}\\
&= 2\pi \int_{0}^\pi \left(-\partial_\theta(q\cos\theta) -q\sin\theta\right) \,
d\theta \label{eq:71c}\\
&= -2\pi \int_{0}^\pi q\sin\theta \, d\theta \label{eq:71d} \\
& = -\int_{S^2} q \, dS. \label{eq:71e}
\end{align}
To pass from \eqref{eq:71a} to \eqref{eq:71b} we have used that $\sin\theta$
vanishes at $\theta=0,\pi$, and to pass from \eqref{eq:71c} to \eqref{eq:71d} we have
used that $q$ vanishes at $\theta=0,\pi$.
Collecting equations \eqref{eq:70} and \eqref{eq:71e}, from equation
\eqref{eq:69} we deduce our fundamental equation
\begin{equation}
\label{eq:23}
\mathcal{M}=8+\frac{2}{\pi} \int_{S^2} (\sigma+q) \, dS.
\end{equation}
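In more detail: the integral of the first term of \eqref{eq:69} over $S^2$ vanishes,
the second and third terms contribute $2\pi \mathcal{M}-4\int_{S^2}\sigma\, dS$ by
\eqref{eq:70}, and the last term contributes $-16\pi-4\int_{S^2} q\, dS$ by
\eqref{eq:71e}, so that
\begin{equation*}
0=2\pi \mathcal{M}-4\int_{S^2}\sigma\, dS-16\pi-4\int_{S^2}q\, dS,
\end{equation*}
which is \eqref{eq:23}.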
From equation \eqref{eq:23} we get an alternative expression for the first
variation of the mass
\begin{equation}
\label{eq:24}
\mathcal{M}'=\frac{2}{\pi} \int_{S^2} (\sigma'+ q') \, dS.
\end{equation}
And hence, using the first item in lemma \ref{l:1}, we get that
\begin{equation}
\label{eq:25}
\int_{S^2} (\sigma'+ q') \, dS|_{\epsilon=0} =0.
\end{equation}
Analogously, the second variation of the mass is given by
\begin{equation}
\label{eq:72}
\mathcal{M}''=\frac{2}{\pi} \int_{S^2} (\sigma''+ q'') \, dS.
\end{equation}
Using the second item in lemma \ref{l:1} we obtain
\begin{equation}
\label{eq:73}
\int_{S^2} (\sigma''+ q'') \, dS|_{\epsilon=0} >0.
\end{equation}
We are now ready to compute the first and second variation of the area. If we
evaluate the first variation of the area (equation \eqref{eq:27}) at
$\epsilon=0$ and use the (remarkable) fact that $e^{\sigma_0+q_0}$ is constant
for the extreme Kerr cylinder (see equation \eqref{eq:key}) we get
\begin{equation}
\label{eq:29}
A'|_{\epsilon=0} =4|J| \int(\sigma'+ q') \, dS|_{\epsilon=0}.
\end{equation}
Using \eqref{eq:25} we finally get
\begin{equation}
\label{eq:30}
A'|_{\epsilon=0} =0.
\end{equation}
For the second variation we use equations \eqref{eq:28} and again equation
\eqref{eq:key} to obtain
\begin{equation}
\label{eq:28b}
A''|_{\epsilon=0} = 4|J| \int_{S^2}((\sigma'+ q')^2 +( \sigma'' + q'')) \,
dS|_{\epsilon=0}.
\end{equation}
The integral of the first term is clearly non-negative and that of the second
is positive by \eqref{eq:72} and \eqref{eq:73}. Hence we deduce
\begin{equation}
\label{eq:31}
A''|_{\epsilon=0} >0.
\end{equation}
This concludes the proof of theorem \ref{t:main}.
\section{Application: spinning Bowen-York initial data}
\label{sec:aplic-spinn-bowen}
The Bowen-York initial data were introduced in \cite{Bowen80} and since
that time they have been extensively used in both analytical and numerical
studies. In this section we will prove that the area of the minimal surface
(which is also an apparent horizon) for the family of spinning Bowen-York
initial data satisfies the inequality \eqref{eq:5}. We will assume that the
inequality is true for extreme throat initial data. As it was pointed out
in section \ref{sec:main-result}, theorem \ref{t:main} suggests that this is the
case, but the technical steps to complete the proof remain to be done.
The argument runs as follows. In
\cite{Dain:2008yu} the extreme limit procedure was rigorously constructed for
this kind of data. The only property of this limit not proved in that article
was the monotonicity of the area. This is proved here as follows.
The area of any surface $r=constant$ is given by
\begin{equation}
\label{eq:59}
A_\mu(r)=2\pi r^2 \int_{S^2} \Phi_\mu^4\, dS.
\end{equation}
We use the same notation $\Phi_\mu$ for the conformal factor of the Bowen-York
family used in \cite{Dain:2008yu}.
The location of the minimal surface (by the isometry of the data) is on
$r=\mu/2$. That is, we want to consider the area $A_\mu(\mu/2)$.
By definition of the minimal surface, we know that
\begin{equation}
\label{eq:60}
A_\mu(\mu/2)\leq A_\mu(r),
\end{equation}
for all $r$. We also know that the conformal factor is monotone in
$\mu$ (Lemma 3.2 in \cite{Dain:2008yu}). That is, for
$\mu_1\leq \mu_2$ we have
\begin{equation}
\label{eq:61}
\Phi_{\mu_1}(r,\theta)\leq \Phi_{\mu_2}(r,\theta).
\end{equation}
Hence we have
\begin{equation}
\label{eq:63}
A_{\mu_1}(r)\leq A_{\mu_2}(r).
\end{equation}
Then we obtain the following
\begin{equation}
\label{eq:64}
A_{\mu_1}(\mu_1/2)\leq A_{\mu_1}(r) \leq A_{\mu_2}(r),
\end{equation}
for all $r$. The first inequality in \eqref{eq:64} follows from \eqref{eq:60},
and the second from \eqref{eq:63}. That is, we have proved that any surface for
$\mu_2$ has area greater than or equal to that of the minimal surface for $\mu_1$. In
particular, this holds for the minimal surface $r=\mu_2/2$, that is
\begin{equation}
\label{eq:67}
A_{\mu_1}(\mu_1/2)\leq A_{\mu_2}(\mu_2/2).
\end{equation}
Note, however, that inequality \eqref{eq:64} is stronger than \eqref{eq:67}.
We have proved that the area of the minimal surface is monotonically
decreasing under the extreme limit process constructed in
\cite{Dain:2008yu}. And hence the area of the related extreme throat initial data
(which can be also rigorously constructed, see \cite{gabach09}
\cite{Hannam:2009ib}) is smaller than the original area of the minimal
surface. Since the inequality holds on the extreme cylinder it follows that it
also holds for the spinning Bowen-York initial data.
\section{Final comments}
\label{sec:final-comments}
The first open problem is to complete the analysis presented in the proof of
theorem \ref{t:main} and prove the inequality \eqref{eq:16} on extreme
cylindrical initial data. We expect that the proof will follow lines similar to
the ones presented in \cite{Dain05d} \cite{Dain06c} \cite{Costa:2009hn}
\cite{Chrusciel:2009ki} \cite{Chrusciel:2007ak}. We are currently working on
this \cite{dain10c}.
The second open problem, which is much more difficult and relevant, is to
construct an extreme limit procedure for generic axially symmetric initial data
which satisfies the properties i) and ii) mentioned in section
\ref{sec:main-result}. Then, the conjecture \ref{c:1} will be reduced to the
extreme throat initial data case and hence it will be proved.
\begin{acknowledgments}
It is a pleasure to thank Marc Mars for illuminating discussions regarding
geometrical inequalities over many years. Particularly useful for this work
were the ones that took place at the Mathematisches Forschungsinstitut
Oberwolfach during the workshop ``Mathematical Aspects of General
Relativity'', October 11th -- October 17th, 2009 and during the conference
``PDEs, relativity \& nonlinear waves'', Granada, April 5-9, 2010. The author
thanks the organizers of these events for the invitation and the hospitality
and support of the Mathematisches Forschungsinstitut Oberwolfach.
The author is supported by CONICET (Argentina). This work was supported in
part by grant PIP 6354/05 of CONICET (Argentina), grant 05/B415 Secyt-UNC
(Argentina) and the Partner Group grant of the Max Planck Institute for
Gravitational Physics, Albert-Einstein-Institute (Germany).
\end{acknowledgments}
|
1,314,259,995,581 | arxiv | \section{Introduction}
Let $E$ be an elliptic curve over $\mathbf Q$ with conductor $N$, and let
$K$ be a quadratic imaginary field of discriminant $d_K$ prime to $N$
with quadratic character $\epsilon$. By work of Gross and Zagier,
the sign of the functional equation of $L(E/K,s)$ is equal to
$-\epsilon(N)$.
For a rational prime $p\nmid 6 d_KN$ at which $E$ has good \emph{ordinary}
reduction, let $D_\infty/K$ be the anticyclotomic
$\mathbf Z_p$-extension of $K$. The behavior of the Selmer group
$\mathrm {Sel}_{p^\infty}(E/D_\infty)$ depends crucially on the value
of $\epsilon(N)$: if $\epsilon(N)=1$ then it is conjectured that
the Pontryagin dual, $X$, of $\mathrm {Sel}_{p^\infty}(E/D_\infty)$ has rank
one over the Iwasawa algebra $\Lambda=\mathbf Z_p[[\mathrm{Gal}(D_\infty/K)]]$
and that the characteristic ideal of the torsion submodule can be expressed
in terms of Heegner points arising from a Shimura curve parametrization of $E$.
We call this the indefinite case. In the definite case, $\epsilon(N)=-1$,
it is conjectured that $X$ is a torsion $\Lambda$-module with characteristic
ideal given by a $p$-adic $L$-function.
We refer to these two conjectures collectively as the Iwasawa main conjecture.
Much is known about the Iwasawa main conjecture,
see \cite{bertolini, me, me2} for the indefinite case and \cite{BD03}
for the definite case. In particular, in either case one knows that the rank
of $X$ is as predicted above, and one knows one divisibility of
the conjectured equality for the characteristic
ideal of the torsion submodule (in the indefinite case this is conditional
on as yet unpublished work of Cornut and Vatsal generalizing the main
result of \cite{cornut} to
Heegner points on Shimura curves attached to indefinite quaternion algebras,
see the main results of \cite{me2}).
The goal of the present article is to demonstrate that the methods used by
Bertolini and Darmon \cite{BD03} to treat the definite case
can be used give a uniform treatment of the two cases,
and to develop a criterion to determine when the known divisibility
is actually an equality. We do this by developing a theory of
\emph{bipartite Euler systems} similar in spirit to Mazur and Rubin's
\cite{mazur-rubin} theory of Kolyvagin systems, but adapted to fit the
family of cohomology classes constructed by Bertolini and Darmon.
Bertolini and Darmon's construction is, roughly speaking, as follows. Let
$f$ be the modular form of level $N$ attached to $E$. For a choice
of positive integer $k$ one can define a set of \emph{admissible} primes
$\mathfrak L_k$, all of which are inert in $K$, with the property that for
any $n\in\mathfrak N_k$ (the set of squarefree products of primes in $\mathfrak L_k$)
there is a modular form $f_n$ of level $nN$ which is congruent
to $f$ modulo $p^k$. This modular form comes to us via a generalization
of Ribet's well-known level raising theorem. Define a
graph whose vertices are the elements of $\mathfrak N_k$ with edges connecting
$n$ to $n\ell$ for each coprime $n\in\mathfrak N_k$ and $\ell\in\mathfrak L_k$.
A vertex $n$ is said to be either definite or indefinite depending on
whether $\epsilon(nN)$ is $-1$ or $1$, respectively, and this defines
a bipartition of the graph: every edge connects a definite vertex
to an indefinite vertex.
At an indefinite vertex the modular form
$f_n$ allows one to define a cohomology class
$\kappa_n\in \varprojlim_r H^1(D_r/K, E[p^k])$, which arises as the Kummer image
of Heegner points on the abelian variety attached to $f_n$.
At a definite vertex one can attach to $f_n$ a $p$-adic
$L$-function $\lambda_n\in\Lambda/p^k\Lambda$.
There are reciprocity laws relating the elements at any two adjacent
vertices; these reciprocity laws are examples of Jochnowitz
congruences in the sense of \cite{BD99b}.
The \emph{pair} of families
$$
\{\kappa_n\mid n\in\mathfrak N_k, n \mathrm{\ indefinite}\}
\hspace{1cm}
\{\lambda_n\mid n\in\mathfrak N_k, n \mathrm{\ definite} \}
$$
is then our prototype of a bipartite Euler system.
Our main result asserts that the existence of a bipartite Euler system
implies one divisibility of the Iwasawa main conjecture, and
if one can prove sufficiently many nonvanishing
theorems for the $p$-adic $L$-functions $\lambda_n$ then equality holds in the
Iwasawa main conjecture. To emphasize, this approach treats the definite
and indefinite cases on completely equal footing.
The precise statement is given in Theorem \ref{abstract mc}.
The reader is referred to \cite{BD03} for the details of the construction
sketched above. In the present article we simply assume that a pair of
families satisfying the appropriate axioms is given. It should be noted
that Bertolini and Darmon do not construct enough
classes to provide an Euler system in our sense. Those authors
assume that $\epsilon(N)=-1$ and that $f$ is $p$-isolated
\cite[Definition 1.2]{BD03},
and then choose a particular path (starting at the vertex corresponding to
the empty product $1$) in the graph defined above.
The Euler system elements are then constructed
\emph{only at vertices along that path}.
The path is not allowed to be arbitrary: it is required
that the modular form $f_n$ is again $p$-isolated at
each definite vertex in the path. It would thus be necessary to remove
the $p$-isolated hypothesis in order to make full use of the theory
developed herein.
Recently Darmon and Iovita \cite{DarIov} have adapted the methods of
\cite{BD03} to the case where $\epsilon(N)=-1$ and $p$ is a prime
of \emph{supersingular} reduction for $E$. Given the results of the present
article, it seems likely that these ideas can be pushed further to
cover all four cases (definite/ordinary, indefinite/ordinary,
definite/supersingular, and indefinite/supersingular; the final case being the
least well understood). The main (only?) obstruction to doing so is the
removal of the technical $p$-isolated hypothesis referred to above.
We remark that the idea that Euler systems can be used not only
to bound Selmer groups, but also to give a criterion for the sharpness of
the bound, goes back to Kolyvagin. This was extended to the Iwasawa-theoretic
setting by Mazur and Rubin \cite{mazur-rubin}, but the criterion for
equality seems very difficult to verify in practice. In the
usual theory of Euler systems, in e.g. \cite{rubin}, one begins
with cohomology classes (related in some way to $L$-functions)
defined over abelian extensions of the ground
field $K$, and then applies Kolyvagin's derivative operators
to these classes to obtain classes defined over $K$ itself. These derived
classes are the \emph{Kolyvagin system}, and are somewhat less directly
related to $L$-functions than the Euler system from which they are derived.
The criterion for equality (e.g. the \emph{primitivity} of \cite{mazur-rubin}
Definitions 4.5.5 and 5.3.9)
is then a nonvanishing statement
for the Kolyvagin system, rather than for the Euler system itself.
The observation that Bertolini and Darmon's
methods make no use of Kolyvagin's derivative operators is what allows
us to obtain a criterion for equality in the main conjecture
directly in terms of ($p$-adic) $L$-functions.
Finally, and somewhat more speculatively, we address the question of
whether there exist bipartite Euler systems other than that constructed by
Bertolini and Darmon. Gross and Kudla \cite{GroKud}
have investigated the Rankin
triple product $L$-function $L(f\times g\times h,s)$
associated to three newforms $f,g,h$ of weight $2$ on $\Gamma_0(N)$. This
$L$-function has analytic continuation and functional equation
in $s\mapsto 4-s$, and the sign in the functional equation is given
by a simple formula. When this sign is $1$, Gross and Kudla prove
a special value formula similar to Gross's special value formula \cite{Gro85}
in the Heegner point situation, a key ingredient in the reciprocity
laws used in \cite{BD03}. When the sign in the functional equation is
$-1$, Gross and Kudla construct a special homologically trivial
cycle in the codimension $1$ Chow group of a triple product of Shimura
curves. Applying the $p$-adic Abel-Jacobi map to this special
cycle yields a class in the Galois cohomology of the tensor product
$V_f\otimes V_g\otimes V_h$ of the $p$-adic Galois representations
attached to $f,g,h$. Thus we have the beginnings of a bipartite Euler
system for $V_f\otimes V_g\otimes V_h$. Moreover, Gross and Kudla
conjecture that the height of their special cycle in the Chow group
is related to the derivative $L'(f\times g\times h,2)$, in close
analogy with the Gross-Zagier formula.
\section{Euler Systems over Artinian rings}
\label{S:Euler Systems}
In this section we develop a general theory of (bipartite) Euler systems.
The axioms (Definition \ref{es}) are designed to include
the family of cohomology classes used by Bertolini and Darmon \cite{BD03}.
The methods used to bound the associated Selmer group and to develop a
criterion for equality (Theorems \ref{esb} and \ref{rigidity}) originated
with Kolyvagin, and we follow closely the approach to Kolyvagin's
theory described by Mazur and Rubin \cite{mazur-rubin}.
Let $R$ be a principal Artinian local ring with maximal ideal $\mathfrak m$ and
residue characteristic $p>3$.
Let $T$ be a free $R$-module of rank two
equipped with a continuous (for the discrete topology) action of
$G_K\stackrel{\mathrm{def}}{=} \mathrm{Gal}(K^\mathrm{alg}/K)$ for some number field $K$.
We assume that $T$ admits a perfect, $G_K$-equivariant, alternating
$R(1)$-valued pairing. Let
$$
\mathrm {loc}_w:H^1(K,T)\map{}H^1(K_w,T)
$$
denote the localization map (we assume that we are given a fixed
embedding $K^\mathrm{alg}\hookrightarrow K_w^\mathrm{alg}$ for every place $w$).
Throughout \S \ref{S:Euler Systems} we assume that we are given
a fixed self-dual Selmer structure $(\mathcal F,\Sigma_\mathcal F)$ on $T$, as
defined in \S \ref{selmer modules}.
If $B$ is any $R$-module and $b\in B$ we define $\mathrm {ind}(b,B)$,
the \emph{index of divisibility} of $b$ in $B$, to be the largest $k\le\infty$
such that $b\in\mathfrak m^k B$.
\subsection{Selmer modules}
\label{selmer modules}
\begin{Def}
A \emph{Selmer structure} $(\mathcal F,\Sigma_\mathcal F)$ on $T$ is
a finite set of places $\Sigma_\mathcal F$ of $K$
containing the archimedean places, the primes at which $T$ is
ramified, and the prime $p$; and, for every place $w$ of $K$,
a choice of submodule
$$
H^1_\mathcal F(K_w,T)\subset H^1(K_w,T)
$$
such that $H^1_\mathcal F(K_w,T)=H^1_\mathrm{unr}(K_w,T)$ for all $w\not\in\Sigma_\mathcal F$.
Define the \emph{Selmer module} $\mathrm {Sel}_\mathcal F=\mathrm {Sel}_\mathcal F(K,T)$
associated to $\mathcal F$ by the exactness of
$$
0\map{}\mathrm {Sel}_\mathcal F\map{}H^1(K,T)\map{\oplus\mathrm {loc}_w}
\bigoplus_w H^1(K_w,T)/H^1_\mathcal F(K_w,T)
$$
where the sum is over all places $w$ of $K$.
A Selmer structure $\mathcal F$ is \emph{self-dual} if the submodule
$H^1_\mathcal F(K_w,T)$ is maximal isotropic under the
(symmetric) local Tate pairing
$$
H^1(K_w,T)\times H^1(K_w,T)\map{\cup}H^2(K_w, R(1)) \cong R
$$
for every finite place $w\in\Sigma_\mathcal F$.
\end{Def}
\begin{Rem}
Note that $p\not=2$ implies $H^1(K_w,T)=0$ for $w$ archimedean.
By Tate local duality, $H^1_\mathcal F(K_w,T)=H^1_\mathrm{unr}(K_w,T)$ is maximal isotropic
for all $w\not\in \Sigma_\mathcal F$.
\end{Rem}
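\begin{Rem}
For example, when $R=\mathbf Z/p^k\mathbf Z$ and $T=E[p^k]$ as above, a natural
choice is to take $\Sigma_\mathcal F$ to be the set of places dividing $p\infty$
together with the places of bad reduction of $E$, and to take
$H^1_\mathcal F(K_w,T)$ to be the image of the local Kummer map
$$
E(K_w)/p^kE(K_w)\map{}H^1(K_w,E[p^k])
$$
at every place $w$ (at the places outside $\Sigma_\mathcal F$ this image coincides
with the unramified subgroup, as required). Tate local duality shows that each of
these local conditions is maximal isotropic, so this Selmer structure is self-dual,
and the associated Selmer module is the classical $p^k$-Selmer group of $E$ over $K$.
\end{Rem}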
\begin{Rem}\label{propagation}
If $S$ is a submodule (resp. quotient) of $T$ and $(\mathcal F,\Sigma_\mathcal F)$
is a Selmer structure on $T$, then there is an induced Selmer structure,
still denoted $(\mathcal F,\Sigma_\mathcal F)$, on $S$ defined as the
preimage of $H^1_\mathcal F(K_w,T)$ under
$H^1(K_w,S)\map{} H^1(K_w,T)$
(resp. the image of $H^1_\mathcal F(K_w,T)$ under
$H^1(K_w,T)\map{} H^1(K_w,S))$
for every place $w$ of $K$. By \cite[Lemma 1.1.9]{mazur-rubin},
$H^1_\mathcal F(K_w,S)=H^1_\mathrm{unr}(K_w,S)$ for every $w\not\in\Sigma_\mathcal F$, and so
this is well-defined. We refer to this as \emph{propagation}
of Selmer structures.
\end{Rem}
\subsection{Modified Selmer modules}
\label{ordinary selmer}
Now suppose we have a set of primes $\mathfrak L$ of $K$ which is
disjoint from $\Sigma_\mathcal F$ and satisfies
\begin{enumerate}
\item $\forall \mathfrak l\in\mathfrak L,\mathbf{N}(\mathfrak l)\not\equiv 1\pmod{p}$,
\item $\forall \mathfrak l\in\mathfrak L$, the Frobenius $\mathrm {Frob}_\mathfrak l$
acts on $T$ with eigenvalues $\mathbf{N}(\mathfrak l)$ and $1$.
\end{enumerate}
Let $\mathfrak N$ denote the set of squarefree
products of primes in $\mathfrak L$.
The two conditions above imply that, for every $\mathfrak l\in\mathfrak L$, $T\cong R\oplus R(1)$ as a
$\mathrm{Gal}(K_\mathfrak l^\mathrm{alg}/K_\mathfrak l)$-module, and that the decomposition is unique.
For each $\mathfrak l\in\mathfrak L$ we define the \emph{ordinary} cohomology
$H^1_\mathrm {ord}(K_\mathfrak l,T)$ to be the image of
$H^1(K_\mathfrak l,R(1))\map{}H^1(K_\mathfrak l,T).$
\begin{Lem}\label{local freeness}
For $\mathfrak l\in\mathfrak L$, the decomposition $T\cong R\oplus R(1)$ induces
a decomposition
$$H^1(K_\mathfrak l,T)\cong H^1_\mathrm{unr}(K_\mathfrak l,T)\oplus H^1_\mathrm {ord}(K_\mathfrak l,T)$$
in which each summand is free of rank one over $R$ and is maximal isotropic
under the local Tate pairing.
\end{Lem}
\begin{proof}
By \cite[Lemma 1.3.2]{rubin}, evaluation of cocycles at $\mathrm {Frob}_\mathfrak l$
induces an isomorphism
$$
H^1_\mathrm{unr}(K_\mathfrak l,T)\cong T/(\mathrm {Frob}_\mathfrak l-1)T \cong R.
$$
By local class field theory
$$
H^1(K_\mathfrak l,R)\cong \mathrm{Hom}(\mathrm{Gal}(K_\mathfrak l^\mathrm{unr}/K_\mathfrak l),R)\cong R,
$$
again by evaluation at $\mathrm {Frob}_\mathfrak l$. Thus $H^1_\mathrm{unr}(K_\mathfrak l,T)$
is exactly the image of $H^1(K_\mathfrak l,R)$, and is free of rank one.
Since $\mathbf{N}(\mathfrak l)\not\equiv 1\pmod{p}$, the pro-$p$-completion of
$K_\mathfrak l^\times$ is canonically isomorphic to $\mathbf Z_p$, and so
$$
H^1_\mathrm {ord}(K_\mathfrak l,T)\cong H^1(K_\mathfrak l, R(1))\cong R
$$ by local Kummer theory.
The submodules $R$ and $R(1)$ of $T$ are each maximal isotropic under the
pairing $T\times T\map{}R(1)$, and so the same is true of the
spaces $H^1_\mathrm{unr}(K_\mathfrak l,T)$ and $H^1_\mathrm {ord}(K_\mathfrak l,T)$ under the cup product.
\end{proof}
\begin{Def}\label{cartesian def}
A Selmer structure $(\mathcal F,\Sigma_\mathcal F)$ is \emph{cartesian}
if for every quotient $T/\mathfrak m^i T$ of $T$, every place $w\in\Sigma_\mathcal F$,
and any generator $\pi\in\mathfrak m$, the isomorphism
$$
T/\mathfrak m^i T\map{\pi^{\mathrm {length}(R)-i}}T[\mathfrak m^i]
$$
induces an isomorphism
$
H^1_{\mathcal F}(K_w,T/\mathfrak m^i T) \cong H^1_{\mathcal F}(K_w,T[\mathfrak m^i]),
$
where the local conditions on $T/\mathfrak m^i T$ and $T[\mathfrak m^i]$ are obtained from
$\mathcal F$ by propagation (Remark \ref{propagation}).
\end{Def}
\begin{Rem}
A Selmer structure $(\mathcal F,\Sigma_\mathcal F)$ is cartesian if and only if
it defines a cartesian local condition, for every $w\in\Sigma_\mathcal F$,
on the quotient category
$\mathrm{Quot}(T)$ in the sense of \cite[Definition 1.1.4]{mazur-rubin}.
\end{Rem}
\begin{Hyp}\label{cartesian}
For the remainder of \S \ref{S:Euler Systems}
we make the following assumptions
\begin{enumerate}
\item the residual representation $T/\mathfrak m T$ is absolutely irreducible,
\item $\mathcal F$ is cartesian.
\end{enumerate}
\end{Hyp}
\begin{Def}
For any $\mathfrak{abc}\in\mathfrak N$ we define a Selmer structure
$(\mathcal F^\mathfrak a_\mathfrak b(\mathfrak{c}), \Sigma_{\mathcal F^\mathfrak a_\mathfrak b(\mathfrak{c})})$
as follows: $\Sigma_{\mathcal F^\mathfrak a_\mathfrak b(\mathfrak{c})}$ is $\Sigma_\mathcal F$
together with all prime divisors of $\mathfrak{abc}$,
$$
H^1_{\mathcal F^\mathfrak a_\mathfrak b(\mathfrak{c})}(K_w,T)=H^1_\mathcal F(K_w,T)
$$
for $w$ prime to $\mathfrak{abc}$, and
$$
H^1_{\mathcal F^\mathfrak a_\mathfrak b(\mathfrak{c})}(K_\mathfrak l,T)
=\left\{\begin{array}{ll}
H^1(K_\mathfrak l,T) & \mathrm{if\ }\mathfrak l|\mathfrak a\\
0 & \mathrm{if\ }\mathfrak l|\mathfrak b\\
H^1_\mathrm {ord}(K_\mathfrak l,T) & \mathrm{if\ }\mathfrak l|\mathfrak{c}.
\end{array}\right.
$$
If any one of $\mathfrak a$, $\mathfrak b$, $\mathfrak{c}$ is the empty product
we omit it from the notation.
\end{Def}
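For example, $\mathcal F(\mathfrak l)$ replaces the unramified condition at $\mathfrak l$ by the
ordinary one and leaves all other local conditions unchanged, $\mathcal F^\mathfrak l$ relaxes the
condition at $\mathfrak l$ to all of $H^1(K_\mathfrak l,T)$, and $\mathcal F_\mathfrak l$ imposes the strict
condition $0$ at $\mathfrak l$; the notation $\mathcal F^\mathfrak a_\mathfrak b(\mathfrak{c})$ simply combines these
modifications prime by prime.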
\begin{Lem}\label{subquotients}
The Selmer structure $\mathcal F(\mathfrak n)$ is cartesian for any $\mathfrak n\in\mathfrak N$.
For any choice of generator $\pi\in\mathfrak m$ and any $0\le i\le\mathrm {length}(R)$, the
composition
$$
T/\mathfrak m^i T\map{\pi^{\mathrm {length}(R)-i}}T[\mathfrak m^i]\map{}T
$$
induces isomorphisms
$$
\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T/\mathfrak m^i T) \cong \mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T[\mathfrak m^i])
\cong \mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T)[\mathfrak m^i].
$$
\end{Lem}
\begin{proof}
For any prime $w$ not dividing $\mathfrak n$,
$H^1_{\mathcal F(\mathfrak n)}(K_w,T)$
satisfies the cartesian property by Hypothesis \ref{cartesian}.
For $\mathfrak l|\mathfrak n$, the cartesian property follows from the
canonical isomorphism $H^1_\mathrm {ord}(K_\mathfrak l,T)\cong R$
used in the proof of Lemma \ref{local freeness}. The second claim
now follows as in \cite[Lemma 3.5.4]{mazur-rubin}.
\end{proof}
\begin{Prop}\label{structure}
For any $\mathfrak n\in\mathfrak N$ there is a (non-canonical) decomposition
$$
\mathrm {Sel}_{\mathcal F(\mathfrak n)}\cong R^{e(\mathfrak n)}\oplus M_\mathfrak n\oplus M_\mathfrak n
$$
with $e(\mathfrak n)\in\{0,1\}$.
\end{Prop}
\begin{proof}
This follows from the existence of a modified form of the
Cassels-Tate pairing, together with the self-duality hypotheses
on $T$ and $\mathcal F$; see \cite[Theorem 1.4.2]{me}.
\end{proof}
\begin{Def}\label{stub}
Let $\mathfrak N^\mathrm {even}\subset\mathfrak N$ be the subset for which $e(\mathfrak n)=0$,
and $\mathfrak N^\mathrm {odd}\subset\mathfrak N$ the subset for which $e(\mathfrak n)=1$.
For $\mathfrak n\in\mathfrak N$ we define the \emph{stub module}
$$
\mathrm {Stub}_{\mathfrak n}=\left\{\begin{array}{ll}
\mathfrak m^{\mathrm {length}(M_\mathfrak n)}\cdot R & \mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {even}\\
\mathfrak m^{\mathrm {length}(M_\mathfrak n)}\cdot \mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T)
& \mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {odd} \end{array}\right.
$$
with $M_\mathfrak n$ as in Proposition \ref{structure}.
Note that $\mathrm {Stub}_\mathfrak n$ is a cyclic $R$-module for every $\mathfrak n\in\mathfrak N$.
\end{Def}
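For instance, if $\mathfrak n\in\mathfrak N^\mathrm {even}$ and $\mathrm {Sel}_{\mathcal F(\mathfrak n)}\cong R/\mathfrak m^t\oplus R/\mathfrak m^t$,
then $\mathrm {length}(M_\mathfrak n)=t$ and $\mathrm {Stub}_\mathfrak n=\mathfrak m^t\cdot R$, which is all of $R$ exactly when
the Selmer module vanishes. Similarly, if $\mathfrak n\in\mathfrak N^\mathrm {odd}$ and
$\mathrm {Sel}_{\mathcal F(\mathfrak n)}\cong R\oplus R/\mathfrak m^t\oplus R/\mathfrak m^t$, then
$\mathrm {Stub}_\mathfrak n=\mathfrak m^t\cdot\mathrm {Sel}_{\mathcal F(\mathfrak n)}$ is a cyclic module of length $\mathrm {length}(R)-t$.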
The following proposition is a consequence of Poitou-Tate
global duality, and is similar to \cite[Lemma 1.5.8]{me} and
\cite[Lemma 4.1.6]{mazur-rubin}. Our self-duality assumptions,
together with the fact that the local conditions $H^1_\mathrm{unr}(K_\mathfrak l,T)$ and
$H^1_\mathrm {ord}(K_\mathfrak l,T)$ have rank one, give a much stronger result.
\begin{Prop}\label{global duality}
For any $\mathfrak n\mathfrak l\in\mathfrak N$ there are non-negative integers $a,b$ with
$a+b=\mathrm {length}(R)$ such that in the diagram of inclusions
$$
\xymatrix{
& {\mathrm {Sel}_{\mathcal F^\mathfrak l(\mathfrak n)}} \\
{\mathrm {Sel}_{\mathcal F(\mathfrak n)}}\ar[ur]^b & & {\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)}}\ar[ul]_a \\
& {\mathrm {Sel}_{\mathcal F_\mathfrak l(\mathfrak n)}\ar[ul]^a\ar[ur]_b }
}
$$
the labels on the arrows are the lengths of the respective
quotients. All four quotients are cyclic $R$-modules
and
\begin{equation}\label{pretty diagram}
a=\mathrm {length}(\mathrm {loc}_\mathfrak l(\mathrm {Sel}_{\mathcal F(\mathfrak n)}))
\hspace{1cm}
b=\mathrm {length}(\mathrm {loc}_\mathfrak l(\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)})).
\end{equation}
\end{Prop}
\begin{proof}
Take (\ref{pretty diagram}) as the definition of $a$ and $b$, so that
$a$ and $b$ are the lengths of the lower left and right quotients,
respectively. The cyclicity of the quotients
follows from Lemma \ref{local freeness};
for example the lower left quotient injects into $H^1_\mathrm{unr}(K_\mathfrak l,T)$.
Exactly as in the proof of \cite[Lemma 1.5.7]{me}, the quotient
\begin{equation}\label{global quotient}
\mathrm {Sel}_{\mathcal F^\mathfrak l(\mathfrak n)}/\big(\mathrm {Sel}_{\mathcal F(\mathfrak n)}+\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)}\big)
\end{equation}
admits a nondegenerate, alternating $R$-valued pairing.
The pairing is defined as
follows: given $x,y\in\mathrm {Sel}_{\mathcal F^\mathfrak l(\mathfrak n)}$, let $x'$ be the projection
of $\mathrm {loc}_\mathfrak l(x)$ to $H^1_\mathrm{unr}(K_\mathfrak l,T)$, and let $y'$ be the
projection of $\mathrm {loc}_\mathfrak l(y)$ to $H^1_\mathrm {ord}(K_{\mathfrak l},T)$. The pairing
of $x$ and $y$ is then defined to be the local Tate pairing of $x'$ and $y'$.
The quotient (\ref{global quotient}) is a
cyclic $R$-module, and so the existence of such a pairing implies that
it is trivial.
Directly from the definitions we have
$$
\mathrm {Sel}_{\mathcal F_\mathfrak l(\mathfrak n)}= \mathrm {Sel}_{\mathcal F(\mathfrak n)}\cap \mathrm {Sel}_{\mathcal F(\mathfrak l\mathfrak n)}.
$$
Combining this with the above, it follows that the
lower left quotient is isomorphic to the upper right,
and the lower right is isomorphic to the upper left. This
proves everything except for the claim $a+b=\mathrm {length}(R)$, which is a consequence
of global duality as in \cite[Lemma 1.5.8]{me} or
\cite[Lemma 4.1.6]{mazur-rubin}.
\end{proof}
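Two extreme cases of Proposition \ref{global duality} will be used repeatedly below.
If $\mathrm {loc}_\mathfrak l$ maps $\mathrm {Sel}_{\mathcal F(\mathfrak n)}$ onto $H^1_\mathrm{unr}(K_\mathfrak l,T)$, then $a=\mathrm {length}(R)$
and $b=0$, so that
$$
\mathrm {Sel}_{\mathcal F^\mathfrak l(\mathfrak n)}=\mathrm {Sel}_{\mathcal F(\mathfrak n)}
\hspace{1cm}
\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)}=\mathrm {Sel}_{\mathcal F_\mathfrak l(\mathfrak n)}.
$$
If instead $\mathrm {loc}_\mathfrak l$ kills $\mathrm {Sel}_{\mathcal F(\mathfrak n)}$, then $a=0$ and $b=\mathrm {length}(R)$,
so that $\mathrm {Sel}_{\mathcal F_\mathfrak l(\mathfrak n)}=\mathrm {Sel}_{\mathcal F(\mathfrak n)}$ and
$\mathrm {Sel}_{\mathcal F^\mathfrak l(\mathfrak n)}=\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)}$.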
\begin{Cor}\label{rho}
Fix $\mathfrak n\in\mathfrak N$ and let $e(\mathfrak n)$ be as in Proposition \ref{structure}.
The integer
$$
\rho(\mathfrak n)=
\mathrm{dim}_{R/\mathfrak m}\big(\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T/\mathfrak m T)\big)
$$
satisfies $e(\mathfrak n)\equiv\rho(\mathfrak n)\pmod{2}$, and for any $\mathfrak l\in\mathfrak L$ prime
to $\mathfrak n$
\begin{eqnarray*}
\rho(\mathfrak n\mathfrak l)=\rho(\mathfrak n)+1 &\iff&
\mathrm {loc}_\mathfrak l\big(\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T/\mathfrak m T)\big)=0\\
\rho(\mathfrak n\mathfrak l)=\rho(\mathfrak n)-1 &\iff&
\mathrm {loc}_\mathfrak l\big(\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T/\mathfrak m T)\big)\not=0.
\end{eqnarray*}
The claim continues to hold, and the value of $\rho(\mathfrak n)$
remains unchanged, if one replaces $\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T/\mathfrak m T)$ by
$\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T)[\mathfrak m]$ everywhere.
\end{Cor}
\begin{proof}
Apply Proposition \ref{global duality} with $T$ replaced by $T/\mathfrak m T$.
Then $$\rho(\mathfrak n\mathfrak l)=\rho(\mathfrak n)-a+b,$$ $a+b=1$, and $a=0$ if and only if
$\mathrm {loc}_\mathfrak l$ kills $\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T/\mathfrak m T)$. Combining this
with Lemma \ref{subquotients} proves the claim.
\end{proof}
\begin{Rem}
Note that Corollary \ref{rho} implies that for $\mathfrak n\mathfrak l\in\mathfrak N$,
$$
\mathfrak n\in\mathfrak N^\mathrm {even}\iff \mathfrak n\mathfrak l\in\mathfrak N^\mathrm {odd}.
$$
We will use this repeatedly throughout.
\end{Rem}
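In particular, writing $\omega(\mathfrak n)$ for the number of prime divisors of $\mathfrak n$ and
$1\in\mathfrak N$ for the empty product, repeated application of this shows that
$e(\mathfrak n)\equiv e(1)+\omega(\mathfrak n)\pmod{2}$: whether $\mathfrak n$ lies in $\mathfrak N^\mathrm {even}$ or
$\mathfrak N^\mathrm {odd}$ depends only on the parity of $\omega(\mathfrak n)$.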
\begin{Cor}\label{length induction}
Suppose $\mathfrak n\mathfrak l\in\mathfrak N$, and let $a$ and $b$ be as in Proposition
\ref{global duality}. Then
$$
\mathrm {length}(M_\mathfrak n)=\left\{\begin{array}{ll}
\mathrm {length}(M_{\mathfrak n\mathfrak l})+a&\mathrm{\ if\ } \mathfrak n\in\mathfrak N^\mathrm {even}\\
\mathrm {length}(M_{\mathfrak n\mathfrak l})-b&\mathrm{\ if\ } \mathfrak n\in\mathfrak N^\mathrm {odd}.
\end{array}\right.
$$
\end{Cor}
\begin{proof}
Suppose $\mathfrak n\in\mathfrak N^\mathrm {even}$. Then
\begin{eqnarray*}
2\cdot \mathrm {length}(M_\mathfrak n) &=& \mathrm {length}(\mathrm {Sel}_{\mathcal F(\mathfrak n)}) \\
&=& \mathrm {length}(\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)})-b+a \\
&=& \mathrm {length}(\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)})+2a-\mathrm {length}(R) \\
&=& 2\cdot \mathrm {length}(M_{\mathfrak n\mathfrak l}) +2a.
\end{eqnarray*}
The case $\mathfrak n\in\mathfrak N^\mathrm {odd}$ is similar.
\end{proof}
\begin{Cor}\label{stub shift}
Suppose $\mathfrak n\mathfrak l\in\mathfrak N$.
There is an isomorphism
of $R$-modules
\begin{eqnarray*}
\mathrm {loc}_\mathfrak l(\mathrm {Stub}_\mathfrak n) \cong \mathrm {Stub}_{\mathfrak n\mathfrak l}
& &\mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {odd} \\
\mathrm {loc}_\mathfrak l(\mathrm {Stub}_{\mathfrak n\mathfrak l}) \cong \mathrm {Stub}_\mathfrak n
& &\mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {even}.
\end{eqnarray*}
\end{Cor}
\begin{proof}
Suppose $\mathfrak n\in\mathfrak N^\mathrm {odd}$. Since the modules in question are cyclic, it
suffices to check that they have the same
annihilator in $R$. The image of $\mathrm {Stub}_\mathfrak n$ in $H^1_\mathrm{unr}(K_\mathfrak l,T)$
is annihilated by $\mathfrak m^i$ if and only if $\mathfrak m^{i+\mathrm {length}(M_\mathfrak n)}$
kills the lower left quotient in the diagram of Proposition
\ref{global duality}, that is, if and only if
$a\le i+\mathrm {length}(M_\mathfrak n)$. By Corollary \ref{length induction},
this is equivalent to $\mathrm {length}(R)\le i+ \mathrm {length}(M_{\mathfrak n\mathfrak l})$, which is
equivalent to $\mathfrak m^{i}\cdot \mathrm {Stub}_{\mathfrak n\mathfrak l}=0$.
The case $\mathfrak n\in\mathfrak N^\mathrm {even}$ is similar.
\end{proof}
\subsection{Euler systems}
\label{esb subsection}
Continue to assume that Hypothesis \ref{cartesian} holds, as well as
\begin{Hyp}\label{useful primes}
For any nonzero $c\in H^1(K,T/\mathfrak m T)$ there are infinitely many $\mathfrak l\in\mathfrak L$ such that
$\mathrm {loc}_\mathfrak l(c)\not=0$.
\end{Hyp}
\begin{Def}\label{es}
A \emph{bipartite Euler system of odd type} for $(T,\mathcal F,\mathfrak L)$
is a pair of families
$$
\{ \kappa_\mathfrak n\in\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T) \mid \mathfrak n\in\mathfrak N^\mathrm {odd} \}
\hspace{1cm}
\{ \lambda_\mathfrak n\in R \mid \mathfrak n\in\mathfrak N^\mathrm {even} \}
$$
related by the first and second reciprocity laws:
\begin{enumerate}
\item for any $\mathfrak n\mathfrak l\in\mathfrak N^\mathrm {odd}$, there exists an isomorphism of
$R$-modules
$$
R/(\lambda_\mathfrak n)\cong
H^1_\mathrm {ord}(K_{\mathfrak l},T)/R\cdot\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n\mathfrak l}),
$$
\item for any $\mathfrak n\mathfrak l\in\mathfrak N^\mathrm {even}$, there exists an isomorphism of
$R$-modules
$$
R/(\lambda_{\mathfrak n\mathfrak l})\cong
H^1_\mathrm{unr}(K_{\mathfrak l},T)/R\cdot\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n}).
$$
\end{enumerate}
A \emph{bipartite Euler system of even type} is defined in the same way,
but with even and odd interchanged everywhere in the definition.
\end{Def}
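In terms of the index of divisibility, and since both sides of each displayed isomorphism
are quotients of cyclic $R$-modules, the first reciprocity law says precisely that
$\mathrm {ind}(\lambda_\mathfrak n,R)=\mathrm {ind}\big(\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n\mathfrak l}),H^1_\mathrm {ord}(K_\mathfrak l,T)\big)$
for every $\mathfrak n\mathfrak l\in\mathfrak N^\mathrm {odd}$, and the second law admits the analogous reformulation
with $\mathrm{unr}$ in place of $\mathrm {ord}$ and the roles of $\mathfrak n$ and $\mathfrak n\mathfrak l$ interchanged.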
From here on we drop the adjective ``bipartite'', and simply call such a
pair of families an Euler system.
An Euler system (of even or odd type) is \emph{nontrivial} if
$\lambda_\mathfrak n\not=0$
for some $\mathfrak n$ (of the appropriate type). By the reciprocity laws
and the following lemma, this is equivalent to $\kappa_\mathfrak n\not=0$
for some $\mathfrak n$ of the appropriate type.
\begin{Lem}\label{local surjectivity}
For any $\mathfrak n\in\mathfrak N$ and any cyclic $R$-submodule
$C\subset \mathrm {Sel}_{\mathcal F(\mathfrak n)}$, there are
infinitely many $\mathfrak l\in\mathfrak L$ such that $\mathrm {loc}_\mathfrak l$ takes $C$
injectively into $H^1_\mathrm{unr}(K_\mathfrak l,T)$. If $C$ is free of rank one then
for any such $\mathfrak l$, $\mathrm {loc}_\mathfrak l$ takes $C$ isomorphically onto
$H^1_\mathrm{unr}(K_\mathfrak l,T)$.
\end{Lem}
\begin{proof}
Let $i$ be maximal such that $\mathfrak m^i C\not=0$, so that
$\mathfrak m^iC\subset\mathrm {Sel}_{\mathcal F(\mathfrak n)}[\mathfrak m]$.
By Hypothesis \ref{useful primes} and Lemma \ref{subquotients},
there are infinitely many $\mathfrak l\in\mathfrak L$ prime to
$\mathfrak n$ such that $\mathrm {loc}_\mathfrak l(\mathfrak m^iC)\not=0$. For any such prime the kernel of
$\mathrm {loc}_\mathfrak l$ on $C$ is a submodule of the cyclic module $C$ not containing the socle
$\mathfrak m^iC$, hence is trivial, and so $\mathrm {loc}_\mathfrak l$ takes $C$ injectively into $H^1_\mathrm{unr}(K_\mathfrak l,T)$.
The final claim is immediate from
Lemma \ref{local freeness}.
\end{proof}
\begin{Prop}\label{no even es}
There are no nontrivial Euler systems for $(T,\mathcal F,\mathfrak L)$ of even type.
\end{Prop}
\begin{proof}
Given a nontrivial Euler system of even type we fix $\mathfrak n\in \mathfrak N^\mathrm {odd}$
such that $\lambda_\mathfrak n\not=0$. In the notation of Proposition \ref{structure},
$e(\mathfrak n)=1$, and so $\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T)$
contains a free rank-one $R$-submodule $C$.
If $\mathfrak l\in\mathfrak L$ is chosen as in
Lemma \ref{local surjectivity} then the natural injection
$$
\mathrm {Sel}_{\mathcal F(\mathfrak n)}/\mathrm {Sel}_{\mathcal F_\mathfrak l(\mathfrak n)}\hookrightarrow H^1_\mathrm{unr}(K_\mathfrak l,T)\cong R
$$
is an isomorphism, and Proposition \ref{global duality} implies that
$\mathrm {Sel}_{\mathcal F_\mathfrak l(\mathfrak n)}=\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)}$. In particular
$\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n\mathfrak l})=0$, violating the first reciprocity law.
\end{proof}
\begin{Prop}\label{annihilation}
Fix an Euler system of odd type for $(T,\mathcal F,\mathfrak L)$, let $k$
be the length of $R$, and let
$M_\mathfrak n$ be as in Proposition \ref{structure}. If $\lambda_\mathfrak n\not=0$
for some $\mathfrak n\in\mathfrak N^\mathrm {even}$ then $M_\mathfrak n$ is killed by $\mathfrak m^{k-1}$.
If $\kappa_\mathfrak n\not=0$
for some $\mathfrak n\in\mathfrak N^\mathrm {odd}$ then $M_\mathfrak n$ is killed by $\mathfrak m^{k-1}$.
\end{Prop}
\begin{proof}
First suppose $\mathfrak n\in\mathfrak N^\mathrm {even}$ and $\mathfrak m^{k-1}M_\mathfrak n\not=0$. Then
$\mathrm {Sel}_{\mathcal F(\mathfrak n)}$ contains a free rank one submodule, $C$.
Choose $\mathfrak l\in\mathfrak L$ not dividing $\mathfrak n$ such that
$\mathrm {loc}_\mathfrak l$ takes $C$ isomorphically onto $H^1_\mathrm{unr}(K_\mathfrak l,T)$
(Lemma \ref{local surjectivity}). By Proposition \ref{global duality},
$\mathrm {loc}_\mathfrak l(\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)})=0$. Thus $\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n\mathfrak l})=0$
and the first reciprocity law implies that $\lambda_\mathfrak n=0$.
Now suppose $\mathfrak n\in\mathfrak N^\mathrm {odd}$ and $\mathfrak m^{k-1}M_\mathfrak n\not=0$. Proposition
\ref{structure} implies that $\mathrm {Sel}_{\mathcal F(\mathfrak n)}$ contains a free
submodule of rank two, $C$. From this and Lemma
\ref{local freeness} one may deduce that
for any $\mathfrak l\in\mathfrak L$ the kernel of
$$
\mathrm {loc}_\mathfrak l:\mathrm {Sel}_{\mathcal F(\mathfrak n)}\map{}H^1_\mathrm{unr}(K_\mathfrak l,T)
$$
contains a free submodule. This kernel is exactly
$\mathrm {Sel}_{\mathcal F_\mathfrak l(\mathfrak n)}\subset \mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)}$,
and so $\mathfrak m^{k-1}M_{\mathfrak n\mathfrak l}\not=0$.
By the case considered above $\lambda_{\mathfrak n\mathfrak l}=0$,
and the second reciprocity law
implies that $\mathrm {loc}_\mathfrak l(\kappa_\mathfrak n)=0$.
Since this holds for all choices of $\mathfrak l$, $\kappa_\mathfrak n=0$ by
Lemma \ref{local surjectivity}.
\end{proof}
The above proposition shows that an Euler system
gives a (somewhat weak) annihilation result for Selmer groups. To strengthen
this to an upper bound on Selmer groups, we must impose the
hypothesis of \emph{freeness} defined below. For an example of
how this hypothesis may be verified in practice, see the proof
of Lemma \ref{free es}.
\begin{Def}\label{free}
We will say that an Euler system of odd type is \emph{free}
if for every $\mathfrak n\in\mathfrak N^\mathrm {odd}$, there is a free rank-one
$R$-submodule $C_\mathfrak n\subset \mathrm {Sel}_{\mathcal F(\mathfrak n)}$ containing $\kappa_\mathfrak n$.
\end{Def}
\begin{Thm}\label{esb}
For any free Euler system of odd type for $(T,\mathcal F,\mathfrak L)$,
$\lambda_\mathfrak n\in\mathrm {Stub}_\mathfrak n$ for every $\mathfrak n\in\mathfrak N^\mathrm {even}$, and
$\kappa_\mathfrak n\in\mathrm {Stub}_\mathfrak n$ for every $\mathfrak n\in\mathfrak N^\mathrm {odd}$.
Equivalently, the $R$-module $M_\mathfrak n$ of Proposition
\ref{structure} satisfies
$$
\mathrm {length}(M_\mathfrak n)\le \left\{\begin{array}{ll}
\mathrm {ind}(\lambda_\mathfrak n, R) & \mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {even} \\
\mathrm {ind}(\kappa_\mathfrak n, \mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T)) & \mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {odd}.
\end{array}\right.
$$
\end{Thm}
\begin{proof}
The proof is by induction on
$\rho(\mathfrak n)$, as defined in Corollary \ref{rho}.
If $\rho(\mathfrak n)=0$ then $M_\mathfrak n=0$, and so
$\mathfrak n\in\mathfrak N^\mathrm {even}$, $\mathrm {Stub}_\mathfrak n=R$, and the claim holds trivially.
Similarly, if $\rho(\mathfrak n)=1$ then $\mathfrak n\in\mathfrak N^\mathrm {odd}$, $M_\mathfrak n=0$, and the
claim holds trivially. We assume now that $\rho(\mathfrak n)\ge 2$, so that $M_\mathfrak n\not=0$.
Suppose $\mathfrak n\in\mathfrak N^\mathrm {even}$ and $\lambda_\mathfrak n\not=0$.
Fix any $\mathfrak l\in\mathfrak L$ prime to $\mathfrak n$ such that the Selmer group
$\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T/\mathfrak m T)$ is not killed by $\mathrm {loc}_\mathfrak l$.
By Corollary \ref{rho} and the induction hypothesis,
$\kappa_{\mathfrak n\mathfrak l}\in\mathrm {Stub}_{\mathfrak n\mathfrak l}$, and so Corollary \ref{length induction}
gives
\begin{eqnarray*}
\mathrm {length}(M_\mathfrak n) & = & \mathrm {length}(M_{\mathfrak n\mathfrak l})+a\\
&\le &\mathrm {ind}(\kappa_{\mathfrak n\mathfrak l},\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)})+a\\
&\le &\mathrm {ind}\big(\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n\mathfrak l}),
\mathrm {loc}_\mathfrak l(\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)})\big)+a
\end{eqnarray*}
where $a$ is as in Proposition \ref{global duality}.
The first reciprocity law implies
\begin{eqnarray*}
\mathrm {ind}(\lambda_\mathfrak n,R) &=& \mathrm {ind}\big(\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n\mathfrak l}),H^1_\mathrm {ord}(K_\mathfrak l,T)\big)\\
&=& \mathrm {ind}\big(\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n\mathfrak l}),\mathrm {loc}_\mathfrak l(\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)})\big)
+\mathrm {length}\big(H^1_\mathrm {ord}(K_\mathfrak l,T)/\mathrm {loc}_\mathfrak l(\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)})\big)\\
&=& \mathrm {ind}\big(\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n\mathfrak l}),\mathrm {loc}_\mathfrak l(\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)})\big)
+\mathrm {length}(R)-b.
\end{eqnarray*}
Since $a+b=\mathrm {length}(R)$, we conclude $\mathrm {length}(M_\mathfrak n)\le \mathrm {ind}(\lambda_\mathfrak n,R)$.
Now suppose $\mathfrak n\in\mathfrak N^\mathrm {odd}$ and $\kappa_\mathfrak n\not=0$.
Let $C_\mathfrak n$ be as in Definition \ref{free}
and fix $\mathfrak l\in\mathfrak L$ prime to $\mathfrak n$ such that $\mathrm {loc}_\mathfrak l$ takes $C_\mathfrak n$
isomorphically onto $H^1_\mathrm{unr}(K_\mathfrak l,T)$ (using Lemma \ref{local surjectivity}).
Again applying Corollary \ref{rho},
$\rho(\mathfrak n\mathfrak l)=\rho(\mathfrak n)-1$, and so $\lambda_{\mathfrak n\mathfrak l}\in\mathrm {Stub}_{\mathfrak n\mathfrak l}$.
By Corollary \ref{length induction} (with $a=\mathrm {length}(R)$ and $b=0$)
and the second reciprocity law,
\begin{eqnarray*}
\mathrm {length}(M_\mathfrak n) = \mathrm {length}(M_{\mathfrak n\mathfrak l}) &\le &\mathrm {ind}(\lambda_{\mathfrak n\mathfrak l},R)\\
&= &\mathrm {ind}\big(\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n}), H^1_\mathrm{unr}(K_\mathfrak l,T)\big)\\
&=&\mathrm {ind}\big(\kappa_{\mathfrak n},\mathrm {Sel}_{\mathcal F(\mathfrak n)}\big).
\end{eqnarray*}
The final equality holds because $\mathrm {loc}_\mathfrak l$ restricts to an isomorphism from the free
module $C_\mathfrak n$ containing $\kappa_\mathfrak n$ onto $H^1_\mathrm{unr}(K_\mathfrak l,T)$, so that
$\kappa_\mathfrak n\in\mathfrak m^j\mathrm {Sel}_{\mathcal F(\mathfrak n)}$ if and only if
$\mathrm {loc}_\mathfrak l(\kappa_\mathfrak n)\in\mathfrak m^jH^1_\mathrm{unr}(K_\mathfrak l,T)$; this is where the freeness
hypothesis enters.
\end{proof}
\subsection{Sheaves on graphs}
\label{sheaf section}
Let $\mathcal X$ be the graph whose vertices $v(\mathfrak n)$ are indexed by $\mathfrak n\in\mathfrak N$,
and an edge $e(\mathfrak n,\mathfrak n\mathfrak l)$ connects $v(\mathfrak n)$ to $v(\mathfrak n\mathfrak l)$ whenever
$\mathfrak l\in\mathfrak L$ and $\mathfrak n\mathfrak l\in\mathfrak N$.
A vertex $v(\mathfrak n)$ will be called even or odd, depending
on whether $\mathfrak n$ lies in $\mathfrak N^\mathrm {even}$ or $\mathfrak N^\mathrm {odd}$, and every edge
connects an even vertex to an odd one (by Corollary \ref{rho}).
We define a sheaf $\mathrm{ES}(\mathcal X)$ on $\mathcal X$ in the sense of
\cite[\S 3.1]{mazur-rubin} called the \emph{Euler system sheaf}
as follows.
To each vertex $v=v(\mathfrak n)$ we attach the
$R$-module
$$
\mathrm{ES}(v)=\left\{\begin{array}{ll}
\mathrm {Sel}_{\mathcal F(\mathfrak n)} & \mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {odd} \\
R & \mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {even}\end{array}\right.
$$
and to each edge $e=e(\mathfrak n,\mathfrak n\mathfrak l)$
we attach the $R$-module
$$
\mathrm{ES}(e)=\left\{\begin{array}{ll}
H^1_\mathrm{unr}(K_\mathfrak l,T) &\mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {odd} \\
H^1_\mathrm {ord}(K_\mathfrak l,T ) &\mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {even}.
\end{array}\right.
$$
If $e=e(\mathfrak n,\mathfrak n\mathfrak l)$ is an edge with endpoint $v$, we define the
\emph{vertex-to-edge map}
$$\psi^e_v:\mathrm{ES}(v)\map{}\mathrm{ES}(e)$$ as follows.
If $v$ is odd then
$$
\psi^e_v=\mathrm {loc}_\mathfrak l:
\left\{\begin{array}{ll}
\mathrm {Sel}_{\mathcal F(\mathfrak n)}\map{}H^1_\mathrm{unr}(K_\mathfrak l,T)
& \mathrm{if\ }v=v(\mathfrak n)\\
\mathrm {Sel}_{\mathcal F(\mathfrak n\mathfrak l)}\map{}H^1_\mathrm {ord}(K_\mathfrak l,T)
& \mathrm{if\ }v=v(\mathfrak n\mathfrak l).
\end{array}\right.
$$
If $v$ is even then fix, using Lemma
\ref{local freeness}, an isomorphism
\begin{equation}\label{edge choice}
\psi^e_v: R\cong \left\{\begin{array}{ll}
H^1_\mathrm{unr}(K_\mathfrak l,T) & \mathrm{if\ } v=v(\mathfrak n\mathfrak l)\\
H^1_\mathrm {ord}(K_\mathfrak l,T) & \mathrm{if\ } v=v(\mathfrak n).
\end{array}\right.
\end{equation}
Of course, the choice of isomorphism (\ref{edge choice}) is not unique,
but we fix a choice, for each edge $e$ with even vertex $v$, once and for all.
\begin{Def}\label{stub def}
The Euler system sheaf has a locally cyclic (in the sense
of \cite[Definition 3.4.2]{mazur-rubin})
subsheaf, the \emph{stub sheaf} $\mathrm {Stub}(\mathcal X)$, defined as follows.
To each vertex $v=v(\mathfrak n)$ we attach the
cyclic $R$-module
$$
\mathrm {Stub}(v)=\mathrm {Stub}_\mathfrak n\subset \mathrm{ES}(v),
$$
and to each edge $e=e(\mathfrak n,\mathfrak n\mathfrak l)$
we attach the cyclic module $\mathrm {Stub}(e)\subset \mathrm{ES}(e)$
$$
\mathrm {Stub}(e)=\left\{\begin{array}{ll}
\mathrm {loc}_\mathfrak l(\mathrm {Stub}_\mathfrak n) &\mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {odd} \\
\mathrm {loc}_\mathfrak l(\mathrm {Stub}_{\mathfrak n\mathfrak l}) &\mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {even}.
\end{array}\right.
$$
If $e$ is an edge connecting the vertices $v$ and $v'$, with $v$ even and
$v'$ odd, then the vertex-to-edge map $\psi_{v'}^e$
restricts to a surjective map $\mathrm {Stub}(v')\map{}\mathrm {Stub}(e)$.
By Corollary \ref{stub shift},
the map $\psi_{v}^e$ restricts to an isomorphism
$\mathrm {Stub}(v)\cong \mathrm {Stub}(e)$.
\end{Def}
\begin{Def}
A \emph{core vertex} of $\mathcal X$ is a vertex $v$ such that $\mathrm {Stub}(v)\cong R$.
\end{Def}
\begin{Rem}\label{core remark}
Set $\overline T=T/\mathfrak m T$. For $\mathfrak n\in\mathfrak N$, recall the integer
$$\rho(\mathfrak n)=\dim_{R/\mathfrak m}(\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,\overline T))$$ of
Corollary \ref{rho}.
It is clear from Lemma \ref{subquotients} and
Proposition \ref{structure} that $v(\mathfrak n)$
is a core vertex if and only if $\rho(\mathfrak n)=0$ or $1$.
\end{Rem}
The \emph{core subgraph} $\mathcal X_0\subset\mathcal X$ is the graph whose
vertices are the core vertices of $\mathcal X$, with two vertices connected
by an edge in $\mathcal X_0$ if and only if they are connected by an edge in
$\mathcal X$. We let $\mathrm {Stub}(\mathcal X_0)$ be the restriction of $\mathrm {Stub}(\mathcal X)$
to $\mathcal X_0$ in the obvious sense.
\begin{Lem}\label{free sheaves}
The sheaf $\mathrm {Stub}(\mathcal X_0)$ is
locally free of rank one. That is to say,
\begin{itemize}
\item for every vertex $v$ of $\mathcal X_0$, $\mathrm {Stub}(v)$ is free of rank one,
\item for every edge $e$ of $\mathcal X_0$, $\mathrm {Stub}(e)$ is free of rank one,
\item for every edge $e$ of $\mathcal X_0$
with endpoint $v$, the vertex-to-edge map
$$\psi_v^e:\mathrm {Stub}(v)\map{}\mathrm {Stub}(e)$$ is an isomorphism.
\end{itemize}
\end{Lem}
\begin{proof}
The first property is the definition of $\mathcal X_0$.
As noted in Definition \ref{stub def}, $\mathrm {Stub}(e)$ is isomorphic
to $\mathrm {Stub}(v)$, where $v$ is the even endpoint of $e$.
This proves the second property.
The final property follows from the first two, together with the surjectivity
of the vertex-to-edge maps in $\mathrm {Stub}(\mathcal X)$.
\end{proof}
\begin{Def}
A \emph{global section}, $s$, of the sheaf $\mathrm {Stub}(\mathcal X)$ on $\mathcal X$
is a function on vertices and edges of $\mathcal X$,
$$
v\mapsto s(v)\in \mathrm {Stub}(v)
\hspace{1cm}
e\mapsto s(e)\in\mathrm {Stub}(e),
$$
such that for every edge $e$ with endpoints $v$ and $v'$
$$
\psi^e_v(s(v))=s(e)=\psi^e_{v'}(s(v'))
$$
in $\mathrm {Stub}(e)$. A global section of $\mathrm{ES}(\mathcal X)$
is defined in the same way.
\end{Def}
\begin{Def}
For two vertices $v$ and $v'$ of $\mathcal X$, a \emph{path} from $v$ to $v'$
in $\mathcal X$ is a finite sequence of vertices $v=v_0, v_1,\ldots, v_k=v'$
such that $v_i$ is connected to $v_{i+1}$ by an edge $e_i$.
A path is \emph{surjective} (for the locally cyclic sheaf $\mathrm {Stub}(\mathcal X)$)
if the vertex-to-edge map
$$
\psi_{v_{i+1}}^{e_i}:\mathrm {Stub}(v_{i+1})\map{}\mathrm {Stub}(e_i)
$$
is an isomorphism for every $i$. We make the same definitions for $\mathcal X_0$.
\end{Def}
\begin{Rem}\label{surjective remark}
Note that a surjective path from
$v$ to $v'$ induces in an obvious way (\cite[\S 3.4]{mazur-rubin})
a surjective map $\mathrm {Stub}(v)\map{}\mathrm {Stub}(v')$,
and that for any global section $s$ of $\mathrm {Stub}(\mathcal X)$
this map takes $s(v)$ to $s(v')$.
\end{Rem}
\begin{Lem}\label{surjective paths}
A path $v_0, \ldots, v_k$ in $\mathcal X$ is surjective if and only if
$$\mathrm {length}(\mathrm {Stub}(v_{i+1})) \le \mathrm {length}(\mathrm {Stub}(v_i))$$
for every $i$.
\end{Lem}
\begin{proof}
Suppose we are given a path in $\mathcal X$
from $v_0$ to $v_k$.
If $v_i$ is odd and $v_{i+1}$ is even then the vertex-to-edge map
$\mathrm {Stub}(v_{i+1})\map{}\mathrm {Stub}(e_i)$ is an isomorphism,
while $\mathrm {Stub}(v_i)\map{} \mathrm {Stub}(e_i)$
is surjective. Thus the path $v_i, v_{i+1}$ is surjective and
$\mathrm {length}(\mathrm {Stub}(v_{i+1})) \le \mathrm {length}(\mathrm {Stub}(v_i))$.
If $v_i$ is even and $v_{i+1}$ is odd
then the vertex-to-edge map
$\mathrm {Stub}(v_{i+1})\map{}\mathrm {Stub}(e_i)$ is surjective, while
$\mathrm {Stub}(v_i)\map{} \mathrm {Stub}(e_i)$
is an isomorphism. In particular
$$\mathrm {length}(\mathrm {Stub}(v_{i+1})) \ge \mathrm {length}(\mathrm {Stub}(v_i)).$$
Thus $\psi_{v_{i+1}}^{e_i}$ is injective if and only
if it is an isomorphism. This is equivalent to
$\mathrm {Stub}(v_{i+1})\cong \mathrm {Stub}(v_i)$, which is equivalent to
$$\mathrm {length}(\mathrm {Stub}(v_{i+1})) \le \mathrm {length}(\mathrm {Stub}(v_i)).$$
Since $v_0,\ldots,v_k$ is surjective if and only if $v_i,v_{i+1}$
is a surjective path for every $i$, the claim follows.
\end{proof}
\begin{Lem}\label{cores}
For any vertex $v$ of $\mathcal X$ there is a core vertex $v_0$ and a
surjective path in $\mathcal X$ from $v_0$ to $v$.
For any $\mathfrak n\in\mathfrak N$ there is a $\mathfrak n'\in\mathfrak N$ with $\mathfrak n|\mathfrak n'$
such that $v(\mathfrak n')$ is a core vertex, and $\mathfrak n'$ may be chosen either in
$\mathfrak N^\mathrm {even}$ or in $\mathfrak N^\mathrm {odd}$.
\end{Lem}
\begin{proof}
Set $w_0=v$, and construct a sequence of vertices $w_i$
inductively as follows.
If $w_i=v(\mathfrak n_i)$ is even and not a core vertex,
choose $\mathfrak l\in\mathfrak L$ prime to $\mathfrak n_i$ such that
$\mathrm {loc}_\mathfrak l(\mathrm {Sel}_{\mathcal F(\mathfrak n_i)})\not=0$, and set $w_{i+1}=v(\mathfrak n_i\mathfrak l)$.
In the notation of Corollary \ref{length induction}, $a>0$, and so
$$\mathrm {length}(\mathrm {Stub}(w_i))<\mathrm {length}(\mathrm {Stub}(w_{i+1})).$$
If $w_i$ is already an even core vertex, then a similar argument
shows that $$\mathrm {length}(\mathrm {Stub}(w_i))=\mathrm {length}(\mathrm {Stub}(w_{i+1}))$$ for any choice of $\mathfrak l$.
If $w_i=v(\mathfrak n_i)$ is odd,
choose (using Lemma \ref{local surjectivity})
$\mathfrak l\in\mathfrak L$ prime to $\mathfrak n_i$ such that localization at $\mathfrak l$
takes a free rank-one submodule of $\mathrm {Sel}_{\mathcal F(\mathfrak n_i)}$
isomorphically onto $H^1_\mathrm{unr}(K_\mathfrak l,T)$, and set $w_{i+1}=v(\mathfrak n_i\mathfrak l)$.
In the notation of Corollary \ref{length induction},
$a=\mathrm {length}(R)$ and $b=0$, and so
$$\mathrm {length}(\mathrm {Stub}(w_i))=\mathrm {length}(\mathrm {Stub}(w_{i+1})).$$
Eventually $\mathrm {length}(\mathrm {Stub}(w_k))=\mathrm {length}(R)$, and we have constructed a path from
$v$ to a core vertex $v_0=w_k$. By construction
$$
\mathrm {length}(\mathrm {Stub}(w_i))\le\mathrm {length}(\mathrm {Stub}(w_{i+1}))
$$
for every $i$, and so Lemma \ref{surjective paths} implies
that the path $w_k, w_{k-1},\ldots, w_0$
is a surjective path from $v_0$ to $v$.
The final claim is clear from the construction above.
\end{proof}
For any $\mathfrak a\in\mathfrak N$, let $\mathcal X_{0,\mathfrak a}$ be the subgraph
of $\mathcal X_0$ whose vertices consist of those core vertices $v(\mathfrak n)$ with
$\mathfrak a|\mathfrak n$. Two vertices are connected by an edge in $\mathcal X_{0,\mathfrak a}$
if and only if they are connected by an edge in $\mathcal X_0$.
\begin{Lem}\label{pre-connected}
If $v(\mathfrak a)$ is a core vertex then the graph $\mathcal X_{0,\mathfrak a}$
is path connected.
\end{Lem}
\begin{proof}
Fix $\mathfrak n=\mathfrak a\mathfrak b\in\mathfrak N$ with $v(\mathfrak n)$ a core vertex.
We show by induction on the number of prime
factors of $\mathfrak b$ that there is a path in $\mathcal X_{0,\mathfrak a}$
from $v(\mathfrak n)$ to $v(\mathfrak a)$. Assume $\mathfrak b\not=1$, otherwise there is nothing
to prove. First suppose $v(\mathfrak n)$ is even, so that $\rho(\mathfrak n)=0$.
By Corollary \ref{rho}, $\rho(\mathfrak n/\mathfrak l)=1$ for any $\mathfrak l\in\mathfrak L$ dividing $\mathfrak b$.
Hence $v(\mathfrak n/\mathfrak l)$ is a vertex in $\mathcal X_{0,\mathfrak a}$ connected to $v(\mathfrak n)$
by an edge, and by the induction hypothesis there is a path in
$\mathcal X_{0,\mathfrak a}$ from $v(\mathfrak n/\mathfrak l)$ to $v(\mathfrak a)$.
Similarly, if $v(\mathfrak n)$ is odd and
$\mathrm {loc}_\mathfrak l(\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,\overline T))\not=0$
for some $\mathfrak l$ dividing $\mathfrak b$, then Proposition \ref{global duality}
implies $\rho(\mathfrak n/\mathfrak l)=\rho(\mathfrak n)-1=0$, and again we are done by the
induction hypothesis.
We are left to treat the case where $\mathfrak n\in\mathfrak N^\mathrm {odd}$ and
$\mathrm {loc}_\mathfrak l(\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,\overline T))$
is trivial for every $\mathfrak l$ dividing $\mathfrak b$. Thus
$$
\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,\overline T)=\mathrm {Sel}_{\mathcal F_\mathfrak b(\mathfrak a)}(K,\overline T)
\subset \mathrm {Sel}_{\mathcal F(\mathfrak a)}(K,\overline T).
$$
Since the $R/\mathfrak m$-vector space on the left has dimension $\rho(\mathfrak n)=1$
while the space on the right has dimension $\rho(\mathfrak a)\le 1$, we conclude
that the above inclusion is an equality. In particular
$$
\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,\overline T)
=\mathrm {Sel}_{\mathcal F_{\mathfrak b'}(\mathfrak a)}(K,\overline T)
=\mathrm {Sel}_{\mathcal F(\mathfrak a)}(K,\overline T)
$$
for any $\mathfrak b'|\mathfrak b$. Take $\mathfrak b'=\mathfrak b/\mathfrak q$ for some prime $\mathfrak q$ dividing
$\mathfrak b$, and let $\mathfrak l\in\mathfrak L$ be any prime not dividing
$\mathfrak n$ such that $\mathrm {loc}_\mathfrak l(\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,\overline T))\not=0$.
By Corollary \ref{rho}, $\rho(\mathfrak a\mathfrak b')=2$, $\rho(\mathfrak n\mathfrak l)=0$,
and, since
$$
\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,\overline T)=\mathrm {Sel}_{\mathcal F_{\mathfrak b}(\mathfrak a)}(K,\overline T)
\subset \mathrm {Sel}_{\mathcal F(\mathfrak a\mathfrak b')}(K,\overline T)
$$
is not killed by $\mathrm {loc}_\mathfrak l$, $\rho(\mathfrak a\mathfrak b'\mathfrak l)=1$.
Thus $v(\mathfrak a\mathfrak b)$, $v(\mathfrak a\mathfrak b\mathfrak l)$, $v(\mathfrak a\mathfrak b'\mathfrak l)$
is a path in $\mathcal X_{0,\mathfrak a}$. Finally, if
$\mathrm {loc}_\mathfrak{r}(\mathrm {Sel}_{\mathcal F(\mathfrak a\mathfrak b'\mathfrak l)}(K,\overline T))=0$
for every $\mathfrak{r}\in\mathfrak L$ dividing $\mathfrak b'\mathfrak l$,
then
$$
\mathrm {Sel}_{\mathcal F(\mathfrak a\mathfrak b'\mathfrak l)}(K,\overline T)=
\mathrm {Sel}_{\mathcal F_{\mathfrak b'\mathfrak l}(\mathfrak a)}(K,\overline T)\subset
\mathrm {Sel}_{\mathcal F_{\mathfrak b'}(\mathfrak a)}(K,\overline T)= \mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,\overline T).
$$
The Selmer groups on the left and right are both one dimensional
over $R/\mathfrak m$, so equality holds everywhere. This contradicts
$\mathrm {loc}_\mathfrak l(\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,\overline T))\not=0$, and we conclude that
$\mathrm {loc}_\mathfrak{r}(\mathrm {Sel}_{\mathcal F(\mathfrak a\mathfrak b'\mathfrak l)}(K,\overline T))\not=0$
for some $\mathfrak{r}\in\mathfrak L$ dividing $\mathfrak b'\mathfrak l$. This
returns us to the case of the paragraph above, and so
$v(\mathfrak n), v(\mathfrak n\mathfrak l),v(\mathfrak a\mathfrak b'\mathfrak l),
v(\mathfrak a\mathfrak b'\mathfrak l/\mathfrak{r})$ is a path in $\mathcal X_{0,\mathfrak a}$.
By the induction hypothesis, this may be continued to a path
terminating at $v(\mathfrak a)$.
\end{proof}
\begin{Prop}\label{hub}
The core subgraph is path connected and contains both even and odd
vertices. For any
vertex $v$ of $\mathcal X$ and any core vertex $v_0$ of $\mathcal X$,
there is a surjective path from $v_0$ to $v$.
\end{Prop}
\begin{proof}
The fact that $\mathcal X_0$ contains both even and odd vertices
follows from the final statement of Lemma \ref{cores}.
Suppose we are given two core vertices $v(\mathfrak a)$ and $v(\mathfrak b)$.
By the second part of Lemma \ref{cores} we may choose
$\mathfrak n\in\mathfrak N$ divisible by both $\mathfrak a$ and $\mathfrak b$
such that $v(\mathfrak n)$ is a core vertex. By Lemma \ref{pre-connected},
there is a path in $\mathcal X_{0,\mathfrak a}$ from $v(\mathfrak a)$ to $v(\mathfrak n)$,
and a path in $\mathcal X_{0,\mathfrak b}$ from $v(\mathfrak b)$ to $v(\mathfrak n)$. Since
any path in $\mathcal X_{0,\mathfrak a}$ is also a path in $\mathcal X_0$, and
similarly for $\mathfrak b$, there is a path in $\mathcal X_0$ from $v(\mathfrak a)$
to $v(\mathfrak b)$. Since any path in $\mathcal X_0$ is surjective, any two
core vertices may be connected by a surjective path.
The final claim now follows from Lemma \ref{cores}.
\end{proof}
\begin{Cor}\label{section defect}
For any global section $s$ of $\mathrm {Stub}(\mathcal X)$ there is a unique
$\delta=\delta(s)$ with $0\le\delta\le\mathrm {length}(R)$
such that $s(v)$ generates $\mathfrak m^\delta \mathrm {Stub}(v)$
for every vertex $v$ of $\mathcal X$.
The section $s$ is determined by its value at any core vertex.
\end{Cor}
\begin{proof}
Fix a core vertex $v_0$ and define $\delta$ to be such that $s(v_0)$
generates $\mathfrak m^\delta \mathrm {Stub}(v_0)$. By Remark \ref{surjective remark}
and Proposition \ref{hub}, for any vertex $v$ in $\mathcal X$ there is
a surjective map $\mathrm {Stub}(v_0)\map{}\mathrm {Stub}(v)$ taking $s(v_0)$ to $s(v)$.
The claim follows.
\end{proof}
\subsection{The rigidity theorem}
\begin{Thm}\label{rigidity}
Assume Hypotheses \ref{cartesian} and \ref{useful primes}, and
suppose that we are given a nontrivial free Euler system of odd type
for $(T,\mathcal F,\mathfrak L)$.
There is a unique integer $\delta$, independent of $\mathfrak n\in\mathfrak N$,
with the property that
$\lambda_\mathfrak n$ generates $\mathfrak m^\delta \mathrm {Stub}_\mathfrak n$ for every $\mathfrak n\in\mathfrak N^\mathrm {even}$
and $\kappa_\mathfrak n$ generates $\mathfrak m^\delta\mathrm {Stub}_\mathfrak n$ for
every $\mathfrak n\in\mathfrak N^\mathrm {odd}$. Furthermore, $\delta$ is given by
\begin{eqnarray*}
\delta &=& \min \{\ \mathrm {ind}(\lambda_\mathfrak n, R) \mid \mathfrak n\in\mathfrak N^\mathrm {even} \} \\
&=& \min \{\ \mathrm {ind}(\kappa_\mathfrak n, \mathrm {Sel}_{\mathcal F(\mathfrak n)}) \mid \mathfrak n\in\mathfrak N^\mathrm {odd} \}.
\end{eqnarray*}
\end{Thm}
\begin{proof}
For a vertex $v=v(\mathfrak n)$ of the graph $\mathcal X$
of \S \ref{sheaf section}, we define $s(v)\in \mathrm{ES}(v)$ by
$$
s(v)=\left\{\begin{array}{ll} \lambda_\mathfrak n&\mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {even}\\
\kappa_\mathfrak n&\mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {odd}.
\end{array}\right.
$$
For an edge $e=e(\mathfrak n,\mathfrak n\mathfrak l)$ define $s(e)\in \mathrm{ES}(e)$ by
$$
s(e)=\left\{\begin{array}{ll} \mathrm {loc}_\mathfrak l(\kappa_\mathfrak n)
&\mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {odd}\\
\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n\mathfrak l})&\mathrm{if\ }\mathfrak n\in\mathfrak N^\mathrm {even}.
\end{array}\right.
$$
The reciprocity laws of Definition \ref{es} now say exactly that,
after modifying each vertex-to-edge map (\ref{edge choice}) by an element
of $R^\times$ if needed,
the functions $v\mapsto s(v)$ and $e\mapsto s(e)$ define a global section of the
Euler system sheaf $\mathrm{ES}(\mathcal X)$.
By Theorem \ref{esb}, this global section is actually a global
section of the subsheaf $\mathrm {Stub}(\mathcal X)\subset\mathrm{ES}(\mathcal X)$.
By Corollary \ref{section defect},
there is a unique $0\le\delta\le\mathrm {length}(R)$
such that $s(v)$ generates $\mathfrak m^\delta\cdot \mathrm {Stub}(v)$
for every vertex $v$; since the Euler system is nontrivial, in fact $\delta<\mathrm {length}(R)$.
For any vertex $v$ with $s(v)\not=0$ we have
$$
\delta=\mathrm {ind}( s(v), \mathrm {Stub}(v) )\le \mathrm {ind}( s(v), \mathrm{ES}(v) )
$$
with equality if and only if $v$ is a core vertex. Since there are
even core vertices (by Proposition \ref{hub}),
$$
\delta=\min \{\ \mathrm {ind}(s(v),\mathrm{ES}(v)) \mid v \mathrm{\ even} \},
$$
and similarly with even replaced by odd.
\end{proof}
\subsection{A variant}
\label{variant}
In the applications to Iwasawa theory
we will need to work under slightly different
hypotheses on $T$. In this subsection we assume that $K$ is
a quadratic imaginary field. Fix an embedding $K^\mathrm{alg}\hookrightarrow \mathbf C$
and let $\tau\in G_\mathbf Q$ be the associated complex conjugation.
Let $R$ and $T$ be as in the introduction to \S \ref{S:Euler Systems},
but instead of assuming that $T$ is self Cartier dual via
an alternating pairing, assume, as in \S 1.3 of \cite{me},
that there is a perfect \emph{symmetric} pairing
$$
(\ ,\ ):T\times T\map{}R(1)
$$
which satisfies $(x^\sigma, y^{\tau\sigma\tau})=(x,y)^\sigma$ for
any $\sigma\in G_K$.
Let $\mathrm {Tw}(T)$ be the $G_K$-module whose underlying $R$-module is $T$,
but with the $G_K$-action twisted by conjugation by $\tau$.
The above pairing can then be viewed as a perfect $G_K$-invariant
pairing
$$
T\times\mathrm {Tw}(T)\map{}R(1).
$$
There is a canonical isomorphism
$H^1(K,T)\cong H^1(K,\mathrm {Tw}(T))$
given on cocycles by $c(\sigma)\mapsto c^*(\sigma)=c(\tau\sigma\tau)$.
For any finite place $v$ of $K$, there is similarly a canonical isomorphism
from the local cohomology of $T$ at $v$ to the local cohomology of
$\mathrm {Tw}(T)$ at $\tau(v)$. This isomorphism induces a local Tate pairing
\begin{equation}\label{twisted local duality}
H^1(K_v,T)\times H^1(K_{\tau(v)},T)\map{}R,
\end{equation}
and by direct calculation on cocycles one can check that if $v=\tau(v)$
then this pairing is symmetric. Thus locally at a degree two prime
of $K$, the cohomology of $T$ behaves exactly as if $T$ were self-dual
via an alternating pairing.
We now define a Selmer structure $(\mathcal F,\Sigma_\mathcal F)$ exactly
as in \S \ref{selmer modules}, and say that a Selmer structure is
self-dual if the local conditions
$H^1_\mathcal F(K_v,T)$ and $H^1_\mathcal F(K_{\tau(v)},T)$
are exact orthogonal complements under the pairing
(\ref{twisted local duality}) for every $v\in\Sigma_\mathcal F$.
All of the results of \S \ref{S:Euler Systems} hold verbatim under these
modified hypotheses (one need only verify that Lemma \ref{local freeness} and
Propositions \ref{structure} and \ref{global duality} hold,
as these are the only
places where the self-duality hypotheses on $T$ and $\mathcal F$ are
directly invoked; for the latter two,
the reader may consult Sections 1.4 and 1.5
of \cite{me}) with one minor caveat: the
statement of Lemma \ref{local freeness} and the proof of
Proposition \ref{global duality} require the self-duality of
$H^1(K_\mathfrak l,T)$, and so we must add the hypothesis
that $\mathfrak L$ contains only degree two primes of $K$.
Finally, we remark that if the action of $G_K$ on $T$ extends to an
action of $G_\mathbf Q$ then the alternate hypotheses of this subsection are
equivalent to those of the introduction to \S \ref{S:Euler Systems},
since one may identify $T\cong\mathrm {Tw}(T)$ as
$G_K$-modules via the map $x\mapsto x^\tau$.
\section{Iwasawa theory of elliptic curves}
\label{Iwasawa}
Let $K$ be a quadratic imaginary field of discriminant $d_K$ and
quadratic character $\epsilon$,
$p>3$ a rational prime, and $E/\mathbf Q$ an elliptic curve with conductor
$N$. Assume that $E$ has either multiplicative or good ordinary
reduction at $p$, and that $(d_K,pN)=1$.
Let $N^-$ be the largest divisor of $N$ which is prime to $p$ and satisfies
$\epsilon(q)=-1$ for all primes $q\mid N^-$ (so that every prime dividing $N^-$ is inert in $K$).
Factor $N=N^+ N^-$.
Let $\tau$ be a fixed choice of complex conjugation.
\begin{Hyp}\label{irreducible hyp}
Throughout \S \ref{Iwasawa} we assume:
\begin{enumerate}
\item $E[p]$ is absolutely irreducible as a $G_K=\mathrm{Gal}(K^\mathrm{alg}/K)$-module,
\item $N^-$ is squarefree.
\end{enumerate}
\end{Hyp}
We denote by $D_\infty$ the anticyclotomic
$\mathbf Z_p$-extension of $K$, characterized by $\tau\sigma\tau=\sigma^{-1}$
for any $\sigma\in\Gamma=\mathrm{Gal}(D_\infty/K)$.
Let $D_m\subset D_\infty$ be the subfield with $[D_m:K]=p^m$,
and set $\Lambda=\mathbf Z_p[[\Gamma]]$.
\subsection{Selmer modules over $\Lambda$}
\begin{Def}
A degree two prime $\mathfrak l\nmid pN$ of $K$ is \emph{$k$-admissible} if
$\mathbf{N}(\mathfrak l)\not\equiv 1\pmod{p}$, and if there is a decomposition
$$
E[p^k]\cong (\mathbf Z/p^k\mathbf Z)\oplus \mu_{p^k}
$$
of $\mathrm{Gal}(K^\mathrm{unr}_\mathfrak l/K_\mathfrak l)$-modules.
A $1$-admissible prime will simply be called \emph{admissible}.
The set of $k$-admissible primes is denoted $\mathfrak L_k$, and we let
$\mathfrak N_k$ denote the set of squarefree products of primes in $\mathfrak L_k$.
\end{Def}
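Note that if $\mathfrak l$ is $k$-admissible then $\mathrm {Frob}_\mathfrak l$ acts on $E[p^k]$ with eigenvalues
$1$ and $\mathbf{N}(\mathfrak l)$; thus, taking $R=\mathbf Z/p^k\mathbf Z$ and $T=E[p^k]$, the set $\mathfrak L_k$
satisfies the two conditions imposed on $\mathfrak L$ at the beginning of \S \ref{ordinary selmer}.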
Let $q\mid N^-$ be a rational prime. By Hypothesis \ref{irreducible hyp}(b)
$E$ has multiplicative reduction at $q$, and hence split multiplicative
reduction at the prime $\mathfrak q$ of $K$ above $q$.
The Tate parametrization shows that $T_p(E)$ has the form
$\left(\begin{matrix}\epsilon_{\mathrm{cyc}}& * \\ 0&
1 \end{matrix}\right)$
as a $G_{K_\mathfrak q}$-module, and
we denote by $\mathrm {Fil}_q(T_p(E))\subset T_p(E)$ the $\mathbf Z_p$-line on which
$G_{K_\mathfrak q}$ acts via $\epsilon_{\mathrm{cyc}}$.
For any extension $L/K_\mathfrak q$ the \emph{ordinary} submodule
$$
H^1_\mathrm {ord}(L,T_p(E))\subset H^1(L,T_p(E))
$$
is defined to be the image of
$H^1(L,\mathrm {Fil}_q(T_p(E)))$,
and $H^1_\mathrm {ord}(L,E[p^k])$ is defined similarly.
For a $k$-admissible prime $\mathfrak l\in\mathfrak L_k$ we have a similar ordinary
local condition $H^1_\mathrm {ord}(L,E[p^k])$ for any extension $L/K_\mathfrak l$, as
in \S \ref{ordinary selmer}.
For the prime $p$, $T_p(E)$ has a distinguished line on which
an inertia group at $p$ in $G_\mathbf Q$ acts via the cyclotomic character.
Call this line $\mathrm {Fil}_p(T_p(E))$ and define the
ordinary condition at $p$ to be the image of
$$
H^1(L,\mathrm {Fil}_p(T_p(E)))\map{}H^1(L,T_p(E))
$$
for any finite extension $L/\mathbf Q_p$, and similarly for $E[p^k]$ and
$E[p^\infty]$.
\begin{Lem}
For any $\mathfrak l\in\mathfrak L_k$ the module
$$
\varprojlim_m H^1_\mathrm{unr}(D_{m,\mathfrak l}, E[p^k]) \stackrel{\mathrm{def}}{=}
\varprojlim_m \bigoplus_{w\mid\mathfrak l} H^1_\mathrm{unr}(D_{m,w}, E[p^k])
$$
is free of rank one over $\Lambda/p^k\Lambda$, and the same is true
with $\mathrm{unr}$ replaced by $\mathrm {ord}$.
\end{Lem}
\begin{proof}
Since $\mathfrak l$ splits completely in $D_\infty$, Shapiro's lemma gives
an isomorphism
$$
\varprojlim_m \bigoplus_{w\mid\mathfrak l} H^1(D_{m,w}, E[p^k])
\cong H^1(K_\mathfrak l, E[p^k]\otimes\Lambda) \cong H^1(K_\mathfrak l, E[p^k])\otimes\Lambda.
$$
This, together with Lemma \ref{local freeness}, gives the claim.
\end{proof}
We define the Selmer groups
$$
\mathcal{S}(D_\infty,T_p(E)) \subset \varprojlim H^1(D_m,T_p(E))
\hspace{1cm}
\mathrm {Sel}(D_\infty , E[p^\infty]) \subset \varinjlim H^1(D_m,E[p^\infty])
$$
to be the classes which are ordinary at the primes dividing $pN^-$
and unramified at all other primes, and abbreviate
$$
\mathcal{S}=\mathcal{S}(D_\infty,T_p(E))
\hspace{1cm}
X=\mathrm{Hom}\big(\mathrm {Sel}(D_\infty , E[p^\infty]),\mathbf Q_p/\mathbf Z_p\big).
$$
For any $\mathfrak n\in\mathfrak N_k$, let
$$
\mathcal{S}_\mathfrak n(D_\infty, E[p^k])\subset \varprojlim_m H^1(D_m, E[p^k])
$$
be the $\Lambda$-submodule of classes which are ordinary at the
primes dividing $\mathfrak n pN^-$, and unramified at all other primes.
\subsection{Euler systems over $\Lambda$}
\label{es lambda section}
\begin{Def}
Given $\mathfrak n\in\mathfrak N_1$, let $n$ be the positive integer satisfying $n\mathcal {O}_K=\mathfrak n$.
We say that $\mathfrak n$ is \emph{definite} if $\epsilon(nN^-)=-1$,
and is \emph{indefinite} if $\epsilon(nN^-)=1$. Let
$\mathfrak N^\mathrm {definite}_k\subset \mathfrak N_k$ be the subset of definite products, and
define $\mathfrak N^\mathrm {indefinite}_k$ similarly.
\end{Def}
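Note that every $\mathfrak l\in\mathfrak L_k$ is a degree two prime, so the rational prime below it is
inert in $K$; multiplying $\mathfrak n$ by a single prime of $\mathfrak L_k$ therefore changes the sign
of $\epsilon(nN^-)$ and interchanges definite and indefinite, in parallel with the dichotomy
between $\mathfrak N^\mathrm {even}$ and $\mathfrak N^\mathrm {odd}$ of \S \ref{S:Euler Systems}.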
Suppose that for every $k>0$ we are given families
\begin{equation}\label{lambda es}
\{\kappa_\mathfrak n\in \mathcal{S}_\mathfrak n(D_\infty, E[p^k])
\mid \mathfrak n\in\mathfrak N_k^\mathrm {indefinite} \}
\hspace{1cm}
\{\lambda_\mathfrak n\in \Lambda/p^k\Lambda
\mid \mathfrak n\in\mathfrak N_k^\mathrm {definite} \}
\end{equation}
which, as $k$ varies, are compatible with the
inclusion $\mathfrak N_{k+1}\subset\mathfrak N_k$
and the natural maps $\Lambda/p^{k+1}\Lambda\map{}\Lambda/p^k\Lambda$
and $E[p^{k+1}]\map{p}E[p^k]$.
Assume that these classes
satisfy the first and second reciprocity laws:
\begin{enumerate}
\item for any $\mathfrak n\mathfrak l\in \mathfrak N^\mathrm {indefinite}_k$ there is an isomorphism
of $\Lambda$-modules
$$
\varprojlim_m H^1_\mathrm {ord}(D_{m,\mathfrak l}, E[p^k])\cong \Lambda/p^k\Lambda
$$
taking $\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n\mathfrak l})$ to $\lambda_\mathfrak n$;
\item for any $\mathfrak n\mathfrak l\in \mathfrak N^\mathrm {definite}_k$ there is an isomorphism
of $\Lambda$-modules
$$
\varprojlim_m H^1_\mathrm{unr}(D_{m,\mathfrak l}, E[p^k])\cong \Lambda/p^k\Lambda
$$
taking $\mathrm {loc}_\mathfrak l(\kappa_{\mathfrak n})$ to $\lambda_{\mathfrak n\mathfrak l}$.
\end{enumerate}
Since the empty product lies in $\mathfrak N_k$
for every $k$, we obtain a distinguished element
\begin{equation}\label{senator}
\begin{array}{cll}
{\lambda^{\infty}}\in\Lambda & \mathrm{if} & \epsilon(N^-)=-1\\
{\kappa^\infty}\in\mathcal{S} &
\mathrm{if } & \epsilon(N^-)=1
\end{array}
\end{equation}
defined as the inverse limit of $\lambda_1$ or $\kappa_1$ as $k$ varies.
\begin{Lem}\label{torsion-free}
The $\Lambda$-module $\mathcal{S}$
is torsion free.
\end{Lem}
\begin{proof}
As the torsion subgroup
of $E(D_\infty)$ is finite (since $D_\infty$ has primes of finite residue
degree), $H^0(D_\infty, T_p(E))=0$ and the claim follows from
\cite[Lemma 1.3.3]{pr95}.
\end{proof}
The following theorem will be proved in \S \ref{mc proof}.
\begin{Thm}\label{abstract mc}
Assume that the special element (\ref{senator}) is nonzero and
let $X_{\Lambda-\mathrm{tors}}$
denote the torsion submodule of $X$.
\begin{enumerate}
\item
One has the rank formulas
$$
\mathrm{rank}_\Lambda\mathcal{S}=\mathrm{rank}_\Lambda X
=\left\{\begin{array}{ll} 0 & \mathrm{if\ }\epsilon(N^-)=-1\\
1 & \mathrm{if\ }\epsilon(N^-)=1.
\end{array}\right.
$$
\item
For any height one prime $\mathfrak P$ of $\Lambda$ one has
\begin{equation*}
\mathrm {ord}_\mathfrak P\big(\mathrm{char}(X_{\Lambda-\mathrm{tors}})\big) \le
2\cdot \left\{\begin{array}{ll} \mathrm {ord}_\mathfrak P(\lambda^\infty)
& \mathrm{if\ }\epsilon(N^-)=-1\\
\mathrm {ord}_\mathfrak P\big(\mathrm{char}(\mathcal{S}/\Lambda\kappa^\infty)\big)
& \mathrm{if\ }\epsilon(N^-)=1.
\end{array}\right.
\end{equation*}
\item
Equality holds in (b) if the following condition is satisfied:
there exists a $k_0$ such that for all
$j\ge k_0$ the set
$$
\{\lambda_\mathfrak n\in \Lambda/p^j\Lambda \mid \mathfrak n\in \mathfrak N_j^\mathrm {definite} \}
$$
contains an element with nontrivial image in $\Lambda/(\mathfrak P,p^{k_0})$.
\end{enumerate}
\end{Thm}
\subsection{Reduction at a height one prime}
Set $V_p(E)=T_p(E)\otimes\mathbf Q_p$, so that we have the exact sequence
\begin{equation}\label{short exact}
0\map{}T_p(E)\map{}V_p(E)\map{}E[p^\infty]\map{}0.
\end{equation}
Fix $\mathfrak P\not=p\Lambda$ a height-one prime of $\Lambda$,
and denote by $\mathcal {O}_\mathfrak P$ the integral closure of $\Lambda/\mathfrak P$.
The ring $\mathcal {O}_\mathfrak P$ is the ring of
integers of a finite extension $\Phi_\mathfrak P/\mathbf Q_p$, and we denote by
$\mathfrak m_\mathfrak P$ its maximal ideal.
By tensoring (\ref{short exact}) with $\mathcal {O}_\mathfrak P$ (viewed as a $G_K$-module
via the natural map $G_K\map{}\Lambda^\times$),
we obtain an exact sequence of $\mathcal {O}_\mathfrak P[[G_K]]$-modules
\begin{equation}\label{twisted short exact}
0\map{}T_\mathfrak P\map{}V_\mathfrak P\map{}W_\mathfrak P\map{}0.
\end{equation}
For any prime $\mathfrak q$ of $K$ and any $M$ among $T_\mathfrak P$, $V_\mathfrak P$, or $W_\mathfrak P$,
we define a submodule
$$
H^1_{\mathcal F_\mathfrak P}(K_\mathfrak q,M)\subset H^1(K_\mathfrak q,M)
$$
as follows. First suppose $M=V_\mathfrak P$. If $\mathfrak q\nmid pN^-$ then
$H^1_{\mathcal F_\mathfrak P}(K_\mathfrak q,V_\mathfrak P)$ is the module of unramified cohomology classes
$H^1_\mathrm{unr}(K_\mathfrak q,V_\mathfrak P)$.
If $\mathfrak q\mid pN^-$ then $H^1_{\mathcal F_\mathfrak P}(K_\mathfrak q,V_\mathfrak P)$ is defined to be
the image of
$$
H^1(K_\mathfrak q, \mathrm {Fil}_\mathfrak q(T_p(E))\otimes \Phi_\mathfrak P)\map{} H^1(K_\mathfrak q,V_\mathfrak P).
$$
If $M=T_\mathfrak P$ or $W_\mathfrak P$, then $H^1_{\mathcal F_\mathfrak P}(K_\mathfrak q,M)$ is obtained
from $H^1_{\mathcal F_\mathfrak P}(K_\mathfrak q,V_\mathfrak P)$ by propagation, in the sense of
Remark \ref{propagation}.
These local submodules define global Selmer groups which we denote
by $\mathrm {Sel}_{\mathcal F_\mathfrak P}(K,M)$.
\begin{Prop}\label{control}
Shapiro's lemma, the natural map
$T_p(E)\otimes\Lambda\map{}T_\mathfrak P$, and its dual induce maps
\begin{eqnarray*}
\mathcal{S}/\mathfrak P\mathcal{S} \to \mathrm {Sel}_{\mathcal F_\mathfrak P}(K,T_\mathfrak P)
\hspace{1cm}
\mathrm {Sel}_{\mathcal F_\mathfrak P}(K,W_\mathfrak P) \to \mathrm {Sel}(D_\infty, E[p^\infty])[\mathfrak P].
\end{eqnarray*}
The first map is injective.
There is a finite set of height one primes $\Sigma_\Lambda$ of $\Lambda$
such that if $\mathfrak P\not\in\Sigma_\Lambda$, then these maps have
finite kernels and cokernels whose orders are bounded by a constant depending
only on $[\mathcal {O}_\mathfrak P:\Lambda/\mathfrak P]$ and not on $\mathfrak P$ itself.
\end{Prop}
\begin{proof}
The proof requires only minor modifications from that of
\cite[Proposition 5.3.14]{mazur-rubin}.
One must first prove a local control theorem at
each prime $\mathfrak q$ of $K$. If $\mathfrak q\nmid pN^-$ this local result is
\cite[Lemma 5.3.13]{mazur-rubin}. If $\mathfrak q\mid p$ the desired result is
\cite[Lemma 2.2.7]{me}. The case $\mathfrak q\mid N^-$ is similar to the latter,
but is greatly simplified by the fact that such
$\mathfrak q$ split completely in $D_\infty$.
With the local control results in hand, the remainder of the proof follows
that of \cite[Proposition 5.3.14]{mazur-rubin} verbatim.
\end{proof}
\begin{Lem}\label{injective reduction}
Abbreviate $\mathcal{S}_\mathfrak P= \mathrm {Sel}_{\mathcal F_\mathfrak P}(K,T_\mathfrak P)$.
The natural map
$$
\mathcal{S}_\mathfrak P/p^k\mathcal{S}_\mathfrak P\map{}\mathrm {Sel}_{\mathcal F_\mathfrak P}(K,T_\mathfrak P/p^k T_\mathfrak P)
$$
is injective, where the Selmer structure on $T_\mathfrak P/p^k T_\mathfrak P$
is obtained from the Selmer structure on $T_\mathfrak P$ by propagation
(Remark \ref{propagation}).
\end{Lem}
\begin{proof}
This is Lemma 3.7.1 of \cite{mazur-rubin}.
\end{proof}
For any pair of positive integers $k\le j$, set
$$
\delta_\mathfrak P(k,j)=
\min\{ \mathrm {ind}(\lambda_\mathfrak n, \mathcal {O}_\mathfrak P/p^k\mathcal {O}_\mathfrak P) \mid \mathfrak n\in
\mathfrak N_j^\mathrm {definite}\}\le \infty.
$$
As $\delta_\mathfrak P(k,j)\le\delta_\mathfrak P(k,j+1)$, we may define
$\delta_\mathfrak P(k)=\lim_{j\to\infty}\delta_\mathfrak P(k,j)$.
\begin{Prop}\label{dvr bound}
If $\epsilon(N^-)=-1$ and $\lambda^\infty\in\Lambda$ has
nontrivial image in $\mathcal {O}_\mathfrak P/p^k\mathcal {O}_\mathfrak P$ then
$$
\mathrm {length}_{\mathcal {O}_\mathfrak P}\big(\mathrm {Sel}_{\mathcal F_\mathfrak P}(K,W_\mathfrak P)\big)
+2\delta_\mathfrak P(k)= 2\cdot \mathrm {length}_{\mathcal {O}_\mathfrak P}(\mathcal {O}_\mathfrak P/\mathcal {O}_\mathfrak P\lambda^\infty).
$$
If $\epsilon(N^-)=1$ and $\kappa^\infty$ has nontrivial image in
$\mathcal{S}_\mathfrak P/p^k\mathcal{S}_\mathfrak P$ then
\begin{enumerate}
\item $\mathcal{S}_\mathfrak P$ is free of rank one over $\mathcal {O}_\mathfrak P$,
\item $\mathrm {Sel}_{\mathcal F_\mathfrak P}(K,W_\mathfrak P)$ has $\mathcal {O}_\mathfrak P$-corank one, and
\item
$
\mathrm {length}_{\mathcal {O}_\mathfrak P}\big(\mathrm {Sel}_{\mathcal F_\mathfrak P}(K,W_\mathfrak P)_{/\mathrm{div}}\big)
+2\delta_\mathfrak P(k)=
2\cdot\mathrm {length}_{\mathcal {O}_\mathfrak P}\big(\mathcal{S}_\mathfrak P/
\mathcal {O}_\mathfrak P\kappa^\infty\big)
$
(the subscript $/\mathrm{div}$ indicates the quotient by the
maximal $\mathcal {O}_\mathfrak P$-divisible submodule).
\end{enumerate}
\end{Prop}
\begin{proof}
Let $k$ be as in the statement of the proposition.
For any $j\ge k$, abbreviate
$$
T_j=T_{\mathfrak P}/p^j T_\mathfrak P \hspace{1cm}
R_j=\mathcal {O}_{\mathfrak P}/p^j\mathcal {O}_\mathfrak P.
$$
Let $\mathcal F$ denote the Selmer structure on $T_j$
obtained by propagation (Remark \ref{propagation}) of $\mathcal F_\mathfrak P$
from $T_\mathfrak P$, and use the same notation for the Selmer structure
on $W_\mathfrak P[p^j]$ propagated from $\mathcal F_\mathfrak P$ on $W_\mathfrak P$.
By applying the reduction maps $\Lambda/p^j \map{} R_j$ and
$$
\varprojlim_m H^1(D_m, E[p^j])\cong H^1(K, E[p^j]\otimes\Lambda)
\map{}H^1(K,T_j)
$$
to the Euler system (\ref{lambda es}), we obtain families
\begin{equation}\label{reduced euler system}
\{\overline{\kappa}_{\mathfrak n} \in
\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T_j)\mid \mathfrak n\in \mathfrak N_j^\mathrm {indefinite}\}
\hspace{.7cm}
\{\overline{\lambda}_{\mathfrak n} \in R_j \mid \mathfrak n\in \mathfrak N_j^\mathrm {definite}\}
\end{equation}
By assumption (and Lemma \ref{injective reduction}),
$\overline{\kappa}_{1}$ or $\overline{\lambda}_{1}$
(depending on whether
$\epsilon(N^-)=1$ or $-1$) is nontrivial, where $1\in\mathfrak N_j$ is the
empty product.
A choice of uniformizer of $\mathcal {O}_\mathfrak P$ determines an isomorphism
$T_j\cong W_\mathfrak P[p^j]$, and under such an isomorphism the
Selmer structures $\mathcal F$ are identified (as in the proof of Lemma 1.3.8(i)
of \cite{rubin}). In particular
\begin{equation}\label{important identification}
\mathrm {Sel}_{\mathcal F}(K,T_j)\cong\mathrm {Sel}_{\mathcal F}(K,W_\mathfrak P[p^j])
\cong \mathrm {Sel}_{\mathcal F_\mathfrak P}(K,W_\mathfrak P)[p^j],
\end{equation}
where the second isomorphism follows from Lemma \ref{subquotients}
and the following
\begin{Lem}\label{first hypotheses}
The triple $(T_j,\mathcal F,\mathfrak L_{j})$ satisfies
Hypotheses \ref{cartesian} and \ref{useful primes}, as well as
the hypotheses of \S \ref{variant}.
More precisely the following hold.
\begin{enumerate}
\item $T_j$ is residually an absolutely irreducible $G_K$-module.
\item For any $\mathfrak l\in\mathfrak L_j$, the Frobenius at $\mathfrak l$ acts on
$T_j$ with eigenvalues $\mathbf{N}(\mathfrak l)$ and $1$.
\item There is a perfect $R_j$-bilinear symmetric pairing
$T_j\times T_j\map{}R_j(1)$
satisfying $(s^\sigma,t^{\tau\sigma\tau})=(s,t)^\sigma$
for any $\sigma\in G_K$.
\item The Selmer structure $\mathcal F$ is
cartesian in the sense of Definition \ref{cartesian def}, and is self-dual
in the sense of \S \ref{variant}, relative to the pairing
above.
\item
The set $\mathfrak L_{j}$ satisfies Hypothesis \ref{useful primes}.
\end{enumerate}
\end{Lem}
\begin{proof}
Since $\mathfrak l$ splits completely in $D_\infty$, there is an
isomorphism of Galois modules
$T_\mathfrak P \cong T_p(E)\otimes \mathcal {O}_\mathfrak P$ with $G_{K_\mathfrak l}$
acting \emph{trivially} on $\mathcal {O}_\mathfrak P$.
In particular, the residual representation of $T_\mathfrak P$
is absolutely irreducible since $E[p]$ is (by Hypothesis
\ref{irreducible hyp}), and property (b) is immediate from the definition
of a $k$-admissible prime.
Define an $\mathcal {O}_\mathfrak P(1)$-valued pairing on $T_\mathfrak P$ by the rule
$$(x\otimes\alpha,y\otimes\beta)_\mathfrak P=\alpha\beta\cdot (x,y^\tau),$$
where $(\ ,\ )$ is the Weil pairing on $T_p(E)$. The reduction
of this pairing modulo $p^j$ defines the pairing of (c).
The cartesian property of (d) is a consequence of the
fact that the Selmer structure $\mathcal F_\mathfrak P$ on $T_\mathfrak P$ is obtained
by propagation from $V_\mathfrak P$; see \cite[Lemma 3.7.1]{mazur-rubin}.
The self-duality follows from this and the self-duality of the local
conditions defining the canonical Selmer structure on $V_\mathfrak P$.
Part (e) is \cite[Theorem 3.2]{BD03}.
\end{proof}
\begin{Lem}
The decomposition
$\mathfrak N_j=\mathfrak N_j^\mathrm {odd}\sqcup\mathfrak N_j^\mathrm {even}$ (relative to the data
$T$, $\mathcal F$, $\mathfrak L_j$)
of Definition \ref{stub} is given by
\begin{equation}\label{parity decomp}
\mathfrak N_j^\mathrm {odd}=\mathfrak N_j^\mathrm {indefinite}\hspace{1cm}\mathfrak N_j^\mathrm {even}=\mathfrak N_j^\mathrm {definite}.
\end{equation}
Furthermore, the families (\ref{reduced euler system})
form an Euler system of odd type for $(T_j,\mathcal F,\mathfrak L_{j})$.
\end{Lem}
\begin{proof}
First note that either (\ref{parity decomp}) holds or the opposite relation
$$
\mathfrak N_j^\mathrm {even}=\mathfrak N_j^\mathrm {indefinite}\hspace{1cm}\mathfrak N_j^\mathrm {odd}=\mathfrak N_j^\mathrm {definite}
$$
holds (simply because the even/odd decomposition of $\mathfrak N_j$
is determined by the function $\rho(\mathfrak n)$ of Corollary \ref{rho}, the
definite/indefinite decomposition is determined by $\epsilon(\mathfrak n)$,
and both functions are multiplied by $-1$ when one replaces $\mathfrak n$
by $\mathfrak n\mathfrak l$.) The reciprocity laws of \S \ref{es lambda section}
imply that the reduced families (\ref{reduced euler system})
satisfy the reciprocity laws of Definition \ref{es},
and so this family forms a nonzero Euler system which is of odd
type if (\ref{parity decomp}) holds, and is of even type otherwise.
By Proposition \ref{no even es}, the Euler system must be
of odd type, so (\ref{parity decomp}) holds.
\end{proof}
The Euler system of the lemma for $(T_k, \mathcal F, \mathfrak L_k)$ may not be free,
but this can be remedied by shrinking the set of indexing primes $\mathfrak L_k$
slightly.
\begin{Lem}\label{free es}
For any $j\ge 2k$ the families
$$
\{\overline{\kappa}_{\mathfrak n} \in
\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T_k)\mid \mathfrak n\in \mathfrak N_j^\mathrm {indefinite}\}
\hspace{.7cm}
\{\overline{\lambda}_{\mathfrak n} \in R_k \mid \mathfrak n\in \mathfrak N_j^\mathrm {definite}\}
$$
form a free Euler system of odd type for $(T_k,\mathcal F, \mathfrak L_j)$.
\end{Lem}
\begin{proof}
Fix $\mathfrak n\in\mathfrak N_j^\mathrm {odd}=\mathfrak N_j^\mathrm {indefinite}$ and $j\ge 2k$. We must show that
there is a free rank one $R_k$-submodule of $\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T_k)$
containing $\overline{\kappa}_\mathfrak n$. By Proposition \ref{structure}
we may decompose
$$
\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T_j)\cong R_j\oplus N\oplus N
\hspace{1cm}
\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T_k)\cong R_k\oplus M\oplus M.
$$
Let $\mathfrak m$ be the maximal ideal of $\mathcal {O}_\mathfrak P$ and fix a uniformizer $\pi$.
Let $e$ be the ramification degree of $\mathcal {O}_\mathfrak P$, so that $R_k$ has length
$ek$. If $\mathfrak m^{ek-1}M\not=0$ then $\overline{\kappa}_\mathfrak n=0$
by Proposition \ref{annihilation}, and there is nothing to prove.
Assume therefore that $\mathfrak m^{ek-1}M=0$.
Lemma \ref{subquotients} gives a commutative diagram
$$
\xymatrix{
{\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T_j) \ar[d]\ar[rr]^{\pi^{e(j-k)}}}
& & {\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T_j)[\mathfrak m^{ek}]} \\
{\mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T_k). \ar[rru]^\cong}
}
$$
The diagonal isomorphism implies that
$\mathfrak m^{ek-1}\cdot \mathrm {Sel}_{\mathcal F(\mathfrak n)}(K,T_j)[\mathfrak m^{ek}]$
is a cyclic module, and so
$\mathfrak m^{ek-1}N=0$. But $j\ge 2k$ then implies that the image of $N$ under the
vertical arrow is zero, and hence the image of the vertical arrow
is free of rank one. Since $\overline{\kappa}_\mathfrak n$ is contained in this
image, the claim is proved.
\end{proof}
Now fix $j\ge 2k$.
Since $\mathfrak N_j^\mathrm {even}=\mathfrak N_j^\mathrm {definite}$, the empty product lies in $\mathfrak N_j^\mathrm {even}$
if and only if $\epsilon(N^-)=-1$. If this is the case, then
applying Theorem \ref{rigidity} with $\mathfrak n=1$ tells us that
$\mathrm {Sel}_{\mathcal F}(K,T_k)\cong M\oplus M$
with
$$
\mathrm {length}_{\mathcal {O}_\mathfrak P}(M)+\delta_\mathfrak P(k,j) = \mathrm {ind}(\overline{\lambda}_1, R_k)
=\mathrm {ind}(\lambda^\infty, \mathcal {O}_\mathfrak P/p^k\mathcal {O}_\mathfrak P).
$$
In particular, since the right hand side is $<k$,
(\ref{important identification}) implies that
$M\oplus M\cong \mathrm {Sel}_{\mathcal F_\mathfrak P}(K,W_\mathfrak P)$. We conclude
$$
\mathrm {length}_{\mathcal {O}_\mathfrak P}(\mathrm {Sel}_{\mathcal F_\mathfrak P}(K,W_\mathfrak P))+2\cdot\delta_\mathfrak P(k,j)=
2\cdot \mathrm {length}_{\mathcal {O}_\mathfrak P}(\mathcal {O}_\mathfrak P/\mathcal {O}_\mathfrak P\lambda^\infty).
$$
Now consider the case $\epsilon(N^-)=1$. First note that
Theorem \ref{rigidity} (again with $\mathfrak n=1$) tells us that
$\mathrm {Sel}_{\mathcal F}(K,T_k)\cong R_k\oplus M\oplus M$
with
$$
\mathrm {length}_{\mathcal {O}_\mathfrak P}(M) +\delta_\mathfrak P(k,j)=
\mathrm {ind}\big(\overline{\kappa}_1, \mathrm {Sel}_\mathcal F(K,T_k)\big).
$$
As above, this implies that $\mathrm {length}_{\mathcal {O}_\mathfrak P}(M)<k$.
Combining this with (\ref{important identification}) tells us that
$
\mathcal{S}_\mathfrak P\cong\varprojlim_k \mathrm {Sel}_{\mathcal F_\mathfrak P}(K,W_\mathfrak P)[p^k]
$
is a torsion-free rank-one $\mathcal {O}_\mathfrak P$-module.
By \cite[Lemma 3.7.1]{mazur-rubin} the reduction map
$$
\mathcal{S}_\mathfrak P/p^k\mathcal{S}_\mathfrak P\map{}\mathrm {Sel}_\mathcal F(K,T_k)
$$
is injective, and it follows from Theorem \ref{rigidity} that
\begin{eqnarray*}
\mathrm {length}_{\mathcal {O}_\mathfrak P}\big(\mathrm {Sel}_{\mathcal F_\mathfrak P}(K,W_\mathfrak P)_{/\mathrm{div}}\big)
+2\delta_\mathfrak P(k,j)
&=& \mathrm {length}_{\mathcal {O}_\mathfrak P}(M\oplus M) +2\delta_\mathfrak P(k,j)\\
&=& 2\cdot \mathrm {ind}(\overline{\kappa}_1, \mathrm {Sel}_\mathcal F(K,T_k)) \\
&=&2\cdot \mathrm {length}_{\mathcal {O}_\mathfrak P}\big(\mathcal{S}_\mathfrak P/\mathcal {O}_\mathfrak P\kappa^\infty\big).
\end{eqnarray*}
Now take $j\to\infty$. This completes the proof of Proposition \ref{dvr bound}.
\end{proof}
\subsection{Proof of Theorem \ref{abstract mc}}
\label{mc proof}
The theorem is reduced to Proposition \ref{dvr bound} exactly as in
the proof of Theorem 5.3.10 of \cite{mazur-rubin}.
Assume that $\kappa^\infty$ or $\lambda^\infty$ is nonzero,
depending on whether we are in the case $\epsilon(N^-)=1$ or $-1$.
Since $\mathcal{S}$ is a finitely generated torsion-free $\Lambda$-module
(Lemma \ref{torsion-free}), if $\epsilon(N^-)=1$
it is easily seen that the image
of $\kappa^\infty$ in $\mathcal{S}/\mathfrak P\mathcal{S}$ is nonzero
for all but finitely many height-one primes $\mathfrak P$. Similar comments
hold for $\lambda^\infty$ when $\epsilon(N^-)=-1$.
Fix a finite set $\Sigma_\Lambda$ of height one primes of $\Lambda$
as in Proposition \ref{control} large enough that $\Sigma_\Lambda$
contains $p\Lambda$ and all prime divisors of the characteristic ideal
of the torsion submodule of $X$, and
large enough that the special element (\ref{senator}) has nonzero image in
$\mathcal{S}/\mathfrak P\mathcal{S}$ or $\Lambda/\mathfrak P\Lambda$
for all $\mathfrak P\not\in\Sigma_\Lambda$.
Fix any $\mathfrak P\not\in\Sigma_\Lambda$ and suppose $\epsilon(N^-)=1$.
By Proposition \ref{control}, $\kappa^\infty$ has nonzero image in
$\mathrm {Sel}_{\mathcal F_\mathfrak P}(K,T_\mathfrak P)$. Proposition \ref{dvr bound} then implies that
$\mathrm {Sel}_{\mathcal F_\mathfrak P}(K,T_\mathfrak P)$ and $\mathrm {Sel}_{\mathcal F_\mathfrak P}(K,W_\mathfrak P)$ have rank and corank
one (respectively) as $\mathcal {O}_\mathfrak P$-modules. It now follows from Proposition
\ref{control} that
$$
\mathrm{rank}_\Lambda\mathcal{S}=\mathrm{rank}_{\mathcal {O}_\mathfrak P}
(\mathcal{S}\otimes_\Lambda\mathcal {O}_\mathfrak P)=1
$$
and similarly for $X$. The case $\epsilon(N^-)=-1$ is
similar, and this completes the proof of (a).
Let $\mathfrak P$ be any height-one prime of $\Lambda$ different from $p\Lambda$,
and let $f\in\Lambda$ be a distinguished polynomial which generates $\mathfrak P$.
For each positive integer $m$ set $\mathfrak P_m=(f+p^m)\Lambda$. For
$m\gg 0$, $\mathfrak P_m$ is a prime ideal $\not\in\Sigma_\Lambda$ with
$\Lambda/\mathfrak P\cong\Lambda/\mathfrak P_m$ as rings (by Hensel's lemma).
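(For instance, as an illustrative normalization, write $T=\gamma-1$ for a generator $\gamma$ of $\Gamma$, so that $\Lambda\cong\mathbf Z_p[[T]]$ in the usual way; taking $f=T$ gives $\mathfrak P=T\Lambda$ and $\mathfrak P_m=(T+p^m)\Lambda$, and both quotients are identified with $\mathbf Z_p$ by sending $T$ to $0$ and to $-p^m$, respectively.)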
Arguing as in the proof of Theorem 5.3.10 of \cite{mazur-rubin} and using
Proposition \ref{control}, we obtain
$$
\mathrm {length}_{\mathbf Z_p}\big( \mathrm {Sel}_{\mathcal F_{\mathfrak P_m}}
(K, W_{\mathfrak P_m})_{/\mathrm{div}}\big)
= m\ \mathrm{rank}_{\mathbf Z_p}(\mathcal {O}_\mathfrak P) \cdot
\mathrm {ord}_\mathfrak P\big(\mathrm{char}(X_{\Lambda-\mathrm{tors}})\big)
$$
up to $O(1)$ as $m$ varies.
Similarly, writing
$\mathcal{S}_{\mathfrak P_m}=\mathrm {Sel}_{\mathcal F_{\mathfrak P_m}}(K,T_{\mathfrak P_m})$,
\begin{eqnarray*}
\mathrm {length}_{\mathbf Z_p}(\mathcal{S}_{\mathfrak P_m}/\mathcal {O}_{\mathfrak P_m}\kappa^\infty)
&=&
m\ \mathrm{rank}_{\mathbf Z_p}(\mathcal {O}_\mathfrak P) \cdot
\mathrm {ord}_\mathfrak P\big(\mathrm{char}(\mathcal{S}/\Lambda \kappa^\infty)\big)
\\
\mathrm {length}_{\mathbf Z_p}(\mathcal {O}_{\mathfrak P_m}/\mathcal {O}_{\mathfrak P_m}\lambda^\infty)
&=&
m\ \mathrm{rank}_{\mathbf Z_p}(\mathcal {O}_\mathfrak P) \cdot
\mathrm {ord}_\mathfrak P(\lambda^\infty)
\end{eqnarray*}
when $\epsilon(N^-)=1$ or $-1$, respectively, up to $O(1)$ as
$m$ varies.
Proposition \ref{dvr bound} (with $k\gg 0$) gives the inequality
\begin{eqnarray*}
\mathrm {length}_{\mathbf Z_p}\big( \mathrm {Sel}_{\mathcal F_{\mathfrak P_m}}(K, W_{\mathfrak P_m})_{/\mathrm{div}}\big)
+2e\delta_{\mathfrak P_m}(k)
&=& 2\cdot
\mathrm {length}_{\mathbf Z_p}(\mathcal{S}_{\mathfrak P_m}/\mathcal {O}_{\mathfrak P_m}\kappa^\infty)\\
\mathrm {length}_{\mathbf Z_p}\big( \mathrm {Sel}_{\mathcal F_{\mathfrak P_m}}(K, W_{\mathfrak P_m})\big)
+2e\delta_{\mathfrak P_m}(k)
&=& 2\cdot
\mathrm {length}_{\mathbf Z_p}(\mathcal {O}_{\mathfrak P_m}/\mathcal {O}_{\mathfrak P_m}\lambda^\infty)
\end{eqnarray*}
(again, when $\epsilon(N^-)=1$ or $-1$, respectively)
where $e$ is the absolute ramification degree of $\mathcal {O}_{\mathfrak P_m}$,
which is independent of $m$.
As $\delta_{\mathfrak P_m}(k)\ge 0$, letting $m\to\infty$ proves the
inequality of (b) when $\mathfrak P\not=p\Lambda$.
We show that under the additional hypothesis of (c)
the value of $\delta_{\mathfrak P_m}(k)$ is bounded as $m$ and $k$ vary.
For every $j\ge k_0$ let $\mathfrak n(j)\in\mathfrak N_j^\mathrm {definite}$ be such that
$\lambda_{\mathfrak n(j)}$ has nonzero image in $\Lambda/(\mathfrak P,p^{k_0})$.
Then $\lambda_{\mathfrak n(j)}$ has nontrivial image in
$\Lambda/(\mathfrak P_m,p^{k_0})$ for all $m\ge k_0$.
Define $C_m$ to be the cokernel of $\Lambda/\mathfrak P_m\hookrightarrow\mathcal {O}_{\mathfrak P_m}$.
The groups $C_m$ are finite, and up to isomorphism do not depend on $m$.
If $k_1$ is large enough that $p^{k_1-k_0}$ kills $C_m$, then we
have the exact and commutative diagram
$$
\xymatrix{
{C_m[p^{k_1}]\ar[r]\ar[d]^0} & {\Lambda/(\mathfrak P_m,p^{k_1})\ar[r]\ar[d]} &
{\mathcal {O}_{\mathfrak P_m}/p^{k_1}\mathcal {O}_{\mathfrak P_m}\ar[d]} \\
{C_m[p^{k_0}]\ar[r]} & {\Lambda/(\mathfrak P_m,p^{k_0})\ar[r]} &
{\mathcal {O}_{\mathfrak P_m}/p^{k_0}\mathcal {O}_{\mathfrak P_m}}.
}
$$
It follows that $\lambda_{\mathfrak n(j)}$ has nontrivial image in
$\mathcal {O}_{\mathfrak P_m}/p^{k_1}\mathcal {O}_{\mathfrak P_m}$ for all $j\ge k_1$.
For $j\ge k\ge k_1$ we then have
$$
\delta_{\mathfrak P_m}(k,j)\le \mathrm {ind}(\lambda_{\mathfrak n(j)}, \mathcal {O}_{\mathfrak P_m}/p^{k}\mathcal {O}_{\mathfrak P_m})
< ek_1,
$$
hence $\delta_{\mathfrak P_m}(k)<ek_1$ for all $k\ge k_1$
and any $m\ge k_0$.
Finally, if $\mathfrak P=p\Lambda$, one instead takes
$\mathfrak P_m=\big((\gamma-1)^m+p\big)\Lambda$ for some generator $\gamma\in\Gamma$
and a similar argument holds. This completes the proof of
Theorem \ref{abstract mc}.
\bibliographystyle{plain}
|
1,314,259,995,582 | arxiv | \section{Introduction}
Open quantum dynamics---the study of the evolution of quantum systems interacting with an environment---has far-reaching theoretical and experimental importance. It is fundamental in the study of quantum thermodynamics: since thermalization is a non-unitary process, it requires an environment. Open dynamics is also critical in understanding the noise and decoherence modes ubiquitously present in experimental settings \cite{Nielsen:2000}.
The formalism of Gaussian Quantum Mechanics (GQM) (see, e.g., \cite{GQMRev}) simplifies the treatment of many quantum mechanical problems by making use of the phase space representation of quantum mechanics, focusing on states that can be fully characterized with a Gaussian Wigner function. Such states are theoretically and experimentally relevant, including coherent states, thermal states, and squeezed states. As long as all the relevant transformations preserve this Gaussianity (i.e. take Gaussian states to Gaussian states), GQM provides a significant decrease in the overhead of describing quantum states and transformations. One need only track the system's first and second statistical moments instead of a vector in an infinite dimensional Hilbert space. The literature abounds with reviews on Gaussian quantum mechanics, in particular in its applications to quantum information; the reader is referred to \cite{weedbrook, adesso1, lami}.
In this paper, we consider the dynamics induced in a generic Gaussian system when rapidly bombarded by a series of Gaussian ancillae, a scenario we call \textit{Gaussian ancillary bombardment}. An intuitive example of such a scenario is a harmonic oscillator in a thermal bath of harmonic oscillators.
To study the general scenario, in Sec. \ref{InterpolateGQM} we adapt the rapid repeated interaction formalism developed in \cite{Grimmer2016a,Grimmer2017a} to the Gaussian setting. Specifically, we construct an interpolating master equation for the discrete time dynamics induced by the rapid interactions. In Sec. \ref{AncillaryBombardmentGQM} we apply this adapted formalism to a generic Gaussian ancillary bombardment scenario and analyze the resulting master equation. In this analysis, we make use of the partition of open Gaussian dynamics developed in \cite{ArXivGrimmer2017b} to characterize the dynamics in terms of unitarity, ability to cause energy flow, state-dependence and mode mixing.
Finally, in Sec. \ref{Example}, we apply the tools built in this paper to the problem of understanding thermalization as resulting from the Markovian bombardment of a small system by the microconstituents of a thermal reservoir. We show that if we are to model equilibration and thermalization as resulting from this kind of dynamics then these processes critically depend on the system-environment coupling.
The methods and results we present not only add to a growing understanding of Gaussian open dynamics \cite{koga, nicacio, nicacio2} but also provide tools for investigating
the thermodynamics of systems that are repeatedly disturbed by an environment, particularly with regard to microscopic details connected with the flow of energy and information.
\section{Gaussian Quantum Mechanics}\label{ReviewGQM}
Consider a system composed of $N$ coupled modes (for example, harmonic oscillators) with the $n^{th}$ of these modes characterized by its quadrature operators, $\hat{q}_n$ and $\hat{p}_n$, which obey the canonical bosonic commutation relations,
\begin{equation}
[\hat{q}_n,\hat{q}_m]
=[\hat{p}_n,\hat{p}_m]
=0
\quad\text{and}\quad
[\hat{q}_n,\hat{p}_m]=\mathrm{i} \, \delta_{nm} \, \hat{\openone}.
\end{equation}
Such systems can be fully described in terms of a pseudo-probability distribution defined on the system's phase space \cite{Groenewold,Moyal}. In particular, a state with density matrix $\rho$ can be equivalently represented by its Wigner function,
\begin{equation}
W(\bm{q},\bm{p})=\frac{1}{\pi^N}\!\int_{-\infty}^\infty \d^N \bm{s}
\bra{\bm{q}+\bm{s}}\rho\ket{\bm{q}-\bm{s}}\exp(-2\mathrm{i} \, \bm{p}\cdot\bm{s}).
\end{equation}
Gaussian Quantum Mechanics (GQM) is the restriction of quantum mechanics to the class of states whose Wigner functions are Gaussian and to the class of transformations which preserve this Gaussianity. The following overview of GQM condenses the in-depth review given in \cite{ArXivGrimmer2017b}, in which many of the claims below are spelled out and demonstrated.
The main benefit of this restriction to Gaussian states and transformations is that it allows for a significantly simplified description of quantum states and transformations whilst still describing a wide variety of theoretically and experimentally relevant situations. In particular, a Gaussian distribution is completely determined by its first and second statistical moments. Thus collecting the system's quadrature operators into the vector
\bel{XhatDef}
\hat{\bm{X}}
\coloneqq
(\hat{q}_1,\hat{p}_1,\hat{q}_2,\hat{p}_2,\dots,\hat{q}_N,\hat{p}_N)^\intercal,
\end{equation}
a Gaussian state is fully described by (a) the mean of each of these operators, collected in the $2N$-dimensional mean vector
\bel{XDef}
\bm{X}
\coloneqq\langle\hat{\bm{X}}\rangle
=\big(\langle\hat{q}_1\rangle,\langle\hat{p}_1\rangle,\dots,\langle\hat{q}_N\rangle,\langle\hat{p}_N\rangle\big)^\intercal,
\end{equation}
and (b) by the covariances between them, collected in the $2N$ by $2N$ symmetric covariance matrix
\bel{Vdef}
\sigma_j{}^k
\coloneqq
\big\langle
\hat{X}_j \, \hat{X}^k
+ \hat{X}^k \, \hat{X}_j
\big\rangle
-2\big\langle\hat{X}_j\big\rangle
\big\langle\hat{X}^k\big\rangle.
\end{equation}
Note that any two quadrature operators, say $\hat{X}_j$ and $\hat{X}^k$, will either commute to $\mathrm{i} \, \hat{\openone}$ or to $0$ such that all of the system's commutation relations are captured by the phase space matrix $\Omega$, defined as
\begin{align}\label{OmegaDef}
[\hat{X}_j,\hat{X}^k]
&=\mathrm{i} \ \Omega_j{}^k \, \hat{\openone}.
\end{align}
This matrix, called the symplectic form, is given explicitly as
\bel{OmegaExplicit}
\Omega
=\bigoplus_{n=1}^N \omega
=\openone_N\otimes\omega; \ \ \ \ \omega
=\begin{pmatrix}
0 & 1\\
-1 & 0
\end{pmatrix},
\end{equation}
in the same representation as \eqref{XhatDef}. Note that $\Omega$ is real-valued, antisymmetric, and invertible with \mbox{$\Omega^{-1}=\Omega^T=-\Omega$}.
As in standard quantum mechanics, in GQM the commutation relations underlie the uncertainty principle, which all valid states obey. For Gaussian states the uncertainty principle is \cite{Simon1994},
\bel{SigmaPosCond}
\sigma\geq\mathrm{i} \, \Omega.
\end{equation}
For a matrix $M$, the notation $M\geq 0$ indicates here that $M$ is positive semi-definite. Moreover $M_1\geq M_2$ here means $M_1-M_2\geq0$. The uncertainty bound \eqref{SigmaPosCond} implies that $\sigma\geq0$ (see Sec. II in \cite{ArXivGrimmer2017b}).
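For instance, with the conventions above, a single mode in the ground state of $\hat H=\frac{1}{2}(\hat q^2+\hat p^2)$ has $\langle\hat q\rangle=\langle\hat p\rangle=0$, $\langle\hat q^2\rangle=\langle\hat p^2\rangle=1/2$, and $\langle\hat q\hat p+\hat p\hat q\rangle=0$, so that $\bm{X}=0$ and $\sigma=\openone_2$. This state saturates \eqref{SigmaPosCond}: the Hermitian matrix $\openone_2-\mathrm{i}\,\omega$ has eigenvalues $0$ and $2$, so $\sigma-\mathrm{i}\,\Omega\geq0$ with a vanishing eigenvalue.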
Gaussian unitary transformations are unitary transformations in the system's Hilbert space that preserve the Gaussianity of the state. Differential Gaussian unitary transformations are generated by Hamiltonians that are at most quadratic in the operator vector \cite{Schumaker1986}. Such Hamiltonians can always be cast in the form,
\bel{QuadHamForm}
\hat{H}=\frac{1}{2}\hat{\bm{X}}^\intercal F \, \hat{\bm{X}}
+\bm{\alpha}^\intercal\hat{\bm{X}}.
\end{equation}
where $F$ is a $2N$ by $2N$ real symmetric matrix and $\bm{\alpha}$ is a real-valued $2N$ dimensional vector. From \eqref{QuadHamForm}, one can calculate the evolution of the mean vector, $\bm{X}$, and of the covariance matrix, $\sigma$, as
\begin{align}
\label{SymplecticDiffXUpHam}
\frac{\d}{\d t}\bm{X}(t)
&=\Omega (F \bm{X}(t)+\bm{\alpha}),\\
\label{SymplecticDiffVUpHam}
\frac{\d}{\d t}\sigma(t)
&=(\Omega \, F) \, \sigma(t)
+\sigma(t) \, (\Omega \, F)^\intercal.
\end{align}
For a time-independent Hamiltonian, integrating these equations for a time interval $[0,t]$ gives
\begin{align}
\label{SymplecticXUp}
\bm{X}(0)&\longrightarrow \bm{X}(t)=S(t) \, \bm{X}(0)+\bm{d}(t),\\
\label{SymplecticVUp}
\sigma(0)&\longrightarrow \sigma(t)=S(t) \, \sigma(0) \, S^\intercal(t)
\end{align}
where
\begin{align}
\label{SHamDef}
S(t)&=\text{exp}(\Omega F \, t),\\
\label{dHamDef}
\bm{d}(t)&=\frac{\text{exp}(\Omega F \, t)-\openone_{2N}}{\Omega F} \, \Omega\bm{\alpha}.
\end{align}
Note that \eqref{dHamDef} does not require $\Omega F$ to be invertible. Instead the notation can be understood in terms of the following series expansion
\bel{(ExpX-1)byXDef}
\frac{\text{exp}(X \, t)-\openone}{X}
=\sum_{m=0}^\infty \frac{t^{m+1}}{(m+1)!}X^m.
\end{equation}
More generally, any transformation of the form \eqref{SymplecticXUp} and \eqref{SymplecticVUp} (i.e., with generic $S$ and $\bm{d}$) can be implemented by evolving under a (potentially time-dependent\footnote{Notice that in order to implement a general symplectic transformation a time-dependent generator is generally needed. This follows from the exponential map on the symplectic group not being surjective.}) quadratic Hamiltonian with the sole restriction that it preserves the symplectic form (i.e., the commutation relation) as
\bel{SympTranDef}
S \, \Omega \, S^\intercal=\Omega.
\end{equation}
Such a matrix $S$ implements a symplectic transformation. Together with $\bm{d}$, the update \eqref{SymplecticXUp} and \eqref{SymplecticVUp} constitutes a symplectic-affine transformation. Gaussian unitary transformations on the system's Hilbert space correspond to symplectic-affine transformations on the system's phase space.
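As a simple special case of \eqref{SHamDef}, a single free mode with $F=\nu\,\openone_2$ and $\bm{\alpha}=0$ (i.e., $\hat H=\frac{\nu}{2}(\hat q^2+\hat p^2)$) evolves under
\begin{equation}
S(t)=\exp(\nu\,\omega\,t)
=\begin{pmatrix}
\cos\nu t & \sin\nu t\\
-\sin\nu t & \cos\nu t
\end{pmatrix},
\qquad
\bm{d}(t)=0,
\end{equation}
a rigid rotation of phase space, as expected for harmonic motion.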
\begin{comment}
It is important to note here that not every symplectic transformation can be achieved by such a time-independent quadratic Hamiltonian evolution. Explicitly, there are symplectic matrices $S$ such that
\begin{equation}
S \neq \exp(\Omega \, F)
\end{equation}
for any real symmetric matrix $F$. For example
\bel{ProbS}
S=\begin{pmatrix}
-4 & 0\\ 0 & -1/4\\
\end{pmatrix}
\neq \exp(\Omega \, F)
\end{equation}
If it was then $\sqrt{S}=\exp(\Omega \, F/2)$ would be symplectic as well and in particular would have a real trace. But
\begin{equation}
\text{Tr}(\sqrt{S})=\pm 2 \, \mathrm{i}\pm\mathrm{i}/2
\end{equation}
where the two plus/minuses are independent, which cannot be real. Mathematically, this is just an example of the known fact that the exponential operation is not surjective in the symplectic Lie group.
However, every symplectic matrix $S$ can be written as
\begin{equation}
S = \exp(\Omega \, F_2) \, \exp(\Omega \, F_1)
\end{equation}
for some real symmetric matrices $F_1$ and $F_2$. Find Proof, Polar Decomposition of Symplectic Group. Thus by concatenating two phases of time-independent quadratic Hamiltonian evolution (or more generally by allowing for time-dependence) we can implement any symplectic transformation.
Thus, in the context of Gaussian unitary transformations, we see novel transformations by allowing for time-dependent generators. This is not true in the non-Gaussian context, any unitary transformation can be implemented with a time-independent Hamiltonian. This implies that the problematic symplectic transformations \eqref{ProbS} discussed above, can in fact be implemented by a time-independent Hamiltonian, but this Hamiltonian will not be quadratic, and thus the intermediate states will not be Gaussian. One can imagine that the subspace of unitaries which are Gaussian has corners in it which take two steps to walk around. Alternatively, one can take a direct shortcut by leaving the realm of Gaussianity.
\end{comment}
In addition to the Gaussian unitary transformations described above, one can implement non-unitary Gaussian transformations by allowing the system to interact with an environment. In direct analogy with the Stinespring dilation theorem, one can implement any completely positive trace preserving (CPTP) Gaussian transformation as a Gaussian unitary transformation in some larger Hilbert space (or equivalently as a symplectic-affine transformation in a larger phase space) \cite{GaussianDilation}. From this it follows that the most general form of Gaussian update on $\bm{X}$ and $\sigma$ is,
\begin{align}
\label{GeneralUpdateX}
\bm{X}(0)&\to \bm{X}(t)=T(t)\bm{X}(0)+\bm{d}(t),\\
\label{GeneralUpdateV}
\sigma(0)&\to \sigma(t)=T(t) \, \sigma(0) \, T^\intercal(t)+R(t).
\end{align}
where $\bm{d}(t)$ is a real $2N$-dimensional vector, $T(t)$ and $R(t)$ are $2N$ by $2N$ real matrices, $R(t)$ is symmetric, and $T(t)$ (unlike $S$) is not necessarily symplectic.
A transformation (given by $T$, $\bm{d}$, $R$) is CPTP if and only if it obeys the complete positivity condition \cite{GQMRev}
\bel{FiniteCPCond}
R\geq\mathrm{i} \, (T \, \Omega \, T^\intercal-\Omega).
\end{equation}
where a sketch of the proof appears in the appendix of \cite{ArXivGrimmer2017b}. Recall the notation $M\geq 0$ indicates that $M$ is a positive semi-definite matrix.
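As a simple illustration (a standard attenuation example, stated in the single-mode conventions above), consider $T=\sqrt{\eta}\,\openone_2$, $\bm{d}=0$, $R=(1-\eta)\,\openone_2$ with $0\le\eta\le1$, which sends $\sigma\mapsto\eta\,\sigma+(1-\eta)\openone_2$. Then
\begin{equation}
R-\mathrm{i}\,(T\,\Omega\,T^\intercal-\Omega)=(1-\eta)\,(\openone_2+\mathrm{i}\,\omega)\geq0,
\end{equation}
since $\openone_2+\mathrm{i}\,\omega$ has eigenvalues $0$ and $2$, so \eqref{FiniteCPCond} is satisfied; for $\eta<1$ the map is CPTP but not unitary, as $T$ is not symplectic.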
We can take the update given by \eqref{GeneralUpdateX} and \eqref{GeneralUpdateV} to be differential, as
\begin{align}
T(\d t)&=\openone_{2N}+\d t \ \Omega \, A,\\
\bm{d}(\d t)&=\d t \ \Omega \, \bm{b},\\
R(\d t)&=\d t \ C,
\end{align}
where $\bm{b}$ is a real $2N$-dimensional vector, $A$ and $C$ are $2N$ by $2N$ real matrices, and $C$ is symmetric. Since $\Omega$ is invertible and $A$ and $\bm{b}$ are arbitrary, no generality is lost by writing a factor of $\Omega$ in front of $A$ and $\bm{b}$.
From this differential update one can find that the general form of the Gaussian master equation is
\begin{align}
\label{GeneralDiffXUp}
\frac{\d}{\d t}\bm{X}(t)
&=\Omega(A(t) \bm{X}(t)+\bm{b}(t)),\\
\label{GeneralDiffVUp}
\frac{\d}{\d t}\sigma(t)
&=(\Omega A(t)) \, \sigma(t)
+\sigma(t) \, (\Omega A(t))^\intercal
+C(t).
\end{align}
The differential version of the complete positivity condition \eqref{FiniteCPCond} is
\bel{DiffCPCond}
C\geq\mathrm{i} \, \Omega (A-A^\intercal)\Omega
\end{equation}
from which it follows that $C\geq0$.
In \cite{ArXivGrimmer2017b} the dynamical effect of the $A$, $\bm{b}$, and $C$ terms was explored in detail. To summarize, the effect of the $A$ term is to implement rotations, squeezings, and amplifications in phase space, whereas the $\bm{b}$ term implements displacement and the $C$ term implements state-independent noise.
For time-independent generators ($A$, $\bm{b}$, and $C$), integrating these equations for a time interval $[0,t]$ gives an update of the form \eqref{GeneralUpdateX} and \eqref{GeneralUpdateV} with
\begin{align}
\label{TfromAbC}
T(t)&=\exp(\Omega A \, t),\\
\label{dfromAbC}
\bm{d}(t)&=\frac{\exp(\Omega A \, t)-\openone_{2N}}{\Omega A} \, \Omega \, \bm{b},\\
\label{RfromAbC}
R(t)&=\text{vec}^{-1}\Big(\frac{\exp\big((\Omega A\otimes\openone_{2N}+\openone_{2N}\otimes\Omega A) \, t\big)-\openone_{4N^2}}{\Omega A\otimes\openone_{2N}+\openone_{2N}\otimes\Omega A} \ \text{vec}(C)\Big).
\end{align}
where the $\vec$ operation is defined \cite{ArXivGrimmer2017b} to map outer products to tensor products as
\bel{OuterToTensor}
\vec(\lambda \ \bm{u}\bm{v}^\intercal)
\coloneqq\lambda \ \bm{u}\otimes\bm{v}
\end{equation}
for some scalar $\lambda$ and vectors $\bm{u}$ and $\bm{v}$. By linearity this defines its action on any matrix.
One quickly finds that for any matrices $X$, $Y$ and $Z$
\bel{VecIdentity}
\vec(X \, Y \, Z^\intercal)=(X\otimes Z)\vec(Y).
\end{equation}
Concretely, this operation maps a matrix to the vector formed by listing its entries row by row,
\begin{equation}
\vec\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
=(a,b,c,d)^\intercal.
\end{equation}
Note that $\text{vec}^{-1}$ is trivially defined by ``restacking'' the matrix's entries.
Also, as before, note that it is not necessary that $\Omega A$ and $\Omega A\otimes\openone_{2N}+\openone_{2N}\otimes\Omega A$ are invertible for us to evaluate \eqref{dfromAbC} and \eqref{RfromAbC} as we can make use of the series \eqref{(ExpX-1)byXDef}.
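To make these manipulations concrete, the following short numerical sketch (with illustrative matrices, step sizes, and tolerances of our choosing, not values taken from \cite{ArXivGrimmer2017b}) checks the row-major identity \eqref{VecIdentity} and the closed form \eqref{RfromAbC}, evaluating the operator fraction through the series \eqref{(ExpX-1)byXDef}:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from math import factorial

# Check of vec(X Y Z^T) = kron(X, Z) vec(Y) under row-major stacking.
n = 2
vec = lambda M: M.reshape(-1)            # row-major "stacking"
unvec = lambda v: v.reshape(n, n)        # "restacking"
X = np.array([[1., 2.], [3., 4.]])
Y = np.array([[0., 1.], [-1., 2.]])
Z = np.array([[2., 0.], [1., 1.]])
assert np.allclose(vec(X @ Y @ Z.T), np.kron(X, Z) @ vec(Y))

def phi(M, t, terms=40):
    # (exp(M t) - 1)/M evaluated through its power series, valid even if M is singular.
    out, P = np.zeros_like(M), np.eye(M.shape[0])
    for m in range(terms):
        out = out + t**(m + 1) / factorial(m + 1) * P
        P = P @ M
    return out

# Closed-form noise term R(t) versus a brute-force integral of exp(OA u) C exp(OA u)^T.
Omega = np.array([[0., 1.], [-1., 0.]])
A = np.array([[1.0, 0.2], [0.2, 0.5]])   # symmetric generator
C = np.array([[1.0, 0.3], [0.3, 0.5]])   # positive semi-definite noise matrix
OA, t = Omega @ A, 0.7

M = np.kron(OA, np.eye(n)) + np.kron(np.eye(n), OA)   # Kronecker sum of OA with itself
R_closed = unvec(phi(M, t) @ vec(C))

us = np.linspace(0.0, t, 2001)                         # trapezoid-rule integration grid
vals = [expm(OA * u) @ C @ expm(OA * u).T for u in us]
R_brute = sum(0.5 * (vals[i] + vals[i + 1]) for i in range(len(us) - 1)) * (us[1] - us[0])
assert np.allclose(R_closed, R_brute, atol=1e-6)
\end{verbatim}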
\section{Rapid Repeated Gaussian Interaction}\label{InterpolateGQM}
In this section we build a Gaussian master equation of the general form \eqref{GeneralDiffXUp} and \eqref{GeneralDiffVUp} from rapid repeated application of a Gaussian channel of the general form \eqref{GeneralUpdateX} and \eqref{GeneralUpdateV}.
Specifically, we take a Gaussian system (characterized by its mean vector, $\bm{X}$, and its covariance matrix, $\sigma$) to be updated in discrete time steps of duration $\delta t$ via the Gaussian channel given by some $T(\delta t)$, $\bm{d}(\delta t)$, and $R(\delta t)$ as
\begin{align}
\label{UpdateSchemeXX}
\bm{X}((n+1)\delta t)
&=T(\delta t) \, \bm{X}(n \, \delta t)
+\bm{d}(\delta t),\\
\label{UpdateSchemeVV}
\sigma((n+1)\delta t)
&=T(\delta t) \, \sigma(n \, \delta t) \, T^\intercal(\delta t)
+R(\delta t).
\end{align}
Given the initial system state, $\bm{X}(0)$ and $\sigma(0)$, the above update scheme defines the system state at the discrete time points $t=n\,\delta t$. Note this update is Markovian since it is time-local (it only depends on the current state of the system).
Further we make the natural assumptions that
\bel{NothingNoTime}
T(0)=\openone_{2N}, \ \ \ \bm{d}(0)=0, \ \ \ \text{and} \ \ \ R(0)=0
\end{equation}
(nothing happens in no time) and that
\bel{FiniteRate}
T'(0), \ \ \ \bm{d}'(0), \ \ \ \text{and} \ \ \ R'(0) \ \ \ \text{exist}
\end{equation}
(things happen at a finite rate). Finally we assume that the update scheme is invertible. Ultimately, this means that $T(\delta t)$ is non-singular. Note that we automatically have this for small enough $\delta t$.
From the above update we seek to construct a Gaussian master equation of the general form
\begin{align}
\label{GQMInterpMasterEqsX}
\bm{X}'(t)
&=\Omega(A_{\delta t} \, \bm{X}(t)
+\bm{b}_{\delta t}),\\
\label{GQMInterpMasterEqsV}
\sigma'(t)
&=(\Omega A_{\delta t}) \, \sigma(t)
+\sigma(t) \, (\Omega A_{\delta t})^\intercal
+C_{\delta t}
\end{align}
for some generators $A_{\delta t}$, $\bm{b}_{\delta t}$, and $C_{\delta t}$ such that the dynamics it describes exactly matches the dynamics given by the discrete updater at every time point, $t=n \, \delta t$. As the dynamics generated by \eqref{GQMInterpMasterEqsX} and \eqref{GQMInterpMasterEqsV} is defined for all $t\geq0$ (not just $t=n\,\delta t$) this master equation constitutes an interpolation scheme (see \cite{Grimmer2016a} for details).
In general, such an interpolation scheme is not uniquely determined. However, as discussed in \cite{Grimmer2017a}, there is a unique interpolation scheme with time-independent generators which converge in the rapid interaction limit (as $\delta t\to0$).
This unique interpolation scheme is constructed in detail in Appendix \ref{AppGQMInterpolate}, yielding the interpolation generators
\begin{align}
\label{AdtDef}
\Omega A_{\delta t}
&=\frac{1}{\delta t}\text{Log}(T(\delta t)),\\
\label{bdtDef}
\Omega \, \bm{b}_{\delta t}
&=\frac{1}{\delta t}
\frac{\text{Log}(T(\delta t))}{T(\delta t)-\openone_{2N}}\bm{d}(\delta t),\\
\label{CdtDef}
C_{\delta t}
&=\vec^{-1}\Big(\frac{1}{\delta t}\frac{\text{Log}(T(\delta t) \otimes T(\delta t))}{T(\delta t) \otimes T(\delta t)-\openone_{4N^2}} \, \vec\big(R(\delta t)\big)\Big).
\end{align}
where we emphasize that
the expressions for $\bm{b}_{\delta t}$ and $C_{\delta t}$ are to be understood via the series expansion
\bel{LogSeries2}
\frac{\text{Log}(X)}{X-\openone}
=\sum_{m=0}^\infty\frac{(-1)^m}{m+1}(X-\openone)^m
\end{equation}
and so \mbox{$T(\delta t)-\openone_{2N}$} and \mbox{$T(\delta t)\, \otimes \, T(\delta t)-\openone_{4N^2}$} need not be invertible.
Finally, we note that in the above equations we take the logarithm's principal branch cut, such that $\text{Log}(\openone)=0$. This ensures that the interpolation generators converge as $\delta t\to0$. Indeed, substituting \eqref{AdtDef}--\eqref{CdtDef} into \eqref{TfromAbC}--\eqref{RfromAbC} with $t=\delta t$ returns exactly $T(\delta t)$, $\bm{d}(\delta t)$, and $R(\delta t)$; since the generators are time-independent, the interpolated evolution therefore agrees with the discrete update at every $t=n\,\delta t$.
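As an illustration of this construction (a toy sketch with illustrative numbers, not a computation from the appendix), one can build $A_{\delta t}$ for a simple one-mode channel directly from the matrix logarithm and verify that the interpolating flow reproduces the discrete update at the stroboscopic times $t=n\,\delta t$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, logm

# Toy one-mode channel: a small free rotation followed by a mild squeeze.
# Both factors are symplectic (det = 1), so the step is a valid Gaussian unitary
# with d(dt) = 0 and R(dt) = 0.
Omega = np.array([[0., 1.], [-1., 0.]])
F_S = np.eye(2)                                  # H = (q^2 + p^2)/2
dt = 0.05
T_dt = expm(Omega @ F_S * dt) @ np.diag([0.99, 1 / 0.99])

# Interpolating generator, Eq. (AdtDef): Omega A_dt = Log(T(dt))/dt, and Omega^{-1} = -Omega.
A_dt = -Omega @ logm(T_dt) / dt

X0 = np.array([1.0, 0.0])
n = 200
X_discrete = np.linalg.matrix_power(T_dt, n) @ X0     # n applications of the channel
X_interp = expm(Omega @ A_dt * n * dt) @ X0           # interpolating flow at t = n dt
assert np.allclose(X_discrete, X_interp)
\end{verbatim}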
If, in addition to the minimal regularity assumed above (\eqref{NothingNoTime} and \eqref{FiniteRate}), we have that $T(\delta t)$, $\bm{d}(\delta t)$, and $R(\delta t)$ are analytic at $\delta t=0$, then we can expand them as a series in $\delta t$ as
\begin{align}
\label{TSeries}
T(\delta t)
&=\openone_{2N}
+\delta t \, T_1
+\delta t^2 \, T_2
+\delta t^3 \, T_3
+\delta t^4 \, T_4
+\dots,\\
\label{dSeries}
\bm{d}(\delta t)
&=0
+\delta t \, \bm{d}_1
+\delta t^2 \, \bm{d}_2
+\delta t^3 \, \bm{d}_3
+\delta t^4 \, \bm{d}_4
+\dots,\\
\label{RSeries}
R(\delta t)
&=0
+\delta t \, R_1
+\delta t^2 \, R_2
+\delta t^3 \, R_3
+\delta t^4 \, R_4
+\dots \, .
\end{align}
Using these series expansions, through \eqref{AdtDef}, \eqref{bdtDef}, and \eqref{CdtDef}, we can expand each interpolation generator as a series in $\delta t$ as well,
\begin{align}
\label{ASeries}
A_{\delta t}
&=A_0
+\delta t \, A_1
+\delta t^2 \, A_2
+\delta t^3 \, A_3
+\dots,\\
\label{bSeries}
\bm{b}_{\delta t}
&=\bm{b}_0
+\delta t \, \bm{b}_1
+\delta t^2 \, \bm{b}_2
+\delta t^3 \, \bm{b}_3
+\dots,\\
\label{CSeries}
C_{\delta t}
&=C_0
+\delta t \, C_1
+\delta t^2 \, C_2
+\delta t^3 \, C_3
+\dots,
\end{align}
where the first few terms of the expansion of $A_{\delta t}$ are given by
\begin{align}
\label{A0def}
\Omega A_0=T_1,&\\
\label{A1def}
\Omega A_1=T_2
&-\frac{1}{2}T_1{}^2,\\
\label{A2def}
\Omega A_2=T_3
&-\frac{1}{2}(T_1 T_2+T_2 T_1)
+\frac{1}{3}T_1{}^3.
\end{align}
The first few terms of the expansion of $\bm{b}_{\delta t}$ are given by
\begin{align}
\Omega \, \bm{b}_0
=\bm{d}_1&,\\
\Omega \, \bm{b}_1
=\bm{d}_2
&-\frac{1}{2}T_1\bm{d}_1,\\
\Omega \, \bm{b}_2
=\bm{d}_3
&-\frac{1}{2}(T_1\bm{d}_2+T_2\bm{d}_1)
+\frac{1}{3}T_1^2\bm{d}_1.
\end{align}
Finally, the first few terms of the expansion of $C_{\delta t}$ are given by
\begin{align}
C_0&=R_1,\\
C_1&=R_2
-\frac{1}{2}(T_1 R_1+R_1 T_1^\intercal),\\
C_2&=R_3
-\frac{1}{2}(T_2 R_1+R_1 T_2^\intercal+T_1 R_2+R_2 T_1^\intercal)\\
\nonumber
&+\frac{1}{3}(T_1{}^2 R_1+R_1 T_1^\intercal{}^2)
+\frac{1}{6} T_1 R_1 T_1^\intercal.
\end{align}
Higher order terms in these series can be calculated but are not discussed in this paper.
\section{Gaussian ancillary bombardment}\label{AncillaryBombardmentGQM}
In this section we construct the Gaussian channel corresponding to a specific physically motivated situation that we refer to as \textit{Gaussian ancillary bombardment}, in analogy with the ancillary bombardment introduced in \cite{Grimmer2016a}. Following this we use the results of the previous section to calculate the interpolation generators and expand them as a series in $\delta t$. Finally, we will analyze these expansions order by order using the partition developed in \cite{ArXivGrimmer2017b}.
In a general Gaussian ancillary bombardment scenario, we consider a Gaussian system that is repeatedly bombarded by a series of Gaussian ancillae. Updating the system's state via \eqref{UpdateSchemeXX} and \eqref{UpdateSchemeVV} here corresponds to the system interacting with one of these Gaussian ancillae. An intuitive example of such a scenario (and one we analyze in Sec \ref{Example}) is a harmonic oscillator bombarded by a thermal bath of harmonic oscillators.
Let the system, $\text{S}$, be a Gaussian system composed of $N_\text{S}$ modes. Likewise let each ancilla, $\text{A}$, be a Gaussian system composed of $N_\text{A}$ modes. Together they form a joint system, $\text{SA}$, which is Gaussian and is composed of $N_\text{S}+N_\text{A}$ modes. Note that the dimensions of the phase spaces of $\text{S}$, $\text{A}$, and $\text{SA}$ are $2N_\text{S}$, $2N_\text{A}$, and $2N_\text{S}+2N_\text{A}$, respectively.
The system and ancilla's quadrature operators are collected together into the operator vector
\begin{equation}
\hat{\bm{X}}_\text{SA}=(\hat{\bm{X}}_\text{S},\hat{\bm{X}}_\text{A})^\intercal.
\end{equation}
Since the system's and ancilla's observables live in different Hilbert spaces, all pairs of their observables commute with each other. Thus they have the joint symplectic form,
\begin{equation}
\Omega_\text{SA}=
\begin{pmatrix}
\Omega_\text{S} & 0\\
0 & \Omega_\text{A}
\end{pmatrix}
\end{equation}
where $\Omega_\text{S}$ and $\Omega_\text{A}$ are the symplectic forms in the phase space of S and A respectively.
We assume that the system and ancilla are initially uncorrelated, having the initial joint mean vector,
\begin{equation}
\bm{X}_\text{SA}(0)=(\bm{X}_\text{S}(0),\bm{X}_\text{A}(0))^\intercal,
\end{equation}
and the initial joint covariance matrix,
\begin{equation}
\sigma_\text{SA}(0)=
\begin{pmatrix}
\sigma_\text{S}(0) & 0\\
0 & \sigma_\text{A}(0)
\end{pmatrix}.
\end{equation}
Further we assume that they evolve under a quadratic Hamiltonian,
\bel{HSADef}
\hat{H}_\text{SA}
=\frac{1}{2}\hat{\bm{X}}_\text{SA}^\intercal \, F_\text{SA} \, \hat{\bm{X}}_\text{SA}
+\bm{\alpha}^\intercal_\text{SA}\hat{\bm{X}}_\text{SA}
\end{equation}
where $F_\text{SA}$ is real and symmetric and $\bm{\alpha}_\text{SA}$ is real.
It is useful to divide this Hamiltonian into subblocks corresponding to the system and ancilla's phase spaces as,
\begin{equation}
F_\text{SA}=
\begin{pmatrix}
F_\text{S} & G\\
G^\intercal & F_\text{A}
\end{pmatrix},
\quad \quad
\bm{\alpha}_\text{SA}=
\begin{pmatrix}
\bm{\alpha}_\text{S}\\
\bm{\alpha}_\text{A}
\end{pmatrix}.
\end{equation}
Note that $F_\text{S}$ and $F_\text{A}$ are symmetric and that $G$ is not generally square, having dimensions $2 N_\text{S}$ by $2 N_\text{A}$.
Divided this way we can see that $F_S$ and $\bm{\alpha}_\text{S}$ correspond to the system's free Hamiltonian,
\begin{equation}
\hat{H}_\text{S}
=\frac{1}{2}\hat{\bm{X}}_\text{S}^\intercal \, F_\text{S} \, \hat{\bm{X}}_\text{S}
+\bm{\alpha}^\intercal_\text{S}\hat{\bm{X}}_\text{S}.
\end{equation}
Similarly $F_\text{A}$ and $\bm{\alpha}_\text{A}$ correspond to the ancilla's free Hamiltonian,
\begin{equation}
\hat{H}_\text{A}
=\frac{1}{2}\hat{\bm{X}}_A^\intercal \, F_\text{A} \, \hat{\bm{X}}_\text{A}
+\bm{\alpha}^\intercal_\text{A}\hat{\bm{X}}_\text{A}.
\end{equation}
Finally, we can see that the $G$ matrix contains all of the couplings between the system and the ancilla, corresponding to the interaction Hamiltonian,
\begin{equation}
\hat{H}_\text{I}
=\frac{1}{2}\hat{\bm{X}}_\text{S}^\intercal \, G \, \hat{\bm{X}}_\text{A}
+\frac{1}{2}\hat{\bm{X}}_\text{A}^\intercal \, G^ \intercal\, \hat{\bm{X}}_\text{S}.
\end{equation}
Next we compute the effect that evolving for a time $\delta t$ under this Hamiltonian has on the system (determining $T(\delta t)$, $\bm{d}(\delta t)$, and $R(\delta t)$). In order to do this we compute the evolution of the joint system then isolate the effect on the system. This evolution is unitary and therefore given by a symplectic-affine transformation in the joint phase space. Specifically,
\begin{align}
\label{SXUpdt}
\bm{X}_\text{SA}(\delta t)
&=S_\text{SA}(\delta t) \, \bm{X}_\text{SA}(0)+\bm{d}_\text{SA}(\delta t),\\
\label{SVUpdt}
\sigma_\text{SA}(\delta t)
&=S_\text{SA}(\delta t) \, \sigma_\text{SA}(0) \, S^\intercal_\text{SA}(\delta t)
\end{align}
where
\begin{align}
\label{AppSHamDef}
S_\text{SA}(\delta t)&=\text{exp}(\Omega_\text{SA} F_\text{SA} \, \delta t),\\
\label{AppdHamDef}
\bm{d}_\text{SA}(\delta t)&=\frac{\text{exp}(\Omega_\text{SA} F_\text{SA} \, \delta t)-\openone_{2 N_\text{S}+2 N_\text{A}}}{\Omega_\text{SA} F_\text{SA}} \, \Omega_\text{SA} \, \bm{\alpha}_\text{SA}.
\end{align}
In order to find the effective update on the system's state we can divide these into blocks as
\begin{equation}
\nonumber
S_\text{SA}(\delta t)
=\begin{pmatrix}
M_\text{SS}(\delta t) & M_\text{SA}(\delta t) \\
M_\text{AS}(\delta t) & M_\text{AA}(\delta t) \\
\end{pmatrix}
\ \ \text{and} \ \
\bm{d}_\text{SA}(\delta t)
=\begin{pmatrix}
\bm{d}_\text{S}(\delta t) \\ \bm{d}_\text{A}(\delta t) \\
\end{pmatrix}.
\end{equation}
Expanding \eqref{SXUpdt} and \eqref{SVUpdt} over the direct sum between the system and ancilla's phase spaces, one can identify that the reduced state of the system ($\bm{X}_\text{S}$ and $\sigma_\text{S}$) is updated as
\begin{align}
\bm{X}_\text{S}(\delta t)
&=T(\delta t) \, \bm{X}_\text{S}(0)
+\bm{d}(\delta t),\\
\sigma_\text{S}(\delta t)
&=T(\delta t) \, \sigma_\text{S}( 0) \, T^\intercal(\delta t)
+R(\delta t),
\end{align}
where
\begin{align}\label{TSMdSDef}
T(\delta t)&=M_\text{SS}(\delta t),\\
\bm{d}(\delta t)&=M_\text{SA}(\delta t) \ \bm{X}_\text{A}(0)+\bm{d}_\text{S}(\delta t),\\
R(\delta t)&=M_\text{SA}(\delta t) \, \sigma_\text{A}(0) \, M^\intercal_\text{SA}(\delta t).
\end{align}
With some effort, these can be expanded as a series in $\delta t$ (as in \eqref{TSeries}, \eqref{dSeries}, and \eqref{RSeries}). Using the results of the previous section, we can then write the interpolation generators $A_{\delta t}$, $\bm{b}_{\delta t}$, and $C_{\delta t}$ as a series in $\delta t$ (as in \eqref{ASeries}, \eqref{bSeries}, and \eqref{CSeries}) now with coefficients written explicitly in terms of the Hamiltonian \eqref{HSADef}.
This calculation is tedious but ultimately straightforward. For the first few terms of the expansion of $A_{\delta t}$ it yields
\begin{align}
A_0=&F_S,\\
\label{A1DefHam}
A_1=&\frac{1}{2}G \, \Omega_A G^\intercal,\\
A_2=&-\frac{1}{12} G \, \Omega_A G^\intercal \Omega_S F_S
-\frac{1}{12} F_S \Omega_S G \, \Omega_A G^\intercal\\
\nonumber
&+\frac{1}{6} G \, \Omega_A F_A \Omega_A G^\intercal.
\end{align}
For the first few terms of the expansion of $\bm{b}_{\delta t}$ we find
\begin{align}
\bm{b}_0&=
\bm{\alpha}_\text{S}
+G\bm{X}_\text{A}(0),\\
\bm{b}_1&=
\frac{1}{2} G \, \Omega_\text{A} F_\text{A}\bm{X}_\text{A}(0)
+\frac{1}{2} G \, \Omega_\text{A}\bm{\alpha}_\text{A},\\
\bm{b}_2&=-\frac{1}{12} F_\text{S} \Omega_\text{S} G \, \Omega_\text{A} \bm{\alpha}_\text{A}
+\frac{1}{6} \Omega_\text{S} G \, \Omega_\text{A} F_\text{A} \Omega_\text{A} \bm{\alpha}_\text{A}\\
\nonumber
&-\frac{1}{12} F_\text{S} \Omega_\text{S} G \, \Omega_\text{A} F_\text{A} \bm{X}_\text{A}(0)
+\frac{1}{6} G \, \Omega_\text{A} F_\text{A} \Omega_\text{A} F_\text{A} \bm{X}_\text{A}(0)\\
\nonumber
&-\frac{1}{12} G \, \Omega_\text{A} G^\intercal \Omega_\text{S} \bm{\alpha}_\text{S}
-\frac{1}{12} G \, \Omega_\text{A} G^\intercal \Omega_\text{S} G \bm{X}_\text{A}(0).
\end{align}
Finally, the first few terms of the expansion of $C_{\delta t}$ are
\begin{align}
C_0&=0,\\
\label{C1DefHam}
C_1&=\Omega_\text{S} G \sigma_\text{A}(0) G^\intercal \Omega^\intercal_\text{S},\\
\label{C2DefHam}
C_2&=\frac{1}{2} \Omega_\text{S} G \big(\Omega_\text{A} F_\text{A} \sigma_\text{A}(0)+\sigma_\text{A}(0) (\Omega_\text{A} F_\text{A})^\intercal\big) G^\intercal \Omega^\intercal_\text{S}.
\end{align}
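These coefficients can be checked numerically. The following sketch (with illustrative parameter values of our choosing) verifies \eqref{C1DefHam} for a one-mode system coupled to a one-mode ancilla; since $R_1=0$ here, $C_1=R_2$ is simply the $\delta t^2$ coefficient of $R(\delta t)$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# One system mode + one ancilla mode, illustrative parameters.
w = np.array([[0., 1.], [-1., 0.]])               # single-mode symplectic form
Omega_S = Omega_A = w
F_S = np.eye(2)
F_A = 2.0 * np.eye(2)
G = np.array([[0.3, 0.0], [0.0, 0.1]])            # coupling block of F_SA
F_SA = np.block([[F_S, G], [G.T, F_A]])
Omega_SA = np.block([[Omega_S, np.zeros((2, 2))],
                     [np.zeros((2, 2)), Omega_A]])
sigma_A0 = np.diag([3.0, 1.0 / 3.0])              # a pure squeezed ancilla covariance

dt = 1e-4
S_SA = expm(Omega_SA @ F_SA * dt)
M_SA = S_SA[:2, 2:]                               # system-ancilla block of S_SA
R_dt = M_SA @ sigma_A0 @ M_SA.T                   # R(dt) for the reduced system update

# Here R_1 = 0, so C_1 = R_2 is the dt^2 coefficient of R(dt).
C1_numeric = R_dt / dt**2
C1_formula = Omega_S @ G @ sigma_A0 @ G.T @ Omega_S.T
assert np.allclose(C1_numeric, C1_formula, atol=1e-3)
\end{verbatim}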
It is worth noting the functional dependence of $A_{\delta t}$, $\bm{b}_{\delta t}$, and $C_{\delta t}$ on the parameters of the bombardment scenario. These include the system's free Hamiltonian ($F_\text{S}$ and $\bm{\alpha}_\text{S}$), the ancillae's free Hamiltonian ($F_\text{A}$ and $\bm{\alpha}_\text{A}$), the interaction Hamiltonian ($G$), and the initial state of the ancillae ($\bm{X}_\text{A}(0)$ and $\sigma_\text{A}(0)$). The interpolation generators depend on these (even non-perturbatively) as
\begin{align}
&A_{\delta t}(F_\text{S},F_\text{A},G),\\
&\bm{b}_{\delta t}(F_\text{S},F_\text{A},G,\bm{\alpha}_\text{S},\bm{\alpha}_\text{A},\bm{X}_\text{A}(0)),\\
&C_{\delta t}(F_\text{S},F_\text{A},G,\sigma_\text{A}(0)).
\end{align}
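As a quick consistency check, setting $G=0$ removes the coupling entirely: every term above that involves $G$ vanishes, leaving $A_{\delta t}=F_\text{S}$, $\bm{b}_{\delta t}=\bm{\alpha}_\text{S}$, and $C_{\delta t}=0$ (for $\delta t$ small enough that the principal logarithm applies), so the interpolation generators reduce, as they must, to the system's free evolution.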
The $A_{\delta t}$ term (which implements rotation, squeezing, amplification and relaxation \cite{ArXivGrimmer2017b}) depends neither on the linear parts of the Hamiltonians nor on the initial ancilla state. This means that the presence and strength of all of these effects is controlled solely by the nature of the coupling to the environment and not by the particular state of the environment. Note that this holds non-perturbatively in $\delta t$, i.e., beyond the rapid-interaction regime.
Additionally, since the dynamics of the mean vector is determined entirely by $A_{\delta t}$ and $\bm{b}_{\delta t}$ it is therefore independent of the initial covariance of the ancilla, $\sigma_\text{A}(0)$.
It is also interesting to note which types of dynamics become available at each order in the series. To do this we use the results of \cite{ArXivGrimmer2017b} which partitions the generators of Gaussian dynamics into 11 parts based on: (a) whether or not the dynamics allows for energy flow between the system and the environment, (b) whether it allows for entanglement to be created between the system and the environment, (c) whether the effect of the dynamics is state-dependent or state-independent and finally (d) whether it mixes different modes together.
The result of applying this partition to the dynamics generated by Gaussian ancillary bombardment is summarized in Table \ref{Table22} (for details see Appendix \ref{AppGBPart}).
Summarizing this analysis, at zeroth order we have access to all the types of dynamics present in the system's free Hamiltonian with the option to induce an additional displacement (coming from $\bm{b}_0$). At higher orders the dynamics will generically be able to access all types of displacement and noise. Past zeroth order, the rotation, squeezing and amplification effects (coming from $A$) that are available to the system alternate between unitary and non-unitary.
\begin{table*}
\begin{tabular}{||c|c|c|c|c||}
\hline Type of Dynamics & \quad 0th (Free) \quad & \quad 0th (Induced) \quad & \quad Odd ( $\geq$ 1st) \quad & \quad Even ( $\geq$ 2nd) \quad \\
\hline Single-mode Rotation & Yes & No & No & Yes \\
\hline Single-mode Squeezing & Yes & No & No & Yes \\
\hline Displacement & Yes & Yes & Yes & Yes \\
\hline Single-mode Squeezed Noise & No & No & Yes & Yes \\
\hline Amplification/Relaxation & No & No & Yes & No \\
\hline Thermal Noise & No & No & Yes & Yes \\
\hline Multi Mode Rotation & Yes & No & No & Yes \\
\hline Multi Mode Squeezing & Yes & No & No & Yes \\
\hline Multi Mode Counter-Rotation & No & No & Yes & No \\
\hline Multi Mode Noise & No & No & Yes & Yes \\
\hline Multi Mode Counter-Squeezing & No & No & Yes & No \\
\hline
\end{tabular}
\caption{The dynamics available to a bombarded Gaussian system at each order in $\delta t$. The eleven types of dynamics listed in this table are described in detail in \cite{ArXivGrimmer2017b}. The zeroth order effects are further divided into those available through the system's free Hamiltonian and those which can be induced through the interaction.
}\label{Table22}
\end{table*}
Finally, before analyzing each of these expansions order by order, we make some comments about when open Gaussian dynamics in general, and Gaussian ancillary bombardment in particular, can lead to purification. This is an important characterization because the ability to increase the purity of at least one state is a prerequisite for the dynamics to capture the process of thermalization (e.g., cooling through bombardment by a cold environment).
Following \cite{Grimmer2017a} we say that a map can purify if there exists a state whose purity increases under the map. The purity of a Gaussian state \cite{GPurity} is given in our notation by
\begin{equation}
\mathcal{P}=\text{Tr}(\rho^2)
=\frac{1}{\text{det}(\sigma)}.
\end{equation}
A necessary and sufficient condition for Gaussian dynamics to be able to purify is
\bel{GaussianNandS}
\text{Tr}\big(\Omega A\big)<0.
\end{equation}
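A short sketch of the necessity direction: Jacobi's formula applied to \eqref{GeneralDiffVUp} gives
\begin{equation}
\frac{\d}{\d t}\ln\det\sigma(t)
=\text{Tr}\big(\sigma(t)^{-1}\,\sigma'(t)\big)
=2\,\text{Tr}\big(\Omega A\big)+\text{Tr}\big(\sigma(t)^{-1}C\big),
\end{equation}
and since $\sigma>0$ for any valid state and $C\geq0$, the last term is non-negative; thus $\mathcal{P}=1/\det\sigma$ can increase only if $\text{Tr}(\Omega A)<0$. (Conversely, when $\text{Tr}(\Omega A)<0$ one can make $\text{Tr}(\sigma^{-1}C)$ arbitrarily small by scaling up a valid covariance matrix, which is one way to see sufficiency.)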
Within the partition described in \cite{ArXivGrimmer2017b}, only Gaussian dynamics including amplification/relaxation effects are capable of purifying. From Table \ref{Table22} we can see that such effects are only available at odd orders. Thus if no purification effects are present at first order, the leading order purification effects will arise at third order, generically two orders below the leading order noise term, $C_1$, with which they must compete. In subsection \ref{FirstOrderGaussian} we find that many commonly used interaction Hamiltonians cannot purify at first order.
\subsection{Zeroth Order Dynamics}
The zeroth order dynamics (i.e., in the continuum limit, as $\delta t\to 0$) is unitary, since $A_0$ is symmetric and $C_0$ vanishes. Specifically, at zeroth order we have the dynamics,
\begin{align}\label{GQMInterpMasterEqs}
\bm{X}_{S}'(t)
&=\Omega(F_{S} \, \bm{X}_S(t)
+\bm{\alpha}_S+G \, \bm{X}_A(0)),\\
\sigma_S'(t)
&=(\Omega F_{S}) \, \sigma_S(t)
+\sigma_S(t) \, (\Omega F_{S})^\intercal.
\end{align}
Comparing this to \eqref{SymplecticDiffXUpHam} and \eqref{SymplecticDiffVUpHam} we can see that this is just evolution under the effective Hamiltonian
\begin{align}\label{Heff0Gaussian}
\hat{H}_\text{eff}^{(0)}
&=\frac{1}{2}\hat{\bm{X}}_S^\intercal \, F_{S} \, \hat{\bm{X}}_S
+\hat{\bm{X}}_S^\intercal(\bm{\alpha}_S
+ G\bm{X}_A(0))\\
\nonumber
&=\hat{H}_S+\hat{\bm{X}}_S{}^\intercal G\bm{X}_A(0).
\end{align}
This is in line with the general result from \cite{Layden:2015b} showing that rapid repeated interaction (even in a non-Gaussian setting) produces unitary dynamics in the continuum limit.
In \cite{Layden:2015b} this result was interpreted as saying that in this regime the ancillae affect the system but do not entangle with it (they ``push'' the system, but do not ``talk'' to it). Further, it was shown in \cite{Layden:2015b} that by switching evolution between (non-commuting) $\hat{H}_\text{S}$ and $\hat{H}_\text{eff}^{(0)}$ one can generally gain full unitary control of the system. However, this cannot be done within the context of Gaussian ancillary bombardment.
In fact we will argue that only a limited range of Gaussian dynamics is available to the system at zeroth order. Specifically, unlike in \cite{Layden:2015b}, by turning on and off the environment, one can only adjust the system's Hamiltonian by a linear term in $\hat{\bm{X}}_\text{S}$, as can be seen from \eqref{Heff0Gaussian}. Such a modification of the system's Hamiltonian can only apply a displacement and cannot affect the dynamics of the system's covariance matrix. Thus while we are able to push the Gaussian state around as we like in phase space, we are not able to adjust its ``shape'' at will.
Finally, for completeness we note that since the zeroth order evolution is unitary it is trivially completely positive. Explicitly, from \eqref{DiffCPCond},
\bel{CPCheck0}
C_0=0\geq\mathrm{i}\Omega_S (A_0-A_0^\intercal)\Omega_S^\intercal=0.
\end{equation}
\subsection{First Order Dynamics}\label{FirstOrderGaussian}
At first order, we see a new displacement term (from $\bm{b}_1$), the first noise in the dynamics (from $C_1$), and several other non-unitary effects (from $A_1$). Specifically, from Table \ref{Table22} we can see that in addition to the displacement effects coming from $\bm{b}_1$ we can have all three kinds of noise (from $C_1$) as well as amplification/relaxation, multi-mode counter-rotation, and counter-squeezing coming from $A_1$. Note that we do not have access to single- or multi-mode rotation or squeezing at first order. Since noise is generically present at first order (see below), any rotation or squeezing induced by the interaction (which first appears at second order) will generally be subleading to the noise in the dynamics.
At this order the dynamics coming from both $A_1$ and $C_1$ is non-unitary ($A_1$ is antisymmetric, and noise is always non-unitary), thus the only unitary effects at first order come from $\bm{b}_1$. These effects give a first order correction to the effective Hamiltonian
\begin{equation}
\hat{H}_\text{eff}
=\hat{H}_\text{eff}^{(0)}
+\delta t \, \hat{H}_\text{eff}^{(1)}
+\mathcal{O}(\delta t^2)
\end{equation}
of
\begin{equation}
\hat{H}_\text{eff}^{(1)}
=\hat{\bm{X}}_\text{S}^\intercal \, \bm{b}_1
=\frac{1}{2}\hat{\bm{X}}^\intercal_\text{S}
G \, \Omega_A
\big(F_A\bm{X}_A(0)
+\bm{\alpha}_A\big).
\end{equation}
This correction can be understood as accounting for the ancilla freely evolving during the interaction.
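Indeed, a quick check of this interpretation: to first order in $\delta t$ the total displacement generator is
\begin{equation}
\bm{b}_0+\delta t\,\bm{b}_1
=\bm{\alpha}_\text{S}
+G\Big(\bm{X}_\text{A}(0)+\frac{\delta t}{2}\,\Omega_\text{A}\big(F_\text{A}\bm{X}_\text{A}(0)+\bm{\alpha}_\text{A}\big)\Big)
=\bm{\alpha}_\text{S}+G\,\bm{X}_\text{A}(\delta t/2)+\mathcal{O}(\delta t^2),
\end{equation}
where the last equality uses \eqref{SymplecticDiffXUpHam} for the free evolution of the ancilla's mean vector; that is, to this order the system is displaced as though it coupled to the ancilla's freely evolved mean at the midpoint of the interaction.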
The first order noise term is given by
\begin{equation}
C_1=\Omega_S G \, \sigma_A(0) \, G^\intercal \Omega_S^\intercal
\end{equation}
which we note is positive semi-definite \mbox{($C_1\geq 0$)}, since \mbox{$\sigma_\text{A}(0)\geq0$}. This noise vanishes only if $G=0$ (there is no interaction) or if $\sigma_A(0)$ is singular (i.e., infinitely squeezed) and $G^\intercal\Omega_S^\intercal$ maps entirely into the kernel of $\sigma_A(0)$.
As discussed above, a necessary and sufficient condition for Gaussian dynamics to cause purification is \eqref{GaussianNandS}. Since the zeroth order dynamics is unitary the first opportunity for purification is at first order. This can happen if and only if
\bel{FirstOrderPurifyNandS}
0>\text{Tr}\big(\Omega_\text{S} A_1\big)
= \frac{1}{2} \text{Tr}\big(\Omega_\text{S} \, G \, \Omega_\text{A} \, G^\intercal\big).
\end{equation}
In \cite{Grimmer2016a} a necessary and sufficient condition for dynamics to cause purification at leading order was given in a general (non-Gaussian) ancillary bombardment scenario, provided that the system is finite dimensional. As such, the results described there cannot be applied directly to Gaussian systems. They concluded that in order to cause purification at the lowest possible order an interaction must be ``sufficiently complicated''. In particular they found that a tensor product interaction Hamiltonian of the form
\begin{equation}
H_\text{I}=\hat{Q}_\text{S}\otimes \hat{R}_\text{A}
\end{equation}
will not purify at leading order. We will now prove that this result in fact does extend to the Gaussian context despite the infinite dimensional nature of the systems and ancillae.
Both $\hat{Q}_\text{S}$ and $\hat{R}_\text{A}$ must be linear in their respective quadrature operators, and so
\begin{equation}
\hat{Q}_\text{S}
=\bm{u}^\intercal\hat{\bm{X}}_\text{S}
=\hat{\bm{X}}_\text{S}^\intercal\bm{u}
\quad\text{and}\quad
\hat{R}_\text{A}
=\bm{v}^\intercal\hat{\bm{X}}_\text{A}
=\hat{\bm{X}}_\text{A}^\intercal\bm{v}
\end{equation}
for some real vectors $\bm{u}$ and $\bm{v}$ in order that $H_\text{I}$ be quadratic in these operators. Thus we can write
\begin{equation}
\hat{H}_\text{I}
=\frac{1}{2}\hat{\bm{X}}_\text{S}^\intercal \, G \, \hat{\bm{X}}_\text{A}
+\frac{1}{2}\hat{\bm{X}}_\text{A}^\intercal \, G^ \intercal\, \hat{\bm{X}}_\text{S}.
\end{equation}
with
\begin{equation}
G=\bm{u}\bm{v}^\intercal.
\end{equation}
Thus in Gaussian quantum mechanics, tensor product interactions correspond to rank one interaction matrices.
From \eqref{FirstOrderPurifyNandS} we can quickly see that a rank one interaction cannot purify at leading order since
\begin{align}
\text{Tr}\big(\Omega_\text{S} G \Omega_\text{A} G^\intercal\big)
&=\text{Tr}\big(\Omega_\text{S} \bm{u}\bm{v}^\intercal \Omega_\text{A} \bm{v}\bm{u}^\intercal\big)\\
\nonumber
&= \bm{u}^\intercal\Omega_\text{S} \bm{u} \ \bm{v}^\intercal \Omega_\text{A} \bm{v}\\
\nonumber
&=0
\end{align}
since $\Omega_\text{S}$ and $\Omega_\text{A}$ are antisymmetric.
Thus we have extended the result of \cite{Grimmer2016a} that ``simple'' interaction Hamiltonians cannot cause purification at leading order in rapid bombardment from finite dimensional systems to include Gaussian systems.
Moreover, for rank one interactions, purification will not arise at second order since all effects coming from $A_2$ are unitary. Thus the first purification effects can only arise at third order, generically two orders below the leading order noise terms that any purification effects would compete with.
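As a quick numerical sanity check of the rank-one statement above (a minimal sketch assuming standard \texttt{numpy}, not part of the derivation; the couplings below are random, illustrative choices), one can confirm that the leading-order purification indicator $\text{Tr}\big(\Omega_\text{S} G \Omega_\text{A} G^\intercal\big)$ vanishes for $G=\bm{u}\bm{v}^\intercal$ but is generically nonzero for a full-rank coupling:
\begin{verbatim}
import numpy as np

w = np.array([[0., 1.], [-1., 0.]])   # single-mode symplectic form
Omega_S = Omega_A = w

rng = np.random.default_rng(0)

# Rank-one (tensor-product) coupling G = u v^T: the trace vanishes.
u, v = rng.normal(size=2), rng.normal(size=2)
G1 = np.outer(u, v)
print(np.trace(Omega_S @ G1 @ Omega_A @ G1.T))   # ~0 up to round-off

# Generic full-rank coupling: the trace is generically nonzero, so
# purification at first order is possible whenever it is negative.
G2 = rng.normal(size=(2, 2))
print(np.trace(Omega_S @ G2 @ Omega_A @ G2.T))
\end{verbatim}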
Finally we show that up to first order the dynamics is completely positive. Assuming that the ancillae start in a valid state we have
\begin{equation}
\sigma_A \geq\mathrm{i} \, \Omega_A
\end{equation}
Multiplying this by $\Omega_S G$ and $G^\intercal \Omega_S^\intercal$ on either side maintains the inequality, yielding
\begin{equation}
\Omega_S G \sigma_A G^\intercal \Omega_S^\intercal
\geq \mathrm{i} \, \Omega_S G \, \Omega_A G^\intercal\Omega_S^\intercal,
\end{equation}
but here we can recognize $C_1$ and $A_1$ from \eqref{A1DefHam} and \eqref{C1DefHam}:
\bel{CPCheck1}
C_1\geq 2\mathrm{i} \, \Omega_S A_1\Omega_S^\intercal
=\mathrm{i} \, \Omega_S (A_1-A_1^\intercal)\Omega_S^\intercal.
\end{equation}
where we have used the antisymmetry of $A_1$. This is exactly the complete positivity condition, \eqref{DiffCPCond}, at first order. Adding this inequality to \eqref{CPCheck0} we confirm the dynamics is completely positive at first order.
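This condition is also easy to verify numerically for a concrete choice of parameters (a minimal sketch assuming \texttt{numpy}; the coupling and ancilla temperature below are illustrative choices, with $A_1$ and $C_1$ as quoted in the text):
\begin{verbatim}
import numpy as np

w = np.array([[0., 1.], [-1., 0.]])
Omega_S = Omega_A = w                  # one mode each

rng = np.random.default_rng(1)
G = rng.normal(size=(2, 2))            # arbitrary coupling
nu_A = 1.7                             # any nu_A >= 1 is a valid thermal ancilla
sigma_A = nu_A * np.eye(2)

A1 = 0.5 * G @ Omega_A @ G.T                   # A_1 as in the text
C1 = Omega_S @ G @ sigma_A @ G.T @ Omega_S.T   # C_1 as in the text

# Hermitian matrix whose positivity is the first-order CP condition.
M = C1 - 1j * Omega_S @ (A1 - A1.T) @ Omega_S.T
print(np.linalg.eigvalsh(M).min())     # >= 0 up to round-off
\end{verbatim}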
\subsection{Second Order Dynamics}
At second order the effective Hamiltonian is
\begin{equation}
\hat{H}_\text{eff}
=\hat{H}_\text{eff}^{(0)}
+\delta t \, \hat{H}_\text{eff}^{(1)}
+\delta t^2 \, \hat{H}_\text{eff}^{(2)}
+\mathcal{O}(\delta t^3)
\end{equation}
with
\begin{align}
\hat{H}_\text{eff}^{(2)}
&=\frac{1}{4}\hat{\bm{X}}_S^\intercal (A_2+A_2^\intercal) \hat{\bm{X}}_S
+\hat{\bm{X}}_S^\intercal \, \bm{b}_2
\end{align}
and there is a further correction coming from both $A_2$ and $\bm{b}_2$. This is the first order at which we have a correction to the effective Hamiltonian that is quadratic in the quadrature operators, allowing for single and multi-mode rotations and squeezings.
At second order (and in fact at all even orders) the $A_2$ term does not contribute to the non-unitary dynamics. The only new non-unitary dynamics at this order comes from the new noise term $C_2$. As we can see from \eqref{C2DefHam}, this term can be interpreted as a correction to the $C_1$ noise term accounting for the ancilla's covariance matrix undergoing free evolution during the interaction.
Up to second order the dynamics is completely positive. Proving this amounts to showing that \eqref{DiffCPCond} is obeyed at second order
\begin{align}\label{CPCheck2}
C_0+\delta t \, C_1+\delta t^2 \, C_2 +\mathcal{O}(\delta t^3) &\geq
\mathrm{i} \, \Omega_S (A_0-A^\intercal_0)\Omega_S^\intercal\\
&\nonumber
+\delta t \, \mathrm{i} \, \Omega_S (A_1-A^\intercal_1)\Omega_S^\intercal\\
&\nonumber
+\delta t^2 \, \mathrm{i} \, \Omega_S (A_2-A^\intercal_2)\Omega_S^\intercal.
\end{align}
Removing several vanishing terms ($C_0=0, A_0-A^\intercal_0=0$, and $A_2-A^\intercal_2=0$) as well as a factor of $\delta t$ we have
\begin{align}
C_1+\delta t \, C_2 +\mathcal{O}(\delta t^2) \geq
\mathrm{i} \, \Omega_S (A_1-A^\intercal_1)\Omega_S^\intercal
\end{align}
In order to prove this we consider the state of the ancilla after it evolves under its free Hamiltonian for a time $\delta t/2$. Since free evolution is a completely positive map, applying it to a valid initial state yields a state that satisfies \eqref{SigmaPosCond}. Computing the covariance matrix of this state to leading order yields,
\begin{align}
\sigma_A(0)+\frac{\delta t}{2}\big(\Omega_A F_A \sigma_A(0)+\sigma_A(0) (\Omega_A F_A)^\intercal\big)
+\mathcal{O}(\delta t^2)
\geq\mathrm{i}\Omega_A.
\end{align}
Multiplying by $\Omega_S G$ and $G^\intercal \Omega_S^\intercal$ on either side and using equations \eqref{A1DefHam}, \eqref{C1DefHam}, and \eqref{C2DefHam} yields
\begin{align}
C_1+\delta t C_2 +\mathcal{O}(\delta t^2)\geq 2\mathrm{i} \, \Omega_S A_1\Omega_S^\intercal
=\mathrm{i} \, \Omega_S (A_1-A^\intercal_1)\Omega_S^\intercal
\end{align}
where in the last step we again employed the antisymmetry of $A_1$. This is the desired result.
\subsection{Higher Order Dynamics}
At third and higher orders the dynamics of the interpolation scheme is not always completely positive. This could indicate either the presence of non-Markovianity (specifically RHP non-Markovianity \cite{RHPnonMarkov}) in the interpolated dynamics or the breakdown of one of the assumptions underlying the construction of the interpolation scheme, for instance the time-independence of the interpolation generators.
Note that while the differential dynamics given by \eqref{GQMInterpMasterEqsX} and \eqref{GQMInterpMasterEqsV} may not be completely positive, the discrete dynamics described by \eqref{UpdateSchemeXX} and \eqref{UpdateSchemeVV} is guaranteed to be completely positive at every time step (i.e. when $t=n\, \delta t$) since the interpolated dynamics matches the discrete dynamics at those precise times. In the language of \cite{Layden:2015b,Grimmer2016a} this error is termed stroboscopic and can be bounded by a combination of the timescale $\delta t$ and the energy scale of the dynamics, $E$.
\begin{comment}
This non positivity is confirmed by calculating the third order noise
\begin{equation}
C^{(3)}=C_0+\delta t \, C_1+\delta t^2 \, C_2 +\delta t^3 C_3
\end{equation}
in the case of a single harmonic oscillator being bombarded by ground states oscillators via an \mbox{$\hat{q}_\text{S}\otimes \hat{q}_\text{A}$} coupling. The parameters of such an interaction are
\begin{align}
F_S
&\nonumber
=\omega_\text{S}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix},
\quad
\bm{\alpha}_S=0,
\quad
F_A
=\omega_\text{A}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix},
\quad
\bm{\alpha}_A=0\\
\sigma_A(0)
&\nonumber
=\nu_\text{A}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix},
\quad
\bm{X}_A(0)=0,
\quad
G=g
\begin{pmatrix}
1 & 0\\
0 & 0
\end{pmatrix}.
\end{align}
One finds
\begin{equation}
C^{(3)}
=\begin{pmatrix}
\frac{-1}{12}\delta t^3 \, g^2 \, \omega_\text{S}^2 \, \nu_\text{A} & 0\\
0 & \delta t \, g^2 \, \nu_\text{A}-\frac{1}{12}\delta t^3 \, g^2 \, \omega_\text{S}^2 \, \nu_\text{A}
\end{pmatrix}
\end{equation}
to be non-positive semidefinite. This implies that the third order dynamics is not completely positive.
\end{comment}
\begin{comment}
At third order, there is a correction to the effective Hamiltonian of the system coming from $\bm{b}_3$. However, as noted above the $A_3$ is antisymmetric,
\begin{equation}
A_{1,\textsc{s}}=(A_1+A_1^\intercal)/2=0
\end{equation}
Therefore it does not contribute to the symplectic part of the dynamics or to the effective Hamiltonian. Thus at third order the effective Hamiltonian is
\begin{equation}
\hat{H}_\text{eff}
=\hat{H}_\text{eff}^{(0)}
+\delta t \, \hat{H}_\text{eff}^{(1)}
+\delta t^2 \, \hat{H}_\text{eff}^{(2)}
+\delta t^3 \, \hat{H}_\text{eff}^{(3)}
+\mathcal{O}(\delta t^4)
\end{equation}
with
\begin{align}
\hat{H}_\text{eff}^{(3)}
&=\frac{1}{4}\hat{\bm{X}}_S^\intercal (A_3+A_3^\intercal) \hat{\bm{X}}_S
+\hat{\bm{X}}_S^\intercal \, \bm{b}_3\\
&=\hat{\bm{X}}_S^\intercal \, \bm{b}_3.
\end{align}
In addition to a correction to the effective Hamiltonian, at third order we see new unsymplectic dynamics coming from $A$ and $C$. The forms of these are now too difficult to analyze in detail. However as we will see in the coming examples, at third order the dynamics may not be completely positive.
This ultimately leads us to conclude that we should have included time dependence in our interpolation schemes at least in the Gaussian setting.
\end{comment}
\section{Thermalization of a Harmonic Oscillator}\label{Example}
As a first relevant physical scenario that Gaussian ancillary bombardment can shed some light on, we consider the analysis of the time evolution of a harmonic oscillator subject to short interactions with the components of a thermal reservoir. This is a picture usually associated with thermalization processes, and as such we would a priori expect that this evolution has fixed points related to the second law of thermodynamics.
More concretely, one might expect in such a scenario that the harmonic oscillator will thermalize to the temperature of the reservoir, in a way largely independent of the coupling between them. Perhaps surprisingly, we will show that the system does not always thermalize. Moreover, when it does thermalize its final temperature depends critically on the nature of the coupling to the bath (as well as the bath's temperature as expected).
Let us consider a single harmonic oscillator (the system, S) repeatedly interacting with a series of other harmonic oscillators (the ancillae, A) in thermal states with a fixed temperature.
At this point it is convenient to introduce the following basis for 2 by 2 matrices:
\bel{2by2basis}
\openone_2
=\begin{pmatrix}
1 & 0 \\
0 & 1 \\
\end{pmatrix},
\,
\omega=\begin{pmatrix}
0 & 1 \\
-1 & 0 \\
\end{pmatrix},
\,
X=\begin{pmatrix}
0 & 1 \\
1 & 0 \\
\end{pmatrix},
\,
Z=\begin{pmatrix}
1 & 0 \\
0 & -1 \\
\end{pmatrix}.
\end{equation}
The system's free Hamiltonian is assumed to be
\begin{equation}
\hat{H}_\text{S}
=\frac{E_\text{S}}{2} (\hat{q}_\text{S}{}^2+\hat{p}_\text{S}{}^2)
=\frac{E_\text{S}}{2}\begin{pmatrix}
\hat{q}_\text{S} & \hat{p}_\text{S}
\end{pmatrix}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
\begin{pmatrix}
\hat{q}_\text{S}\\
\hat{p}_\text{S}
\end{pmatrix},
\end{equation}
where $E_\text{S}$ is the energy gap of the oscillator. This Hamiltonian is represented in phase space as
\begin{equation}
F_\text{S}=E_\text{S}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
=E_\text{S} \, \openone_2,
\quad\text{and}\quad
\bm{\alpha}_\text{S}=0.
\end{equation}
Similarly, the ancillae's free Hamiltonian is assumed to be
\begin{equation}
\hat{H}_\text{A}
=\frac{E_\text{A}}{2} (\hat{q}_\text{A}{}^2+\hat{p}_\text{A}{}^2)
=\frac{E_\text{A}}{2}\begin{pmatrix}
\hat{q}_\text{A} & \hat{p}_\text{A}
\end{pmatrix}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
\begin{pmatrix}
\hat{q}_\text{A}\\
\hat{p}_\text{A}
\end{pmatrix},
\end{equation}
where $E_\text{A}$ is the energy gap of the ancilla.
This Hamiltonian is represented in phase space as
\begin{equation}
F_\text{A}=E_\text{A}
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
=E_\text{A} \, \openone_2
\quad\quad
\bm{\alpha}_\text{A}=0.
\end{equation}
The interaction Hamiltonian between the system and the ancillae is assumed to be a generic quadratic coupling,
\begin{equation}
\hat{H}_\text{int}
=\frac{1}{2}\hat{\bm{X}}_S^\intercal \, G \, \hat{\bm{X}}_A
+\frac{1}{2}\hat{\bm{X}}_A^\intercal \, G^ \intercal\, \hat{\bm{X}}_S
\end{equation}
for any real-valued $2$ by $2$ matrix, $G$. Further, the ancillae are taken to each initially be in the thermal state (see \cite{GQMRev}),
\begin{equation}
\sigma_A(0)=\nu_A
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
=\nu_A \, \openone_2,
\quad\quad
\bm{X}_A(0)=0.
\end{equation}
The parameter $\nu$ is a temperature monotone related to the inverse temperature $\beta$ and the energy gap $E$ as
\bel{NuBetaRelation}
\nu=\frac{\exp(\beta E)+1}{\exp(\beta E)-1}.
\end{equation}
This represents a valid state as long as $\nu_A\geq1$.
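For later reference, a minimal numerical helper implementing \eqref{NuBetaRelation} and its inverse (a sketch assuming \texttt{numpy}):
\begin{verbatim}
import numpy as np

def nu_from_beta(beta, E):
    """nu = (exp(beta*E)+1)/(exp(beta*E)-1) = coth(beta*E/2)."""
    return 1.0 / np.tanh(beta * E / 2.0)

def beta_from_nu(nu, E):
    """Inverse relation: beta*E = log((nu+1)/(nu-1)), valid for nu > 1."""
    return np.log((nu + 1.0) / (nu - 1.0)) / E

# nu -> 1 at zero temperature; nu grows without bound at high temperature.
print(nu_from_beta(10.0, 1.0), nu_from_beta(0.01, 1.0))
\end{verbatim}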
As discussed above (and in \cite{Layden:2015b}), at zeroth order the system's dynamics is unitary. In fact, in the Gaussian regime, the dynamics is just the system's free dynamics plus a potential displacement coming from the bombardment. In this case, because the ancilla state has $\bm{X}_A(0)=0$, no new displacement dynamics is induced at zeroth order. Therefore the system evolves freely at zeroth order. All new dynamical effects besides free evolution are higher order, thus associated with a finite interaction duration.
Explicitly computing the zeroth order interpolation generators one finds
\begin{align}
A_0&=E_{S} \ \openone_2,\\
\bm{b}_0&=0,\\
C_0&=0,
\end{align}
which simply describe the free rotation of the system.
We do however see novel dynamical effects at first order. We find
\begin{align}
A_1&=\frac{1}{2}G \, \Omega_A G^\intercal=\frac{1}{2}\text{det}(G) \, \omega,\\
\bm{b}_1&=0,\\
C_1&=\nu_A \, \Omega_S \, G \, G^\intercal \, \Omega_S^\intercal
\end{align}
for the first order interpolation generators.
These produce non-unitary dynamics in the system. In particular, using the partition developed in \cite{ArXivGrimmer2017b}, we can see that $A_1$ produces amplification or relaxation depending on the sign of $\text{det}(G)$ at a rate $\sim\delta t \, \text{det}(G)$. Specifically if $\text{det}(G)>0$ the effect of this term (alone) is to exponentially shrink the state's mean vector and covariance matrix towards zero. Alternatively if $\text{det}(G)<0$ this term alone would push the state's mean vector and covariance matrix to grow exponentially. If $\text{det}(G)=0$ this term has no effect.
This amplification/relaxation competes with the noise introduced at first order by $C_1$. Generically this will include both thermal noise and squeezed noise. If \mbox{$\text{det}(G)\leq 0$} then both the $A_1$ and $C_1$ terms serve to increase the uncertainty of the state. In this case no fixed point is reached, hence the system does not thermalize. However, if $\text{det}(G)>0$ then the two effects come to an equilibrium that is approximately thermal, as we will show below.
Explicitly the first order master equation for the covariance matrix is
\begin{align}
\frac{\d}{\d t}\sigma_\text{S}(t)
&=\Omega_\text{S}(A_0+\delta t A_1)\sigma_\text{S}(t)\\
&+\sigma_\text{S}(t)(\Omega_\text{S}(A_0+\delta t A_1))^\intercal
+C_0+\delta t \, C_1.
\end{align}
We can expand the system's covariance matrix over the basis \eqref{2by2basis} as
\begin{align}
\sigma_S(t)
=\nu_S(t)\openone_2
+s_\times(t) X
+s_+(t)Z.
\end{align}
where $\nu_S(t)$ captures the system's temperature and $s_\times(t)$ and $s_+(t)$ capture how the state is squeezed.
In terms of these coefficients the first order master equation for the covariance matrix is
\begin{align}
\frac{\d}{\d t}\nu_S(t)
&=-\delta t \, \text{det}(G) \, \nu_\text{S}(t)
+ \frac{\delta t}{2} \text{Tr}(G^\intercal G) \, \nu_A\\
\frac{\d}{\d t}s_\times(t)
&=-2 \, E_{\text{S}} \, s_+(t)
-\delta t \, \text{det}(G) \, s_\times(t)\\
&\nonumber
-\frac{\delta t}{2} \text{Tr}(G^\intercal X G) \, \nu_A \\
\frac{\d}{\d t}s_+(t)
&=2 \, E_{\text{S}} \, s_\times(t)
-\delta t \, \text{det}(G) \, s_+(t)\\
&\nonumber
-\frac{\delta t}{2} \text{Tr}(G^\intercal Z G) \nu_A.
\end{align}
These equations have a fixed point if and only if $\text{det}(G)>0$, in which case the fixed point is attractive. In this case the final state of the system is
\begin{equation}
\sigma_S(\infty)
=\nu_S(\infty) \, \openone_2
+\mathcal{O}(\delta t)
\end{equation}
with
\begin{equation}
\nu_S(\infty)=\tilde{\nu}_A\coloneqq\frac{\text{Tr}(G^\intercal G)}{2\, \text{det}(G)}\nu_A
\end{equation}
where $\tilde{\nu}_A$ represents the effective temperature of the ancilla. The system approaches this state at a rate $\delta t \, \text{det}(G)$. Note that the final temperature of the system depends non-trivially on the coupling between the system and the environment.
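As an illustration, the approach to this fixed point can be checked by directly integrating the first-order equations above (a minimal numerical sketch assuming \texttt{numpy}; the coupling and parameter values are arbitrary illustrative choices with $\text{det}(G)>0$):
\begin{verbatim}
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

E_S, nu_A, dt = 1.0, 2.0, 0.05              # illustrative values
G = np.array([[1.0, 0.3], [-0.2, 0.8]])     # det(G) > 0: attractive fixed point
detG = np.linalg.det(G)
drive_nu = +0.5 * dt * np.trace(G.T @ G) * nu_A
drive_sx = -0.5 * dt * np.trace(G.T @ X @ G) * nu_A
drive_sp = -0.5 * dt * np.trace(G.T @ Z @ G) * nu_A
nu_tilde = np.trace(G.T @ G) / (2.0 * detG) * nu_A   # predicted fixed point

nu, sx, sp = 1.0, 0.0, 0.0                  # start the system in its ground state
h = 1e-3                                    # Euler step for the master equation
for _ in range(300_000):
    dnu = -dt * detG * nu + drive_nu
    dsx = -2*E_S*sp - dt * detG * sx + drive_sx
    dsp = +2*E_S*sx - dt * detG * sp + drive_sp
    nu, sx, sp = nu + h*dnu, sx + h*dsx, sp + h*dsp

print(nu, nu_tilde)    # nu_S(t) approaches tilde{nu}_A
print(sx, sp)          # residual squeezing is only O(dt)
\end{verbatim}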
At this point, one may wonder if it is possible for the system to become colder than its environment through such a rapid bombardment process. Noting that all $2\times 2$ matrices have
\bel{Frob2Det}
\text{Tr}(G^\intercal G)\geq 2 \, \text{det}(G),
\end{equation}
we see that the system cannot be cooled to have $\nu_\text{S}(\infty)$ lower than $\nu_\text{A}$,
\bel{NuInequality}
\nu_S(\infty)=\tilde{\nu}_A\geq\nu_A.
\end{equation}
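For completeness, we note that \eqref{Frob2Det} follows from the elementary identity (writing $G$ in components)
\begin{equation}
\text{Tr}(G^\intercal G)-2\,\text{det}(G)
=(G_{11}-G_{22})^2+(G_{12}+G_{21})^2
\geq 0,
\end{equation}
with equality if and only if $G_{11}=G_{22}$ and $G_{12}=-G_{21}$, i.e., exactly for the two-parameter family \eqref{PerfectThermalizationGForm} identified below.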
However, this does not mean that the system cannot become cooler than its environment. Recall from equation \eqref{NuBetaRelation} that $\nu$ is a monotone function of temperature (in fact, it is a monotone function of $\beta E$). Thus \eqref{NuInequality} implies
\bel{BetaInequality}
\beta_\text{S}(\infty)E_\text{S}\leq E_\text{A}\beta_\text{A}
\end{equation}
or equivalently
\bel{BetaInequality2}
T_\text{S}(\infty)\geq \frac{E_\text{S}}{E_\text{A}}
T_\text{A}.
\end{equation}
If the ancilla has a larger energy gap than the system, the system can be cooled to a temperature below that of the ancillae.
This appears to be connected to the property of Gaussian passivity, introduced in \cite{Brown}. A quantum state is called Gaussian passive iff there exists no Gaussian unitary that can lower the state's energy.
In fact, if we assume that \(E_\text{S} < E_\text{A}\), then (\ref{BetaInequality}) is the necessary and sufficient condition for Gaussian passivity. Thus, under the condition \(E_\text{S} < E_\text{A}\), the result of bombardment is to evolve the system such that the joint system-ancilla system is Gaussian passive. However, in the case that the system energy gap is larger than that of the ancilla, this result implies that the joint system becomes explicitly Gaussian non-passive! The energetics of the bombardment steady state therefore depend strongly on the ordering of system and ancilla frequencies. This connection warrants further investigation.
The above inequalities, \eqref{Frob2Det}, \eqref{NuInequality} and \eqref{BetaInequality}, are saturated (i.e., we have maximal cooling) only for the following two-parameter family of interaction matrices
\bel{PerfectThermalizationGForm}
G=g_{1} \, \openone
+g_{w} \, \omega
=\begin{pmatrix}
g_{1} & g_{w}\\
-g_{w} & g_{1}
\end{pmatrix}
\end{equation}
whose associated Hamiltonians are
\begin{equation}
\hat{H}_\text{I}
=g_1(\hat{q}_S\hat{q}_A+\hat{p}_S\hat{p}_A)+g_w(\hat{q}_S\hat{p}_A-\hat{p}_S\hat{q}_A).
\end{equation}
Written in terms of the system and ancillae creation and annihilation operators the maximally cooling interaction Hamiltonians are
\bel{MaxCoolaForm}
\hat{H}_\text{I}
=\begin{pmatrix}
\hat{a}_\text{S} & \hat{a}_\text{S}^\dagger
\end{pmatrix}
\begin{pmatrix}
0 & g\\
g^* & 0
\end{pmatrix}
\begin{pmatrix}
\hat{a}_\text{A} \\ \hat{a}_\text{A}^\dagger
\end{pmatrix}.
\end{equation}
where $g=g_1+\mathrm{i} g_w$. Notice that these are exactly the interaction Hamiltonians that result from dropping all the $\hat{a}_\text{S} \, \hat{a}_\text{A}$ and $\hat{a}_\text{S}^\dagger \, \hat{a}_\text{A}^\dagger$ terms as one does in the rotating wave approximation. Thus taking the rotating wave approximation can have significant phenomenological effects in rapid repeated interaction scenarios. For instance $H_\text{I}=\lambda \, \hat{q}_S \, \hat{q}_A$ does not thermalize (since it has $\text{det}(G)=0$) but under the rotating wave approximation it causes maximal cooling.
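To make this last statement explicit: writing \mbox{$\hat{q}_\text{S}\hat{q}_\text{A}=\frac{1}{2}(\hat{a}_\text{S}+\hat{a}_\text{S}^\dagger)(\hat{a}_\text{A}+\hat{a}_\text{A}^\dagger)$} and dropping the counter-rotating terms $\hat{a}_\text{S}\hat{a}_\text{A}$ and $\hat{a}_\text{S}^\dagger\hat{a}_\text{A}^\dagger$ leaves $\frac{\lambda}{2}(\hat{a}_\text{S}\hat{a}_\text{A}^\dagger+\hat{a}_\text{S}^\dagger\hat{a}_\text{A})$, which is of the form \eqref{MaxCoolaForm} with $g=\lambda/2$. In phase space this corresponds to $G=\frac{\lambda}{2}\openone_2$, which has $\text{det}(G)=\lambda^2/4>0$ and lies in the family \eqref{PerfectThermalizationGForm}.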
\\~\\
In order to see why the interaction Hamiltonians given by \eqref{MaxCoolaForm} cause the system to equilibrate with its environment it is useful to look at their effect on definite number states. For instance
\begin{align}
\hat{H}_\text{I}\ket{n_S, \, n_A}
\nonumber
&=\big(
g \, \hat{a}_S\hat{a}_A^\dagger
+g^* \, \hat{a}_S^\dagger\hat{a}_A
\big)\ket{n_S, \, n_A}\\
&=g \, \sqrt{n_S}\sqrt{n_A+1}\ket{n_S-1, \, n_A+1}\\
&+g^* \, \sqrt{n_S+1}\sqrt{n_A}\ket{n_S+1, \, n_A-1}
\end{align}
such that the effect of this Hamiltonian is a superposition of either transferring an excitation from S to A or vice versa. In general, these possibilities do not have the same amplitude. If $n_S>n_A$ then
\begin{equation}
\vert g \, \sqrt{n_S}\sqrt{n_A+1}\vert >
\vert g^* \, \sqrt{n_S+1}\sqrt{n_A}\vert
\end{equation}
such that the amplitude of an excitation being transferred from S to A is larger. Likewise if $n_A>n_S$ then the amplitude of an excitation to be transferred from A to S is larger. Thus this coupling will tend to transfer excitations from the more excited system to the less excited one. As we saw above this ultimately leads to an equilibrium of excitation profiles, $\nu_S=\nu_A$. Note that this is not a thermal equilibrium.
\\~\\
On the other hand, the part of the Hamiltonian
\begin{equation}
\hat{H}_\text{I}
=h \, \hat{a}_S^\dagger\hat{a}_A^\dagger
+h^* \, \hat{a}_S\hat{a}_A
\end{equation}
that is eliminated by the rotating wave approximation does not lead to equilibration. Its effect on the definite number state is
\begin{align}
\hat{H}_\text{I}\ket{n_S, \, n_A}
&=\big(
h \, \hat{a}_S^\dagger\hat{a}_A^\dagger
+h^* \, \hat{a}_S\hat{a}_A
\big)\ket{n_S, \, n_A}\\
\nonumber
&=h \, \sqrt{n_S+1}\sqrt{n_A+1}\ket{n_S+1, \, n_A+1}\\
\nonumber
&+h^* \, \sqrt{n_S}\sqrt{n_A}\ket{n_S-1, \, n_A-1}.
\end{align}
That is, it produces a superposition of both oscillators becoming more excited and both becoming less excited. Notice, however, that for every $n_S$ and $n_A$,
\begin{equation}
\vert h \, \sqrt{n_S+1}\sqrt{n_A+1}\vert >
\vert h^* \, \sqrt{n_S}\sqrt{n_A}\vert
\end{equation}
such that joint excitation has a larger amplitude than joint de-excitation. This causes the system to become more and more excited.
\\~\\
Given a general quadratic interaction Hamiltonian
\begin{align}
\hat{H}_\text{I}
=g \, \hat{a}_S\hat{a}_A^\dagger
+g^* \, \hat{a}_S^\dagger\hat{a}_A
+h \, \hat{a}_S^\dagger\hat{a}_A^\dagger
+h^* \, \hat{a}_S\hat{a}_A,
\end{align}
if $\vert h\vert>\vert g\vert$ the system does not equilibrate. However if $\vert g\vert>\vert h\vert$ then the system equilibrates to have
\begin{equation}
\nu_S(\infty)
=\frac{\text{Tr}(G^\intercal G)}{2\, \text{det}(G)}\nu_A
=\frac{\vert g \vert^2+\vert h \vert^2}{\vert g \vert^2-\vert h \vert^2}\nu_A.
\end{equation}
The final state of the system is determined by a competition between these equilibrating and exciting effects.
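For reference, the phase-space form of this general coupling (a short computation using $\hat{q}=(\hat{a}+\hat{a}^\dagger)/\sqrt{2}$ and $\hat{p}=\mathrm{i}(\hat{a}^\dagger-\hat{a})/\sqrt{2}$, with $g=g_1+\mathrm{i}g_w$ and $h=h_1+\mathrm{i}h_w$) is
\begin{equation}
G=\begin{pmatrix}
g_1+h_1 & g_w+h_w\\
-g_w+h_w & g_1-h_1
\end{pmatrix},
\end{equation}
so that $\text{det}(G)=\vert g\vert^2-\vert h\vert^2$ and $\text{Tr}(G^\intercal G)=2(\vert g\vert^2+\vert h\vert^2)$, consistent with both the equilibration condition $\vert g\vert>\vert h\vert$ and the final temperature quoted above.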
\begin{comment}
Ignoring the trivial case where $G=0$, since $\det{G}=0$ we know $G$ must be rank 1, that is,
\begin{equation}
G=(g_{x,S},g_{p,S})\otimes(g_{x,A},g_{p,A})^\intercal.
\end{equation}
By choosing our coordinates on the system and ancilla phase space appropriately we can bring $G$ to be an $XX$ coupling,
\begin{equation}
G=g
\begin{pmatrix}
1 & 0\\
0 & 0
\end{pmatrix}.
\end{equation}
Note that we could equivalently choose to rewrite $G$ as an $XP$ coupling.
These cover most of the standardly assumed interaction, $XX$, $XP$, $PP$.
\end{comment}
\begin{comment}
Consider a bound pair of harmonic oscillators (our system) bombarded by a gas of thermal harmonic oscillators.
In particular, we take the system to have free Hamiltonian,
\begin{equation}
F_S=\omega_{0,S}
\begin{pmatrix}
\openone_2 & 0\\
0 & \openone_2\\
\end{pmatrix},
\quad\quad
\bm{\alpha}_S=0.
\end{equation}
Notice that the two parts of the system are uncoupled. We take the ancillae to have free Hamiltonian,
\begin{equation}
F_A=\omega_{0,A}\openone_2,
\quad\quad
\bm{\alpha}_A=0.
\end{equation}
Moreover, the ancillae are taken to be in the thermal state,
\begin{equation}
\sigma_A=\nu_A\openone_2,
\quad\quad
\bm{X}_A=0
\end{equation}
with $\nu_A\geq1$.
Finally, we choose for each part of our system to interact with the constituents of the gas via the maximally cooling Hamiltonian identified in the previous example,
\begin{equation}
G_{12A}=\begin{pmatrix}
G_{1A}\\
G_{2A}\\
\end{pmatrix};
\qquad
G_{1A}
=G_{2A}
=g
\begin{pmatrix}
1 & 0\\
0 & 1\\
\end{pmatrix}.
\end{equation}
The zeroth order dynamics of the system are just the free system dynamics,
\begin{align}
A_0&=\omega_{0,S}
\begin{pmatrix}
\openone_2 & 0\\
0 & \openone_2
\end{pmatrix}\\
\bm{b}_0&=0\\
C_0&=0.
\end{align}
It is now convenient to identify the part of the covariance matrix which is fixed under the free dynamics. Taking the covariance matrix to be written as,
\begin{equation}
\sigma(t)=
\begin{pmatrix}
\sigma_1(t) & \gamma(t)\\
\gamma^\intercal(t) & \sigma_2(t)
\end{pmatrix}
\end{equation}
we find that under the free dynamics
\begin{align}\label{2QHOFreeDynamics}
\sigma_1'(t)&=\omega\sigma_1(t)-\sigma_1(t)\omega\\
\sigma_2'(t)&=\omega\sigma_2(t)-\sigma_2(t)\omega\\
\gamma'(t)&=\omega\gamma(t)-\gamma(t)\omega.
\end{align}
Thus the part of the covariance matrix which is fixed under the free dynamics is built out of subblocks which commute with $\omega$, namely $\openone_2$ and $\omega$ itself. The reduced states $\sigma_1$ and $\sigma_2$ must be symmetric and are therefore proportional to the identity and thus thermal.
Thus the fixed states of the free Hamiltonian form a four parameter family,
\begin{align}
\sigma_1&=(\bar{\nu}_S+\Delta\nu_S/2) \, \openone_2\\
\sigma_2&=(\bar{\nu}_S-\Delta\nu_S/2) \, \openone_2\\
\gamma&=\gamma_1 \, \openone_2+\gamma_w \, \omega
\end{align}
where $\bar{\nu}_S$ is the average temperature of the system, $\Delta\nu_S$ is the temperature spread of the system, and $\gamma_1$ and $\gamma_w$ capture the correlations between the two parts of the system.
The rest of the covariance matrix is built of $X$ and $Z$ subblocks. Noting that $\omega X-X\omega=2Z$ and $\omega Z-Z\omega=-2X$, we can see from equations \eqref{2QHOFreeDynamics} that these two parts of the covariance matrix are independent under the free Hamiltonian.
Next we consider the first order dynamics. Calculating the first order dynamics we have,
\begin{equation}
A_1=\frac{1}{2}
\text{det}(G_1)
\begin{pmatrix}
\omega & \omega\\
\omega & \omega
\end{pmatrix}
\end{equation}
and noise
\begin{equation}
C_1=\nu_A
\begin{pmatrix}
\omega G_1G_1^\intercal\omega^\intercal
& \omega G_1G_1^\intercal\omega^\intercal\\
\omega G_1G_1^\intercal\omega^\intercal
& \omega G_1G_1^\intercal\omega^\intercal\\
\end{pmatrix}.
\end{equation}
Noting that
\begin{equation}
\Omega_S A_1=\frac{1}{2}
\text{det}(G_1)
\begin{pmatrix}
-\openone_2 & -\openone_2\\
-\openone_2 & - \openone_2
\end{pmatrix}
\end{equation}
we once again we see the free stationary states, and the rest of the state are dynamically independent. Expand on this. We can thus restrict our attention to the free stationary states. The relevant part of the noise is
\begin{equation}
C_{1,\text{thermal}}=\frac{\nu_A}{2}
\text{Tr}(G_1G_1^\intercal)
\begin{pmatrix}
\openone_2 & \openone_2\\
\openone_2 & \openone_2
\end{pmatrix}.
\end{equation}
Moreover, we see that the first order dynamics generate dynamics within the fixed space of the zeroth order dynamics. Evolving under the first order dynamics gives,
\begin{align}
\bar{\nu}_S'(t)&=-\delta t \ \text{det}(G_1)(\bar{\nu}_S(t)+\gamma_1(t)-\tilde{\nu}_A)\\
\Delta\nu_S'(t)&=-\delta t \ \text{det}(G_1) \ \Delta\nu_S(t)\\
\gamma_1'(t)&=-\delta t \ \text{det}(G_1)(\gamma_1(t)+\bar{\nu}_S(t)-\tilde{\nu}_A)\\
\gamma_w'(t)&=-\delta t \ \text{det}(G_1) \ \gamma_w(t)
\end{align}
where
\begin{equation}
\tilde{\nu}_A=\frac{\text{Tr}(G_1G_1^\intercal)}{2 \, \text{det}(G_1)}\nu_A
\end{equation}
is the effective temperature of the bath. Note that as discussed earlier $\tilde{\nu}_A\geq\nu_A$, with equality only when $G_1$ has the form \eqref{PerfectThermalizationGForm}.
In order for these dynamics to converge we need $\text{det}(G_1)>0$. In which case we can immediately see that $\Delta\nu$ and $\gamma_w$ are exponentially suppressed at a rate $\Gamma=\delta t \, \text{det}(G_1)$ as,
\begin{align}
\Delta\nu_S(t)&=\Delta\nu_S(0)\exp(-\Gamma \ t)\\
\gamma_w(t)&=\gamma_w(0)\exp(-\Gamma \ t)
\end{align}
The coupled equations yield,
\begin{align}
\bar{\nu}_S(t)&=\frac{1}{2}\big(\bar{\nu}_S(0)-\gamma_1(0)+\tilde{\nu}_A\big)\\
&+\frac{\exp(-2\Gamma \ t)}{2}\big(\gamma_1(0)+\bar{\nu}_S(0)-\tilde{\nu}_A\big)\\
\gamma_1(t)&=\frac{1}{2}\big(\gamma_1(0)-\bar{\nu}_S(0)+\tilde{\nu}_A\big)\\
&+\frac{\exp(-2\Gamma \ t)}{2}\big(\gamma_1(0)+\bar{\nu}_S(0)-\tilde{\nu}_A\big).
\end{align}
Thus we see that the average temperature and the correlations are exponentially driven towards
\begin{align}
\bar{\nu}_S(\infty)&=\frac{1}{2}\big(\bar{\nu}_S(0)-\gamma(0)+\tilde{\nu}_A\big)\\
\gamma_1(\infty)&=\frac{1}{2}\big(\gamma(0)-\bar{\nu}_S(0)+\tilde{\nu}_A\big)
\end{align}
at a rate $2\Gamma$.
In the case where the parts of the system start off uncorrelated we see that the system averages its initial temperature with the effective temperature of the bath, developing correlations proportional to the temperature difference. The cooling process is halted early due to a build up of correlations. If the correlations are purged and then cooling recommences the termperature will lower again. Even after that we will only be at the effective termperature. Picking the right interaction gets us to the real temperature. But this isn't actually the "real" termperature as it is just the excitation number. The real temperature depends on the ancilla free Hamiltonian scale which the system doesn't know about.
Thus we see thermalization is frought with excess temperature can be converted into correlations. Is the reverse process possible? Can we extract heat our of correlations?
In particular we will look at the interplay of temperature and single mode squeezing with the correlation (multi-mode squeezings) between the two parts of the system.
In addition to being natural, this scenario is also interesting as a method of producing dynamics with squeezed fixed point. Assuming that there is no squeezing in the system's free dynamics, we would need the squeezing effects to arise at higher orders. If our system is a single harmonic oscillator, the only possible squeezing effects are symplectic. Such effects must can only arise at even order, and thus must be at least second order. Thus the squeezing effect will be dominated by the first order noise. Thus in designing a squeezing protocol we must consider a system of two oscillators, making use of the richer variety of unsymplectic and multi-mode squeezings it offers.
\end{comment}
\section{Conclusion}
We have considered the dynamics induced in a generic Gaussian system when rapidly bombarded (at a frequency $1/\delta t$) by a series of Gaussian ancillae, a scenario we call \textit{Gaussian ancillary bombardment}. This scenario covers (as a particular case) a harmonic oscillator bombarded by a thermal bath of harmonic oscillators.
We have applied this formalism to the relevant case of thermalization by interaction with an environment, investigating the particular case of a harmonic oscillator bombarded by the constituents of a thermal bath of harmonic oscillators.
We have explicitly shown that the equilibration of systems continually bombarded by the micro-constituents of a thermal reservoir is much richer than just the naive expectation that `the system will evolve to reach the environment's temperature'. Namely, we analyzed in depth the effect that the coupling of the system to the ancillae composing the thermal bath has on the system's dynamics. In particular we have exactly characterized the couplings which cause the system to reach a thermal fixed point. Perhaps surprisingly we showed that most couplings will not even equilibrate (e.g. \mbox{$H_\text{I}\sim q_\text{S}\otimes q_\text{E}$}). Furthermore, we analyzed the effect that the nature of the system-environment coupling has on whether the system equilibrates or not and how the final temperature of the system depends on this coupling. Remarkably, we find that in the space of possible couplings only an extremely limited set of interactions causes the system to thermalize to the temperature of its environment. We relate such couplings to the rotating wave approximation.
We have found other more general results that apply to Gaussian ancillary bombardment. For example, we found that a sufficiently complicated interaction Hamiltonian is required to cause purification in this context. We also found that in a general Gaussian ancillary bombardment scenario the presence and strength of any dynamics implementing rotation, squeezing and amplification are entirely independent of the state of the ancillae constituting the environment (even outside perturbation theory).
Expanding the dynamics as a series in $\delta t$ we found that different types of dynamics are available at each order in the inverse of the interaction frequency with the following consequences: (a) at zeroth order the evolution is unitary as predicted by the general results in \cite{Layden:2015b}; (b) however, unlike in \cite{Layden:2015b} in the Gaussian regime only a limited range of dynamics (only displacements) can be induced in the system at zeroth order; (c) past zeroth order noise and displacement effects are generically present; (d) rotations, squeezing and amplification effects alternate between unitary and non-unitary at each order.
Our work paves the way to addressing open questions related to the thermodynamics of systems bombarded by environments, and how the energy and information flows between system and environments depend on the particular microscopic details of the interaction.
\acknowledgments
AK, EMM and RBM acknowledge support through the Discovery program of the Natural Sciences and Engineering Research Council of Canada (NSERC). DG acknowledges support by NSERC through the Vanier Scholarship. EGB also acknowledges support by NSERC through their Postdoctoral Fellowship.
\section{Introduction}
Entanglement is one of the most intriguing phenomena promised by quantum physics. As the ``spooky
action at a distance'' unveils itself with the development of quantum physics, entanglement also turns out to be beneficial to various applications in communication~\cite{gisin2002,xu2020,pirandola2020advances}, computation~\cite{Preskill2018quantumcomputingin} and sensing~\cite{giovannetti2011advances,degen2017quantum,braun2018rmp,pirandola2018advances,sidhu2020geometric}. In computation, entangling multiple qubits in a well-controlled manner enables the efficient computation of difficult problems~\cite{Shor_1997}. In communication, entanglement enables a higher information transmission rate~\cite{bennett1992,bennett2002entanglement} and provides unconditional security~\cite{Bennett20147,Ekert_1991}. In sensing, entanglement enables the Heisenberg scaling~\cite{zwierz2010general} in measuring an identical parameter among sensors~\cite{giovannetti2006} or even a global property of parameters distributed across different sensors~\cite{ge2017distributed,proctor2017multi,zhuang2018distributed,eldredge2018optimal,zhang2020distributed}.
Entanglement is fragile---noise and loss can easily destroy it, yet surprisingly its operational advantages can survive. For example, the rate of entanglement-assisted (EA) communication can be much larger than the un-assisted classical capacity, even for an entanglement-breaking channel that destroys the entanglement at the receiver side, as predicted by the theory works~\cite{bennett2002entanglement,shi2020practical} and recently demonstrated in an experiment~\cite{hao2020}. In quantum illumination (QI)~\cite{Lloyd2008,tan2008quantum}, the target's presence can be probed with a 6dB advantage in the error exponent, despite the original entanglement being entirely destroyed at the receiver side.
Many efforts have been devoted to make QI's theoretical advantage practically relevant. Sub-optimal receiver designs~\cite{Guha2009} that enable experimental demonstrations~\cite{zhang2013,zhang2015,Lopaeva_2013} and a structured optimal receiver design to saturate the quantum advantage~\cite{zhuang2017} have been proposed. To adapt to a radar detection scenario, extensions to Neyman-Pearson decision strategy~\cite{zhuang2017NP} and target fading scenarios~\cite{zhuang2017fading} have been achieved. As the large noise background required by QI's advantage exists only in microwave, demonstration in the microwave domain is also an overall goal~\cite{barzanjeh2015microwave,barzanjeh2020microwave,chang2019quantum}.
However, as pointed out in recent reviews~\cite{pirandola2018advances,shapiro2020quantum}, a major hurdle that prevents QI being eventually practically advantageous is its limitation to be only able to interrogate a single polarization-azimuth-elevation-range-Doppler resolution bin at a time. Despite recent theoretical advances in multiary channel discrimination~\cite{zhuang2020entanglement,zhuang2020ultimate} that bring hope to solve the problem, energetic considerations seem to show that no entanglement advantage can be obtained~\cite{karsa2020energetic} from that perspective.
In this letter, we resolve the \QZ{limitation} by proposing a quantum ranging protocol enhanced by Gaussian entanglement~\cite{Weedbrook2012}. First, to go beyond previous studies~\cite{zhuang2020entanglement}, we develop a precise model for the ranging task, where one sends out a signal pulse and continuously measures at the receiver side to determine the reflection from a target along the line of sight. As any ranging task has a finite precision requirement, we then formulate ranging as a multiary hypothesis testing problem, where each hypothesis corresponds to the target being in one of the $m\ge 2$ slices of the discretized range.
We show that by storing an idler entangled with the signal pulse, the target range can be determined with a
6dB advantage in the error exponent. Our results on quantum ranging also directly apply to a pulse-position modulated EA classical communication protocol, offering a rate much higher than the classical capacity in the low-photon number region. We design a practical receiver in the $m=2$ case that enables entanglement advantage and provide intuition for the optimal receiver in the general case.
{\em Model of ranging.---}
We consider the task of determining the distance between an observer and a target along the line-of-sight. Suppose the observer has a finite precision requirement $\Delta$, then we can divide the line-of-sight into $m\ge 2$ length-$\Delta$ slices, and model the problem of ranging as a hypothesis testing task between $m$ hypotheses (see Fig.~\ref{fig:schematic}). In hypothesis $h$, the target is present in the slice centered at the position $h\Delta$ from the origin.
To determine the range, one can send out a pulse, described by the mode annihilation operator $\hat{a}_S$, and wait for the reflected return from the target. The mean photon number of the mode $\braket{\hat{a}_S^\dagger \hat{a}_S}=N_S$ is constrained, either by the source brightness or by the need to avoid revealing the attempt at detection. To determine the time of arrival of the returned pulse, one needs to continuously collect light at the receiver side, obtaining the modes $\{\hat{a}_{\ell}\}_{\ell=1}^m$, each arriving at time $t_\ell=2\ell \Delta/c$.
In hypothesis $h$, the target is $h\Delta$ away from the observer, and the reflected mode $\hat{a}_{h}$ arrives at the observer after time $t_h=2h\Delta/c$. We can model the reflection by a bosonic thermal-loss channel ${\cal L}_{\kappa,N_B}$ described by the beamsplitter transform
\begin{equation}
\hat{a}_{h}=\sqrt{\kappa} \hat{a}_S+\sqrt{1-\kappa}\hat{e}_h,
\label{eq:ah}
\end{equation}
where $\kappa$ is the target reflectivity and the noise mode $\hat{e}_h$ is in a thermal state with $N_B/(1-\kappa)$ photons on average.
When the returned signal does not arrive at time $t_\ell$, the noise mode being collected $\hat{a}_{\ell \neq h}=\hat{e}_\ell$ is in a thermal state with mean photon number $N_B$.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{radar.pdf}
\caption{
Schematic of the entanglement-assisted ranging protocol. In (a), the signal mode $\hat{a}_S$ (blue) and the idler mode $\hat{a}_I$ (red) are initially entangled in a TMSV state. The signal is sent out to probe the range of a target with reflectivity $\kappa$. When the target is at distance $h\Delta$, the mode $\hat{a}_h$ highlighted in red collected at time $t_h=2h\Delta/c$ contains the reflection from the target embedded in noise, while the rest of the collected modes (orange) contain entirely noise. Subplot (b) shows the $m$ possible states in the hypothesis testing problem at the receiver side. In each case, the idler (blue) is correlated with the reflected mode (red).
\label{fig:schematic}
}
\end{figure}
Now the task of ranging has been reduced to the determination of the returned signal mode $\hat{a}_h$ among the entire set of collected modes $\{\hat{a}_{\ell}\}_{\ell=1}^m$.
In a classical scheme, the input state of $\hat{a}_S$ is assumed to have a positive P-function, as widely considered in the literature~\cite{tan2008quantum,pirandola2011quantum,zhuang2020entanglement}.
In an entangled scheme, besides sending over the energy-constrained signal mode $\hat{a}_S$, one can also keep a locally-stored idler $\hat{a}_I$ entangled with the signal as depicted in Fig.~\ref{fig:schematic}. Similar to the case of QI~\cite{nair2020fundamental,bradshaw2020optimal}, we consider the signal-idler pair in the two-mode squeezed vacuum (TMSV) state~(see Appendix A), which we expect to be optimal. As depicted in Fig.~\ref{fig:schematic}(b), the stored idler mode $\hat{a}_I$ will still be correlated with the signal mode $\hat{a}_h$ returned from the thermal-loss channel ${\cal L}_{\kappa,N_B}$ in hypothesis $h$, although the initial entanglement might be destroyed. The joint state of $\hat{a}_h$ and $\hat{a}_I$ has the covariance matrix
\begin{align}
&
{\mathbf{V}}_{SI}^\prime=
\left(
\begin{array}{cccc}
(2N_B+1) {\mathbf I}_2&2\sqrt{\kappa}C_p{\mathbf Z}_2\\
2\sqrt{\kappa}C_p{\mathbf Z}_2&(2N_S+1){\mathbf I}_2
\end{array}
\right),
\label{noisy_cov}
&
\end{align}
where $C_p=\sqrt{N_S\left(N_S+1\right)}$, ${\mathbf I}_2$ and ${\mathbf Z}_2$ are the Pauli matrices. \QZ{Here we have chosen the unit such that the vacuum noise is unity. As we have $\kappa\ll1$, we have omitted the brightness signature in the signal; Note that the results are similar even if we include this difference.}
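The covariance matrix \eqref{noisy_cov} is straightforward to reproduce numerically (a minimal sketch assuming \texttt{numpy}; parameter values are illustrative, and the quoted matrix additionally drops the $2\kappa N_S$ contribution to the return brightness, consistent with $\kappa\ll1$ as noted above):
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
Z2 = np.diag([1.0, -1.0])

N_S, N_B, kappa = 0.001, 3.0, 0.01       # illustrative values
C_p = np.sqrt(N_S * (N_S + 1.0))

# TMSV covariance of (signal, idler); vacuum variance is unity.
V_SI = np.block([[(2*N_S + 1) * I2, 2*C_p * Z2],
                 [2*C_p * Z2,       (2*N_S + 1) * I2]])

# Thermal-loss channel acting on the signal mode only: V -> S V S^T + N.
S = np.block([[np.sqrt(kappa) * I2, np.zeros((2, 2))],
              [np.zeros((2, 2)),    I2]])
N = np.block([[(2*N_B + 1 - kappa) * I2, np.zeros((2, 2))],
              [np.zeros((2, 2)),         np.zeros((2, 2))]])
V_out = S @ V_SI @ S.T + N

print(V_out)
# Return block (2*(kappa*N_S + N_B) + 1) I2 ~ (2*N_B + 1) I2 when kappa*N_S << N_B,
# cross block 2*sqrt(kappa)*C_p*Z2, idler block (2*N_S + 1) I2, as in Eq. (noisy_cov).
\end{verbatim}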
From the potential correlation depicted in Fig.~\ref{fig:schematic}(b), it is clear that ranging does not belong to the problem of quantum channel position finding (CPF) defined in Ref.~\cite{zhuang2020entanglement}: in ranging, it is unclear which pair of signal-idler is potentially correlated, while in CPF the pairing between potential correlated signals and idlers are clear.
{\em Hypothesis testing analyses.---}
The performance of the above hypothesis testing task is quantified by the error probability. To obtain the best performance, one can optimize the input state, under the total photon number constraint $N_S$, and the corresponding measurement. One can also utilize multiple degrees of freedom and send over $M$ modes $\hat{\bm a}_S\equiv \{\hat{a}_S^{(n)}\}_{n=1}^M$ in each pulse, therefore each portion of collected light also contains multiple modes $\hat{\bm a}_\ell \equiv\{\hat{a}_{\ell}^{(n)}\}_{n=1}^M$ for each time slice $ t_\ell$.
In the classical strategy, conditioned on the target range being $h\Delta$, the output state can be written as
\begin{equation}
\hat{\rho}_h^C=\left(\otimes_{\ell\neq h} \hat{\sigma}^{(B)}_{\hat{\bm a}_{\ell}}\right)\otimes \hat{\sigma}^{(T)}_{\hat{\bm a}_{h}},
\label{rho_C}
\end{equation}
where the background state $\hat{\sigma}^{(B)}$ consists of a product of $M$ thermal states, each with mean photon number $N_B$, and the target state $\hat{\sigma}^{(T)}$ is the $M$-mode returned signal embedded in the same thermal background, produced by the thermal-loss channel ${\cal L}_{\kappa,N_B}$ in Eq.~\eqref{eq:ah}.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{PE_all_k001.pdf}
\caption{
Error probability performance versus the number of modes $M$ of the quantum ranging protocol in comparison with the classical schemes. Signal brightness $N_S=0.001$ and target reflectivity \QZ{$\kappa=0.01$}. The number of range slices $m$ and the environmental noise $N_B$ are chosen as (a) $m=2, N_B=3$, (b) $m=3, N_B=1$ and (c) $m=50, N_B=20$. For the entangled strategy, we evaluate the asymptotically-tight quantum Chernoff bound (QCB) $P_{E,H}$ (orange dashed), and an exact upper bound $P_{E,UB}$ (red dashed); For the classical strategy, we evaluate the QCB $P_{C,H}$ (green dashed), an exact lower bound $P_{C,LB}$ (black dashed) and the coherent-state direct detection performance $P_{C,DD}$ (black solid). In (a), we also present the OPA-based receiver performance (red solid) in an entangled strategy and the numerical results of the classical Helstrom limit (purple star). In (a)(b), the performance of the pretty-good-measurement (PGM) for coherent-state inputs is also evaluated numerically in comparison to the classical QCB.
\label{fig:m}
}
\end{figure*}
In the entangled scheme, each signal mode $\hat{a}_S^{(n)}$ has an idler $\hat{a}_I^{(n)}$ stored locally, and the overall return-idler state is
\begin{equation}
\hat{\rho}_h^E=\left(\otimes_{\ell\neq h} \hat{\sigma}^{(B)}_{\hat{\bm a}_{\ell}}\right)\otimes \hat{\Sigma}^{(T)}_{\hat{\bm a}_{h}\hat{\bm a}_I},
\label{rho_E}
\end{equation}
where the correlated output state $\hat{\Sigma}^{(T)}$ has $M$ pairs of signal-idler, each in the state described by the covariance matrix ${\mathbf{V}}_{SI}^\prime$ in Eq.~\eqref{noisy_cov}.
Given the positive operator-valued measure (POVM) elements $\{\hat{\Pi}_\ell^{C/E}\}_{\ell=1}^m$ describing the measurement in the classical (C) or entangled (E) scheme, \QZ{with each element $\hat{\Pi}_\ell^{C/E}$ representing the decision that target range is $\ell \Delta$}, the error probability
$
P_{C/E}=1-\sum_{\ell=1}^m p_\ell \tr\left[\hat{\Pi}^{C/E}_\ell\hat{\rho}_\QZ{\ell}^{C/E}\right],
$
where the priors $p_\ell=1/m$ can be chosen uniform without loss of generality.
{\em Performance of classical schemes.---} Utilizing the convexity of the Helstrom limit and the quantum Chernoff bound (QCB)~\cite{li2016discriminating,nussbaum2011asymptotic,audenaert2007discriminating,Pirandola2008}, we can derive an asymptotically tight expression of the error probability limit of any classical strategy utilizing inputs with a positive P-function~(see Appendix B)
\begin{align}
P_{C,H}&\sim \frac{m-1}{m}\exp\left[-\frac{2M\kappa N_S}{1+2N_B+2\sqrt{N_B\left(1+N_B\right)}}\right]
\nonumber
\\
&\simeq
\frac{m-1}{m}\exp\left[-\frac{M\kappa N_S}{2N_B}\right], \mbox{when $N_B\gg1$},
\label{P_C_H_QCB}
\end{align}
which is \QZ{tight in the error exponent. Here the constant $(m-1)/m$ is chosen to match the low signal-to-noise ratio limit with random guess}. The limit is achieved by any coherent state input under the proper energy constraint.
Furthermore, despite ranging being different from CPF~\cite{zhuang2020entanglement}, as in the classical strategy no idlers are present, the ultimate lower bound of the error probability of classical CPF (Eq.~(10) in Ref.~\cite{zhuang2020entanglement}) also applies to classical ranging
$
P_{C,LB} = {(m-1)}/{2m}\times\exp\left[-{2MN_S\kappa}/{(1+2N_B)} \right],
$
however giving an error exponent twice larger than that of Eq.~\eqref{P_C_H_QCB}.
We can compare the asymptotic limit in Eq.~\eqref{P_C_H_QCB} with the error-probability performance of the single-mode coherent-state direct detection (DD) strategy~\cite{Helstrom_1976,cariolaro2010theory}
\begin{align}
P_{C,DD}=&\frac{1}{m}\sum_{k=2}^m (-1)^k C_m^k \cross \nonumber \\
& \exp\left[-\frac{(1-v)(1-v^{k-1})\kappa MN_S}{1-v^k}\right],
\label{P_CI_DD}
\end{align}
where $v=N_B/(N_B+1)$ and $C_m^k$ is the binomial coefficient (number of combinations of $k$ items out of $m$). In the high-noise $N_B\gg1$ and large number of modes $M\gg1$ limit,
$
P_{C,DD}\sim\exp\left(-M\kappa N_S/2N_B\right)
$.
We see that coherent-state DD is the asymptotic optimal classical strategy \QZ{in terms of the error exponent}.
We evaluate $P_{C,H}$ (green dashed), $P_{C,LB}$ (black dashed) and the exact version of $P_{C,DD}$ (black solid) in Fig.~\ref{fig:m} for various parameters. Indeed, we see that $P_{C,DD}$ collapses with $P_{C,H}$ for $m=2$ (subplot (a)) and asymptotically agrees with $P_{C,H}$ even for $m>2$ (subplots (b)(c)). We also numerically evaluate the Helstrom limit in the $m=2$ case, and find that $P_{C,H}$ indeed provides the correct scaling. For $m>2$, numerical evaluation of the Helstrom limit is challenging, so we compare with the performance of the pretty-good measurement (PGM)~\cite{PGM1,PGM2,PGM3,zhuang2020entanglement}, which agrees well with the Helstrom limit in the $m=2$ case in Fig.~\ref{fig:m}(a). For the $m=3$ case, a good agreement between the PGM performance and $P_{C,H}$ can be seen. Therefore, we conclude that $P_{C,H}$ and $P_{C,DD}$ well characterize the classical performance limit.
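The agreement between the exact direct-detection performance \eqref{P_CI_DD} and its high-noise asymptote can also be checked directly (a minimal sketch assuming \texttt{numpy}; the direct evaluation of the alternating sum is adequate here but becomes numerically delicate for very small error probabilities):
\begin{verbatim}
import numpy as np
from math import comb

def P_C_DD(m, M, N_S, N_B, kappa):
    """Exact coherent-state direct-detection error probability, Eq. (P_CI_DD)."""
    v = N_B / (N_B + 1.0)
    total = 0.0
    for k in range(2, m + 1):
        expo = (1 - v) * (1 - v**(k - 1)) * kappa * M * N_S / (1 - v**k)
        total += (-1)**k * comb(m, k) * np.exp(-expo)
    return total / m

N_S, N_B, kappa, m = 0.001, 20.0, 0.01, 50   # the m=50, N_B=20 case considered above
for M in [1e5, 1e6, 1e7]:
    exact = P_C_DD(m, M, N_S, N_B, kappa)
    asym = np.exp(-M * kappa * N_S / (2 * N_B))   # high-noise, many-mode asymptote
    print(M, exact, asym)   # agreement is at the level of the error exponent
\end{verbatim}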
{\em Entanglement advantage.---}In the EA ranging protocol, one has $M\gg1$ copies of the identical states in the final idler-return joint state $\hat{\rho}_h^E$ of Eq.~\eqref{rho_E}.
We can therefore apply the QCB for multiple hypotheses~\cite{li2016discriminating,nussbaum2011asymptotic} to obtain the asymptotic error probability. Due to the symmetry of the problem, the error exponent of the multiary hypothesis testing problem is equal to that of discrimination between two three-mode zero-mean Gaussian states with the covariance matrices~(see Appendix B)
\begin{align}
&
{\mathbf{V}}_{12I}^{(1)}=
\left(
\begin{array}{cccc}
(2N_B+1) {\mathbf I}_2&\bm 0&2\sqrt{\kappa}C_p{\mathbf Z}_2\\
\bm 0&(2N_B+1) {\mathbf I}_2&\bm 0\\
2\sqrt{\kappa}C_p{\mathbf Z}_2&\bm 0&(2N_S+1){\mathbf I}_2
\end{array}
\right),
\nonumber
&
\\
&
{\mathbf{V}}_{12I}^{(2)}=
\left(
\begin{array}{cccc}
(2N_B+1) {\mathbf I}_2&\bm 0&\bm 0\\
\bm 0&(2N_B+1) {\mathbf I}_2&2\sqrt{\kappa}C_p{\mathbf Z}_2\\
\bm 0&2\sqrt{\kappa}C_p{\mathbf Z}_2&(2N_S+1){\mathbf I}_2
\end{array}
\right).
\label{noisy_cov_3mode}
&
\end{align}
The error exponent can be analytically calculated~\cite{Pirandola2008}, leading to the asymptotic formula for the Helstrom limit when $N_B\gg1, N_S\ll1$ and $M\gg1$ as
\begin{equation}
P_{E,H} \sim \frac{m-1}{m}\exp\left[-\frac{2M\kappa N_S}{N_B}\right],
\label{P_E_H_QCB}
\end{equation}
\QZ{which is tight in the error exponent. Here the constant $(m-1)/m$ is chosen to match the low signal-to-noise ratio limit with random guess}. Comparing with the optimal classical performance in Eq.~\eqref{P_C_H_QCB}, we see the EA case in Eq.~\eqref{P_E_H_QCB} has a factor of four (6dB) advantage in the error exponent, analogous to the entanglement benefit in QI.
We can also derive a PGM~\cite{PGM1,PGM2,PGM3,zhuang2020entanglement}-based upper bound for the Helstrom limit~(see Appendix B)
\begin{align}
P_{E,H} \le P_{E,UB}& = (m-1)F^M\left({\mathbf{V}}_{12I}^{(1)},{\mathbf{V}}_{12I}^{(2)}\right)
\\
&\simeq (m-1)\exp\left(-\frac{M\kappa N_S}{N_B}\right),
\label{UB2}
\end{align}
where $F\left(\bm V_1,\bm V_2\right)$ is the fidelity between two zero-mean Gaussian states with the covariance matrices $\bm V_1$ and $\bm V_2$~\cite{banchi2015}.
Indeed, we see the error-exponent of $P_{E,UB}$ is a factor of two worse than that of $P_{E,H}$. However, compared with the classical performances in Eqs.~\eqref{P_C_H_QCB} and~\eqref{P_CI_DD}, we still see a factor of 2 (3dB) advantage in the error exponent.
In Fig.~\ref{fig:m}, we confirm the advantage from entanglement. The entangled upper bound $P_{E,UB}$ (red dashed) offers rigorous advantages, as well as a scaling advantage in the error exponent, while the asymptotic performance $P_{E,H}$ (orange dashed) provides further advantages (the full expression is utilized for evaluation). \QZ{The QCB results (orange dashed for the entangled case and green dashed for the classical case) are tight in the error exponent, showing a rigorous 6dB advantage from entanglement. However, these bounds can be non-tight up to a constant factor independent of $M$, and thus do not show the exact amount of advantage.} These results confirm the quantum advantage of entanglement in the ranging task, assuming an optimal receiver that jointly measures the entire quantum system of the collected light and the idler. \QZ{Note that such advantages only exist in the $N_B\gg1$ limit, and disappear when the noise is small.}
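The size of the gap can also be read off directly from the asymptotic expressions quoted above (a minimal numerical sketch assuming \texttt{numpy}; parameters as in the $m=50$, $N_B=20$ case):
\begin{verbatim}
import numpy as np

N_S, N_B, kappa, m = 0.001, 20.0, 0.01, 50

# Per-mode error exponents of the asymptotic expressions in the text.
exp_C  = 2*kappa*N_S / (1 + 2*N_B + 2*np.sqrt(N_B*(N_B + 1)))  # classical QCB
exp_E  = 2*kappa*N_S / N_B                                     # entangled QCB
exp_UB = kappa*N_S / N_B                                       # PGM-based upper bound

print(exp_E / exp_C, 10*np.log10(exp_E/exp_C))   # -> ~4, i.e. ~6 dB, for N_B >> 1
print(exp_UB / exp_C)                            # -> ~2, i.e. ~3 dB

for M in [1e6, 1e7, 1e8]:
    P_C = (m - 1)/m * np.exp(-M*exp_C)
    P_E = (m - 1)/m * np.exp(-M*exp_E)
    print(M, P_C, P_E)
\end{verbatim}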
\begin{figure}[t]
\centering
\includegraphics[width=0.475\textwidth]{CE_PPM.pdf}
\caption{
(a) Schematic of the entanglement-assisted communication protocol. (b) Information rate of the entanglement-assisted communication protocol, with $\kappa=0.1$ and $N_B=20$. We compare the optimized rates $R^\star_M$ for the fixed number of repetition modes $M=10^2, 10^3, 10^4$ (red, purple, blue) and the entanglement-assisted classical capacity (black), versus the signal average brightness $n_S$.
\label{fig:schematic_EA_com}
}
\end{figure}
{\em Entanglement-assisted communication.---} Our quantum ranging results can be applied to the design of pulse-position modulated EA communication, where entanglement is pre-shared between a sender and a receiver. As shown in Fig.~\ref{fig:schematic_EA_com}(a), to send the classical message $h\in[1,m]$, the sender chooses $m$ possible time slices to send the signal part $\hat{a}_S$ of the entangled TMSV to the receiver, who collects light continuously to obtain all modes $\{\hat{a}_\ell\}_{\ell=1}^m$ corresponding to the $m$ time slices. The receiver then decodes the classical message $\tilde{h}$ by determining which time slice contains the signal from the sender, via measuring the collected modes $\{\hat{a}_\ell\}_{\ell=1}^m$ jointly with the idler $\hat{a}_I$.
In the ranging protocol of Fig.~\ref{fig:schematic}, suppose we attribute all the loss and noise to the receiver side; then the target's range plays the role of the sender's modulation device, and the path from the observer to the target acts as the ideal noiseless channel for entanglement pre-sharing~(see Appendix D). The same result of Eq.~\eqref{P_E_H_QCB} gives the asymptotic optimal decoding error probability, leading to an information rate per mode of $R_{m,M}=I\left(P_{E,H}\right)/(Mm)$, where the mutual information is
\begin{equation}
I\left(p\right)=\log_2\left(m\right)+\left[\left(1-p\right)\log_2\left(1-p\right)+p\log_2\left(\frac{p}{m-1}\right) \right].
\end{equation}
We choose the signal total mean photon number $N_S=mn_S$, so that $n_S$ photons are sent per mode \QZ{per time slice} on average. To achieve the best rate, we optimize over the number of time slices $m$ to obtain the optimal rate of EA communication
$
R_{M}^\star=\max_{m} R_{m,M}.
$
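As a rough numerical illustration (hypothetical parameter values; the asymptotic error probability of Eq.~\eqref{P_E_H_QCB} is assumed throughout), the optimization over $m$ can be carried out as follows:
\begin{verbatim}
import numpy as np

def mutual_information(p, m):
    """I(p) of the m-ary channel defined above, in bits; p is the decoding error."""
    if p <= 0:
        return np.log2(m)
    return np.log2(m) + (1 - p) * np.log2(1 - p) + p * np.log2(p / (m - 1))

def optimal_rate(n_S, M, kappa=0.1, N_B=20.0, m_max=2000):
    """R*_M = max_m I(P_{E,H}) / (M m), with total signal photon number N_S = m * n_S."""
    best = 0.0
    for m in range(2, m_max + 1):
        N_S = m * n_S
        p = (m - 1) / m * np.exp(-2 * M * kappa * N_S / N_B)
        best = max(best, mutual_information(p, m) / (M * m))
    return best

print(optimal_rate(n_S=1e-5, M=1e4))   # bits per mode, illustrative numbers only
\end{verbatim}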
As benchmarks, we calculate the corresponding classical capacity~\cite{hausladen1996classical,schumacher1997sending,holevo1998capacity,giovannetti2014ultimate} $C(\mathcal{L}^{\kappa,N_B})$, with the mean photon number constrained to $n_S$. As Eq.~\eqref{P_E_H_QCB} is asymptotically tight, we consider $M\gg1$ and plot the ratio of the information rate over $C(\mathcal{L}^{\kappa,N_B})$ in Fig.~\ref{fig:schematic_EA_com}(b), and indeed see a great advantage in the low-brightness region. In fact, when compared with the EA capacity $C_E(\mathcal{L}^{\kappa,N_B})$ (black solid) that upper bounds all possible EA communication rates, we see that the rate $R_{M}^\star$ has the scaling $R_{M}^\star/C(\mathcal{L}^{\kappa,N_B})\sim \ln(1/N_S)$ versus the signal power, identical to the scaling of $C_E(\mathcal{L}^{\kappa,N_B})$~\cite{shi2020practical}. Therefore, the receiver design for the ranging protocol would also offer a great advantage in EA communication \QZ{in the low-rate region}.
{\em Receiver design.---} Here we provide a practical receiver design for the ranging problem when $m=2$, based on the optical parametric amplifier (OPA)~\cite{Guha2009} (see Appendix C). In this case of binary range discrimination, there are two groups of collected modes $\{\hat{a}_1^{(n)}\}_{n=1}^M$ and $\{\hat{a}_2^{(n)}\}_{n=1}^M$ corresponding to the two time slices. One can apply a phase shift on each block of modes and then a joint Gaussian operation with the idler modes to obtain
\begin{equation}
\hat{a}_I^{(n)\prime} =\sqrt{G}\hat{a}_I^{(n)} + \sqrt{\frac{G-1}{2}}\sum_{\ell=1}^2 e^{i\ell \pi}\hat{a}_\ell^{(n)\dagger}.
\end{equation}
To determine the target's range, we measure the total photon number of $\{\hat{a}_I^{(n)\prime}\}_{n=1}^M$, with each mode's mean photon number
$
\braket{\hat{a}_I^{(n)\prime\dagger} \hat{a}_I^{(n)\prime}}=G N_S+(G-1)(N_B+1)+2(-1)^hC_p\sqrt{{G(G-1)\kappa}/{2}}
$
conditioned on hypothesis $h$. Therefore, the hypothesis can be determined from a threshold decision on the photon count. Choosing the optimal gain $G\sim1+2\sqrt{N_S}/N_B$, the error probability becomes
$
P_{E,OPA}\simeq \exp\left[{-M\kappa N_S/N_B}\right]/2
$
when $N_B\gg1, N_S\ll1$, providing a factor-of-two (3dB) advantage in the error exponent over the classical limit in Eq.~\eqref{P_C_H_QCB}. In Fig.~\ref{fig:m}(a), we plot the receiver performance (red solid), confirming the error-exponent advantage.
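The numbers behind this estimate are easy to reproduce; the sketch below evaluates the conditional per-mode photon numbers and the approximate error probability, assuming (as is standard for a TMSV, though not restated here) a phase-sensitive cross correlation $C_p=\sqrt{N_S(N_S+1)}$ and hypothetical parameter values:
\begin{verbatim}
import numpy as np

kappa, N_S, N_B, M = 0.01, 1e-3, 20.0, 1e6   # hypothetical values
G = 1 + 2 * np.sqrt(N_S) / N_B               # optimal OPA gain quoted in the text
C_p = np.sqrt(N_S * (N_S + 1))               # assumed TMSV cross correlation

def mean_photons(h):
    """Per-mode mean photon number of the amplified idler under hypothesis h."""
    return (G * N_S + (G - 1) * (N_B + 1)
            + 2 * (-1) ** h * C_p * np.sqrt(G * (G - 1) * kappa / 2))

print("means:", mean_photons(1), mean_photons(2))
print("approx. P_E,OPA:", np.exp(-M * kappa * N_S / N_B) / 2)
\end{verbatim}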
{\em Discussions.---} We propose a quantum ranging protocol enabled by entanglement to provide a 6dB advantage in the error exponent of determining the range among an arbitrary number of possibilities. To enable rigorous analyses, we have formulated the ranging problem as a hypothesis testing problem; the parameter estimation version would require a continuous-time treatment, which we defer to future work. The receiver design in the general case is an open problem. One potential approach is to design a non-demolition version of the sum-frequency-generation receiver~\cite{zhuang2017entanglement}. The intuition is that a non-demolition measurement would allow one to use the same idler to interact with all collected modes until the correlated mode is located.
\begin{acknowledgements}
Q.Z. acknowledges the Defense Advanced Research Projects Agency (DARPA) under Young Faculty Award (YFA) Grant No. N660012014029 and Craig M. Berge Dean's Faculty Fellowship of University of Arizona. Q.Z. thanks Saikat Guha, Stefano Pirandola and Haowei Shi for discussions. \QZ{Q.Z. acknowledges Jeffrey Shapiro for valuable feedback.}
\end{acknowledgements}
|
1,314,259,995,584 | arxiv | \section{Introduction}
Surface plasmons, the collective oscillations of conduction electrons in metallic structures, allow us to confine light down to deep subwavelength volumes \cite{NH06}. Additionally, they couple strongly to electromagnetic fields \cite{HLC11}. Because of these properties, plasmons are excellent tools to engineer nanoscale devices for manipulating optical signals, without the limitation imposed by diffraction in far-field setups. This has triggered a number of applications in areas as diverse as ultrasensitive biosensing \cite{paper156}, improved photovoltaics \cite{AP10}, plasmon enhanced photodetection \cite{KSN11}, and photothermal cancer therapy \cite{NHH04}. The design of plasmonic structures with suitable spectral characteristics involves a careful choice of geometry and composition. In recent years, a vast amount of work has been devoted to producing nanostructures made of noble metals with controlled size and morphology, using in particular colloidal methods \cite{GPM08} and lithography \cite{SMC07}.
Despite these advances in the control over the {\it static} characteristics of plasmons, the dynamical modulation of their frequencies and spatial profiles remains elusive, particularly in the visible and near-infrared (vis-NIR) parts of the spectrum. In this context, slow, mild changes of the plasmon frequency have been produced by electrochemically injecting electrons in metal nanoparticles \cite{MPG06}, by electrically driving liquid crystals containing plasmonic particles \cite{CCC06_2}, and through controllable metamaterial designs \cite{BGK10,LZT12}. Magneto-optical modulation has also been explored to control plasmons in noble metal structures \cite{ACG13}. Hybrids of plasmonic and conductive oxides have been proposed \cite{FDA10,AAA11}, as well as colloids based on different materials \cite{CM14}. However, we still need to devise new methods to produce larger and faster control over plasmons, as required for nanoscale optical commutation and light modulation at high speeds.
Recently, the emergence of graphene \cite{CGP09} as a novel plasmonic material \cite{JBS09,paper176,GPN12} has opened up new paths towards the design of dynamically tunable plasmonic devices. Electrically doped graphene supports surface plasmons whose frequency can be efficiently varied by changing the level of doping \cite{paper196,FRA12,paper212}.
Consequently, the resulting modulation is intrinsically fast because it can be driven by charge-carrier injection using conventional electric gating technology. This promising material has been so far shown to support mid-infrared and lower-frequency plasmons \cite{JGH11,paper196,FRA12,paper212,BJS13}, while vis-NIR modes are being pursued by reducing the size of the structures \cite{paper214,paper215} and increasing the level of doping \cite{paper212}. The search for plasmon modulation in the vis-NIR is thus still ongoing, as these are spectral regions of utmost importance for sensing and optical signal processing technologies.
The origins of the excellent tunability of plasmons in graphene can be found in both the atomic thickness and the peculiar electronic structure of this material. The latter is characterized by a linear dispersion relation, which leads to a vanishing of the density of states at the Fermi level, so that a relatively small density of injected charge carriers produces substantial optical gaps in which collective plasmon modes emerge \cite{CGP09,paper176}. Although this unique feature cannot be easily transported to conventional plasmonic materials, such as gold, we can still mimic graphene plasmonics by going to atomically thin noble metals, whose optical response should be more susceptible to doping than traditional thicker layers. In particular, monolayer gold, the synthesis of which has been mastered for a long time in the context of surface science \cite{GB1981}, presents the advantage of having a plasma frequency compatible with the existence of plasmons in the vis-NIR \cite{JC1972}.
Here, we show that single-monolayer gold disks (SMGDs) with diameters of the order of $10\,$nm support surface plasmons with large cross-sections comparable to their geometrical areas. The frequencies of these excitations lie in the vis-NIR and can be efficiently modulated using attainable concentrations of doping charge carriers, which can be provided via electrical doping using for example backgating technology. We also analyze the optical response of periodic arrays of SMGDs, for which we predict an absorbance $\sim25\%$ for metal layer filling fractions $\sim40\%$.
\section{Results and Discussion}
\subsection{Electrically tunable optical response}
The system under study is depicted in Fig.\ 1a. We consider a gold nanodisk of diameter $D$, extracted from a single (111) atomic layer of gold. We take the thickness of the gold monolayer to be equal to the separation between (111) atomic planes in bulk gold ({\it i.e.}, $a_0/\sqrt{3}$, where $a_0=0.408\,$nm is the atomic lattice constant). Incidentally, our results are rather independent of the choice of disk thickness when this is small compared with the diameter, as long as the total valence charge is preserved (see Supplementary Fig.\ 6). As a first step in our analysis, we describe the optical response of a SMGD classically by modeling it as a thin disk described by a frequency-dependent homogeneous dielectric function $\epsilon(\omega)$. More precisely, we calculate the extinction cross-section $\sigma$ by solving Maxwell's equations using the boundary-element method (BEM) \cite{paper040}. Interestingly, for a diameter $D=20\,$nm, the cross section is dominated by a NIR plasmon at an energy $\sim1\,$eV, and it exceeds the geometrical area of the disk (see left part of Fig.\ 1b).
It is convenient to separate the contribution from s-band electrons in the dielectric function as a Lorentzian term,
\begin{equation}
\epsilon\left(\omega\right)= \epsilon_{\rm b} - \frac{\omega^2_{\rm p}}{\omega\left(\omega+{\rm i}\gamma\right)}, \label{Drude}
\end{equation}
where $\epsilon_{\rm b}$ accounts for the effect of {\it background} screening due to d-band electrons, $\hbar\omega_{\rm p}=9.06\,$eV is the classical plasmon energy associated with s valence electrons (see Methods), and $\hbar\gamma=71\,$meV is an inelastic width (we adopt this value of the damping throughout this work). As explained below, we introduce doping in the classical model by changing $\omega_{\rm p}$ in Eq.\ (1). In general, the description of the response of gold in the vis-NIR region including interband transitions requires using either experimental data \cite{JC1972} or a sophisticated multi-Lorentzian model \cite{RDE98} for $\epsilon(\omega)$, which yields an $\omega$-dependent background $\epsilon_{\rm b}=\epsilon(\omega)+\omega^2_{\rm p}/[\omega(\omega+{\rm i}\gamma)]$. However, we show in Fig.\ 1b that a simple Drude model for Eq.\ (1) (dashed curves), consisting of fixing $\epsilon_{\rm b}=9.5$ for all frequencies, produces a satisfactory level of accuracy at the observed relatively low disk-plasmon energies, compared with the results obtained from tabulated optical data \cite{JC1972} (solid curves). Additionally, as the Drude model (i.e., constant $\epsilon_{\rm b}$) provides a natural connection with the quantum-mechanical approach described below, we use it for disks in what follows.
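For reference, the Drude form of Eq.\ (1) with the parameters quoted above ($\epsilon_{\rm b}=9.5$, $\hbar\omega_{\rm p}=9.06\,$eV, $\hbar\gamma=71\,$meV) can be evaluated with a few lines of Python; this is only an illustrative sketch:
\begin{verbatim}
import numpy as np

# Minimal sketch of the Drude model of Eq. (1), energies in eV.
eps_b, hw_p, hgamma = 9.5, 9.06, 0.071

def epsilon(hw):
    """Drude permittivity evaluated at photon energy hw (eV)."""
    return eps_b - hw_p**2 / (hw * (hw + 1j * hgamma))

print(epsilon(1.0))   # near the D = 20 nm disk-plasmon energy
\end{verbatim}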
\begin{figure}
\begin{center}
\includegraphics[width=160mm,angle=0]{fig1_opt.pdf}
\caption{{\bf Optical response and electrical tunability of single-monolayer gold disks.} (a) We consider a single-monolayer gold disk (SMGD) of diameter $D$ carved from a single (111) atomic layer. The disk thickness is $a_0/\sqrt{3}$, where $a_0=0.408\,$nm is the atomic lattice constant. (b) Extinction cross-section of a $D=20\,$nm SMGD for different doping charge-carrier densities (see upper legend). A doping density of $13.8\times10^{13}\,$cm$^{-2}$ corresponds to a total of 440 additional charge carriers in the disk. For comparison, we also plot the cross section for a gold nanosphere of the same diameter and total doping charge, clearly showing an almost negligible degree of tunability. The particles are assumed to be homogeneously doped and described classically through the local dielectric function tabulated from measured optical data (solid curves). \cite{JC1972} Results obtained from a Drude dielectric function (Eq.\ (1)) are shown for comparison (broken curves). (c) Plasmon frequency shift relative to the width of the plasmon resonance for the disk (orange) and the sphere (green) of panel (b) as a function of doping density. The small scales indicate the potential at the disk/sphere surface for different disk doping densities.}\label{fig1}
\end{center}
\end{figure}
We consider next the effect of electrical doping. The spatial distribution of additional charge carriers depends on the doping configuration, as it can be for example homogeneous for disks connected to a non-absorbing gate (e.g., ITO) or inhomogeneous in self-standing charged disks, although the plasmon energies and spatial profiles are expected to be similar in both cases based upon our experience with graphene disk plasmons \cite{paper194}. For simplicity, we assume homogeneously doped disks in what follows. The additional doping charge density $n$ adds to the undoped s-band density $n_0=m_{\rm e}\omega_{\rm p}^2/(4\pi e^2)\approx1.4\times10^{15}\,$cm$^{-2}$, which is rather close to the s-band areal electron density in neutral monolayer gold, $4/\sqrt{3}a_0^2$. The doping charge is thus introduced by changing the bulk plasma frequency to $\omega_{\rm p}=\left[\left(4\pi e^2/m_{\rm e}\right)(n_0+n)\right]^{1/2}$ in Eq.\ (1). Now, the addition of a moderate amount of doping electrons ($\sim5-10\%$ of $n_0$) results in significant blue shifts and an increase in the strength of the plasmon resonance (\emph{cf.} purple and green curves of Fig.\ 1b). Obviously, the injection of similar amounts of holes produces the opposite effects (Fig.\ 1b, orange curve). The small thickness of the disk is a key factor in producing such dramatic modifications in the optical response using realistic doping densities. In fact, repeating this operation with a gold nanosphere of the same diameter, we also observe a prominent plasmon (Fig.\ 1b, $\sim2.5\,$eV region), but it remains unchanged when adding similar amounts of doping charges. In the sphere, the doping charges pile up in the outermost atomic layer \cite{LK1970_2}, but this produces the same extinction cross-section as if the charges were homogeneously distributed over its entire volume, and therefore, the change in bulk charge density is substantially reduced with respect to the disk.
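As a quick numerical check of the magnitude of this effect (using only the densities quoted above), the doping-induced rescaling of $\omega_{\rm p}$ can be evaluated as follows:
\begin{verbatim}
import numpy as np

# Doping enters the classical model by rescaling the plasma frequency:
# omega_p -> omega_p0 * sqrt((n0 + n)/n0), with n0 the undoped areal density.
n0 = 1.4e15      # undoped s-band areal density (cm^-2), from the text
hw_p0 = 9.06     # undoped plasma energy (eV)

def doped_plasma_energy(n):
    """Plasma energy (eV) for an additional areal carrier density n (cm^-2)."""
    return hw_p0 * np.sqrt((n0 + n) / n0)

for n in (-1.38e14, 0.0, 1.38e14):   # roughly +/- 10% of n0, as in Fig. 1b
    print(n, doped_plasma_energy(n))
\end{verbatim}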
Figure\ 1c compares the modulation of the nanodisk and the sphere. In particular, we plot the frequency shift normalized to the full width at half maximum (FWHM) for the plasmon resonance as a function of doping charge density. In contrast to the negligible tunability of the sphere, the disk allows shifts comparable to the FWHM to be electrically induced. Incidentally, the doping densities here considered produce realistic values of the electrostatic potential at the surface of these nanoparticles (Fig.\ 1c, small scales), indicating that they are compatible with currently available backgating technology \cite{paper212}.
\begin{figure}
\begin{center}
\includegraphics[width=160mm,angle=0]{fig2_opt.pdf}
\caption{{\bf Optical response of individual single-monolayer gold disks.} We plot the extinction cross-section normalized to the geometrical area for different diameters $D$, calculated using the quantum model (solid curves) and a classical description (dashed curves). Results obtained with and without inclusion of d-band screening are shown in (b) and (a), respectively.}\label{fig2}
\end{center}
\end{figure}
\subsection{Quantum-mechanical effects}
For nanoparticles of only a few nanometers in size, the above classical description fails to account for spatial dispersion and quantum confinement effects \cite{ZPN09,SHE12}, which generally require models based on a quantum-mechanical treatment of valence electrons and their interactions. Here, we use the random-phase approximation (RPA) to calculate the optical response of SMGDs (see Methods) within the electrostatic approximation, which should be rather accurate given the small sizes of the particles under consideration. This allows us to determine the validity of the classical approach and explore the response of nanodisks with smaller diameters.
We use particle-in-a-box states to describe independent s-band electrons. The cylindrical box has the same dimensions as in the classical calculations (see above) and it is surrounded by an infinite potential. The RPA susceptibility is then evaluated using these electron states to obtain the induced charge density, which in turn allows us to compute the optical extinction of the disk. Additionally, we model screening due to d-band electrons through an array of point dipoles placed at the atomic positions in the (111) layer and with polarizability adjusted to render an effective background permittivity $\epsilon_{\rm b}$ in the bulk material (see Methods). Apart from the relative position of these dipoles with respect to the disk center, our quantum description only depends on the same three parameters as the classical Drude theory (i.e., $\epsilon_{\rm b}$, $\omega_{\rm p}$, and $\gamma$).
A major assumption we are making is that $\gamma$ takes the same value as in the bulk metal. We use this as a reasonable estimate because s-band electrons are rather delocalized along directions parallel to the layer (i.e., similar to the bulk), while they are narrowly confined to the ground state along the transverse direction, so that plasmons result from in-plane motion. However, the actual value of $\gamma$ might depend on the detailed coupling of valence electrons to impurities and to the atomic lattice. Concerning d-band screening, our effective-dipole approach should provide a more realistic description than a homogeneous polarizable background. Although the discreteness of the dipole lattice can have strong effects in small islands, we find converged results for large islands, which are independent of the alignment of the dipole lattice relative to the disk center.
Figure\ 2 shows the extinction cross-section normalized to the disk area for SMGDs of different diameters ranging from $3$ to $15\,$nm. The main conclusions from this figure are as follows: (1) the extinction cross-sections are of the order of the disk area; (2) the plasmon energy increases with decreasing diameter $D$, exhibiting an approximate $\propto1/\sqrt{D}$ dependence, similar to what one finds in graphene nanodisks \cite{paper212}; (3) quantum calculations produce energies above those predicted by classical theory, as well as broader plasmon peaks, but the discrepancy between the two models decreases with increasing diameter; (4) in the absence of d-band screening (Fig.\ 2a, obtained by setting $\epsilon_{\rm b}=1$), both levels of description give rise to smooth plasmon peaks, in contrast to the quantum results obtained when d-band screening is switched on (Fig.\ 2b, with $\epsilon_{\rm b}=9.5$); (5) d-band screening also leads to a redshift of the plasmons, which is more pronounced at small sizes. Incidentally, the induced charge associated with the plasmon exhibits a dipolar profile dressed with radial oscillations mimicking Friedel oscillations, which are particularly intense for small diameters (see Supplementary Fig.\ 7).
Similar blue shifts with respect to classical local theory are also found in small noble metal particles \cite{KV95}, the origin of which is a combination of nonlocal and quantum effects, particularly due to the surface spill out of s electrons beyond the polarizable background of d-band electrons. In simple metals such as aluminum, the spill out produces smaller electron densities near the surface, and consequently, also smaller surface plasmon frequencies. In contrast, in noble metals, the spill out results in a weaker interaction with the localized d electrons, and thus, it leads to an increase in the observed frequency, which overcomes the redshift due to the smaller electron density. \cite{L93} We incorporate here the finite extension of s electrons across the normal direction, combined with the localization of the effective d-band dipoles, leading to similar blue shifts. Interestingly, our quantum model predicts splitting of the plasmon into multiple peaks for small disks when d-band screening is included (see for example the $D=3\,$nm spectrum in Fig.\ 2b). The presence of these peaks, which rapidly coalesce into a single plasmon resonance for $D>8\,$nm, is a manifestation of the discrete character of the interaction with d-band electrons, which is more pronounced for small $D$'s. The jumps observed in the FWHM share a similar origin. It should be noted that these effects could be sensitive to the exact form of the s-electron transversal wave function in the smallest islands under consideration, which would require a more atomistic analysis, based for example upon density-functional theory \cite{ORR02,LM14}. Likewise, the spectra for the smallest disks depend on the alignment of the d-band dipole lattice relative to the edges. In practice, 1D faceting of the edges becomes an important source of anisotropy, which can contribute to broaden the spectra for $D<5\,$nm.
\begin{figure}
\begin{center}
\includegraphics[width=160mm,angle=0]{fig3_opt.pdf}
\caption{{\bf Comparison of quantum and classical plasmon energies and widths.} Energy (a,b) and FWHM (c,d) of the plasmonic resonance of individual SMGDs as a function of disk diameter, calculated from quantum (green circles) and classical (orange triangles) models. Results obtained with and without inclusion of d-band screening are shown in (b,d) and (a,c), respectively. The dashed curves in (c,d) indicate the intrinsic broadening $\hbar\gamma=71\,$meV introduced in the Drude formula (Eq.\ (1)) and in the RPA susceptibility (Eq.\ (4)).}\label{fig3}
\end{center}
\end{figure}
The convergence of the quantum model to the classical description for increasing diameter is clearly observed in Fig.\ 3, which summarizes the plasmon energies and widths observed in the spectra of Fig.\ 2. Here, we define the FWHM as the frequency interval around the peak maximum that contains half of its area; this definition coincides with the standard FWHM for individual Lorentzian resonances, but it can be applied to multiple resonances as well to yield an overall width (in particular to the lower quantum-model spectra of Fig.\ 2b). Within the electrostatic limit under consideration, the FWHM predicted by the classical model is independent of diameter and equals the damping energy $\hbar\gamma=71\,$meV (see Eq.\ (1)). In contrast, the quantum model leads to a significant increase in the FWHM for small diameters, essentially as a consequence of Landau damping, which involves inelastic decay of plasmons to electron-hole pairs for momentum transfers $\sim\omega/v_{\rm F}$, where $v_{\rm F}$ is the Fermi velocity ($\sim10^6$m\,s$^{-1}$, see Supplementary Fig.\ 8a). As the momentum transfer provided by the breaking of translational invariance in a disk is $\sim2\pi/D$, the onset of Landau damping is expected to occur at $D\sim2\pi v_{\rm F}/\omega\sim4\,$nm, in qualitative agreement with the results shown in Fig.\ 3c,d. An intuitive estimate can be established from the electron mean free path $v_{\rm F}/\gamma\sim10\,$nm, which determines the rate of collisions with the edges ({\it i.e.}, events that provide the noted momentum), and is also in agreement with the trends observed in Fig.\ 3c,d, although the value of $\gamma$, regarded here as a parameter, simply adds a $D$-independent contribution to the FWHM, so that the ultimate origin of the broadening at small sizes is Landau damping.
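Both length scales are easy to verify numerically; the short sketch below uses standard physical constants together with the $v_{\rm F}$ and $\hbar\gamma$ values quoted above:
\begin{verbatim}
import numpy as np

# Quick check of the Landau-damping onset D ~ 2*pi*v_F/omega and of the
# electron mean free path v_F/gamma quoted in the text.
hbar, eV = 1.0545718e-34, 1.602176634e-19   # J s, J
v_F = 1.0e6                                 # m/s, Fermi velocity
omega = 1.0 * eV / hbar                     # ~1 eV plasmon frequency
gamma = 0.071 * eV / hbar                   # hbar*gamma = 71 meV

print("Landau onset  D ~", 2 * np.pi * v_F / omega * 1e9, "nm")   # ~4 nm
print("mean free path  ~", v_F / gamma * 1e9, "nm")               # ~10 nm
\end{verbatim}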
\begin{figure}
\begin{center}
\includegraphics[width=160mm,angle=0]{fig4_opt.pdf}
\caption{{\bf Quantum vs classical analysis of the electrically tunable optical response.} (a,d) Extinction cross-section normalized to the geometrical area for a $D=8\,$nm single-monolayer gold disk calculated with different doping charge densities. (b,e) Plasmon energy as a function of doping charge density. We indicate the FWHM of the plasmon resonances as shadowed regions. (c,f) Optical extinction cross-section at the plasmon peak energy (green curves and symbols, left scale) and FWHM (orange, right scale) as a function of doping density. Quantum mechanical calculations (a-c) are compared with classical results (d-f), including d-band screening in all cases.}\label{fig4}
\end{center}
\end{figure}
As discussed in Fig.\ 1, the optical response of SMGDs can be modified through the addition of small amounts of charge carriers to relatively large disks ($D=20\,$nm), for which classical theory is rather accurate (see Figs.\ 2 and 3). Using smaller SMGDs, we obtain qualitatively similar results as for larger disks (see Fig.\ 4 for an analysis of a $D=8\,$nm doped disk). Given the small disk size, we compare classical (Fig.\ 4d-f) and quantum (Fig.\ 4a-c) results, showing again a blue shift and plasmon broadening in the latter relative to the former.
In contrast to the nearly linear plasmon shift with doping charge density predicted by classical theory (Fig.\ 4e), the quantum model leads to an initially smaller modulation at low doping (Fig.\ 4b), which then increases at a faster pace than the classical result for higher doping. This nonlinear dependence of the plasmon energy on the doping density could be exploited for improved light modulation by operating around a highly doped SMGD configuration. In particular, the plasmon shift can be as large as the FWHM when the density of s-band electrons is changed by $\pm(5-10)\%$.
Interestingly, the nonlinear plasmon shifts observed in the quantum model become oscillatory when examining the maximum extinction cross-section and the FWHM (Fig.\ 4c). The oscillations of these two quantities are out of phase, as required to satisfy the $f$-sum rule \cite{PN1966}, and can be traced back to the discreteness of the electronic energies. Importantly, in all cases the maximum cross-section is of the order of the disk area (Fig.\ 4c,f), thus providing good coupling to light for potential applications to modulation devices.
\begin{figure}
\begin{center}
\includegraphics[width=160mm,angle=0]{fig5_opt.pdf}
\caption{{\bf Electrical modulation of the absorbance of an hexagonal periodic arrangement of single-monolayer gold disks.} (a) Scheme of the system under study. (b) Absorbance spectrum of undoped nanodisks (diameter $D=8\,$nm) for different values of the array spacing $d$. (c) Modulation of the absorbance relative to the undoped state as a function of doping charge density for different array spacings.}\label{fig5}
\end{center}
\end{figure}
\subsection{Periodic arrangement of single-monolayer gold nanodisks}
The large optical strength and degree of electrical tunability discussed above for SMGDs can be exploited to modulate light that is either transmitted or reflected by a periodic array of such structures. We consider an hexagonal array of $D=8\,$nm disks in Fig.\ 5 with different values of the array spacing $d$. Given the large mismatch between $D$ and the resonant light wavelength ($\sim830\,$nm), we approximate the disks as point dipoles of polarizability extracted as explained in Methods (see also Supplementary Fig.\ 9, where we show that higher-order multipoles play only a small role for the relative distances under consideration). Following previous analytical methods \cite{paper182} to compute the absorbance $A$, we find remarkably large values (e.g., $A=25\%$ for $d=1.5\,D$, see Fig.\ 5b), given the small amount of gold in the structure (sub-monoatomic layer film). The fractional change in absorbance driven by electrical doping (Fig.\ 5c) is rather independent of lattice spacing and reaches $\sim70\%$ for a $10\%$ variation in the s-band electron density. The potential of patterned gold monolayers for electro-optical modulation in the NIR is thus excellent.
\section{Conclusions and Perspectives}
In summary, we have simulated both classically and quantum-mechanically the plasmonic response and electro-optical modulation performance of gold nanodisks carved from a single (111) atomic layer. Our RPA calculations incorporate the wave functions of free s valence electrons evolving in a circular box, as well as an adjusted distribution of dipoles to account for d-band screening. Despite the atomic thickness of the disks, this quantum-mechanical description converges smoothly to the results of classical dielectric theory, based upon the bulk, frequency-dependent dielectric function of gold. This is a remarkable result, which can be intuitively understood from the fact that the electron current associated with the plasmons flows along the gold layer, and thus, it is rather insensitive to electron confinement within the small film thickness. Nonetheless, nonlocality plays a crucial role, leading to strong plasmon blue shifts, as well as splitting due to the complex interaction with the d band. We estimate that nonlocal effects become dominant when the disk diameter is below $\sim10\,$nm.
Remarkably, the disks interact strongly with light, giving rise to extinction cross-sections exceeding their geometrical areas in the vis-NIR. We have also shown that the optical response of SMGDs can be efficiently modulated through the addition or removal of realistic concentrations of doping charge carriers using for example gating technology. In particular, periodic patterns of monolayer gold appear to be a suitable solution for combining strong plasmonic response and high doping, for example using an electrical backgate, because the average charge density of the layer is simply determined by capacitor theory for a fixed distance from the gate, and thus, the actual doping charge density in the metal scales with the inverse of the areal filling fraction occupied by the gold. Similar results are expected for films consisting of only a few atomic layers, although the degree of modulation is then reduced because the doping charge has to be shared across the increased thickness. Other plasmonic metals such as silver and copper should display a similar degree of tunability (see Supplementary Fig.\ 10). In particular, the small plasmon width of silver compared with gold makes it an attractive candidate to drive plasmon shifts well beyond the FWHM. Additionally, the lower d-band screening in this material should result in higher plasmon energies, reaching the visible in small disks, or equivalently, the NIR for larger disk diameters.
It should be stressed that, while the synthesis of single-layer gold is a mature field \cite{GB1981}, the fabrication of laterally confined thin gold nanostructures represents a technical challenge, which could benefit from advances in lithography and self-assembly. Alternatively, one could use a continuous gold layer, which also exhibits large electrical tunability of its propagating plasmons (see Supplementary Fig.\ 11), coupled to external light by decorating it with dielectric colloids in order to bridge the light-plasmon momentum mismatch (i.e., this is essentially what nanostructuration does in the SMGDs that we study above). The resulting planar structures hold great potential for light modulation at vis-NIR frequencies, which could be the basis of a new generation of electrically tunable optical devices with applications ranging from sensing to nanoscale spectroscopy.
\section{Methods}
\subsection{Quantum-mechanical RPA simulations}
We consider small disk sizes compared with the light wavelength, so that we work in the electrostatic limit. Within this approximation, assuming an overall monochromatic time dependence $\ee^{-{\rm i}\omega t}$ with frequency $\omega$, the induced charge density $\rho^{\rm ind}$ can be expressed in terms of the self-consistent potential $\phi$ as
\begin{align}
\rho^{\rm ind}(\rb,\omega)=\int d^3\rb'\chi^0(\rb,\rb',\omega)\phi(\rb',\omega)\equiv\chi_0\cdot\phi,\label{rho}
\end{align}
where $\chi^0$ is the noninteracting susceptibility associated with the s valence electrons of gold, and the last identity defines a matrix notation in which matrix multiplication involves integration over space coordinates. We obtain $\chi^0$ within the RPA\cite{PN1966}, in which a one-electron picture is assumed and only individual electron-hole pair excitations are explicitly considered. We further approximate the wave functions of valence electrons by the solutions of a cylindrical box with the same diameter as the nanodisk and a thickness corresponding to the separation between (111) atomic planes in bulk gold
(\emph{i.e.}, $a_0/\sqrt{3}\approx0.236\,$nm, see Fig.\ 1a). More precisely,
\begin{align}
\psi_{lm}\left(\textbf{r}\right)=N_{lms}J_{m}\left(Q_{lm}R\right)\ee^{{\rm i} m\varphi}g_1\left(z\right),\label{psi}
\end{align}
where $N_{lms}$ is a normalization constant, $Q_{lm}=2\zeta_{lm}/D$, $\zeta_{lm}$ is the $l^{\rm th}$ zero of the Bessel function $J_m$, and $g_1\left(z\right)=\sin\left(\pi\sqrt{3}z/a_0\right)$ yields the dependence on the coordinate $z$ normal to the disk. For simplicity, we are assuming that the $z$ dependence is separable in the wave function, so that electron diffraction effects at the disk edges are not important. Moreover, we assume that the electrons remain in the ground state of the vertical cavity of thickness $a_0/\sqrt{3}$, which is a reasonable approximation if we consider that the first excited state ({\it i.e.}, $g_2(z)=\sin\left(2\pi\sqrt{3}z/a_0\right)$) lies $\sim20\,$eV above $g_1$, well beyond the Fermi and vacuum levels.
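A minimal numerical sketch (not the code used in this work) of these in-plane levels and of the filling procedure is given below, where the number of s electrons in the disk is taken as $\sim(\pi/\sqrt{3})(D/a_0)^2$ (see Supplementary Fig.\ 8):
\begin{verbatim}
import numpy as np
from scipy.special import jn_zeros

# In-plane energies of Eq. (3): eps_lm = hbar^2 Q_lm^2 / (2 m_e), Q_lm = 2*zeta_lm/D,
# filled with two electrons (spin) per level up to the Fermi energy.
hbar, m_e, eV = 1.0545718e-34, 9.10938e-31, 1.602176634e-19
a0, D = 0.408e-9, 8e-9                      # lattice constant and disk diameter (m)

l_max, m_max = 60, 60
energies = []
for m in range(-m_max, m_max + 1):
    for zeta in jn_zeros(abs(m), l_max):    # zeros of J_m give the allowed Q_lm
        Q = 2 * zeta / D
        energies.append(hbar**2 * Q**2 / (2 * m_e) / eV)   # in eV
energies = np.sort(energies)

N_e = (np.pi / np.sqrt(3)) * (D / a0) ** 2  # number of s electrons in the disk
E_F = energies[int(np.ceil(N_e / 2)) - 1]   # highest occupied level
print(f"D = {D*1e9:.0f} nm:  N_e ~ {N_e:.0f},  E_F ~ {E_F:.2f} eV,"
      f"  v_F ~ {np.sqrt(2*E_F*eV/m_e):.2e} m/s")
\end{verbatim}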
With the wave functions of Eq.\ (3), we can write the susceptibility as
\begin{align}
\chi^0\left(\textbf{r},\textbf{r}',\omega\right)=\frac{2e^2}{\hbar}\sum_{l,l',m,m'}\left(f_{l'm'}-f_{lm}\right)
\frac{\psi_{lm}\left(\textbf{r}\right)\psi^{\ast}_{lm}\left(\textbf{r}'\right)\psi^{\ast}_{l'm'}\left(\textbf{r}\right)\psi_{l'm'}\left(\textbf{r}'\right)}
{\omega-\varepsilon_{lm}+\varepsilon_{l'm'}+{\rm i}\gamma/2},\label{chi0}
\end{align}
where spin degeneracy is simply included through an overall factor of 2, $\hbar\varepsilon_{lm}=\hbar^2 Q^2_{lm}/2 m_{\rm e}$ is the energy of state $\psi_{lm}$ (notice that the energy associated with $z$ motion cancels out in Eq.\ (4), so we disregard it), $\gamma$ is an intrinsic relaxation rate, which we take from a Drude fit (Eq.\ (1)) to measured optical data \cite{JC1972} ($\hbar\gamma=71\,$meV), and $f_{lm}=\left\{\exp\left[\left(\varepsilon_{lm}-E_{\rm F}\right)/k_{\rm B}T\right]+1\right\}^{-1}$ is the Fermi-Dirac distribution function, evaluated here at $T=0$. The method used to fill the energy levels in the disk is discussed in Supplementary Fig.\ 8.
The total potential $\phi$ is the sum of the external potential $\phi^{\rm ext}$ and the potential produced by the induced charges
\begin{align}
\phi=\phi^{\rm ext}+v\cdot\rho^{\rm ind},\label{phi}
\end{align}
where $v(\rb-\rb')=1/|\rb-\rb'|$ is the Coulomb interaction. From here and Eq.\ (2), we solve the induced charge density as
\begin{align}
\rho^{\rm ind}=\chi^0\cdot\left(1-v\cdot\chi^0\right)^{-1}\cdot\phi^{\rm ext}.\label{rho2}
\end{align}
As the polarization along the direction normal to the disk is expected to be negligible, we focus on parallel components and write $\phi^{\rm ext}=-R\ee^{{\rm i}\varphi}$ (i.e., we focus on solutions with $m=1$ azimuthal symmetry), where $\varphi$ is the azimuthal angle of $(x,y)$ and $R=\sqrt{x^2+y^2}$. This allows us to obtain the in-plane polarizability by calculating \[\alpha\left(\omega\right)=\frac{1}{2}\int d^3\textbf{r}\;R\ee^{-{\rm i}\varphi}\;\rho^{\rm ind}\left(\textbf{r},\omega\right).\] Finally, the extinction cross-section is obtained from \[\sigma(\omega)=\left(4\pi\omega/c\right)\mbox{Im}\left\{\alpha\left(\omega\right)\right\}.\]
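To make the structure of this procedure explicit, the following schematic sketch (placeholder arrays only, not a physical calculation) shows how Eq.\ (6) and the two expressions above translate into a single linear solve once $\chi^0$, $v$, and $\phi^{\rm ext}$ are discretized on a spatial grid:
\begin{verbatim}
import numpy as np

# Structural skeleton of the RPA solution: the placeholder arrays stand in for
# the discretized chi^0, Coulomb kernel v, and the weight R e^{-i phi}.
Npts, dV = 200, 1.0
rng = np.random.default_rng(0)
chi0 = 1e-3 * (rng.normal(size=(Npts, Npts)) + 1j * rng.normal(size=(Npts, Npts)))
v = 1.0 / (1.0 + np.abs(np.subtract.outer(np.arange(Npts), np.arange(Npts))))
weight = rng.normal(size=Npts)          # placeholder for R e^{-i phi}
phi_ext = -weight                       # placeholder external potential -R e^{i phi}
omega, c = 1.0, 137.036                 # atomic units (assumed)

rho_ind = chi0 @ np.linalg.solve(np.eye(Npts) - dV * (v @ chi0), phi_ext)  # Eq. (6)
alpha = 0.5 * np.sum(weight * rho_ind) * dV        # in-plane polarizability
sigma = (4 * np.pi * omega / c) * np.imag(alpha)   # extinction cross-section
print(sigma)
\end{verbatim}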
\subsection{Inclusion of d-band screening}
Deeper electrons in the d band are relatively localized in the gold atoms, and therefore, we model them by assuming a background of polarizable point particles at the atomic positions in the (111) layer (see lower part of Fig.\ 1a). The polarizability $\alpha_{\rm b}$ of these particles is adjusted to fit the experimentally measured bulk dielectric function of gold $\epsilon_{\rm exp}$. That is, if we subtract the Drude s-band contribution from $\epsilon_{\rm exp}$ (see Eq.\ (1)), we obtain the background permittivity $\epsilon_{\rm b}=\epsilon_{\rm exp}+\omega_{\rm p}^2/\omega(\omega+{\rm i}\gamma)$, where $\omega_{\rm p}^2=4\pi e^2 n_0/m_{\rm e}$ is determined by the s-band electron density $n_0=4/a_0^3\approx5.9\times10^{28}\,$m$^{-3}$. This yields $\hbar\omega_{\rm p}\approx9.01\,$eV, which is slightly different from the best fit of Eq.\ (1) to measured data \cite{JC1972} (9.06\,eV), from which we also find $\epsilon_{\rm b}=9.5$. Now the Clausius-Mossotti relation \cite{AM1976} leads to
\begin{align}
\alpha_{\rm b}=\frac{3}{4\pi n_0}\;\frac{\epsilon_{\rm b}-1}{\epsilon_{\rm b}+2}.\nonumber
\end{align}
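For the parameters quoted above ($\epsilon_{\rm b}=9.5$, $n_0=4/a_0^3$), this polarizability is straightforward to evaluate; a minimal sketch in Gaussian units:
\begin{verbatim}
import numpy as np

# Clausius-Mossotti polarizability of the d-band background dipoles.
a0 = 0.408e-9                 # m
n0 = 4 / a0**3                # s-band electron density, ~5.9e28 m^-3
eps_b = 9.5
alpha_b = (3 / (4 * np.pi * n0)) * (eps_b - 1) / (eps_b + 2)
print(n0, alpha_b)            # alpha_b in m^3 (Gaussian convention)
\end{verbatim}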
Using dyadic notation, the susceptibility tensor of the background dipoles reduces to $\chi_{\rm b}^0(\rb,\rb')=\sum_j\overleftarrow{\nabla}\cdot\alpha_{\rm b}\delta(\rb-\rb_j)\delta(\rb'-\rb_j)\cdot\overrightarrow{\nabla}'$, where $j$ runs over the positions of the metal atoms, whereas $\overleftarrow{\nabla}$ ($\overrightarrow{\nabla}'$) acts on $\rb$-dependent ($\rb'$-dependent) functions to the left (right) of operator $\chi_{\rm b}^0$. As the charge induced through both s and d bands contribute together to the full potential, we can rewrite Eq.\ (2) as
\begin{align}
\rho^{\rm ind}=\left(\chi^0+\chi^0_{\rm b}\right)\cdot\phi\nonumber
\end{align}
to take into account the effect of d-band screening.
Using this expression together with Eq.\ (5), the total induced charge density becomes
\begin{align}
\rho^{\rm ind}=\left(\chi^0+\chi^0_{\rm b}\right)\cdot\left[1-v\cdot\left(\chi^0+\chi^0_{\rm b}\right)\right]^{-1}\cdot\phi^{\rm ext},\label{rho3}
\end{align}
from which we calculate the disk polarizability and the extinction cross-section as explained above.
\pagebreak
\section{Supplementary Figures}
\begin{figure}
\begin{center}
\includegraphics[width=130mm,angle=0]{figS1_opt.pdf}
\caption{{\bf Plasmon dependence on disk thickness.} We show classical calculations for disks of different thicknesses ($t$ in units of $t_0=a_0/\sqrt{3}=0.236\,$nm, see legend) and $D=20\,$nm in diameter. The valence electron density is adjusted to have the same number of electrons in all cases. These results show that the plasmon energies and absorption profiles are rather independent of disk thickness, provided this is small compared with the diameter, which corroborates the robustness of our calculations with respect to the choice of film thickness.}\label{figS1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=140mm,angle=0]{figS2_opt.pdf}
\caption{{\bf Maps of induced charge density corresponding to the lowest-order dipolar plasmon of single-monolayer gold disks.} We consider disks of different diameters $D$ and compare the density obtained from our quantum mechanical description (a-c,d-f) with classical, local theory (g). The upper (lower) row shows results calculated without (with) inclusion of d-band electron screening. The classical calculation (g) is obtained from the solution of Poisson's equation for a disk described by a resonant permittivity \cite{paper212} $\epsilon\approx1-D/t$, where $t$ is the disk thickness. The dipolar pattern exhibits radial oscillations of a period similar to that of Friedel oscillations. The inclusion of d-band screening (Fig.\ 7d-f) results in more complex patterns driven by the discrete character of the background dipoles. In all cases considered, the dipolar character is clearly preserved and the induced charge accumulates at the border of the nanodisk as the diameter increases, thus approaching the behavior of a classical, local description of the disk.}\label{figS2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=110mm,angle=0]{figS3_opt.pdf}
\caption{{\bf Filling of electron energy levels in a metallic disk.} (a) Fermi energy $E_{\rm F}$ relative to the bottom of the parabolic s band (left scale) and Fermi velocity $v_{\rm F}=\sqrt{2E_{\rm F}/m_{\rm e}}$ (right scale) as a function of single-monolayer gold disk (SMGD) diameter. We consider a single (111) atomic layer of an fcc metal of lattice constant $a_0$ (e.g., $a_0=0.408\,$nm in gold). In practice, a single-layer disk of diameter $D$ has $\sim(\pi/\sqrt{3})(D/a_0)^2$ electrons that fill states of increasing energy up to a level that defines the Fermi energy $E_{\rm F}$.
(b) Unperturbed valence electron density profiles for SMGDs of different diameters.}\label{figS3}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=120mm,angle=0]{figS4_opt.pdf}
\caption{{\bf Multipolar effects in the interaction of disk arrays.} We examine the validity of the dipole approximation to represent each disk in the arrays considered throughout this paper. For simplicity, we present classical results, as we expect that the conclusions should be directly applicable to quantum calculations as well. We use a layer KKR approach \cite{SYM98_1,SYM00} to calculate the absorbance of periodic hexagonal arrays of $D=8\,$nm gold disks with different lattice parameters $d$, with each disk represented through its scattering matrix, which is obtained with the boundary-element method (BEM). \cite{paper040} We include multipoles of orbital angular momentum $l\le l_{\rm max}$. The results are remarkably converged already with $l_{\rm max}=1$ (dipolar approximation) for the two larger spacings under discussion, whereas for a period equal to 1.5 times the disk diameter multipolar corrections are rather small. Therefore, we conclude that the dipolar approximation used in this work provides quantitatively correct results for the geometrical parameters under consideration.
}\label{figS4}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=120mm,angle=0]{figS5_opt.pdf}
\caption{{\bf Electrical modulation of the absorbance of hexagonal periodic arrays formed by single atomic-layer disks made of silver, gold, and copper.} The structures are similar to those of Fig.\ 5, but now the disks are larger, leading to lower-energy plasmons in the 0.7-0.8\,eV region. Silver is the least lossy of these three materials, and consequently, the optimum choice to maximize the optical tunability because its plasmons are narrower. Although the valence electron density is very similar in these metals, their plasmons occur at different energies due to variations in d-band screening. The geometrical and doping parameters are indicated by labels. The spectra are obtained from classical calculations.}\label{figS5}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=110mm,angle=0]{figS6_opt.pdf}
\caption{{\bf Plasmons and tunability of thin homogeneous gold layers.} We represent the plasmon dispersion relations of layers consisting of 1 and 10 atomic monolayers oriented along the $(111)$ direction. The results are obtained from classical electromagnetic theory, using optical data for the dielectric function, \cite{JC1972} modified as described in the main text to include electrical doping. The plasmons of undoped layers (solid curves) are compared with those predicted for an additional density of $10^{14}\,$cm$^{-2}$ electrons (dashed curves). The light line and the dispersion of surface plasmons in a semi-infinite gold layer are shown for comparison. We conclude that the single-layer gold film can undergo a degree of electrical tunability similar to that of the nanostructures considered in the main text. The plasmons of the single layer are relatively far from the light line, although they have sizable wavelengths of hundreds of nm in the NIR spectral region. As in graphene, the in/out-coupling of light to these plasmons represents a serious challenge, which can be overcome by decorating the layer with additional structures or by placing it near a grating of period comparable to the plasmon wavelength.}\label{figS6}
\end{center}
\end{figure}
\pagebreak
\section{Acknowledgments}
This work has been supported in part by the European Commission (Graphene Flagship CNECT-ICT-604391 and FP7-ICT-2013-613024-GRASP). A.M. acknowledges financial support from the Spanish MEC through the FPU program and from the Evans Attwell-Welch Postdoctoral Fellowship for Nanoscale Research, administered by the Richard E. Smalley Institute for Nanoscale Science and Technology.
\providecommand*{\mcitethebibliography}{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{45}
\providecommand*{\natexlab}[1]{#1}
\providecommand*{\mciteSetBstSublistMode}[1]{}
\providecommand*{\mciteSetBstMaxWidthForm}[2]{}
\providecommand*{\mciteBstWouldAddEndPuncttrue}
{\def\unskip.}{\unskip.}}
\providecommand*{\mciteBstWouldAddEndPunctfalse}
{\let\unskip.}\relax}
\providecommand*{\mciteSetBstMidEndSepPunct}[3]{}
\providecommand*{\mciteSetBstSublistLabelBeginEnd}[3]{}
\providecommand*{\unskip.}}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd{\mcitemaxwidthsubitemform\space}
{\relax}{\relax}
\bibitem[Novotny and Hecht(2006)]{NH06}
Novotny,~L.; Hecht,~B. \emph{Principles of Nano-Optics};
\newblock Cambridge University Press: New York, 2006\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Halas et~al.(2011)Halas, Lal, Chang, Link, and Nordlander]{HLC11}
Halas,~N.~J.; Lal,~S.; Chang,~W.; Link,~S.; Nordlander,~P. Plasmons in strongly
coupled metallic nanostructures. \emph{Chemical Reviews} \textbf{2011},
\emph{111}, 3913--3961\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{\'Alvarez-Puebla} et~al.(2010){\'Alvarez-Puebla}, {Liz-Marz\'an}, and
{Garc\'{\i}a de Abajo}]{paper156}
{\'Alvarez-Puebla},~R.~A.; {Liz-Marz\'an},~L.~M.; {Garc\'{\i}a de Abajo},~F.~J.
Light concentration at the nanometer scale. \emph{J.\ Phys.\ Chem.\ Lett.}
\textbf{2010}, \emph{1}, 2428--2434\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Atwater and Polman(2010)]{AP10}
Atwater,~H.~A.; Polman,~A. Plasmonics for improved photovoltaic devices.
\emph{Nat.\ Mater.} \textbf{2010}, \emph{9}, 205--213\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Knight et~al.(2011)Knight, Sobhani, Nordlander, and Halas]{KSN11}
Knight,~M.~W.; Sobhani,~H.; Nordlander,~P.; Halas,~N.~J. Photodetection with
active optical antennas. \emph{Science} \textbf{2011}, \emph{332},
702--704\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{O'Neal} et~al.(2004){O'Neal}, Hirsch, Halas, Payne, and West]{NHH04}
{O'Neal},~D.~P.; Hirsch,~L.~R.; Halas,~N.~J.; Payne,~J.~D.; West,~J.~L.
Photo-thermal tumor ablation in mice using near infrared-absorbing
nanoparticles. \emph{Cancer\ Lett.} \textbf{2004}, \emph{209}, 171--176\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Grzelczak et~al.(2008)Grzelczak, {P\'erez-Juste}, Mulvaney, and
  {Liz-Marz\'an}]{GPM08}
Grzelczak,~M.; {P\'erez-Juste},~J.; Mulvaney,~P.; {Liz-Marz\'an},~L.~M. Shape
control in gold nanoparticle synthesis. \emph{Chem.\ Soc.\ Rev.}
\textbf{2008}, \emph{37}, 1783--1791\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Stokes et~al.(2007)Stokes, McDonagh, and Cortie]{SMC07}
Stokes,~N.; McDonagh,~A.~M.; Cortie,~M.~B. Preparation of nanoscale gold
structures by nanolithography. \emph{Gold\ Bulletin} \textbf{2007},
\emph{40/4}, 310--320\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mulvaney et~al.(2006)Mulvaney, P\'{e}rez-Juste, Giersig,
Liz-Marz\'{a}n, and Pecharrom\'{a}n]{MPG06}
Mulvaney,~P.; P\'{e}rez-Juste,~J.; Giersig,~M.; Liz-Marz\'{a}n,~L.~M.;
Pecharrom\'{a}n,~C. Drastic Surface Plasmon Mode Shifts in Gold Nanorods Due
to Electron Charging. \emph{Plasmonics} \textbf{2006}, \emph{1}, 61--66\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Chu et~al.(2006)Chu, Chao, Chen, Wu, and Chen]{CCC06_2}
Chu,~K.~C.; Chao,~C.~Y.; Chen,~Y.~F.; Wu,~Y.~C.; Chen,~C.~C. Electrically
controlled surface plasmon resonance frequency of gold nanorods. \emph{Appl.\
Phys.\ Lett.} \textbf{2006}, \emph{89}, 103107\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Boardman et~al.(2010)Boardman, Grimalsky, Kivshar, Koshevaya, Lapine,
Litchinitser, Malnev, Noginov, Rapoport, and Shalaev]{BGK10}
Boardman,~A.~D.; Grimalsky,~V.~V.; Kivshar,~Y.~S.; Koshevaya,~S.~V.;
Lapine,~M.; Litchinitser,~N.~M.; Malnev,~V.~N.; Noginov,~M.; Rapoport,~Y.~G.;
Shalaev,~V.~M. Active and tunable metamaterials. \emph{Laser\ Photon.\ Rev.}
\textbf{2010}, \emph{10}, 00012\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu et~al.(2012)Liu, Zhu, Tsai, and Zheludev]{LZT12}
Liu,~A.~Q.; Zhu,~W.~M.; Tsai,~D.~P.; Zheludev,~N.~I. Micromachined tunable
metamaterials: a review. \emph{J.\ Opt.} \textbf{2012}, \emph{14},
114009\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Armelles et~al.(2013)Armelles, Cebollada, Garc\'{\i}a-Mart\'{\i}n, and
Gonz\'{a}lez]{ACG13}
Armelles,~G.; Cebollada,~A.; Garc\'{\i}a-Mart\'{\i}n,~A.; Gonz\'{a}lez,~M.~U.
Magnetoplasmonics: Combining Magnetic and Plasmonic Functionalities.
\emph{Advanced Optical Materials} \textbf{2013}, \emph{1}, 10--35\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Feigenbaum et~al.(2010)Feigenbaum, Diest, and Atwater]{FDA10}
Feigenbaum,~E.; Diest,~K.; Atwater,~H.~A. Unity-order index change in
transparent conducting oxides at visible frequencies. \emph{Nano\ Lett.}
\textbf{2010}, \emph{10}, 2111--2116\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Abb et~al.(2011)Abb, Albella, Aizpurua, and Muskens]{AAA11}
Abb,~M.; Albella,~P.; Aizpurua,~J.; Muskens,~O.~L. All-Optical Control of a
Single Plasmonic Nanoantenna-ITO Hybrid. \emph{Nano\ Lett.} \textbf{2011},
\emph{11}, 2457--2463\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Comin and Manna(0)]{CM14}
Comin,~A.; Manna,~L. New materials for tunable plasmonic colloidal
nanocrystals. \emph{Chem.\ Soc.\ Rev.} \textbf{0}, \emph{0},
arXiv:1310.5970\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Castro Neto} et~al.(2009){Castro Neto}, Guinea, Peres, Novoselov, and
Geim]{CGP09}
{Castro Neto},~A.~H.; Guinea,~F.; Peres,~N. M.~R.; Novoselov,~K.~S.;
\end{mcitethebibliography}
\end{document}
|
1,314,259,995,585 | arxiv | \section{Introduction}
\label{sec:intro}
There is ample observational evidence that astrophysical objects like young stellar
objects (YSOs), accreting white dwarfs, X-ray binaries (XRBs), and active galactic nuclei (AGN) produce jets.
An AGN jet is a relativistic, collimated
outflow which spans large distances (few kpc to Mpc scale) with Lorentz factors ($\gamma$) ranging from a few to a few tens.
Three features are common to astrophysical jets: first, jets are collimated \citep{asa12}; second, jets propagate with
high speeds \citep{p81}; and third, the flow is over-collimated due to interaction with the ambient medium and/or
magnetic field pinching \citep{asa12,lrl12}.
The bright knots observed throughout the jet
may arise from multiple shocks caused by magnetic pinching or by interaction with the ambient medium. There are many processes that can drive the jet outward with significant speed.
In principle, the thermal-pressure gradient term can accelerate the jet to speeds comparable to the sound speed
at the jet base \citep{lckhr16}. \citet{mstv04} used the thermal-pressure gradient term as the main accelerating
process, but achieved fast outflows only by tweaking the equation of state of the flow.
The intense radiation field emanating from the associated accretion disc may transfer momentum or energy to the jet material
and thereby accelerate it \citep[see,][]{ftrt85,cc00,pk04,c05,vkmc15,vc17,vc18,vc19}. It may be noted that only a luminous disc
can radiatively drive a powerful jet, which would preclude the possibility of powerful jets associated with underluminous accretion discs.
Therefore, magnetic driving is widely regarded as the more general physical process through which
powerful jets can be produced, both in microquasars and in AGNs.
Global magnetized outflow solutions, i.e., solutions connecting the base of the outflow with asymptotically large distances,
were first obtained by \citet{wd67}; they crossed the critical points (slow, Alfv\'en and fast) smoothly,
albeit on the equatorial plane. The \citet{wd67} model predicted the correct wind speed at the Earth's orbit.
In the cold flow regime, \citet{bp82} proposed a model of centrifugally and magnetically driven outflow
from a cold Keplerian disc, somewhat like a bead flung by a rotating wire. Novel as the idea was, the cold flow assumption
limited its applicability to outflows launched from the hot inner regions of accretion discs around compact objects.
\citet{lmms86} developed the magneto-hydrodynamic (MHD) equations of motion for accretion channels on to strongly magnetized compact stars,
which were later used to study accretion on to neutron stars and white dwarfs \citep{kkm08,sc18}.
The streamline of a magnetically driven outflow should originate from the accretion disc on the equatorial
plane, but as the plasma flows out, the streamline should move away from the equatorial plane and wind around the rotation axis.
Indeed, a few papers showed that open field lines, coming from the underlying disc, collimate the jet around
the rotation axis \citep{sa85, sa87, lbc91}.
\citet{li92} extended these cold flow to
relativistic regime and studied the radially self-similar jet solutions. Then, \cite{vla03a,vla03b} further extended
the cold relativistic MHD to
hot flow by including the thermal-pressure gradient term. Therefore, outflows with
relativistic bulk speed and temperature, could be studied. The thermal-pressure gradient term dominates near the jet base and
can accelerate
the flow near the base, but it is unlikely to do so at larger distance away from the jet-base.
\citet{pol10} used \citet{vla03a} model with fixed adiabatic index
($\Gamma=5/3$)
equation of state and showed that the flow can become trans-Alfv\'enic (sub Alfv\'enic to super Alfv\'enic)
and trans-fast (sub fast to super fast). In contrast, \citet{vla03a}
could obtain only trans-Alfv\'en flow with $\Gamma=4/3$. Therefore, the thermodynamics of the flow may play an important role
in determining the nature of the solution.
In particular, the outflow is hot near the base but the temperature decreases by few orders of magnitude at large distances,
therefore the adiabatic index is not likely to remain constant through out the flow.
In this paper, we obtain radially self-similar solutions of magnetically driven relativistic outflows by following the
methodology of \citet{pol10},
but instead of using a fixed-$\Gamma$ EoS, we consider a relativistic EoS.
We use the relativistic EoS proposed by \citet{cr09}, which was inspired by earlier works \citep{c38,s57,cg68,rcc06}.
We would like to find out whether we still get trans-Alfv\'enic, trans-fast flow with an EoS which has no fixed value of $\Gamma$.
We focus on how the jet solutions change with the current distribution, the Alfv\'en point, the Alfv\'en point polar angle and other
flow parameters.
We compare an outflow solution described by the relativistic EoS with one described by
a fixed-$\Gamma$ EoS \citep{vla03a,vla03b,pol10}. Another interesting aspect is to study and compare flows with
different values of the plasma composition parameter. As far as we know, such an effort has not been undertaken for relativistic MHD outflows.
In short, we would like to investigate how various flow parameters affect magnetically driven relativistic
outflows.
The paper is organized as follows. In section \ref{subsec:govereqs}, we present the special relativistic MHD equations.
In section \ref{subsec:clo}, we discuss the two closure equations: one is the flux-freezing condition and the other is
the relativistic EoS with variable adiabatic index. The reduced relativistic MHD equations are presented in section \ref{subsec:mhdeqs}.
The methodology to solve the equations of motion is explained in section \ref{sec:meth}. In section \ref{sec:result} we present the
outflow solutions. Discussion and concluding remarks are presented in section \ref{sec:conclude}.
\section{Relativistic MHD equations and assumptions}
\subsection{Governing equations}
\label{subsec:govereqs}
Equations of motion of relativistic magneto-hydrodynamics (RMHD) can be obtained from the four divergence of
the total energy-momentum tensor. The
energy-momentum tensor for matter is, $T^{\mu \nu}_{\rm matter}=(\bar{e}+p)u^{\mu}u^{\nu}+p\eta^{\mu \nu}$, where
$\bar{e}$ is energy density, $p$ is gas pressure, the four-velocity component $u^{\mu}=
\left(\gamma c, \gamma \textbf{v}\right)$, $\eta^{\mu \nu}=\mbox{diag}\left[-1,1,1,1\right]$ and
$c$ is the speed of light. The energy-momentum tensor of the electromagnetic field is given by
$T^{\mu \nu}_{\rm em}=\left(F^{\mu \lambda}F^{\nu}_{\lambda} - \frac{1}{4} \eta^{\mu \nu}F^{\delta \lambda}F_{\delta \lambda}\right)/(4 \pi)$. Therefore, the total energy-momentum tensor
is $T^{\mu \nu} = T^{\mu \nu}_{\rm matter} + T^{\mu \nu}_{\rm em}$. The conservation
of energy and momentum in a covariant form can be written as,
\begin{equation}
\nabla_{\nu}T^{\mu \nu} = 0
{\label{cormhd.eq}}
\end{equation}
Maxwell's equations are,
\begin{equation}
\nabla . \textbf{B}=0,~~ \nabla . \textbf{E}=\frac{4\pi}{c} J^{0},~~
\nabla \times \textbf{B}=\frac{1}{c}\frac{\partial \textbf{E}}{\partial t} + \frac{4\pi}{c}\textbf{J},~~
\nabla \times \textbf{E}=-\frac{1}{c}\frac{\partial \textbf{B}}{\partial t},
{\label{maxs.eq}}
\end{equation}
where $J^{\mu}=\left(J^{0},\textbf{J}\right)$ is the four-current.
\subsection{Closure equations}
\label{subsec:clo}
To solve the above set of equations (\ref{cormhd.eq} and \ref{maxs.eq}) we need two more
equations, because the number of variables is larger than the number of equations. For matter, we
need an equation which relates the thermodynamic variables, {\em i.e., } the EoS of the fluid. We also need another equation which relates
the electric field to the magnetic field.
\subsubsection{Relativistic EoS having variable $\Gamma$}
\label{subsec:eos}
In this study we use the relativistic EoS for multi-species flow proposed by
\citet[][hereafter the CR EoS]{cr09}, which is
given by,
\begin{equation}
\bar{e} = n_{{\rm e}^-} m_{{\rm e}^-} c^{2}f(\Theta,\xi) = \rho_{{\rm e}^-} c^{2}f(\Theta,\xi) = \frac{\rho c^{2}f(\Theta,\xi)}{K},
\label{etrnl.eq}
\end{equation} where, $K = [2-\xi(1-1/\eta)]$, $f(\Theta,\xi) = (2-\xi)\left[1 + \Theta\left(\frac{9\Theta + 3}
{3\Theta + 2}\right)\right] + \xi\left[\frac{1}{\eta}
+ \Theta\left(\frac{9\Theta + 3/\eta}{3\Theta + 2/\eta}\right)\right]$, $\Theta={\kappa_{\rm \small B}T}/{m_{{\rm e}^-} c^{2}}
$ is the dimensionless temperature, $\rho_{{\rm e}^-}$ is the rest-mass density of electrons, $\rho$ is the rest-mass
density, $\eta={m_{{\rm e}^-}/m_{{\rm p}^+}}$ is electron to proton mass ratio, the composition parameter $\xi={n_{{\rm p}^+}/n_{{\rm e}^-}}
$ is the ratio of number density of protons to that of electrons.
A flow with $\xi=0.0$ is an electron-positron pair plasma, $0.0<\xi<1.0$ implies an electron-positron-proton plasma and
$\xi=1.0$ implies an electron-proton plasma.
Enthalpy $h$, variable adiabatic index $\Gamma$ and sound speed $c_{\rm s}$ are given by,
\begin{equation}
h = \frac{\bar{e} + p}{\rho} = \frac{fc^{2}}{K} + \frac{2\Theta c^{2}}{K},
\label{enthp.eq}
\end{equation} and
\begin{equation}
\Gamma = 1 + \frac{1}{N}, ~
N = \frac{1}{2}\frac{df}{d\Theta} \mbox{ and } c^{2}_{\rm s}=\frac{2\Theta \Gamma c^{2}}{f+2\Theta}.
{\label{gama.eq}}
\end{equation}
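Although not spelled out above, the limiting behaviour of $\Gamma$ follows directly from the expression of $f(\Theta,\xi)$:
$$
f \rightarrow K + 3\Theta,\;\; N \rightarrow \frac{3}{2},\;\; \Gamma \rightarrow \frac{5}{3}\;\;\; (\Theta \ll 1);\qquad
f \rightarrow K + 6\Theta,\;\; N \rightarrow 3,\;\; \Gamma \rightarrow \frac{4}{3}\;\;\; (\Theta \gg 1/\eta),
$$
while in the intermediate regime $1 \ll \Theta \ll 1/\eta$, in which the electrons are relativistic but the protons are not,
$f \rightarrow K + (6-3\xi/2)\Theta$, so that $N \rightarrow 3-3\xi/4$; for an electron-proton flow ($\xi=1$) this gives
$\Gamma=13/9\simeq1.44$, a value that will reappear near the base of the solutions presented below.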
Integrating the first law of thermodynamics ($u_\mu \nabla_\nu T^{\mu \nu}=0$) with the help of the continuity equation, we obtain the adiabatic equation of state \citep{kscc13, vkmc15},
\begin{equation}{\label{rel_rho.eq}}
\rho={\cal K}g(\Theta,\xi),
\end{equation}
where, $g(\Theta,\xi)=\mbox{exp}(k_3) \Theta^{3/2}(3\Theta+2)^{k_1}(3\Theta+2/\eta)^{k_2}$, $k_1=3(2-\xi)/4$, $k_2=3\xi/4$ and $k_3=(f-K)/(2\Theta)$ and ${\cal K}$ is the measure of entropy.
Therefore, pressure $p$ is given by,
\begin{equation}{\label{press.eq}}
p=\frac{2{\cal K}g(\Theta,\xi)\Theta}{K}c^{2}
\end{equation}
\subsubsection{Ideal MHD flow assumption}
\label{subsec:ideal}
For the ideal MHD flow, the electric field is zero in the co-moving frame {\em i.e., }
$u_{\nu}F^{\mu \nu}=0$ or
\begin{equation}
\textbf{E}=-\frac{1}{c}\textbf{v}\times\textbf{B}.
\label{elecf.eq}
\end{equation}
This is known as the ideal MHD condition. The flux freezing condition is obtained from the
Faraday equation,
\begin{equation}
\nabla\times(\textbf{v}\times\textbf{B})=\frac{\partial \textbf{B}}{\partial t}
\end{equation}
\subsection{Conventional Relativistic MHD equations}
\label{subsec:mhdeqs}
By using the EoS and ideal MHD assumption, we can write equations (\ref{cormhd.eq}) and (\ref{maxs.eq}) in the
conventional form.
The mass conservation equation is $\nabla_{\mu}(\rho u^{\mu})=0$, or the continuity equation,
\begin{equation}
\frac{\partial\left(\gamma\rho\right)}{\partial t} + \nabla.\left(\textbf{v}\gamma\rho\right)=0.
\label{mascon.eq}
\end{equation}
The momentum conservation equation is given by the spatial components ($k=1,2,3$) of $\nabla_{\nu}T^{k\nu}=0$,
\begin{equation}
\gamma\rho\left(\frac{\partial}{\partial t} + \textbf{v}.\nabla\right)\left(h\gamma\textbf{v}\right)=-\nabla p + \frac{J^{0}\textbf{E} + \textbf{J}\times\textbf{B}}{c}.
\label{momcon.eq}
\end{equation}
The first law of thermodynamics is obtained by going to the co-moving frame of the flow, $u_{\mu}T^{\mu \nu}_{,\nu}=0$,
\begin{equation}
\left(\frac{\partial}{\partial t} + \textbf{v}.\nabla\right)e + p\left(\frac{\partial}{\partial t} + \textbf{v}.\nabla\right)\left(\frac{1}{\rho}\right)=0,
\label{1stlaw.eq}
\end{equation}
where $e \equiv \bar{e}/\rho$.\\
We study axisymmetric steady flow, therefore $\partial / \partial t = 0$ and
$\partial / \partial \phi = 0$. For axisymmetric flow, the solenoidal condition can be written as,
\begin{equation}
\nabla . \textbf{B}=\nabla . {\bf B_{\rm p}} = 0.
\end{equation}
The total magnetic field ${\bf B}$ is given as,
\begin{equation}
\textbf{B}={\bf B_{\rm p}}+{\bf B_{\phi}},~~\mbox{where, } {\bf B_{\rm p}}=\frac{\nabla A\times\hat{\textbf{$\phi$}}}{\varpi}.
\end{equation}
Here, ${\bf B_{\rm p}}$ and $\bf B_{\phi}$ are the poloidal and azimuthal components of the magnetic field, respectively.
Here, $A(\varpi,z)$ is the poloidal magnetic flux function, defined as
$A=\frac{1}{2\pi}\int\int {\bf B_{\rm p}}.d\textbf{S}$;
since ${\bf B_{\rm p}}.\nabla A=0$, the poloidal magnetic field lines are orthogonal to the gradient of the
magnetic flux function. The symbol $\varpi$ represents the cylindrical radius.
With the help of ideal MHD flow condition (\ref{elecf.eq}) and $E_{\phi}=0$
(from Faraday equation \ref{maxs.eq}) we can show that ${\bf v_{\rm p}} \parallel \textbf{B}_{\rm p}$, so
\begin{equation}
E=\frac{\varpi \Omega}{c}\textbf{B}\times\textbf{e}_{\phi}
\mbox{,~~}
\textbf{v}=\frac{\Psi_{A}}{4\pi \gamma \rho}\textbf{B} + \varpi \Omega \textbf{e}_{\phi}
\mbox{ and } \frac{\Psi_{A}}{4\pi \gamma \rho}=\frac{v_{\rm p}}{B_{\rm p}}.
\label{defvel.eq}
\end{equation}
Here, $\Psi_{A}$ is the mass to magnetic flux ratio and $\Omega$ is the angular velocity of fieldlines.
We can obtain the constants of motion by projecting equations (\ref{mascon.eq})--(\ref{1stlaw.eq}) along and perpendicular to the poloidal fieldlines and
then integrating them \citep[for more details see][]{vla03a, vla03b}\footnote{For non-relativistic MHD, see \citet{h78}.};
in this way we obtain five constants of motion, $\Omega(A),~\Psi_{A}(A),~L(A),~\mu(A),~{\cal K}(A)$.
The poloidal Alfv\'enic Mach number \citep[see,][]{mi69} is defined as,
\begin{equation}
\nonumber
M\equiv \frac{\gamma v_{p}}{\left({{B_{p}}/{\sqrt{4\pi\rho h}}}\right)},
\end{equation}
and using equations (\ref{enthp.eq}), (\ref{press.eq}) and (\ref{defvel.eq}), we can also write $M$ as,
\begin{equation}
M^{2}=q(A)\frac{h(h-f(\Theta,\xi)/K)K}{2\Theta g(\Theta,\xi)}=q(A)\frac{h}{g(\Theta,\xi)},
\label{amach.eq}
\end{equation}
where $q(A)\equiv {\Psi_{A}^2}/{4\pi{\cal K}}$.
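Note that at the Alfv\'en point the regularity condition gives $M^{2}_{A}=1-x^{2}_{A}$ (see section \ref{sec:meth}),
so that, once $q$, $x_A$ and the composition $\xi$ are chosen, equation (\ref{amach.eq}) fixes the Alfv\'en-point temperature
$\Theta_A$ implicitly through
$$
q\,\frac{h(\Theta_A,\xi)}{g(\Theta_A,\xi)}=1-x^{2}_{A}.
$$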
To solve the RMHD equations we assume that the jet solutions are radially self-similar \citep[for more details see
section 3 in][]{vla03a}.
The derivatives of the dimensionless temperature $\Theta$ and
the enthalpy $h$ with respect to the polar angle $\theta$ are given by,
\begin{equation}
\frac{d\Theta}{d\theta}=-\frac{g(\Theta,\xi)\Theta K}{qN\left(hK-2\Gamma\Theta \right)}\frac{dM^{2}}{d\theta}
\mbox{ and }
\frac{dh^{2}}{d\theta}=-\left(\frac{2h^{2}}{M^{2}}\right)\frac{2\Gamma\Theta}{hK-2\Gamma\Theta}\frac{dM^{2}}{d\theta}
\label{dTdh.eq}
\end{equation}
Taking the derivative of the total energy with respect to the polar angle $\theta$, with the help of equations
(\ref{amach.eq}) and (\ref{dTdh.eq}), we obtain \citep[for more details see the appendix and][]{pol10},
\begin{equation}
A_{1}(\theta,\psi,G^{2},M^{2})\frac{dM^{2}}{d\theta}+B_{1}(\theta,\psi,G^{2},M^{2})\frac{d\psi}{d\theta}=C_{1}(\theta,\psi,G^{2},M^{2}),
\label{dengy.eq}
\end{equation}
where $x\equiv \varpi\Omega/c$ is the cylindrical radius in units of the light cylinder,
$G\equiv x/x_{A}$ (with $x_A$ the value of $x$ at the Alfv\'en point) and $\psi$ is the angle of the poloidal field line with the disc.
The transfield equation, which controls the collimation of the flow, can be obtained from the momentum equation
by taking the dot product with $-\nabla A$, {\em i.e., } perpendicular to the poloidal field line,
\begin{equation}
A_{2}(\theta,\psi,G^{2},M^{2})\frac{dM^{2}}{d\theta}+B_{2}(\theta,\psi,G^{2},M^{2})\frac{d\psi}{d\theta}=C_{2}(\theta,\psi,G^{2},M^{2}).
\label{trans.eq}
\end{equation}
Therefore, we can get the wind equation or outflow equation $({dM^{2}}/{d\theta})$ for radially self-similar
flows by solving equations (\ref{dengy.eq}) and (\ref{trans.eq}),
\begin{equation}
\frac{dM^{2}}{d\theta}=\frac{C_{1}B_{2}-C_{2}B_{1}}{A_{1}B_{2}-A_{2}B_{1}}.
\label{dm2.eq}
\end{equation}
\section{Methodology}
\label{sec:meth}
We study
the flow in the special relativistic domain without gravity, in which the slow magnetosonic point does not form,
{\em i.e., } we find the solution from the sub-Alfv\'enic to the super-fast regime.
To obtain the solution of magnetically driven relativistic outflow about the axis of symmetry, we
integrate equations (\ref{dTdh.eq})\footnote{Equation (\ref{amach.eq}) instead of equation
(\ref{dTdh.eq}) may also be used, since they are equivalent.} and (\ref{dm2.eq}).
In addition, we also solve
equation (\ref{dg.eq}) and total energy to mass flux
ratio equation (\ref{ber.eq}) to obtain $\psi$ if the value of $\mu$ is known.
First, we supply the values of the Alfv\'en point location $x_A$, the current distribution parameter $F$, $q$, $\theta_A=\theta|_{x_A}$, and $\psi_{A}=\psi|_{x_A}$.
We obtain $ M^{2}_A~(=1-x^{2}_{A})$ and therefore $\Theta_A$ using equations (\ref{enthp.eq} \& \ref{amach.eq}).
Then we obtain $\sigma_A$ from equations (\ref{bera.eq} \& \ref{arc.eq}) for a given value of $\sigma_{M}$.
Now we obtain the value of $\mu$ and $p_A=dM^{2}/d\theta|_{x_A}$ from equations (\ref{bera.eq}) and
(\ref{sigA.eq}),
respectively.
With these values we integrate equations (\ref{dm2.eq}, \ref{dg.eq}, \ref{amach.eq} or \ref{dTdh.eq}) starting from $x_A$ inward and outward. A generic solution may not pass through the fast point, so we iterate on $\sigma_{M}$ until
the solution passes through the fast point as well. We use the fourth-order Runge-Kutta method for the integration, together with the Newton-Raphson method to accurately obtain the flow quantities $\theta_{\rm f},~\psi_{\rm f},~G^{2}_{\rm f},~M^{2}_{\rm f}$, where the suffix `f' denotes quantities measured at the fast point.
Since we integrate the equations starting from the Alfv\'en point,
$x^{2}_{A},\theta_{A},\psi_{A}$ are essentially the boundary conditions or boundary parameters.
In the present paper, there is no need to specify the adiabatic index $\Gamma$, since it is obtained
self-consistently from the EoS.
In addition to this, we have one more free parameter $\xi$ which controls the composition of the flow.
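As an illustration of the first step of this procedure, the short script below (a minimal sketch written for this description, not the code used to produce the figures of this paper) evaluates the CR EoS functions $f$, $h$ and $g$ and solves $q\,h(\Theta_A)/g(\Theta_A)=1-x^{2}_{A}$ for the Alfv\'en-point temperature; $c=1$ units are used and the root bracket is an illustrative assumption.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

ETA = 1.0 / 1836.15          # electron-to-proton mass ratio m_e/m_p

def f_cr(theta, xi):
    # specific energy function f(Theta, xi) of the CR EoS
    t_e = (2.0 - xi) * (1.0 + theta * (9.0 * theta + 3.0) / (3.0 * theta + 2.0))
    t_p = xi * (1.0 / ETA
                + theta * (9.0 * theta + 3.0 / ETA) / (3.0 * theta + 2.0 / ETA))
    return t_e + t_p

def h_cr(theta, xi):
    # specific enthalpy h = (f + 2 Theta)/K, in units of c^2
    K = 2.0 - xi * (1.0 - 1.0 / ETA)
    return (f_cr(theta, xi) + 2.0 * theta) / K

def g_cr(theta, xi):
    # entropy function g(Theta, xi), with rho = (entropy measure) x g
    K = 2.0 - xi * (1.0 - 1.0 / ETA)
    k1, k2 = 0.75 * (2.0 - xi), 0.75 * xi
    k3 = (f_cr(theta, xi) - K) / (2.0 * theta)
    return (np.exp(k3) * theta**1.5
            * (3.0 * theta + 2.0)**k1 * (3.0 * theta + 2.0 / ETA)**k2)

def theta_alfven(xA2, q, xi, bracket=(1.0e-6, 1.0e3)):
    # Alfven-point temperature from  q h(Theta)/g(Theta) = 1 - xA^2
    fun = lambda th: q * h_cr(th, xi) / g_cr(th, xi) - (1.0 - xA2)
    return brentq(fun, *bracket)

if __name__ == "__main__":
    # parameter set of Fig. 1: xA^2 = 0.75, q = 500, electron-proton plasma
    thA = theta_alfven(xA2=0.75, q=500.0, xi=1.0)
    print("Theta_A   =", thA)
    print("T_A [K] ~ ", thA * 5.93e9)   # Theta -> T via m_e c^2 / k_B
\end{verbatim}
The same EoS functions enter the coefficients $A_{1,2}$, $B_{1,2}$ and $C_{1,2}$ of equations (\ref{dengy.eq})--(\ref{trans.eq}); the full solution is then obtained with the Runge-Kutta integration and the iteration on $\sigma_{M}$ described above.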
\section{Results}
\label{sec:result}
\vspace{0.0cm}
\begin{figure}
\hspace{2.0cm}
\includegraphics[width=12cm,height=12cm]{fig1.eps}
\caption{{Outflow solutions for different values of $F=0.750(\mbox{solid black})$,
$0.760(\mbox{dashed red})$,
$0.770(\mbox{long-dashed green})$, $0.780(\mbox{dashed-dotted blue})$, $0.795(\mbox{long-dashed-dotted magenta})$
and other four parameters are fixed {\em i.e., } $x^{2}_{A}=0.75,\theta_{A}=50,\psi_{A}=55,q=500,\xi=1$. (a) Projected stream
line, (b) $log(M^{2})$, (c) $v_{\rm p}$, (d) $v_{\phi}$, (e) $\mu_{S}=-\varpi\Omega B_{\phi}/(\Psi_{A} c^{2})$ and
$\mu_{M}=\gamma h$, (f) $L_{B}$ and $L_{M}$, (g) $\gamma$, (h) $log(T)$ and (i) $\Gamma$, are plotted
with $log(z)$. Here, $z$ is vertical height and $x$ is cylindrical
radius in units of light cylinder. Solid circles and triangles represent Alfv\'en point and fast-point locations.}}
\label{lab:fig1}
\end{figure}
In this paper, velocities are measured in units of the speed of light $c$ and distances in units of the light cylinder radius
$r_{c}\equiv\frac{c}{\Omega}$.
In our model, there are two main free input parameters $F$ and $q$, three boundary parameters $\psi_{A},\theta_{A},x^{2}_{A}$ and
a composition parameter $\xi$. We study the effect of these parameters on the outflow solutions and on the
collimation of outflowing matter with relativistic EoS.
\subsection{Solutions for different current distributions ($F$)}
In Fig.\ref{lab:fig1}, we plot different solutions for different current distribution parameter
$F=0.750$ (solid black), $0.760~(\mbox{dashed red})$, $0.770~(\mbox{long-dashed green})$, $0.780~(\mbox{dashed-dotted blue})$, $0.795~(\mbox{long-dashed-dotted magenta})$
and other four parameters are fixed {\em i.e., } $x^{2}_{A}=0.75,~\theta_{A}=50,~\psi_{A}=55,~q=500$ \& $\xi=1.0$.
In Fig.\ref{lab:fig1}a, the projected stream line in the $x-z$ plane is plotted. The corresponding flow variables, namely $log(M^{2})$ (Fig.\ref{lab:fig1}b), the poloidal velocity
$v_{\rm p}$ (Fig.\ref{lab:fig1}c), the azimuthal velocity $v_{\phi}$ (Fig.\ref{lab:fig1}d), the Poynting to mass flux ratio $\mu_{S}\equiv-\varpi\Omega B_{\phi}/(\Psi_{A} c^{2})$ and
the matter to mass flux ratio $\mu_{M}\equiv\gamma h$ (Fig.\ref{lab:fig1}e), the angular momentum associated with the magnetic field
$L_{B}\equiv-\varpi B_{\phi}/\Psi_{A}$ and with the matter $L_{M}\equiv h\gamma\varpi v_{\phi}$ (Fig.\ref{lab:fig1}f),
the Lorentz factor $\gamma$ (Fig.\ref{lab:fig1}g), $log(T)$ (Fig.\ref{lab:fig1}h) and the adiabatic index $\Gamma$ (Fig.\ref{lab:fig1}i), are plotted against $log(z)$. In Fig.\ref{lab:fig1}(a), solid circles represent
the Alfv\'en point locations and solid triangles represent the fast point locations; $z$ is the vertical height and
$x$ is the cylindrical radius. In Fig.\ref{lab:fig1}(a), we note that if we
increase $F$, the solution collimates at a greater height $z$.
A higher value of $F$ implies
a weaker magnetic field near the base, so the flow travels to larger $z$ before it starts to collimate.
In Fig.\ref{lab:fig1}(c), we see that $v_{\rm p}$ has a dip, which is due to the interaction of the magnetic field
with the matter. Near the base, $\mu_{S}$ gains at the cost of $\mu_{M}$ (Fig.\ref{lab:fig1}e), therefore there is
a simultaneous decrease in the thermal and kinetic terms.
When the magnetic energy ($\mu_{S}$) becomes sufficiently strong, it starts to accelerate the outflow, although
the outflow temperature continues to decrease. Hence there is a dip in $v_{\rm p}$. Another very interesting
result is that $v_{\phi}$ changes sign from negative to positive (Fig.\ref{lab:fig1}d).
This means that initially the flow
rotates clockwise and, somewhere in between the Alfv\'en and the fast points, the flow flips to a counter-clockwise rotation. In MHD, we have two types of
angular momentum, one associated with the matter, $L_{M}\equiv h\gamma\varpi v_{\phi}$, and the other associated with
the magnetic field, $L_{B}\equiv-\varpi B_{\phi}/\Psi_{A}$. Only the total angular momentum is conserved throughout the
flow, not the individual angular momenta (Fig.\ref{lab:fig1}f). Thus, the azimuthal velocity $v_{\phi}$ changes sign
because of the transfer of angular momentum from the magnetic field to the matter. In Fig.\ref{lab:fig1}(g), the
variation of the Lorentz factor $\gamma$ is shown. We can see that a higher value of $F$ produces outflows with a higher Lorentz factor,
up to $\gamma\sim 45$ ($F=0.795,\mbox{ long-dashed-dotted}$).
In Fig.\ref{lab:fig1}(h), we plot the temperature variation of the outflow with height, for different values of the
$F$ parameter. We can see that the outflow starts with a high temperature when it is sub-Alfv\'enic, and the temperature
drops to a very small value when the flow becomes super-fast. The last panel,
Fig.\ref{lab:fig1}(i), shows that the adiabatic index $\Gamma$ does not remain constant throughout the solution;
it varies from $\Gamma\sim 1.44$ to $5/3$. It is well known that gases with non-relativistic
temperatures have $\Gamma=5/3$, or polytropic index $N=3/2$, while for gases with ultra-relativistic
temperatures, $N\rightarrow 3$ or $\Gamma \rightarrow 4/3$. It may be noted that $N$ is the temperature gradient of the specific
energy of the gas,
i.e., $\sim df/d\Theta$ (see equation \ref{gama.eq}). For non-relativistic thermal speeds ($T\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 10^7$K), the energy density of
the gas (${\bar e}$)
is dominated by the rest-mass energy, so $N$ (and therefore $\Gamma$) remains constant ($\equiv 5/3$). At higher temperatures
the thermal speed becomes relativistic, so the kinetic contribution becomes comparable to the rest mass in ${\bar e}$, and as a result
$N$ increases with rising $T$. But the upper limit of the thermal speed is $c$, so at ultra-relativistic temperatures
the kinetic contribution of the gas particles to ${\bar e}$ saturates and
$N$ again becomes temperature independent,
asymptotically approaching $N\rightarrow 3$ (or $\Gamma \rightarrow 4/3$).
If the temperature of the gas lies in between these two extremes ($10^7~{\rm K} < T < 10^{13}$K), then the thermal state
is described by $3/2\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} N \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 3$
\citep[see, figure 1a of][]{cr09}.
In Fig.\ref{lab:fig1}(h), the temperature drops from $\sim 10^{10}$ K to $\sim 10^4$ K; as the thermal energy decreases,
$\Gamma$ changes from $\sim 1.44$ (near-relativistic) to $\sim 5/3$ (non-relativistic).
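To connect these temperature scales with the dimensionless temperature, note that $\Theta=\kappa_{\rm B}T/(m_{{\rm e}^-}c^{2})$ gives
$$
\Theta \simeq 1.7\times10^{-3}\;(T=10^{7}\,{\rm K}),\qquad \Theta \simeq 1.7\;(T=10^{10}\,{\rm K}),\qquad \Theta \simeq 1.7\times10^{3}\;(T=10^{13}\,{\rm K}),
$$
so the jet base in Fig.\ref{lab:fig1}(h), with $T\sim10^{10}$K, lies in the transitional regime in which the electrons are relativistic while the protons are not, consistent with the value $\Gamma\sim1.44$ seen in Fig.\ref{lab:fig1}(i).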
\begin{figure}
\centering
\subfloat[Streamline side view.]{\includegraphics[width=0.55\textwidth]{fig2a.eps}\label{fig:f2a}}
\hfill
\subfloat[Streamline top view.]{\includegraphics[width=0.38\textwidth]{fig2b.eps}\label{fig:f2b}}
\caption{{Solid lines represent the stream lines of outflow solution for
$x^{2}_{A}=0.75,\theta_{A}=50,\psi_{A}=55,F=0.75,q=500,\xi=1$. (a) Sideview and (b) top view.
There are two dashed circles: the one near the centre at $z\sim 0.73$ represents the Alfv\'en point location and the second at
$z\sim3500$ represents the fast point location. Here, $z$ is the vertical height and $x,y$ are in units of the light
cylinder. Inset: Region close to the base is zoomed to show the location of the Alfv\'en point (dashed circle).}}
\label{lab:fig2}
\end{figure}
In Fig. \ref{lab:fig2}, we plot the stream lines of outflow solution for
$x^{2}_{A}=0.75,\theta_{A}=50,\psi_{A}=55,F=0.75,q=500,\xi=1$. Figures \ref{lab:fig2}(a) \& (b) are the side and top view of stream
lines of the outflow, respectively. Here the $xy$ plane represents the equatorial
plane and $z$ is the vertical height from the
equatorial plane in units of the light cylinder. Of the two dashed circles, the one near the base ($z\sim 0.73$, i.e., the circle in the inset of both panels) represents the Alfv\'en point
location, while the other, at $z\sim 3500$, represents the fast point location. As we discussed before,
the transfer of angular momentum from the field to the matter changes the direction of rotation of the flow. We can also
see in Fig. \ref{lab:fig2} that this transfer of angular momentum from the field to the matter has
twisted the stream lines of the outflow.
\vspace{0.0cm}
\begin{figure}
\hspace{2.0cm}
\includegraphics[width=12cm,height=12cm]{fig3.eps}
\caption{{Outflow solutions are for different values of $\psi_{A}=50~(\mbox{solid black})$,
$52~(\mbox{dashed red})$,
$54~(\mbox{long-dashed green})$, $55~(\mbox{dashed-dotted blue})$, $56~(\mbox{long-dashed-dotted magenta})$.
All the curves plotted are for $x^{2}_{A}=0.75,~\theta_{A}=50,~F=0.75,~q=500,~\&~\xi=1$. Panel (a) Stream
line on the $xz$-plane, (b) $[log(M^{2})]$, (c) $v_{\rm p}$, (d) $v_{\phi}$, (e) $\mu_{S}$ and
$\mu_{M}$, (f) $L_{B}$ and $L_{M}$, (g)
$\gamma$, (h) $log(T)$ and (i) $\Gamma$ versus $log(z)$. Here, solid circles and triangles represent Alfv\'en and fast point locations.}}
\label{lab:fig3}
\end{figure}
\subsection{Solutions for different Alfv\'en point angle ($\psi_{A}$) with the disk}
In Fig.\ref{lab:fig3} we plot outflow solutions for
different values of $\psi_{A}=50~(\mbox{solid, black})$, $52~(\mbox{dashed, red})$,
$54~(\mbox{long-dashed, green})$, $55~(\mbox{dashed-dotted, blue})$ and $56$ (long-dashed-dotted, magenta).
All the curves are for fixed values of $x^{2}_{A}=0.75,\theta_{A}=50,F=0.75,q=500$ and $\xi=1$. In
Fig.\ref{lab:fig3}(a), the solutions with lower values of $\psi_{A}$ are less collimated.
The centrifugal force also has a component along the
poloidal direction, {\em i.e., } the $cos(\psi)$ component of the centrifugal force \citep[see equation 20 in][]{vla03a};
therefore a flow whose Alfv\'en point makes a smaller angle with
the equatorial plane experiences a larger poloidal centrifugal force, which spreads the outflow over larger $x$.
In general, the solutions with lower $\psi_{A}$ have lower $\mu$ and $\sigma_{M}$ and are therefore slower (i.e.,
smaller $v_{\rm p}$). Although $\mu$ and $L$ are constants of motion, the respective magnetic and matter components of
each are not. The azimuthal component of the velocity $v_{\phi}$ also flips sign.
Panels Fig.\ref{lab:fig3}(h-i) show the variation of the temperature and of the adiabatic index (which varies from $1.4$ to $5/3$)
along the flow.
\vspace{0.0cm}
\begin{figure}
\hspace{2.0cm}
\includegraphics[width=12cm,height=12cm]{fig4.eps}
\caption{{Outflow solutions for different values of $\theta_{A}=44~(\mbox{solid black})$,
$46~(\mbox{dashed red})$,
$48~(\mbox{long-dashed green})$, $50~(\mbox{dashed-dotted blue})$, $51~(\mbox{long-dashed-dotted magenta})$
and four parameters are fixed {\em i.e., } $x^{2}_{A}=0.75,\psi_{A}=55,F=0.75,q=500,\xi=1$ for all the curves.
Panel (a) Stream
line on the $xz$-plane, (b) $[log(M^{2})]$, (c) $v_{\rm p}$, (d) $v_{\phi}$, (e) $\mu_{S}$ and
$\mu_{M}$, (f) $L_{B}$ and $L_{M}$, (g)
$\gamma$, (h) $log(T)$ and (i) $\Gamma$ versus $log(z)$. Here, solid circles and triangles represent Alfv\'en and fast point locations.}}
\label{lab:fig4}
\end{figure}
\subsection{Solutions for different Alfv\'en point polar angle ($\theta_{A}$)}
In Fig.\ref{lab:fig4}, we plot outflow solutions for different values of $\theta_{A}=44~(\mbox{solid black})$,
$46~(\mbox{dashed red})$, $48~(\mbox{long-dashed green})$, $50~(\mbox{dashed-dotted blue})$,
$51~(\mbox{long-dashed-dotted magenta})$. Five parameters are fixed $x^{2}_{A}=0.75,~\psi_{A}=55,~F=0.75,~q=500$ and $\xi=1$ for all the curves.
Solutions with smaller $\theta_{A}$ start with a smaller base (small $x$), but expand to a larger $x$, while
the ones starting with larger $\theta_{A}$ show exactly the opposite property. This is because the
solutions with smaller $\theta_{A}$ have a larger value of $B_{\phi}$ near the base, but at higher $z$, $B_{\phi}$
decreases faster than for the ones starting with higher values of $\theta_{A}$.
In general,
$v_{\rm p}$ of the outflow solution is higher for a higher value of $\theta_{A}$ ($51$, long-dashed-dotted, magenta).
The quantities $\mu_S$ and $\mu_M$ feed at each other's cost, although the total specific energy $\mu$ remains constant along the flow.
This is similar to the constancy of the total angular momentum of the flow, whose components associated with the field
and the matter are not individually constant.
As in the previous cases, here too the adiabatic index is not constant.
\vspace{0.0cm}
\begin{figure}
\hspace{2.0cm}
\includegraphics[width=12cm,height=12cm]{fig5.eps}
\caption{{Outflow solutions are for different values of $x^{2}_{A}=0.25~(\mbox{solid black})$,
$0.35~(\mbox{dashed red})$,
$0.55~(\mbox{long-dashed green})$, $0.70~(\mbox{dashed-dotted blue})$, $0.90~(\mbox{long-dashed-dotted magenta})$.
All the curves are plotted for fixed values of $\theta_{A}=50,~\psi_{A}=55,~F=0.75,~q=500,~\&~\xi=1$. Panel (a) Stream
line on the $xz$-plane, (b) $log(M^{2})$, (c) $B_{\rm p}$ and $B_{\phi}$, (d) $F_{{\rm C} \parallel}$ and $F_{{\rm C} \perp}$,
(e) $F_{{\rm B}\parallel}$ and $F_{{\rm B}\perp}$, (f) $\Gamma$ versus $log(z)$.
Here, solid circles and triangles in panel (a), represent Alfv\'en and fast point locations. The inset in panel
(e) zooms on to various curves corresponding to different values of $x_A$.}}
\label{lab:fig5}
\end{figure}
\subsection{Solutions for different Alfv\'en point cylindrical radius ($x_{A}$)}
In Fig. \ref{lab:fig5}, we plot outflow solutions for different values of
$x^{2}_{A}=0.25~(\mbox{solid black})$, $0.35~(\mbox{dashed red})$, $0.55~(\mbox{long-dashed green})$,
$0.70~(\mbox{dashed-dotted blue})$, $0.90$ (long-dashed-dotted, magenta).
The other parameters, kept fixed for all the curves, are $\theta_{A}=50,\psi_{A}=55,F=0.75,q=500$ and $\xi=1$.
The poloidal ($B_{\rm p}$) as well as toroidal magnetic
($B_{\phi}$) fields are higher for flows of higher $x_A$. However at larger $z$,
both the components of the magnetic field fall faster, compared to that in the flows of lower $x_A$ (see Fig. \ref{lab:fig5}c).
Moreover, the components of the centrifugal and magnetic
forces along the streamline ($F_{{\rm C} \parallel}~\&~F_{{\rm B}\parallel}$) are larger for higher values of $x_A$.
On the other hand, collimation is achieved through the competition between the components of the magnetic ($F_{{\rm B}\perp}$)
and centrifugal ($F_{{\rm C}\perp}$) forces orthogonal to the streamline
(Fig. \ref{lab:fig5}a, d, e). As a result, solutions corresponding to lower values of $x_A$ are more collimated (Fig.\ref{lab:fig5}a),
because the resultant of the magnetic and centrifugal forces is directed towards the axis closer to the base than for those with larger
values of $x_A$. This is expected due to the assumption of radial self-similarity.
The $\Gamma$ distributions along the streamline for different values of $x_A$ differ significantly from each other (Fig. \ref{lab:fig5}f). It may be noted that, in almost all the cases, the outflow crosses the light cylinder smoothly.
\subsection{Comparison of solutions for fixed and variable adiabatic index EoS (CR EoS)}
In this section, we compare solutions obtained with a fixed adiabatic index EoS (with $\Gamma=5/3$ and $4/3$)
and with the CR EoS.
\begin{figure}
\hspace{2.0cm}
\includegraphics[width=11cm,height=11cm]{fig6.eps}
\caption{Outflow solutions with variable adiabatic index CR EoS (solid black) with $\xi=1$, fixed adiabatic
index EoS with
$\Gamma=5/3~(\mbox{dashed red})$, and $\Gamma=4/3~(\mbox{long-dashed green})$. All curves are plotted for
$\mu=2.82420$, $x^2_A=0.25,~ \theta_{A}=52,~\psi_{A}=55,$ and $F=0.8$. Panel (a) Stream line on the $xz$-plane,
(b) $logM^{2}$, (c) $v_{\rm p}$, (d) $v_{\phi}$, (e) $logT$, (f) $\Gamma$ versus $log(z)$.}
\label{lab:fig6}
\end{figure}
In Fig. \ref{lab:fig6}, we plot outflow solutions for variable adiabatic index EoS or
CR EoS (solid black) with $\xi=1$ and fixed adiabatic index EoS with
$\Gamma=5/3~(\mbox{dashed red})$ and $\Gamma=4/3~(\mbox{long-dashed green})$.
All curves are plotted for $\mu=2.82420$, $x^2_A=0.25,~ \theta_{A}=52,~\psi_{A}=55,$
and $F=0.8$. Panel (a) shows the stream line on the $xz$-plane, (b) $logM^{2}$, (c) $v_{\rm p}$, (d) $v_{\phi}$,
(e) $logT$, (f) $\Gamma$ versus $log(z)$. In Fig. \ref{lab:fig6}a,
the streamlines of all the outflow solutions for the different EoS are the same. Interestingly,
all the solutions also pass through both the Alfv\'en and the fast critical points.
These solutions also have almost the same Alfv\'en Mach number distribution (Fig. \ref{lab:fig6}b).
However, in Fig. \ref{lab:fig6}c, we can see that there is a significant difference in the poloidal
velocity, and these solutions also have different values of the azimuthal velocity (Fig. \ref{lab:fig6}d).
The solutions using the CR EoS cannot be reproduced with any particular fixed value of $\Gamma$. This has been
shown in many papers in the hydrodynamic (and radiation hydrodynamic) limit \citep{cr09,ck16,kc17,vc19}.
As is expected, solutions with different EoS have different overall temperature
variations (Fig. \ref{lab:fig6}e).
In Fig. \ref{lab:fig6}f, we present the variation of the adiabatic index
for the CR EoS and the comparison with the fixed adiabatic indices. The $T$ profiles of the solutions with different EoS
cross each other at some distance and yet $\Gamma$ computed from the CR EoS
is neither $5/3$ nor $4/3$. It is clear by comparing Figs. \ref{lab:fig6}(e) and (f) that
the temperature obtained by using $\Gamma=4/3$ is less than that obtained by using $\Gamma=5/3$,
which clearly should not be the case.
Since only very hot plasma should be described by $\Gamma=4/3$
and cold plasma ($T<10^7$K, i.e., $T<< m_{{\rm e}^-} c^2/k$) should be described by $\Gamma=5/3$, relativistic flows
described by a fixed $\Gamma$ EoS clearly exhibit a temperature discrepancy.
\vspace{0.0cm}
\begin{figure}
\hspace{2.0cm}
\includegraphics[width=11cm,height=11cm]{fig7.eps}
\caption{{Outflow solutions for different values of $\xi=1.0~(\mbox{solid, black})$,
$0.8~(\mbox{dashed, red})$,
$0.5~(\mbox{long-dashed, green})$, $0.3 ~(\mbox{dashed-dotted, blue})$,
$0.1~(\mbox{long-dashed-dotted, magenta})$. All the curves are plotted for
$x^2_A=0.25,~ \theta_{A}=50,~\psi_{A}=55, ~F=0.75,$ and $q=500$. Panel (a) Stream
line on the $xz$-plane, (b) $v_{\rm p}$, (c) $v_{\phi}$, (d) $\mu_{S}$ and
$\mu_{M}$, (e) $L_{B}$ and $L_{M}$, (f) $\Gamma$ versus $log(z)$. Here, solid circles and triangles represent Alfv\'en and fast point locations.}}
\label{lab:fig7}
\end{figure}
\vspace{0.0cm}
\begin{figure}
\hspace{2.0cm}
\includegraphics[width=11cm,height=11cm]{fig8.eps}
\caption{{Outflow solutions for different values of $\xi=1.0~(\mbox{solid, black})$,
$0.5~(\mbox{dashed, red})$ and
$0.1~(\mbox{long-dashed, green})$. All the curves are plotted for
$\mu=2.23362,~ \theta_{A}=50,~\psi_{A}=55, ~F=0.75,$ and $q=500$. Panel (a) Stream
line on the $xz$-plane, (b) $v_{\rm p}$, (c) $B_{\rm p}$ and $B_{\phi}$, (d) $v_{\phi}$, (e) $log(T)$, (f) $\Gamma$ versus $log(z)$.}}
\label{lab:fig8}
\end{figure}
\begin{figure}
\hspace{2.0cm}
\includegraphics[width=11cm,height=11cm]{fig9.eps}
\caption{{Outflow solutions for different values of $\xi=1.0~(\mbox{solid, black})$,
$0.5~(\mbox{dashed, red})$,
$0.1~(\mbox{long-dashed, green})$. All the curves are plotted for
$L=0.55585,~ \theta_{A}=50,~\psi_{A}=55, ~F=0.75,$ and $q=500$. Panel (a) Stream
line on the $xz$-plane, (b) $v_{\rm p}$, (c) $B_{\rm p}$ and $B_{\phi}$, (d) $v_{\phi}$, (e) $log(T)$, (f) $\Gamma$ versus $log(z)$.}}
\label{lab:fig9}
\end{figure}
\subsection{Solutions for different plasma compositions ($\xi$)}
In Fig.\ref{lab:fig7} we present outflow solutions for different
compositions: $\xi=1.0~(\mbox{solid black})$, which is an electron-proton plasma, $0.8~(\mbox{dashed red})$, $0.5~(\mbox{long-dashed green})$,
$0.3~(\mbox{dashed-dotted blue})$ and $0.1$ (long-dashed-dotted, magenta);
the other five parameters are fixed, {\em i.e., } $x^{2}_{A}=0.25,\theta_{A}=50,\psi_{A}=55,F=0.75,q=500$.
In these solutions $\mu$ and $\sigma_{M}$ increase slightly with the increase in $\xi$, if $x_A,~\theta_{A},~\psi_{A}$ and $q$ are kept constant.
This is also reflected in the plots of $\mu_S$ and $\mu_M$, as well as $L_B$ and $L_M$ (Fig.\ref{lab:fig7}d, e).
There is very little difference in the streamlines of the jets (Fig.\ref{lab:fig7}a).
However, in varying the composition of the flow from electron-proton plasma ({\em i.e., } $\xi=1.0$) to
pair-dominated flow ($\xi=0.1$), $v_{\rm p}$ and $v_\phi$ of the flow vary significantly with $\xi$ (Figs.\ref{lab:fig7} b \& c).
Even $\mu_S$, $\mu_M$ and $L_B$, $L_M$ depend on $\xi$ (Fig. \ref{lab:fig7}d \& e).
Since $\xi$ also
influences the thermodynamics of the flow, the temperature of the jet is also crucially influenced
by its composition. As a result the adiabatic index $\Gamma$ also depends on $\xi$ (Fig.\ref{lab:fig7}f).
It may be noted that the temperature of the pair-dominated flow is
higher than that of the electron-proton flow, and therefore $\Gamma$ at any given $z$ is lower for flows with a lower value of $\xi$.
Since we are comparing flows with the same $x_A$ (equivalently, $M_A$),
it can be easily shown from equation \ref{amach.eq} that the temperature of the pair-dominated flow will be higher.
In Fig. \ref{lab:fig8}, we plot magnetized outflow solutions for different compositions, $\xi=1.0$ (solid, black), $0.5$
(dashed, red) and $0.1$ (long-dashed, green), but for the same $\mu=2.23362,~ \theta_{A}=50,~\psi_{A}=55, ~F=0.75,$ and $q=500$.
So all these solutions have the same Bernoulli parameter $\mu$. Since all other parameters are the same, the magnetic field components
and streamlines of each are almost the same (Figs. \ref{lab:fig8}a \& c), yet the $v_{\rm p}$ \& $v_\phi$ distributions (Figs. \ref{lab:fig8}b \& d)
are completely different
for flows with different $\xi$. Moreover, even the temperature ($T$) and $\Gamma$ depend on the composition
parameter (Fig. \ref{lab:fig8}e \& f). The baryon-poor outflows with the same Bernoulli parameter are slower and hotter
compared to the electron-proton flow. However, the gain in $v_{\rm p}$ is larger for the pair-dominated flow than for the electron-proton flow.
In Fig. \ref{lab:fig9}, we plot magnetized outflow solutions for different compositions, $\xi=1.0$ (solid, black),
$0.5$ (dashed, red) and $0.1$ (long-dashed, green), but for the same $L=0.55585,~ \theta_{A}=50,~\psi_{A}=55, ~F=0.75,$ and
$q=500$, i.e., we compare outflows launched with the same total angular momentum (or $L$)
but different $\xi$. The streamlines are again almost the same (Fig. \ref{lab:fig9}a); however, $v_{\rm p}$, $v_\phi$,
and $T$ or $\Gamma$ (Figs. \ref{lab:fig9}b---f) are significantly different for flows with different $\xi$.
It may be remembered that the general expressions of the constants of motion $\mu$ and $L$ in physical units are \citep{vla03a}
$$
\mu=h\gamma-\frac{\varpi \Omega B_{\phi}}{\Psi_{A} c^2};~L=\varpi \gamma h v_\phi - \frac{\varpi B_{\phi}}{\Psi_{A}}
$$
From equation \ref{enthp.eq}, it is also clear that $h$ depends on the composition parameter $\xi$. So, for a given $\mu$ or $L$, if
$B_{\phi}$ is similar at the base, then $\gamma$ ({\em i.e., } $v_{\rm p},~v_\phi$) and $\Theta$ will depend on $\xi$.
That is exactly what we see in Figs. \ref{lab:fig8} \& \ref{lab:fig9}.
The dependence of the flow velocity and temperature on the composition of the flow
has also been shown recently in the hydrodynamic regime \citep{cr09,ck16,vc19,sc19}.
Therefore, it is expected that some imprint of the flow composition should appear in the radiative output of the flow.
\begin{figure}
\hspace{2.0cm}
\includegraphics[width=11cm,height=11cm]{fig10.eps}
\caption{Outflow solutions for composition $\xi=0.0$. All the curves are plotted for $x^2_A=0.75,~ \theta_{A}=50,~\psi_{A}=55, ~F=0.75,$ and $q=0.05$. Panel (a) Stream
line on the $xz$-plane, (b) $v_{\rm p}$, (c) $B_{\rm p}$ and $B_{\phi}$, (d) parallel forces, (e) perpendicular forces, (f) $\Gamma$ versus $log(z)$.}
\label{lab:fig10}
\end{figure}
In Fig. \ref{lab:fig10}, we plot an electron-positron outflow solution, i.e., a
flow having $\xi=0.0$. The other parameters are $x^2_A=0.75,~ \theta_{A}=50,~\psi_{A}=55, ~F=0.75,$
and $q=0.05$. From Fig. \ref{lab:fig10}b, it is clear that the pure leptonic flow is also a trans-fast flow, and the
velocity profile is similar to that of the proton-poor flows plotted in Fig. \ref{lab:fig8}b.
In Fig. \ref{lab:fig10}d, we plot the forces which control the poloidal
acceleration of the flow, namely the parallel inertial force $F_{I\parallel}$, the parallel `gamma' force
$F_{G\parallel}\equiv F_{GP\parallel} + F_{G\phi\parallel}$, the parallel total thermal gradient force
$F_{TP\parallel}\equiv F_{T\parallel} + F_{P\parallel}$, the parallel centrifugal force $F_{C\parallel}$, and the
parallel magnetic force $F_{B\parallel}$ \citep[for more details see section 2.2 in][]{vla03a}. In the inset of Fig. \ref{lab:fig10}d, we can note that
these forces are comparable to each other at lower values of $z$; however, at greater values of $z$, the $F_{B\parallel}$ and $F_{G\parallel}$ forces control the poloidal acceleration. In Fig. \ref{lab:fig10}e, we plot all the
forces perpendicular to the poloidal fieldlines, e.g., $F_{I\perp}$ (inertial), $F_{E\perp}$ (electric),
$F_{P\perp}$ (pressure gradient), $F_{C\perp}$ (centrifugal), and $F_{B\perp}$ (magnetic). The perpendicular forces behave similarly to the
parallel forces; however, at larger distances, $F_{E\perp}$ and $F_{B\perp}$ control the collimation of
the flow. In Fig. \ref{lab:fig10}f, the adiabatic index for the pure lepton flow varies from $\sim 1.44$
to $\sim 5/3$.
\section{Discussion and Concluding Remarks}
\label{sec:conclude}
In this paper we have solved the relativistic magneto-hydrodynamic equations using a relativistic equation of state, in order to
study relativistic outflows. A flow can be relativistic on account of its bulk velocity (i.e., $v_{\rm p} \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} c$) and also
in terms of its temperature, i.e., when $kT_i \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} m_ic^2$ (the subscript $i$ represents the type of constituent particle).
The first condition arises for outflows far away from a black hole, while the second
condition arises especially in the region close to the black hole horizon, which acts
as the base of an astrophysical jet. A form of EoS (CR) which can transit between relativistic and non-relativistic
temperatures has been used in this paper. As discussed throughout this paper,
$\Gamma$ is a function of temperature in the CR EoS and is automatically determined from the temperature distribution.
There are a few papers in the hydrodynamic regime (read: in the absence of ordered magnetic
field) which discuss the application of the relativistic EoS to accretion and jets \citep{ck16,kc17,vc19}.
However, as far as we know, there have been no previous attempts to solve for relativistic, trans-Alfv\'enic, trans-magnetosonic
plasma described by a relativistic EoS and to study the effect of different compositions of the plasma.
Since the MHD equations are only applicable to fully ionized
plasma, the composition of the flow is likely to be either an electron-proton ($\xi=1$) plasma or an
electron-positron-proton ($0<\xi<1$) plasma.
In this paper, we have studied how various parameters like the Bernoulli constant, the current distribution, the location of the
Alfv\'en point, etc., affect the outflow solution, at first only for electron-proton plasma, and we have then studied the effect of different EoS
and different compositions on the outflow solutions.
We investigated the
role played by all the flow parameters in shaping the final solution
of the outflow.
We found that the current distribution affects the stream line structure, as well as the flow velocities, especially close to the base.
We also found that not only the current distribution, but also the angle the poloidal fieldline makes with the equatorial plane, affects the
solutions. In particular, the streamlines which make a smaller angle
with the equatorial plane are slower and less collimated. In addition, the narrower the polar angle of the Alfv\'en point with respect to the axis of the flow,
the slower and less collimated is the outflow. These two angles, namely $\psi_{A}$ and $\theta_{A}$, are independent of each other. For a given
composition, the location of the Alfv\'en point has a significant effect on the Bernoulli parameter $\mu$, the streamline and the Lorentz
factor of the flow.
We found that the $q$ parameter, which depends on the entropy, does not by itself significantly
affect the outflow solutions, except for the temperature, but in conjunction with other parameters it plays an important role.
We have also compared the outflow solutions using the fixed adiabatic index EoS with
the one using the CR EoS, for given values of $\mu$, $x_A$, $\theta_A$, $\psi_A$ and $F$.
Although the streamlines are similar, the distributions of the flow variables ($v_{\rm p}$, $v_{\phi}$, and $T$)
are significantly different. Interestingly, the solutions for all the EoS pass through both the
critical points (Alfv\'en and fast magnetosonic). It
may be noted that \citet{vla03a,vla03b} only obtained trans-Alfv\'enic outflow using $\Gamma=4/3$,
but \citet{pol10} obtained trans-Alfv\'enic, trans-fast outflow solutions using $\Gamma=5/3$.
However, we showed that even with $\Gamma=4/3$
one can obtain a trans-Alfv\'enic, trans-fast outflow solution (Fig. \ref{lab:fig6}a). It appears that, depending
on the values of the other parameters, there exists a critical value of $F$, below which the flow passes through both the critical points,
but for higher values of $F$ the outflow is only
trans-Alfv\'enic in nature. For example, for the parameters related to Fig. \ref{lab:fig1}, a trans-Alfv\'enic, trans-fast outflow is possible
if $F<0.82$.
We showed that jets of all compositions pass through the Alfv\'en and the fast points,
and get collimated towards the axis after crossing the fast point. We compared solutions with different compositions,
but for the same values of the Alfv\'en point, or the Bernoulli constant, or the total angular momentum. In all the cases,
the composition has little effect on the streamlines, but the $v_{\rm p}$, $v_{\phi}$ and $T$ distributions are significantly different.
This means that the electromagnetic output of such outflows should also depend on the composition.
Since pair plasma has been regularly invoked as the composition of jets,
we have also presented one case of a pure pair plasma (i.e., $\xi=0.0$) outflow
solution, and it nicely passes through both critical points. The pair plasma outflow accelerates mainly between the
sub-Alfv\'enic region and the super-fast region.
The effect of composition is quite pronounced in the presence of gravity,
as was seen in the hydrodynamic limit \citep{cr09,kscc13,ck16} as well as in the non-relativistic MHD regime \citep{sc18,sc19}.
We therefore expect the effect of the CR EoS to be even more
pronounced in the RMHD limit if gravity is considered. However, the consideration of gravity is presently beyond the
scope of this paper. It may be noted that RMHD equations combined with pseudo-Newtonian gravity have been
used to study outflows previously, with very interesting results \citep{pmm13,pmm14,cec18}.
In this paper, the jet passes through only two critical points (Alfv\'en and fast) and not the slow one. The slow
point appears in the presence of gravity. The existence of a slow-magnetosonic point ensures low velocity and high temperature
at the base of the jet, or in other words, corrects the boundary condition at the jet base.
In all the solutions, the jet streamlines show that,
after crossing the fast point, over-collimation or magnetic-field pinching may produce a shock.
Since the flow moves with super-fast speed,
the formation of a shock will not affect the flow upstream, and the shock location can be related to the
location of the fast point. In the case of M87, \citet{asa12} showed that the jet radius as a function of jet height is well fitted by a
parabolic curve up to a height of $5\times 10^{5}r_{g}$, beyond which the jet radius follows a conical
structure. There is a dip in the jet radius near HST-1, which is located at $5\times 10^{5}r_{g}$, {\em i.e.,} the jet radius
versus height departs from the parabolic structure, and this may be due to a collimation shock.
\section*{Acknowledgement}
The authors acknowledge the anonymous referee for helpful suggestions to improve the quality of the paper.
\section{Introduction}
Quantum monitoring refers in general to the action of performing
a sequence of quantum measurements on a system or a portion of
it\,\cite{JacobsContempPhys2006,WisemanBook,WeberNature2014,RossiPRL2020}. Since a single quantum measurement is a dynamical process of probabilistic nature, it is customary to associate to any sequence of measurements a stochastic process that obeys, over time, a specific probability distribution\,\cite{WisemanBook,JacobsBook}. Such a distribution usually depends on properties of both the system and the measured observable, and even on external sources of noise\,\cite{HatridgeScience2013,GherardiniPRA2019}.
The study of sequences of quantum measurements, especially projective ones, is broad and covers several topics, ranging from fundamental quantum physics and quantum Zeno phenomena\,\cite{ItanoPRA1990,KwiatPRL1995,KofmanNature2000,FischerPRL2001,FacchiJPA2008, WoltersPRA2013,ZhuPRL2014,SchaferNatComm2014,SignolesNatPhys2014,GherardiniNJP2016,ChaudhrySciRep2017}, through quantum metrology and sensing\,\cite{KiilerichPRA2015,MuellerPRA2016,PiacentiniNatPhys2017,SchioppoNatPhot2017,DoNJP2019,SakuldeePRA2020}, to
quantum thermodynamics\,\cite{CampisiPRL2010,CampisiPRE2011,HorowitzPRE2012,YiPRE2013,FuscoPRX2014,GherardiniQST2017,ElouardNPJ2017,BayatPRL2018,GherardiniPRE2018,GarciaPintoPRL2019,MartinsPRA2019,Hernandez2019,GiachettiCondMatt2020}, both at the theoretical and experimental level. An active line of research has focused on the characterization of the thermodynamic principles ruling the statistics of the measurement outcomes, with several contributions making use of quantum fluctuation theorems and Jarzynski
relations\,\cite{EspositoRMP2009,CampisiRMP2011,SagawaBook2013,JaramilloPRE2017,DenzlerPRE2018,GherardiniQST2018,ManzanoPRX2018}. Within this framework, since each measurement entails a sudden energy variation with a given probability, one can also analyze the probability distribution of the heat exchanged by a monitored quantum system with its surroundings, as done
in Refs.\,\cite{GherardiniPRE2018,GiachettiCondMatt2020} for
two- and three-level quantum systems.
In this paper, we study the asymptotic behaviour\,\cite{vanZonPRE2008,RibeiroArxiv2020} of an $N$-level quantum system subjected to a randomly distributed sequence of quantum projective measurements. As a figure of merit, we consider the statistics of the heat exchanged by the system with its surroundings. Our main motivation is three-fold. {\em (i)} There is an inherent difference in the response of a quantum system to a series of projective measurements depending on whether it has a finite number of levels (say $N$) or it is continuous, so we aim at investigating how the limit of large $N$ affects the results found for finite $N$, such as the ones presented in\,\cite{GherardiniPRE2018,GiachettiCondMatt2020}. {\em (ii)} For spin-$s$ systems, the classical limit is retrieved for $s \to \infty$, so a natural question is to study how the effects of quantum measurements change by varying/increasing the quantum spin label $s$ that counts the possible projections $s_z$, whose number is $2s+1$, and plays the role of the number of levels $N$ [in the sense that the matrices that represent observables, including the ones measured in the monitoring process, have dimension $(2s+1) \times (2s+1)$]. {\em (iii)} We are as well motivated by recent experimental results obtained on negatively charged nitrogen-vacancy (NV) centers. An NV center is a localized impurity in the diamond lattice based on a nitrogen substitutional atom and a nearby vacancy. In the NV experiment reported in\,\cite{Hernandez2019}, it is possible to locally address the impurity and perform a sequence of quantum projective measurements along the $z$-axis (not commuting with the energy eigenbasis of the system).
In\,\cite{Hernandez2019} a tendency of the quantum system towards an equilibrium thermal state with infinite temperature has been observed, which can be seen as an instance of an Infinite-Temperature Thermalization (ITT) process. Given that NV experiments can often be modeled by resorting to two-level, spin-$1/2$ Hamiltonians, the naturally arising question is what the interplay is between the number of levels of the system and the quantum monitoring protocol, i.e., the number of measurements and the time interval between them.
To our knowledge, there are no works in the literature that systematically
discuss how internal energy fluctuations distribute over time
in an $N$-level quantum system subjected to $M$ projective
quantum measurements. Our paper aims at filling this gap by predicting the non-equilibrium behaviour of the monitored system in the thermodynamic limits of large $M$ and $N$, both ideally infinite. The projective measurements are defined by a generic Hermitian observable and separated by a non-zero time interval $\tau$. We will mostly consider the case in which the time intervals between subsequent measurements are randomly chosen with average $\tau$, but the obtained results do not depend on the randomness of the time interval.
The paper is structured as follows. In Sec.\,\ref{sec2} we describe the
non-equilibrium dynamics to which a monitored $N$-level quantum system is
subjected, while in Secs.\,\ref{sec:large_M_limit} and \ref{sec4}
the asymptotic behaviour of the quantum system dynamics, as well as of its heat statistics, is analysed in the thermodynamic limit of a large (ideally infinite) number of intermediate projective measurements. In such a limit, ITT can occur. Exceptions to ITT are then addressed in Sec.\,\ref{sec:excep}, while in Sec.\,\ref{sec:large} we show
results in the thermodynamic limit of large $N$. An example of incomplete
thermalization is presented in Sec.\,\ref{sec:inc}.
Finally, conclusions are discussed in Sec.\,\ref{sec:concl}.
\section{Non-equilibrium dynamics}\label{sec2}
Let us consider a quantum system defined on an $N$-dimensional Hilbert space, whose Hamiltonian $H$, assumed to be time-independent, admits the following spectral decomposition:
\begin{equation}
H = \sum^N_{k=1} E_k |E_k\rangle\!\langle E_k|.
\end{equation}
At time $t=0^{-}$ the system is supposed to be in an arbitrary quantum state described by the density operator $\rho_0$. We then apply the two-point measurement scheme\,\cite{TalknerPRE2007},
where a projective measurement of energy is performed both at the initial
and at the final time. Therefore, at time $t = 0^{+}$ a first projective energy measurement is carried out, so that the state of the system after the measurement is one of the projectors $|E_k\rangle\!\langle E_k|$ with probability $c_k$ (where $c_k >0 \ \forall k = 1, \dots, N$ and $\sum^N_{k=1} c_k =1$), while the energy of the system is $E_k$.
Afterwards, the system undergoes a number $M$ of consecutive projective measurements of the generic observable
\begin{equation}
\mathcal{O} \equiv \sum^N_{k=1} \alpha_k |\alpha_k\rangle\!\langle\alpha_k|
\end{equation}
where $\alpha_k$ and $\ket{\alpha_k}$ are the outcomes and eigenstates of $\mathcal{O}$, respectively. We suppose $[H,\mathcal{O}] \neq 0$.
The monitoring protocol is detailed as follows. Between the energy measurement at time $t = 0^{+}$ and the first measurement of $\mathcal{O}$, the system does not evolve apart from a trivial phase, since only the Hamiltonian acts in this time interval. After each measurement of $\mathcal{O}$ the state of the system is given by one of the projectors $|\alpha_k\rangle\!\langle\alpha_k|$ with probability $\pi_k = \operatorname{Tr}[\rho_0|\alpha_k\rangle\!\langle\alpha_k|]$\,\cite{vN}. During the time-interval between the $(j-1)^{\text{th}}$ and the $j^{\text{th}}$ measurement of $\mathcal{O}$, the system evolves according to the unitary dynamics generated by $H$, i.e., $U(\tau_j) = e^{-iH\tau_j}$, where $\hbar$ is set to unity and the waiting times $\tau_j$ denote the interval between two consecutive measurements. The latter may not be deterministic quantities, since also $\tau_j$ can be random variables distributed by following the joint Probability Density Function (PDF) $p(\tau_1,\dots,\tau_M)$. The numerical simulations in the considered cases show that taking the waiting times $\tau_j$ as random variables or fixed does not alter the results and the late-time dynamics. The probability of finding the system in $|\alpha_k\rangle\!\langle\alpha_k|$ after the $M^{\text{th}}$ measurement is denoted as $\Tilde{\pi}_{k_M}$. Finally, a second energy measurement is performed immediately after the last, the $M^{\text{th}}$, measurement of $\mathcal{O}$. We denote by $E_m$ the outcome of the second and final energy measurement
(so that the final state of the system is $\ketbra{E_m}{E_m}$), and by $p_m$ the corresponding probability. It holds that $p_m = \sum_k \Tilde{\pi}_k |\braket{\alpha_k}{E_m}|^2$.
The internal energy variation $\Delta U$ of the system is defined as\,\cite{TalknerPRE2007}
\begin{equation}\label{eq:def_Q}
\Delta U \equiv E_m - E_n \,,
\end{equation}
which is thus a random variable. By considering the performed
projective measurements as random exogenous genuinely-quantum processes,
one can classify the internal energy variation $\Delta U$ as heat $Q$, absorbed or emitted by the system\,\cite{GherardiniPRE2018}.
In the following, we will denote by $\boldsymbol\tau \equiv (\tau_1, \dots,\tau_M)$ the sequence of waiting times and by $\mathbf{k} \equiv (k_1, \dots, k_M)$ the sequence of the outcomes obtained by measuring $\mathcal{O}$ in a single realization of the protocol. As we are going to observe, the most important contribution to the variation of dynamical quantities occurs during the $M$ measurements of $\mathcal{O}$. For this purpose, let us introduce the conditional probability $P_{k_M|k_1}$
to obtain the outcome $\alpha_{k_M}$ from the $M^{\text{th}}$ measurement of $\mathcal{O}$, knowing that the first intermediate-measurement outcome was $\alpha_{k_1}$. The conditional probability $P_{k_M|k_1}$ is such that
\begin{equation}
\Tilde{\pi}_{k_M} = \sum_{k_1} P_{k_M|k_1} \pi_{k_1}\,.
\end{equation}
Since all the $M$ measurements are projective, one can check that
\begin{equation}\label{tildepi}
P_{k_M|k_1} = \int d^{M} \boldsymbol\tau \ p(\boldsymbol{\tau}) \sum_{k_{2}, \dots, k_{M-1}} \operatorname{Tr}\left[ \nu_{\mathbf{k},\boldsymbol\tau} |\alpha_{k_1}\rangle\!\langle\alpha_{k_1}| \nu^{\dagger}_{\mathbf{k},\boldsymbol\tau}\right]
\end{equation}
where we have introduced the quantities
\begin{equation}\label{Nu}
\begin{split}
\nu_{\mathbf{k},\boldsymbol \tau} &\equiv \ketbra{\alpha_{k_M}}{\alpha_{k_M}} U(\tau_{M-1}) \cdots \ketbra{\alpha_{k_2}}{\alpha_{k_2}} U(\tau_1) \\
&= \prod^{M}_{j=3} \bra{\alpha_{k_{j}}} U(\tau_{j-1}) \ket{\alpha_{k_{j-1}}} \ket{\alpha_{k_M}}\bra{\alpha_{k_2}} U(\tau_1).
\end{split}
\end{equation}
It is worth noting that Eq.\,\eqref{tildepi} can be rewritten,
in matrix notation, as:
\begin{equation} \label{matrixnotation}
P_{k_M|k_1} = \int d^{M} \boldsymbol\tau \ p(\boldsymbol{\tau}) \bra{\alpha_{k_M}} \prod^M_{j=2} L(\tau_{j-1}) \ket{\alpha_{k_1}}
\end{equation}
with
\begin{equation}
\bra{\alpha_{k_{j-1}}} L(\tau_{j-1}) \ket{\alpha_{k_j}} \equiv \lvert \bra{\alpha_{k_{j-1}}} U(\tau_{j-1}) \ket{\alpha_{k_j}} \rvert^{2}.
\end{equation}
This expression has a clear physical interpretation in terms of the
formalism of stochastic processes. As a matter of fact, the quantity
$|\bra{\alpha_{k_{j-1}}} U(\tau_{j-1}) \ket{\alpha_{k_j}}|^2$ is the conditional probability to obtain the outcome $\alpha_{k_j}$ from the $j^{\text{th}}$ projective measurement once the outcome $\alpha_{k_{j-1}}$ has been obtained from the $(j-1)^{\text{th}}$ one. Then, each $L(\tau)$ can be seen as the transition matrix pertaining to a discrete-time Markov chain in which the eigenstates of the observable $\mathcal{O}$ play the role of the states of the Markov chain. Consequently, the operator $L(\tau)$ is a \emph{stochastic matrix} with rows and columns summing to $1$. This property of $L(\tau)$ can be easily verified by observing
that
\begin{eqnarray}
\displaystyle{\sum_{k=1}^N \bra{\alpha_{\ell}} L(\tau) \ket{\alpha_k}} &=& \displaystyle{\sum_{k=1}^N \bra{\alpha_{\ell}} U(\tau) \ketbra{\alpha_k}{\alpha_k} U^{\dagger}(\tau) \ket{\alpha_{\ell}}}\nonumber \\
&=&\bra{\alpha_{\ell}}U(\tau)U^{\dagger}(\tau)\ket{\alpha_{\ell}} = 1
\end{eqnarray}
$\forall\ell=1,\dots,N$. In the following, the large-$M$ behaviour of a
monitored $N$-level quantum system is analyzed by studying the asymptotic properties of the transition matrix $L(\tau)$.
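For concreteness, the construction of $L(\tau)$ can be checked numerically. The following minimal Python sketch (illustrative only and not part of the analysis above; the randomly generated Hamiltonian and observable, and the value of $\tau$, are arbitrary choices) builds $L(\tau)$ from its definition and verifies that it is doubly stochastic and, for a real-symmetric Hamiltonian, symmetric.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, tau = 4, 1.0

# Random real-symmetric Hamiltonian H and observable O (illustrative choices only).
H = rng.normal(size=(N, N)); H = (H + H.T) / 2
O = rng.normal(size=(N, N)); O = (O + O.T) / 2

E, V = np.linalg.eigh(H)          # spectral decomposition of H
_, alpha = np.linalg.eigh(O)      # columns are the eigenvectors |alpha_k> of O

# Propagator U(tau) = exp(-i H tau), with hbar = 1 as in the text.
U = V @ np.diag(np.exp(-1j * E * tau)) @ V.conj().T

# Transition matrix L(tau): L_{k,l} = |<alpha_k| U(tau) |alpha_l>|^2.
L = np.abs(alpha.conj().T @ U @ alpha) ** 2

# L is doubly stochastic; with a real-symmetric H it is also symmetric, as assumed in the text.
print(np.allclose(L.sum(axis=0), 1.0), np.allclose(L.sum(axis=1), 1.0), np.allclose(L, L.T))
\end{verbatim}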
\section{Large-M limit}\label{sec:large_M_limit}
In this paragraph, the asymptotic behaviour of $P_{k_M|k_1}$ is studied in the limit of $M \gg 1$. The time intervals $\tau_j$ are different from zero and on average greater than the inverse of the energy scale of the analysed quantum system for any $j=1,\ldots,M$. In this way the system dynamics is not stuck in the quantum Zeno regime\,\cite{KofmanNature2000,FacchiPRL2002,FacchiJPA2008,SmerziPRL2012,SchaferNatComm2014,SignolesNatPhys2014,GherardiniNJP2016,MuellerAdP2017}.
Let us start by observing that, since $\{L(\tau_j)\}_{j=1}^{M-1}$ are transition matrices (expressed as a function of conditional probabilities), they are
symmetric stochastic operators. In particular, since each element of the
transition matrix $L(\tau_j)$ is the square modulus of the corresponding
element of a unitary matrix, the $L(\tau_j)$'s are unistochastic matrices.
Thus, all their eigenvalues $\lambda_k$ are such that $|\lambda_k| \leq 1$ and at least one of them is equal to $1$. More formally, one can state that
$-1 \leq \lambda_k \leq 1$ with $k=1,\ldots,N$. For the sake of simplicity,
we also assume that $\tau_1 = \dots = \tau_M \equiv \tau$. In the limit of
large $M$, the product of the transition matrices $L(\tau)$ behaves asymptotically as a proper combination of the projectors $\mathcal{P}_{\lambda=1}$ and $\mathcal{P}_{\lambda=-1}$ associated, respectively, to the eigenspaces identified by $\lambda=1$ and $\lambda=-1$. In other terms,
\begin{equation}\label{eq:product_L}
L(\tau)^{M-1} \rightarrow \mathcal{P}_{\lambda=1} + (-)^{M-1} \mathcal{P}_{\lambda=-1}\,.
\end{equation}
However, while we are guaranteed the eigenvalue $\lambda = 1$ actually exists for any $\tau$, the presence of the eigenvalue $\lambda=-1$ is not so obvious. For example, in the $N=2$ case, the smallest eigenvalue of $L$ is given by $\lambda = 1 - 2 \sin^2(\phi)\sin^2\left(\frac{\Delta E \tau}{2}\right)$, where $\Delta E$ denotes the energy gap of the qubit,
while $\phi$ is the angle that defines the rotation bringing the
eigenbasis of the Hamiltonian $H$ over the eigenbasis of the measurement observable $\mathcal{O}$. In order to get $\lambda = -1$, not only do we need to choose a very specific value of $\mathcal{O}$ (i.e., an observable $\mathcal{O}$ such that $\sin(\phi) = \pm 1$), but we also need to assume $\tau^{\ast} = \frac{(2k+1)\pi}{\Delta E}$ with $k \in \mathbb{Z}$. It is clear that, apart from fine-tuned cases, the concurrence of both these conditions in an $N$-level system does not take place (especially if the time intervals $\tau_j$ are randomly distributed). As a result, one can expect on physical grounds that $\mathcal{P}_{\lambda=-1} = 0$ such that
\begin{equation}\label{bruttina}
L(\tau)^M \rightarrow \mathcal{P}_{\lambda=1} \,.
\end{equation}
However, it is important to note that Eq.\,\eqref{bruttina} does
not imply that, in a single realization of the system dynamics, the
effects originating from the presence of rare fluctuations are absent.
In such a case, indeed, the evaluation of higher-order statistical moments
could still be required. For more details on the analysis of the impact
of rare fluctuations on the statistics of quantum observables, the reader
can refer e.g.\,to Refs.\,\cite{GherardiniQST2017,GherardiniPRA2019}, which
analyze the problem by means of large deviation theory.
What has been discussed so far holds for a generic stochastic matrix.
However, since $L(\tau)$ is also symmetric, one can verify that
\begin{equation}
\ket{v} = \frac{1}{\sqrt{N}}\sum_{k=1}^{N}\,\ket{\alpha_k}
\end{equation}
is such that $L(\tau) \ket{v} = \ket{v}$ for all values of $\tau$. This
means that $\ket{v}$ is invariant to the application of the stochastic
matrix $L(\tau)$, or in other terms, $\ket{v}$ is a \emph{fixed point}
of $L(\tau)$. If we assume that $\lambda=1$ is non-degenerate,
then $L(\tau)^{M-1} \rightarrow |v\rangle\!\langle v|$. Thus, since the eigenvector $\ket{v}$ does not depend on the value of $\tau$, we can
conclude that
\begin{equation} \label{bellina}
L(\tau_{M-1}) \cdots L(\tau_1) \rightarrow \ketbra{v}{v}
\end{equation}
even for randomly distributed $\tau$'s,
as long as the set of $\boldsymbol{\tau}$ for which $\lambda=-1$
is an eigenvalue, or $\lambda = 1$ is degenerate, has zero measure.
However, such a degeneracy of $\lambda=1$ can occur and
the corresponding analysis is postponed to Sec.\,\ref{sub_A}. It is also worth noting that, in the Markov chain language, the validity of Eq.\,(\ref{bellina}) means that the underlying process is ergodic and admits a unique asymptotic configuration, i.e., the uniform one whereby the probabilities that the final state of the system is one of the eigenvectors $\ket{\alpha_k}$ of $\mathcal{O}$ are the same.
Let us explore the meaning of this property in our context. In the $M \gg 1$ limit, Eq.\,\eqref{matrixnotation} becomes
\begin{equation}
P_{k_M|k_1} = \braket{\alpha_{k_M}}{v} \braket{v}{\alpha_{k_1}} = \frac{1}{N}
\end{equation}
so that, regardless from the state of the system after the first measurement of $\mathcal{O}$, one has:
\begin{equation}
\Tilde{\pi}_{k_M} = \sum_{k_1} P_{k_M|k_1} \pi_{k_1} = \frac{1}{N}\,.
\end{equation}
Thus, as expected, the information on the initial condition is lost as
$M$ increases. Moreover, this result is also independent of the form
of the observable $\mathcal{O}$, and all the possible outcomes
$|\alpha_{k_M}\rangle\!\langle\alpha_{k_M}|$ are equiprobable.
Accordingly, the state of the system after the $M^{\text{th}}$
measurement (with $M \gg 1$) is described by the maximally mixed state
\begin{equation} \label{finalstate}
\rho_M = \frac{\mathbb{I}}{N}\,.
\end{equation}
Note that, since $\rho_M$ is diagonal in every basis, the second energy
measurement (corresponding to the last measurement of the whole non-equilibrium dynamics) has no effect, and all the final energy
outcomes are also equiprobable.
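The convergence towards the maximally mixed state can be illustrated with a short numerical sketch (again in Python and purely illustrative; the system, the observable and the distribution of the waiting times are arbitrary assumptions): propagating an arbitrary initial distribution over the eigenstates of $\mathcal{O}$ through many intermediate measurements drives it to the uniform distribution $1/N$.
\begin{verbatim}
import numpy as np

def transition_matrix(H, O, tau):
    """L(tau) with elements |<alpha_k| exp(-i H tau) |alpha_l>|^2."""
    E, V = np.linalg.eigh(H)
    _, alpha = np.linalg.eigh(O)
    U = V @ np.diag(np.exp(-1j * E * tau)) @ V.conj().T
    return np.abs(alpha.conj().T @ U @ alpha) ** 2

rng = np.random.default_rng(1)
N = 5
H = rng.normal(size=(N, N)); H = (H + H.T) / 2   # illustrative Hamiltonian
O = rng.normal(size=(N, N)); O = (O + O.T) / 2   # illustrative observable

# Random waiting times with mean ~1 (the result does not rely on their randomness).
taus = rng.uniform(0.5, 1.5, size=500)

pi = np.zeros(N); pi[0] = 1.0                    # arbitrary initial distribution over |alpha_k>
for t in taus:
    # one free evolution of duration t followed by a projective measurement of O
    pi = transition_matrix(H, O, t) @ pi

print(pi, np.max(np.abs(pi - 1.0 / N)))          # pi approaches the uniform distribution 1/N
\end{verbatim}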
\begin{figure}
\centering
\subfloat[][]{\includegraphics[scale=0.55]{Istogramma5livelliN.pdf}} \ \
\subfloat[][]{\includegraphics[scale=0.55]{Istogramma15livelliN.pdf}}
\caption{Comparison between the initial (blue) and final (red) heat statistics for a five-level $(a)$ and a fifteen-level system $(b)$. The Hamiltonian of the system and the initial density operator $\rho_0$ are randomly chosen in a basis in which $\mathcal{O}$ is diagonal. The number of realizations of the non-equilibrium process is $5\cdot 10^6$ in $(a)$ and $15\cdot 10^6$ in $(b)$. In both cases, in the thermodynamic limit of large $M$ (in our numerical simulations $M=20$), each final energy value is equiprobable, and this effect can be explained as the thermalization of the system towards a thermal state with $\beta=0$ (infinite temperature). In this figure and in the following ones, the parameter $\tau_j=1$ is chosen.}
\label{fig:Hystogram}
\end{figure}
These findings are explicitly verified in Fig.\,\ref{fig:Hystogram},
where we plot for a $5$- and $15$-level quantum system the final energy
outcomes obtained at the end of the non-equilibrium dynamics of Sec.\,\ref{sec2}. Notice that in the figure $\tau_j=1$, but we verified that, if the $\tau_j$ are chosen as random variables (e.g., uniformly distributed), the final state at the end of the monitoring protocol is unaffected. The asymptotic behaviour occurring in the limit of large $M$ can be effectively interpreted as a thermalization process towards a thermal state with infinite temperature: $T=\infty$ ($\beta=0$). This can be understood by thinking that the measurement apparatus acts as a thermal reservoir with infinite energy (being classical), by which, through a sequence of repeated interactions, a quantum system can reach the same equilibrium condition. In this respect, it is worth noting that the state of Eq.\,\eqref{finalstate} (maximally mixed state) maximizes the von Neumann entropy, and thus corresponds to the state associated with the absolute maximum of the entropy. For this reason, $\rho_M = \mathbb{I}/N$ has to be considered as the natural \emph{equilibrium state} for a quantum system on which no further constraints are imposed.
\section{Heat statistics}\label{sec4}
In this Section we characterize the fluctuation profile of the heat $Q$, which is a
random variable, by means of the characteristic function $G(u)$, with $u\in\mathbb{C}$, associated with the probability distribution ${\rm P}(Q)$.
By construction, the characteristic function is defined as
\begin{equation}
G(u) \equiv \me{e^{iQu}} = \me{e^{i (E_m - E_n) u}}
\end{equation}
where the average $\langle\cdot\rangle$ is performed over a
large number of realizations of the underlying non-equilibrium dynamics.
As seen in the previous Section, in the $M \rightarrow \infty$ limit the system ``forgets'' the initial state, meaning that the latter cannot be inferred from measurements of the system evolution. Thus, $E_m$ and $E_n$ are independent variables and $G(u)$ factorizes into the product of the characteristic functions of $E_n$ and $E_m$. The latter is given by
\begin{equation*}
G_{E_m}(u) = \frac 1 N \sum^N_{n=1} e^{i u E_n} = \frac{1}{N} \operatorname{Tr} [e^{iuH}]\,,
\end{equation*}
since $\rho_M = \mathbb{I}/N$ and thus the values that $E_m$ can take
are uniformly distributed. The characteristic function of
$E_n$, instead, is
\begin{equation*}
G_{E_n}(u) = \sum_{k=1}^{N} \bra{\alpha_{k}} e^{-iHu} \rho_0 \ket{\alpha_{k}} = \operatorname{Tr}\left[e^{-iuH}\rho_0\right]
\end{equation*}
with the result that
\begin{equation}
G(u) = G_{E_n}(u)\,G_{E_m}(u) = \frac{1}{N}{\rm Tr}\left[e^{iHu}\right]{\rm Tr}\left[e^{-iHu}\rho_{0}\right].
\end{equation}
Consequently, by analyzing $G(u)$ at $u=i\epsilon$ with $\epsilon\in\mathbb{R}$, one gets
\begin{equation}\label{characteristic}
G(\epsilon) = \me{e^{- \epsilon Q}} = \frac{Z(\epsilon)}{N}\,{\rm Tr}\left[\rho_{0}\,e^{\epsilon H}\right],
\end{equation}
where $Z(\epsilon) \equiv {\rm Tr}[e^{-\epsilon H}]$ is the partition
function of the Hamiltonian $H$ evaluated by taking $\epsilon$ as
reference inverse temperature. As expected, if $\rho_0$ is a thermal
state with inverse temperature $\epsilon = \beta$, we recover the
standard result $G(i\beta) = 1$, stemming directly from the
Jarzynski equality.
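Eq.\,(\ref{characteristic}) can also be checked directly against the exact finite-$M$ expression for $G$. The sketch below (illustrative; the Hamiltonian, the observable, the initial populations and all parameter values are arbitrary choices) computes both quantities; they approach each other as $M$ grows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N, M, tau, eps = 6, 60, 1.0, 0.3

# Illustrative real-symmetric H and O; random populations of rho_0 in the energy basis.
H = rng.normal(size=(N, N)); H = (H + H.T) / 2
O = rng.normal(size=(N, N)); O = (O + O.T) / 2
E, V = np.linalg.eigh(H)
_, alpha = np.linalg.eigh(O)
p0 = rng.random(N); p0 /= p0.sum()

U = V @ np.diag(np.exp(-1j * E * tau)) @ V.conj().T
L = np.abs(alpha.conj().T @ U @ alpha) ** 2        # intermediate-measurement transition matrix
W = np.abs(alpha.conj().T @ V) ** 2                # W[k, n] = |<alpha_k|E_n>|^2

# Exact finite-M characteristic function G(eps) = <exp(-eps Q)>, with Q = E_m - E_n.
P_mn = W.T @ np.linalg.matrix_power(L, M - 1) @ W  # probability of E_m given E_n
G_exact = np.einsum('n,mn,m->', p0 * np.exp(eps * E), P_mn, np.exp(-eps * E))

# Large-M prediction of Eq. (characteristic): G = Z(eps)/N * Tr[rho_0 exp(eps H)].
G_asym = np.exp(-eps * E).sum() / N * (p0 * np.exp(eps * E)).sum()

print(G_exact, G_asym)                             # close to each other for large M
\end{verbatim}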
\begin{figure}
\centering
\includegraphics[scale=0.58]{G15livelliN.pdf}
\caption{Comparison between the expression (\ref{characteristic})
of the characteristic function $G(\epsilon) = \me{e^{-\epsilon Q}}$ (blue solid lines) and the numerical values (red dotted lines) computed for the fifteen-level system simulated in Fig.\,\ref{fig:Hystogram} panel (b).}
\label{fig:Ge}
\end{figure}
In Fig.\,\ref{fig:Ge} we show the comparison between the results obtained
by using Eq.\,(\ref{characteristic}) and the estimate of $G(u)$ from
numerical simulations of the non-equilibrium process on the same fifteen-level system used for Fig.\,\ref{fig:Hystogram}, with a random initial state $\rho_0$. Excellent agreement is found.
\subsection{Example: Spin-$s$ system}
\begin{figure}
\centering
\subfloat[][]{\includegraphics[scale=0.625]{Heatbeta=0N.pdf}} \ \
\subfloat[][]{\includegraphics[scale=0.625]{Heatbeta=05N.pdf}} \ \
\subfloat[][]{\includegraphics[scale=0.625]{Heatbeta=1N.pdf}}
\caption{Occurrence numbers of the heat outcomes $\omega\ell$: comparison between the theoretical estimate (red solid line) provided by Eq.\,(\ref{spin}) and the corresponding histogram (blue areas). The latter has been obtained by numerically repeating the non-equilibrium dynamics of sequential measurements over $10^6$ realizations on a spin $s = \frac{7}{2}$. In the three panels, the initial state $\rho_0$ is thermal with inverse temperature equal to $\beta = 0$, $\beta = 0.5$ and $\beta =1$, respectively.}
\label{fig:spin}
\end{figure}
As an example, let us consider a spin-$s$ particle in a magnetic field directed along the $z$-axis. In this case the quantum number $s$ plays the role of $N$, since the observables
are described by $(2s+1) \times (2s+1)$ matrices. Thus, the system
Hamiltonian is $H = -\omega S_z$, whose spectrum (apart from a constant)
is given by $E_k = \omega k$ with $k = 0,\dots,2s$. Moreover, we assume that the initial state $\rho_0$ is thermal, such that $c_k = e^{-\beta E_k}/Z$, with $Z={\rm Tr}[e^{-\beta H}]$ the partition function.
Under these assumptions, it is possible to compute exactly
the probabilities associated with the heat distribution. Since the levels are evenly spaced, the outcomes of $Q$ are the $4s+1$ values $Q = \omega\ell$ with $\ell = -2s,\dots,2s$. Since $S_x$ and $S_y$ do not commute with $S_z$, if we choose to measure the spin component along one of these directions we will have ITT in the limit $M \gg 1$. Then, all the possible final outcomes $E_m$ will have the same probability $\frac{1}{2s+1}$ to occur. Hence, the probability $p_{\ell}(Q)$ to get the outcome $Q = \omega\ell$ is
\begin{equation}\label{spin}
p_{\ell}(Q) = \frac{1}{Z\,(2s+1)} \left\lbrace
\begin{split}
&\sum^{2s}_{k=\ell} e^{-\beta\omega(k-\ell)}, \hspace{0.65cm} 0 \leq \ell \leq 2s \\
&\sum^{2s+\ell}_{k=0} e^{-\beta\omega(k-\ell)}, \hspace{0.4cm} -2s \leq \ell \leq 0
\, .
\end{split}
\right.
\end{equation}
Then, by explicitly computing the summations in Eq.\,(\ref{spin}), as well
as the partition function $Z$, we obtain
\begin{equation}\label{eq:p_l_spin-s}
p_{\ell}(Q) = \frac{1}{\eta} \left\lbrace
\begin{split}
& 1 - e^{- \beta \omega (2s+1-\ell)}, \hspace{0.85cm} 0 \leq \ell \leq 2s \\
& e^{\beta \omega \ell} - e^{- \beta\omega (2s+1)}, \hspace{0.4cm} -2s \leq \ell \leq 0
\end{split} \right.
\end{equation}
with $\eta \equiv (1 - e^{- \beta\omega (2s+1)})(2s+1)$.
Eq.\,(\ref{eq:p_l_spin-s}) well reproduces the results
of the numerical simulations, as shown in Fig.\,\ref{fig:spin}.
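The two expressions above can be cross-checked with a few lines of Python (illustrative sketch, not part of the original numerics; the values of $s$, $\beta$ and $\omega$ are arbitrary choices, loosely inspired by Fig.\,\ref{fig:spin}):
\begin{verbatim}
import numpy as np

s, beta, omega = 3.5, 0.5, 1.0                 # spin-7/2 with an illustrative beta
n_levels = int(2 * s) + 1                      # 2s+1 levels, E_k = omega*k, k = 0,...,2s
k = np.arange(n_levels)
Z = np.exp(-beta * omega * k).sum()            # partition function of the initial thermal state

ells = np.arange(-(n_levels - 1), n_levels)    # heat outcomes Q = omega*ell, ell = -2s,...,2s

# Direct evaluation of the sums in Eq. (spin).
p_sum = []
for ell in ells:
    ks = np.arange(max(ell, 0), (n_levels - 1) + min(ell, 0) + 1)
    p_sum.append(np.exp(-beta * omega * (ks - ell)).sum())
p_sum = np.array(p_sum) / (Z * n_levels)

# Closed form of Eq. (eq:p_l_spin-s).
eta = (1.0 - np.exp(-beta * omega * n_levels)) * n_levels
p_closed = np.where(ells >= 0,
                    1.0 - np.exp(-beta * omega * (n_levels - ells)),
                    np.exp(beta * omega * ells) - np.exp(-beta * omega * n_levels)) / eta

# The two expressions agree, the probabilities sum to one, and the mean heat is positive.
print(np.allclose(p_sum, p_closed), p_closed.sum(), (omega * ells * p_closed).sum())
\end{verbatim}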
\section{Exceptions to ITT}\label{sec:excep}
In the previous Sections, we have assumed that the largest eigenvalue $\lambda=1$ of $L(\tau)$ is non-degenerate. Such an assumption is realistic for a generic choice of the observable $\mathcal{O}$. However, interesting properties arise also if this assumption fails. Thus, in this paragraph we will analyze exceptions to Eq.\,(\ref{bellina}), leading to what we can refer to as \emph{partial} thermalization. Specifically, we will discuss the following cases: {\em (i)} $L(\tau_j)$ having a degenerate maximum eigenvalue; {\em (ii)} dynamics in the quantum Zeno regime; {\em (iii)} $[\mathcal{O},H]$ small. In the latter two cases, $L(\tau_j)$ is close to the identity matrix, so that the difference between the largest and the second-largest eigenvalues becomes small, allowing for a non-trivial interplay between the large number of measurements and the closing gap of the system energy spectrum.
\subsection{Partial thermalization}\label{sub_A}
Let us assume that the largest eigenvalue $\lambda=1$ of $L(\tau)$ is degenerate. By construction, each element of $L(\tau)$ is $\geq 0$; thus, if $L(\tau)$ is an irreducible matrix (i.e., it cannot be put in a block diagonal form with a change of basis) the Perron-Frobenius theorem\,\cite{Perron} guarantees that the largest eigenvalue is non-degenerate. Therefore, we have to consider the case in which $\mathcal{O}$
and $H$ share a common non-trivial invariant subspace. This implies that
in the basis $\{\ket{\alpha_k}\}_{k=1}^{N}$, which defines the eigenstates
of $\mathcal{O}$, $H$ reads as
\begin{equation} \label{blockH}
H = \begin{pmatrix} H_1 & & \\ & \ddots & \\ & & H_R \end{pmatrix}
\end{equation}
where $R$ denotes the number of blocks of $H$ and $H_r$, with $r=1, \dots,R$,
are irreducible Hermitian matrices acting on the subspaces $S_r$. Before proceeding further, it is worth observing that having $H$ diagonal in the basis of $\mathcal{O}$ is a particular case of Eq.\,\eqref{blockH}, in which each subspace has dimension one. From Eq.\,\eqref{blockH}, it follows that the matrices $L(\tau_j)$ are also block diagonal and can be written as
\begin{equation}\label{eq:L_partial_ITT}
L(\tau_j) = \begin{pmatrix} L_1 (\tau_j) & & \\ & \ddots & \\ & & L_R (\tau_j) \end{pmatrix}
\end{equation}
where $L_r(\tau_j)$ are unistochastic irreducible matrices acting on the subspaces $S_r$ for $r=1,\ldots,R$ and $j=1,\ldots,M$. In this case, the Perron-Frobenius theorem ensures that no further degeneracy is present in each matrix $L_r(\tau_j)$. Therefore, we can introduce the set of eigenvectors, one for each subspace:
\begin{equation}
\ket{v_r} = \frac{1}{\sqrt{\dim{S_r}}} \sum_{k : \ket{\alpha_k} \in S_r} \ket{\alpha_k}
\end{equation}
corresponding to an $R$-fold degeneracy of the eigenvalue $\lambda=1$ of $L(\tau)$.
As a result, the eigenspace associated with the largest eigenvalue
$\lambda=1$ is $R$-dimensional, and Eq.\,\eqref{finalstate} no
longer holds. Instead, one can find that $P_{k_M|k_1} = \frac{1}{\dim S_r}$ if $\ket{\alpha_{k_1}}$ and $\ket{\alpha_{k_M}}$ both belong to the same subspace $S_r$, and $P_{k_M|k_1} = 0$ otherwise. In such a case, $\Tilde{\pi}_{k_M}$ keeps memory of the initial state. Indeed, if $\ket{\alpha_{k_M}} \in S_r$, then
\begin{equation}\label{eq:tilde_pi_partial_IIT}
\Tilde{\pi}_{k_M} = \frac{1}{\dim S_r} \sum_{k : \ket{\alpha_{k}} \in S_r} \pi_{k}\,.
\end{equation}
Since the initial and final energy projective measurements
do not mix the eigenspaces linked to the eigenvalues of $L(\tau)$, one can also write that
\begin{equation}\label{eq:pm_partial_IIT}
p_m = \frac{1}{\dim S_r} \sum_{k : \ket{E_{k}} \in S_r} c_k
\end{equation}
with $S_r$ such that $\ket{E_m} \in S_r$. In the case
of $R=N$ (namely $H$ commuting with $\mathcal{O}$: $[H,\mathcal{O}]=0$),
Eqs.\,(\ref{eq:tilde_pi_partial_IIT}) and (\ref{eq:pm_partial_IIT}) reduce,
as expected, to $\Tilde{\pi}_{k_M} = \pi_{k_1}$ and $p_m = c_m$, since in that case the evolution of the system is frozen and all the measurement outcomes coincide.
Moreover, by still assuming the degeneracy of $\lambda=1$, the heat characteristic function $G(u)$ can be written as the sum of
the characteristic functions relative to each subspace $S_r$:
\begin{equation}
G(u) = \sum_{r=1}^{R} \frac{1}{\dim{S_r}}
{\rm Tr}\left[\rho_{\infty}\,e^{iH_{r}u}\right]{\rm Tr}\left[\rho_{0}\,e^{-iH_{r}u}\right] \,.
\end{equation}
These results have a simple physical interpretation.
For any realization of the introduced non-equilibrium process, after the
first measurement, the state of the system is described by a vector
belonging to $S_{\overline{r}}$ for some $\overline{r}\in\{1,\ldots,R\}$.
Since such subspaces do not mix with each other, the subsequent system evolution
will take place within $S_{\overline{r}}$. As a result,
in the limit of $M \rightarrow \infty$, the monitored quantum system tends
to reach the completely mixed state in each $S_{r}$ separately.
An example of incomplete thermalization clearly showing this
feature is presented in Sec.\,\ref{sec:inc}.
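A minimal numerical illustration of partial thermalization is the following Python sketch (not part of the original analysis; for simplicity the observable is taken diagonal in the computational basis, so that the $\ket{\alpha_k}$ are the canonical basis vectors, and the block sizes and parameter values are arbitrary choices). With $H$ made of two blocks of dimensions $2$ and $3$, an initial state in the first block relaxes to the uniform distribution within that block only, in agreement with Eq.\,(\ref{eq:tilde_pi_partial_IIT}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
tau, M = 1.0, 400

def random_herm(n):
    A = rng.normal(size=(n, n))
    return (A + A.T) / 2

# H block-diagonal in the eigenbasis of O: two invariant subspaces of dimensions 2 and 3.
H = np.zeros((5, 5))
H[:2, :2] = random_herm(2)
H[2:, 2:] = random_herm(3)

E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * tau)) @ V.conj().T
L = np.abs(U) ** 2                         # L is block-diagonal as well

pi0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # start in |alpha_1>, inside the first block
pi = np.linalg.matrix_power(L, M) @ pi0

print(pi)   # approximately [1/2, 1/2, 0, 0, 0]: uniform mixing within the first subspace only
\end{verbatim}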
\subsection{Quantum Zeno regime}
Another possible exception to ITT can be observed when the value
of all the waiting times $\tau_j$, with $j=1,\ldots,M$, is on average
much smaller than the inverse of the energy scale of the
system\,\cite{KofmanNature2000,FacchiPRL2002,FacchiJPA2008,SmerziPRL2012,SchaferNatComm2014,SignolesNatPhys2014,GherardiniNJP2016,MuellerAdP2017}.
In particular, let us consider here the case in which the total
time $\sum_{j=1}^{M}\tau_j$ remains constant in the limit of large-$M$,
thus ensuring that each waiting time $\tau_j$ is infinitesimal. In this
limiting case, we expect to recover the quantum Zeno regime, which prevents
the system from thermalizing.
This effect can be shown by observing that in the quantum Zeno regime the
operators $U$ and $L$ are close to the identity matrix. In particular,
\begin{equation}\label{eq:Zeno_1}
\bra{\alpha_k} U(\tau_j) \ket{\alpha_{\ell}} =\delta_{k,\ell} - i\tau_j \bra{\alpha_k}H\ket{\alpha_\ell} + O(\tau_j^2)
\end{equation}
so that
\begin{equation}\label{eq:Zeno_2}
\bra{\alpha_k} L(\tau_j) \ket{\alpha_\ell} = \delta_{k,\ell} + O(\tau_j^2).
\end{equation}
Since their sum is constant, in the large-$M$ limit all
the waiting times $\tau_j$, $j=1,\ldots,M$, go to zero as $M^{-1}$.
Thus, $O(\tau_j^2) = O(M^{-2})$, so that the conditional probability $P_{k_M|k_1}$ reads
\begin{equation}
\begin{split}
P_{k_M|k_1} &= \delta_{k_{1},k_{M}} + (M-1)O(M^{-2}) \\
&= \delta_{k_{1},k_{M}} + O(M^{-1}).
\end{split}
\end{equation}
This means that, in the limit $M \rightarrow \infty$, the system is frozen in one of the eigenstates of $\mathcal{O}$, in accordance with the quantum Zeno effect.
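This freezing can be visualized numerically as well. In the sketch below (illustrative assumptions throughout: a random Hamiltonian and observable, and the total monitoring time fixed to $T=1$), the probability of remaining in the initial eigenstate of $\mathcal{O}$ approaches one as the number of measurements within the fixed time window grows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
N, T = 4, 1.0                        # total monitoring time kept fixed

H = rng.normal(size=(N, N)); H = (H + H.T) / 2
O = rng.normal(size=(N, N)); O = (O + O.T) / 2
E, V = np.linalg.eigh(H)
_, alpha = np.linalg.eigh(O)

def survival(M):
    """Probability of still being in |alpha_1> after M measurements within total time T."""
    tau = T / M
    U = V @ np.diag(np.exp(-1j * E * tau)) @ V.conj().T
    L = np.abs(alpha.conj().T @ U @ alpha) ** 2
    return np.linalg.matrix_power(L, M)[0, 0]

for M in (10, 100, 1000, 10000):
    print(M, survival(M))            # approaches 1 as M grows: quantum Zeno freezing
\end{verbatim}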
\subsection{$\mathcal{O}$ and $H$ quasi-commuting observables}
Here, let us examine the case in which $[H,\mathcal{O}]$ is small.
Under this hypothesis, the eigenbases of both the observables are close
to each other, and the unitary matrix $V$ with elements
$V_{k,\ell} \equiv \braket{\alpha_k}{E_\ell}$ is close to the identity.
Since $V$ is a unitary matrix, we are allowed to parametrize
$V$ as $V=e^{iR\xi}$, with $R$ a Hermitian operator. In our case,
since $V \simeq \mathbb{I}$, the parameter $\xi$ is $\ll 1$. Moreover,
by introducing the diagonal matrices
$\Lambda(\tau_j) = {\rm diag}(e^{-iE_1 \tau_j},\ldots,e^{-iE_N \tau_j})$,
the propagator $U(\tau_j)$ can be expressed in the $\mathcal{O}$ eigenbasis as
\begin{equation}
U(\tau_j) = V \Lambda(\tau_j) V^{\dagger} = \Lambda \left( \mathbb{I} + i \xi (\Lambda^{\dagger} R \Lambda -R) + O(\xi^2) \right),
\end{equation}
or -- by components -- as
\begin{eqnarray}\label{quasidiag}
&\displaystyle{U_{k,\ell} (\tau_j)=}&\nonumber \\
&\displaystyle{e^{-iE_\ell \tau_j} \left(\delta_{k,\ell} + i \xi R_{k,\ell} (1-e^{-i(E_k-E_\ell) \tau_j}) + O(\xi^2) \right).}&
\end{eqnarray}
Accordingly, for $k \neq \ell$,
the $(k,\ell)$-element of the transition matrix $L(\tau_j)$ equals to
\begin{eqnarray}
L_{k,\ell} (\tau_j) &\equiv& |U_{k,\ell}(\tau_j)|^2 \nonumber \\
&=& 4 \xi^2 |R_{k,\ell}|^2 \sin^2\frac{(E_k-E_\ell)\tau_j}{2} + O(\xi^3).
\end{eqnarray}
The diagonal elements of $L(\tau_j)$, instead,
do not actually need to be computed, since they are fixed
by the constraint $\sum_{k}L_{k,\ell}(\tau_j)=1$. This consideration is
quite useful, since the $O(\xi^2)$ terms in Eq.\,\eqref{quasidiag},
which we did not compute, would have given rise in $L_{k,k}(\tau_j)$
to $O(\xi^2)$-terms that cannot be neglected. In conclusion,
the transition matrix $L(\tau_j)$ can be put in the following form:
\begin{equation}\label{eq:L_small_xi}
L(\tau_j) = \mathbb{I} - \xi^2 \Delta (\tau_j) + O(\xi^3)
\end{equation}
where $\Delta$ is a real symmetric operator whose elements are given by
\begin{equation}
\begin{cases}
&\Delta_{k,\ell}(\tau_j) = - 4 |R_{k,\ell}|^2 \sin^2{\frac{(E_k-E_\ell) \tau_j}{2}}\,, \ \forall\,k \neq \ell \\
&\Delta_{k,k} (\tau_j) = - \displaystyle{\sum_{k \neq \ell}\Delta_{k,\ell}}\,.
\end{cases}
\end{equation}
By analysing Eq.\,(\ref{eq:L_small_xi}), one finds that,
for any finite small value of $\xi\neq 0$, the system can thermalize
if it undergoes a non-equilibrium process composed of $M \gg \xi^{-2}$
projective measurements. In particular, if we take the limit
$\xi \rightarrow 0$ \emph{after} the limit $M \rightarrow \infty$,
the system thermalizes, while if $\xi \rightarrow 0$ is performed \emph{before} the limit $M \rightarrow \infty$ one recovers the same findings observed in the quantum Zeno regime. This means that the limits $\xi \rightarrow 0$ and $M \rightarrow \infty$ do not commute. However, a non-trivial result is obtained if the two limits are performed at the same time with the constraint $M \xi^2 = \widetilde{t}$. In this case, assuming for simplicity $\tau_j = \tau$ $\forall j=1,\ldots,M$, we find that
\begin{equation}\label{eq:finite_time_Euclidean_ev}
L(\tau)^M \rightarrow e^{- \Delta(\tau) \widetilde{t}} \, ,
\end{equation}
mimicking a finite-time Euclidean evolution with effective Hamiltonian $\Delta(\tau)$ for the effective time $\widetilde{t}$. Therefore,
\begin{equation}
\Tilde{\pi}_{k_M} =
\sum_{k_1}\bra{\alpha_{k_M}}e^{- \Delta(\tau) \widetilde{t}} \ket{\alpha_{k_1}} \pi_{k_1}\,.
\end{equation}
Moreover, since the bases of $\mathcal{O}$ and $H$ coincide
up to $O(\xi)$-terms, a similar relation holds also for the probability
$p_m$ to measure the energy $E_m$ after the $2^{\text{nd}}$ energy measurement
of the process:
\begin{equation}
p_m = \sum_{n}
\bra{E_m}e^{- \Delta(\tau) \widetilde{t}}\ket{E_n}c_n \,.
\end{equation}
As a final remark, we also observe that, by construction, the operator $\Delta(\tau)$ always has a zero mode, namely an eigenvector with vanishing eigenvalue. This entails that the ITT and quantum Zeno regimes are recovered in the limits $\widetilde{t} \rightarrow \infty$ and $\widetilde{t} \rightarrow 0$, respectively.
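The limiting behaviour of Eq.\,(\ref{eq:finite_time_Euclidean_ev}) can be probed numerically. The following sketch (illustrative only; the spectrum, the generator $R$ and all parameter values are arbitrary choices) compares $L(\tau)^M$ with $e^{-\Delta(\tau)\widetilde{t}}$ at fixed $\widetilde{t}=M\xi^2$; the deviation shrinks as $\xi \rightarrow 0$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
N, tau, t_eff = 5, 1.0, 2.0                       # t_eff plays the role of ~t = M xi^2

E = np.sort(rng.normal(size=N))                   # energy levels of H
Rh = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
R = (Rh + Rh.conj().T) / 2                        # Hermitian generator of V = exp(i xi R)
R /= np.linalg.norm(R, 2)                         # normalize its scale (illustrative choice)

# Effective Hamiltonian Delta(tau) of Eq. (eq:L_small_xi).
Delta = -4 * np.abs(R) ** 2 * np.sin(np.subtract.outer(E, E) * tau / 2) ** 2
np.fill_diagonal(Delta, 0.0)
np.fill_diagonal(Delta, -Delta.sum(axis=0))

w, Q = np.linalg.eigh(Delta)
L_limit = Q @ np.diag(np.exp(-w * t_eff)) @ Q.T   # exp(-Delta(tau) * t_eff)

rr, P = np.linalg.eigh(R)
for xi in (0.05, 0.01):
    M = int(round(t_eff / xi ** 2))
    V = P @ np.diag(np.exp(1j * xi * rr)) @ P.conj().T        # V = exp(i xi R)
    U = V @ np.diag(np.exp(-1j * E * tau)) @ V.conj().T       # propagator in the O eigenbasis
    L = np.abs(U) ** 2
    dev = np.max(np.abs(np.linalg.matrix_power(L, M) - L_limit))
    print(xi, M, dev)                                         # deviation decreases with xi
\end{verbatim}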
\subsection{An example of $\mathcal{O}$ and $H$ as quasi-commuting observables}
Let us consider the magnetic Hamiltonian $H = -\omega S_z$ for a
generic spin-$s$ system, and let us assume that
$\mathcal{O} = \mathbf{\hat{n}} \cdot \mathbf{\hat{S}}$, with
$\mathbf{\hat{n}} \equiv \sin\xi\,\mathbf{\hat{x}} + \cos\xi\,\mathbf{\hat{z}}$ and $\mathbf{\hat{S}} \equiv S_x\mathbf{\hat{x}} + S_y\mathbf{\hat{y}} + S_z\mathbf{\hat{z}}$. On the one hand, it is worth noting that, if $\xi = 0$, then $[\mathcal{O},H] = 0$. Thus, by considering $\xi \ll 1$ (i.e., $[\mathcal{O},H]$ small), it holds that $\mathcal{O} = \xi S_x + S_z + O(\xi^2)$. On the other hand, we know that the eigenvalues of $S_z$ are indexed by
$m \in \{ -s, -s+1,\ldots,s\}$, corresponding to the state vectors $\ket{m}$. Hence, by applying first-order perturbation theory to the observable $\mathcal{O}$, we have that in the limit of small $\xi$ the eigenstates $\ket{\alpha_m}$ of $\mathcal{O}$ are equal to
\begin{equation}\label{eq:eigs_O_small_xi}
\ket{\alpha_m} = \ket{m} + \xi \sum_{m' \neq m} \frac{\bra{m'} S_x \ket{m}}{m-m'} \ket{m'} + O(\xi^2).
\end{equation}
Since Eq.\,(\ref{eq:eigs_O_small_xi}) contains only the matrix elements
of $S_x$ in the $S_z$-eigenbasis, it is now easy to compute the matrix
$V$ up to higher order terms in $\xi$ by means of the expansion
$V = e^{i\xi R} = \mathbb{I} + i\xi R + O(\xi^2)$. As a result, we find:
\begin{equation}
R_{m,m'} = \frac{i}{2}\sqrt{(s-m)(s+m+1)} \delta_{m,m'+1} - (m \leftrightarrow m')
\end{equation}
where $(m \leftrightarrow m')$ is a shorthand notation for the same term with the roles of $m$ and $m'$ interchanged. In this way, concerning the transition matrix $L(\tau)$, the effective Hamiltonian $\Delta(\tau)$ (a real symmetric operator) obeying Eq.\,(\ref{eq:L_small_xi}) is given by
$\Delta(\tau) = \mathcal{A} \sin^2 \frac{\omega \tau}{2}$, where the
only non-zero elements of $\mathcal{A}$ are
\begin{equation}\label{eq:elements_op_A}
\begin{split}
\mathcal{A}_{m,m+1} &= - s(s+1) + m(m+1) \\
\mathcal{A}_{m,m-1} &= - s(s+1) + m(m-1) \\
\mathcal{A}_{m,m} &= 2(s(s+1)-m^2).
\end{split}
\end{equation}
As shown in Appendix A, the operator $\mathcal{A}$ can be diagonalized
in the limit $s \gg 1$. The eigenvalues of $\mathcal{A}$ are equal to
\begin{equation}
a_k = k(k+1)\,,
\end{equation}
with $k=0,\ldots,2s$, while the $2s+1$ components $v_k(m)$ of the $k^{\text{th}}$ eigenvector $v_k$ are given by
\begin{equation}\label{eq:eigenvecs_mathcal_A}
v_k (m) = \sqrt{\frac{2k+1}{2s}} P_k \left( \frac{m}{s} \right)
\end{equation}
with $m=-s,-s+1,\ldots,s$. In Eq.\,(\ref{eq:eigenvecs_mathcal_A}), $P_k$ denotes the Legendre polynomial of order $k$.
This result suggests that, in the limit of $s \gg 1$,
the operator $\mathcal{A}$ can be expressed in terms of the orbital angular momentum $\mathbf{\hat{L}} \equiv L_x\mathbf{\hat{x}} + L_y\mathbf{\hat{y}} + L_z\mathbf{\hat{z}}$ of a single quantum particle.
By setting $\frac{m}{s} \equiv \cos\theta$, the eigenvalues and eigenstates of $\mathcal{A}$ coincide with the spectrum of $\mathbf{\hat{L}}^2$ provided that we limit ourselves to the sector $\mathcal{H}_A$ of the particle Hilbert space such that $L_z \mathcal{H}_A = 0$. Notice that the latter, in standard notation, corresponds to the part of the spectrum of $\mathcal{A}$ with $m=0$.
This means that $\mathcal{A}$ can be written as
\begin{equation}
\mathcal{A} \simeq L_x^2 + L_y^2 + \mu L_z^2
\end{equation}
with $\mu \rightarrow \infty$. In this limit, the Euclidean
evolution automatically excludes all the states that do not belong to $\mathcal{H}_A$.
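These statements can be verified numerically by building $\mathcal{A}$ from Eq.\,(\ref{eq:elements_op_A}) for a large spin and comparing its low-lying spectrum and eigenvectors with $k(k+1)$ and with sampled Legendre polynomials (illustrative Python sketch; the value of $s$ and the number of compared modes are arbitrary choices):
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import legval

s = 200
m = np.arange(-s, s + 1)

# Symmetric tridiagonal operator A of Eq. (eq:elements_op_A).
off = -s * (s + 1) + m[:-1] * (m[:-1] + 1)        # couples m and m+1
A = np.diag(2.0 * (s * (s + 1) - m ** 2)) + np.diag(off, 1) + np.diag(off, -1)

w, v = np.linalg.eigh(A)                          # eigenvalues in ascending order

for k in range(5):
    pk = legval(m / s, [0] * k + [1])             # Legendre polynomial P_k(m/s) on the grid
    pk /= np.linalg.norm(pk)
    print(k, w[k], k * (k + 1), abs(v[:, k] @ pk))   # eigenvalue ~ k(k+1), overlap ~ 1
\end{verbatim}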
\section{Large-N limit}\label{sec:large}
In this paragraph we determine analytical expressions describing the behaviour of a monitored quantum system in the limit of an infinite-dimensional Hilbert space. Under this hypothesis, the conclusions of the Perron-Frobenius theorem no longer hold\,\cite{Perron}, and, thus, it is no longer guaranteed that the largest eigenvalue $\lambda=1$ of $L(\tau)$ is non-degenerate.
For simplicity, let us take a spin-$s$ system, with Hamiltonian $H = - S_z/s$, and, as (intermediate) measurement observable, the Hermitian operator $\mathcal{O} = S_x$ (not commuting with the Hamiltonian). The scaling of the system Hamiltonian with $s$ serves the usual purpose of keeping the range of the spectrum of $H$ finite as $s$ grows, and of helping to retrieve the classical limit of a unit spin for $s\to\infty$.
Here, we are interested in predicting the thermalization of the analysed
spin-$s$ system to the maximally mixed state, even in the limit of large $s$ ($s\gg 1$, ideally infinite).
\begin{figure}[t!] \label{fig:collapse}
\centering
\includegraphics[scale=0.64]{collapse.pdf}
\caption{Comparison between the spectra of the stochastic matrices $L(\tau)$ with $s=300$ and different values of $\tau$, expressed in terms of the rescaled variable $\frac{k\tau}{2s}$. We can observe that, for $\frac{k\tau}{2s}$ smaller than a critical value (numerically determined to be $\approx 0.934$), all the data collapse on the same curve as predicted by Eq.\,(\ref{scaling}). In turn, for $\frac{k\tau}{2s} \ll 1$ we observe the quadratic behavior provided by Eq.\,\eqref{topspectrum}, namely $f(\frac{\tau k}{2s}) \approx 1-(\frac{\tau k}{2s})^2$.}
\label{fig:spectrum}
\end{figure}
In Fig.\,\ref{fig:spectrum} we show the eigenvalues $\lambda_k$ of the
transition matrix $L(\tau)$, with $\lambda_0 = 1 > \dots > \lambda_k > \dots > \lambda_{2s}$, for different choices of $\tau$. From the numerical simulations, we observe that the eigenvalues $\lambda_k$ tend to accumulate around $\lambda_0 = 1$. Moreover, it is also evident that, in the limit $s \gg 1$, the behavior of the highest eigenvalues is described by a universal function:
\begin{equation}\label{scaling}
\lambda_k (\tau) \equiv f \left( \frac{\tau k}{2s} \right)
\end{equation}
with $f(0)=1$. This scaling relation is valid up to a critical value of $\frac{\tau k}{2s}$ at which a transition occurs. Notice that we have checked this evidence also for larger values of $s$. The critical value of $\frac{\tau k}{2s}$ is found to be $\approx 0.934$ and it corresponds to the eigenvalue $\lambda_k(\tau) \approx 0.3$. As shown in Fig.\,\ref{fig:eigenvec}, a similar pattern is also present in the eigenvectors of the matrix $L(\tau)$. One can see that the eigenvectors corresponding to small values of the index $k$ that labels them (so that $\frac{\tau k}{2s} \ll 1$) are independent of $\tau$. Independently of the nature of such a transition, only the eigenvalues of $L(\tau)$ close to $1$ can affect the ITT. Thus, for our purposes, we will just focus on the part of the spectrum of $L(\tau)$ that obeys the scaling relation \eqref{scaling}, and we will analyze how the function $f$ behaves when its argument $\frac{\tau k}{2s}$ is small.
In doing this, let us consider the case $\tau \ll 1$ (for which of course $\frac{\tau k}{2s} \ll 1$). In this limit, the scaling relation \eqref{scaling} is valid for every $k=0,1,\ldots,2s$. Moreover, for small $\tau$, $\bra{\alpha_k} U (\tau) \ket{\alpha_\ell} = \delta_{k,\ell} - i \frac{\tau}{s} \bra{\alpha_k} S_z \ket{\alpha_\ell} + O(\tau^2)$,
so that for $k \neq \ell$
\begin{equation}
\bra{\alpha_k} L (\tau) \ket{\alpha_\ell} = \frac{\tau^2}{s^2} |\bra{\alpha_k} S_z \ket{\alpha_\ell}|^2 + O(\tau^3),
\end{equation}
while the diagonal elements are determined by imposing the constraint that $L(\tau)$ is a stochastic matrix. Thus, since $\{\ket{\alpha_k}\}$ is the set of eigenstates of $S_x$, we find that
\begin{equation} \label{LA}
L (\tau) = \mathbb{I} - \frac{\tau^2}{4s^2} \mathcal{A} + O(\tau^3),
\end{equation}
where $\mathcal{A}$ is the operator introduced in the previous paragraph and defined by Eq.\,(\ref{eq:elements_op_A}).
We conclude that in the limit $s \gg 1$ the spectrum of $L(\tau)$ is given by the eigenvalues
\begin{equation}
\lambda_k (\tau) = 1 - \frac{\tau^2}{4 s^2} k(k+1) + O(\tau^3)
\end{equation}
with $k=0,\ldots,2s$. In this regard, it is worth noting that
$k(k+1) \approx k^2$ up to higher orders in $s^{-1}$ with the result that
\begin{equation}\label{topspectrum}
\lambda_k (\tau) = 1 - \left( \frac{\tau k }{2s} \right)^2 + O(\tau^3)\,,
\end{equation}
in agreement with Eq.\,\eqref{scaling} for $f(x) = 1 - x^2 + O(x^3)$.
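The behaviour of the top of the spectrum can be reproduced directly (illustrative Python sketch, not part of the original numerics; the values of $s$, $\tau$ and the indices $k$ shown are arbitrary choices):
\begin{verbatim}
import numpy as np

s = 60
m = np.arange(-s, s + 1)

# Spin operators in the S_z eigenbasis.
Sz = np.diag(m.astype(float))
Sp = np.diag(np.sqrt(s * (s + 1) - m[:-1] * (m[:-1] + 1)), -1)   # S_+ : |m> -> |m+1>
Sx = (Sp + Sp.T) / 2

tau = 0.5
_, alpha = np.linalg.eigh(Sx)                     # eigenbasis of the observable O = S_x
U = np.diag(np.exp(-1j * np.diag(-Sz / s) * tau)) # H = -S_z/s is diagonal here
L = np.abs(alpha.conj().T @ U @ alpha) ** 2

lam = np.sort(np.linalg.eigvalsh(L))[::-1]        # eigenvalues of L in decreasing order

for k in (5, 15, 30):
    print(k, lam[k],
          1 - tau ** 2 * k * (k + 1) / (4 * s ** 2),   # small-tau prediction
          1 - (tau * k / (2 * s)) ** 2)                # scaling function f(x) ~ 1 - x^2
\end{verbatim}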
\begin{figure}
\centering
\subfloat{\includegraphics[scale=0.3]{eigenvec1.pdf}}
\ \
\subfloat{\includegraphics[scale=0.29]{eigenvec2.pdf}}
\\
\subfloat{\includegraphics[scale=0.285]{eigenvec3.pdf}}
\ \
\subfloat{\includegraphics[scale=0.285]{eigenvec4.pdf}}
\\
\subfloat{\includegraphics[scale=0.285]{eigenvec6.pdf}}
\ \
\subfloat{\includegraphics[scale=0.285]{eigenvec7.pdf}}
\caption{Color plot of the matrix of eigenvectors of $L(\tau)$ for different values of $\tau$, with $s=300$. For visualization purposes, we plot on the y-axis of each panel the logarithm of each matrix element, while on the x-axis there is the index $k$ labeling the eigenvalues (the larger $k$, the larger the eigenvalue). We can observe that, in spite of the structures developed for the larger values of $\tau$, the eigenstates on the right, corresponding to the higher part of the spectrum, are practically the same as long as $\tau k/2s \lesssim 1$.}
\label{fig:eigenvec}
\end{figure}
Accordingly, supported by our numerical analysis, we have that for large $s$ the value of the greatest eigenvalues of $L(\tau)$ (with $\frac{\tau k}{2s} \ll 1$) is correctly described by Eq.\,\eqref{topspectrum}, even for finite $\tau$. However, in the limit $M \gg 1$ only the eigenvalues close to $\lambda_0 = 1$ actually matter, since all the others are exponentially suppressed. Thus, when both $M$ and $s$ are large, one has that
\begin{equation}
L(\tau)^M \approx \left( \mathbb{I} - \left( \frac{\tau }{2 s} \right)^2 \mathcal{A} \right)^M \approx e^{- M\frac{\tau^2}{4 s^2}\mathcal{A}}\,.
\end{equation}
A different result is obtained depending on the order of
the limits $M \rightarrow \infty$ and $s \rightarrow \infty$. Indeed,
if we perform the limit $M \rightarrow \infty$ while $s$ is finite,
only the null eigenvector of $\mathcal{A}$ (corresponding to $k=0$
and $\lambda_0 = 1$) ``survives'' (is propagated over time without
being nullified by a repeated sequence of products) and the system
thermalizes to an infinite-temperature state. Such a finding is
in accordance with our results obtained with a finite Hilbert space
dimension and non-separable Hamiltonian. Conversely, by performing
the limit $s \rightarrow \infty$ with $M$ finite, we get $L(\tau)^M
\rightarrow \mathbb{I}$. This means that the system becomes classical as $s \rightarrow \infty$, so that the measurements are no longer effective in changing the state of the system. Quite remarkably, even in this case, a non trivial result is obtained if we perform the two limits keeping $\frac{M \tau^2}{4 s^2} = \Tilde{t}$ constant. Indeed, one gets
$L(\tau)^M \rightarrow e^{- \mathcal{A} \Tilde{t}}$ that corresponds again to a finite time Euclidean evolution with effective Hamiltonian $\mathcal{A}$, similarly to Eq.\,(\ref{eq:finite_time_Euclidean_ev}) for the case of $\mathcal{O}$ and $H$ quasi-commuting observables.
\section{An example of incomplete thermalization}\label{sec:inc}
As an example of partial ITT relevant for rotating cold atoms \cite{Cooper}, let us consider a particle of mass $m\equiv 1$ moving in the $x$-$y$ plane and subjected to an anisotropic harmonic potential with frequencies $\omega_1 \neq \omega_2$ along the $x$ and $y$ directions, respectively. Thus, the Hamiltonian is given by
\begin{eqnarray}\label{eq:H_two-dim_HO}
H &=& \frac{1}{2}\left(p_x^2 + p_y^2 + \omega_1^2 x^2 + \omega_2^2 y^2\right)\nonumber \\
&=& \omega_1 \left( a_x^{\dagger} a_x + \frac{1}{2} \right) + \omega_2 \left( a_y^{\dagger} a_y + \frac{1}{2} \right)
\end{eqnarray}
where $p_x$, $p_y$ denote the momentum components of the particle in the
$x$, $y$ directions, and $a_x$, $a_y$ are the annihilation operators associated with the particle along $x$ and $y$. The energy eigenstates are given by $\ket{n_x,n_y}$, to which correspond the energy values $E = \omega_1 n_x + \omega_2 n_y$, with $\ket{n_x}$ and $\ket{n_y}$ the $1$D harmonic oscillator states along $x$ and $y$, respectively. As measurement observable $\mathcal{O}$, let us choose the pseudo-angular momentum
\begin{equation}
\widetilde{L} \equiv \frac{i}{2} \left( a_x^{\dagger} a_y - a_y^{\dagger} a_x \right) = \frac{1}{\sqrt{\omega_1 \omega_2}} (\omega_2 y p_x - \omega_1 x p_y).
\end{equation}
$\widetilde{L}$ is block diagonal on the eigenbasis of $H$. This can be seen by noting that $a_x a_y^{\dagger}\ket{n_x,n_y} \propto \ket{n_x -1,n_y+1}$ and $a^{\dagger}_x a_y \ket{n_x,n_y} \propto \ket{n_x +1,n_y-1}$. Thus, the action of $\widetilde{L}$ cannot generate any state with a different value of $n_x + n_y$. In other terms, each block with a given $n \equiv n_x + n_y$ is invariant under the action of the pseudo-angular momentum. Moreover, by computing the matrix elements of the pseudo-angular momentum, we can observe that, within each subspace with constant $n$, {\em (i)} $\widetilde{L}$ acts as (twice) the $y$-component of a spin-$s=n/2$ operator in the basis of the $z$-component, and {\em (ii)} $\widetilde{L}$ is not further reducible.
In conclusion, the thermalization process only involves the energy eigenstates $\ket{n_x,n_y}$ spanning a subspace with a fixed $n=n_x+n_y$, and the system behaves as a collection of independent spin-$s$ systems with $0<s<\infty$. Our findings are no longer valid if $\omega_1 = \omega_2 = \omega$. In such a case, indeed, $\widetilde{L}$ becomes proportional to the angular momentum operator $\omega(yp_{x}-xp_{y})$ associated with an isotropic two-dimensional harmonic oscillator, which commutes with $H$. Thus, neither evolution nor ITT is possible. It would be interesting to study the effect of repeated measurements of the pseudo-angular momentum in the slightly anisotropic case, with the aim of investigating, in the interacting case, whether and to what extent it could be usefully employed to reach quantum Hall states for two-dimensional rotating gases.
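The block structure of $\widetilde{L}$ can be made explicit with a short numerical check (illustrative sketch; the truncation of the oscillator Hilbert spaces is an arbitrary choice):
\begin{verbatim}
import numpy as np

nmax = 6                                    # truncate each oscillator at nmax quanta
d = nmax + 1
a = np.diag(np.sqrt(np.arange(1, d)), 1)    # 1D annihilation operator
I = np.eye(d)

ax = np.kron(a, I)                          # a_x acts on the first factor
ay = np.kron(I, a)                          # a_y acts on the second factor

Ltil = 0.5j * (ax.conj().T @ ay - ay.conj().T @ ax)    # pseudo-angular momentum

# Label the basis states |n_x, n_y> and check that Ltil never changes n = n_x + n_y.
nx, ny = np.divmod(np.arange(d * d), d)
n_tot = nx + ny
off_block = n_tot[:, None] != n_tot[None, :]
print(np.allclose(Ltil[off_block], 0.0))    # True: block-diagonal in the energy eigenbasis

# Spectrum of the n = 2 block (three states): equally spaced, spin-like values.
block = np.where(n_tot == 2)[0]
print(np.round(np.linalg.eigvalsh(Ltil[np.ix_(block, block)]), 6))
\end{verbatim}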
\section{Conclusions}\label{sec:concl}
In this paper, the asymptotic behaviour of an $N$-level quantum system subjected to a sequence of $M$ projective measurements is analyzed in the limit of large $M$ and $N$. Moreover, it has been put in relation with
common properties of the Hermitian operators $H$ (system Hamiltonian) and
$\mathcal{O}$ (intermediate measurement observable), and peculiar characteristics of the heat distribution exchanged by the system with
the external environment.
We have determined that, if $H$ and $\mathcal{O}$ do not share any common non-trivial subspace, the final state of a monitored quantum system in the limit of large $M$ coincides with the maximally mixed state, corresponding to a canonical thermal state with infinite temperature. We have denoted this latter condition as Infinite-Temperature Thermalization (ITT). The direct consequence of this effect is in the heat distribution, evaluated by resorting to an initial and a final energy projective measurement. In the ITT regime, the initial and final energy outcomes, $\{E_n\}$ and $\{E_m\}$ respectively, are independent random variables and the corresponding characteristic function $G(u)$, with $u\in\mathbb{C}$, can be factorized into two distinct contributions depending only on the initial and final states.
Possible exceptions to ITT have been determined in the following three distinct cases. {\em (i)} Whenever the Hermitian operators $H$ and $\mathcal{O}$ have one or more eigenvectors in common, as for example when $[H,\mathcal{O}]=0$. In such a case, ITT occurs only in a partial way, since we no longer have the complete mixing of the intermediate
measurement eigenvectors $|\alpha_k\rangle$, $k=1,\ldots,N$, at the
end of the non-equilibrium quantum process. Indeed, what one can observe is the mixing of the eigenvectors $|v_r\rangle$ associated with the subspaces $S_r$ in which the Hamiltonian block matrices $H_r$ are defined. For the sake of clarity, we recall that the Hamiltonian blocks $H_r$ are the operators that compose the global Hamiltonian $H$ of the system, once expressed in the basis of $\mathcal{O}$. The presence of $R$ block matrices $H_r$ (and not just one) is the reason behind the onset of a degeneracy of the eigenvalue $\lambda=1$ of $L(\tau)$, independently of the $\tau$-values. In this picture, the special case of $[\mathcal{O},H]=0$ is obtained for $R=N$. {\em (ii)} ITT is not obtained when the value of the waiting times $\tau_j$ is on average much smaller than the inverse of the energy scale of the system, such that during
the application of two consecutive measurements the quantum system
practically does not evolve and remains confined in its initial state.
{\em (iii)} Finally, analytical and numerical results in the large-$N$ limit, derived for a spin-$s$ system with $s\gg 1$, suggest that ITT can occur in the limit of $M \rightarrow \infty$ with $\tau \ll 1$ and a finite value of $s$. We found that the eigenvalues of $L(\tau)$ are the same for different values of $\tau$ as long as $\tau k/2s$ is smaller than a critical value that we estimated to be $\approx 0.934$. Interestingly, the matrix of eigenvectors displays a rich structure, but nevertheless the eigenstates corresponding to the larger eigenvalues are practically the same as long as $\tau k/2 s \lesssim 1$, in agreement with the previously mentioned critical value. When, at variance, the limit $s \rightarrow \infty$ is taken with $M$ finite, we find that for $\tau \ll 1$ the application of a sequence of quantum projective measurements no longer entails state changes within the measured quantum system, as one would expect in the classical limit. Instances of partial ITT were then discussed.
As a main outlook, our results are expected to pave the way for
further investigations on monitored quantum systems, subjected to a
sequence of non-projective quantum measurements\,\cite{WatanabePRE2014} and driven by time-dependent functions through Hamiltonian couplings. In such a context, the distributions of both heat and work, and their interplay according to the principles of thermodynamics, will have to be evaluated. Finally, we observe that we used the two-point measurement scheme, and it would be interesting to extend the obtained results to different measurement schemes, such as the ones recently explored in Refs.\,\cite{Sone20,Micadei20,G20,LevyPRXQ2020}.
\section*{Acknowledgments}
The authors gratefully acknowledge N. Fabbri, S. Hern\'andez-G\'omez and F. Poggiali for useful discussions. This work was financially supported by the MISTI Global Seed Funds MIT-FVG Collaboration Grant ``NV centers for the test of the Quantum Jarzynski Equality (NVQJE)'', and the MIUR-PRIN2017 project ``Coarse-grained description for non-equilibrium systems and transport phenomena (CO-NEST)'' No.\,201798CZL.
\section*{Appendix: Spectrum and eigenvectors of $\mathcal{A}$}
In this Appendix we derive the spectrum and the eigenvectors of the operator $\mathcal{A}$. Let us start with the eigenvalue equation
\begin{equation}\label{eq1_appA}
\sum_{m'} \mathcal{A}_{m,m'} v(m') = a v(m) \,,
\end{equation}
equivalent to the relation
\begin{eqnarray}\label{eq2_appA}
a v(m) &=& 2(s(s+1) - m^2) v(m)\nonumber \\
&-& (s(s+1) - m(m+1)) v(m+1)\nonumber \\
&-& (s(s+1) - m(m-1)) v(m-1)
\end{eqnarray}
with $a$ and $v$ arbitrary eigenvalue and eigenvector of $\mathcal{A}$, respectively. Eq.\,(\ref{eq2_appA}) can be written as
\begin{eqnarray}\label{eq3_appA}
av(m) &=& (s(s+1) - m^2) ( 2v(m) - v(m+1) - v(m-1))\nonumber \\
&+& m (v(m+1) - v(m-1)).
\end{eqnarray}
In the limit $s \rightarrow \infty$, we assume that $v(m)$ is a smooth
function of the variable $x=\frac{m}{s} \in [-1,1]$. Thus, we make the
ansatz $v(m) = P(x)$ with $P(x)$ continuous function, so that
\begin{equation}\label{eq4_appA}
v(m \pm 1) = P(x) \pm \frac{1}{s} P^{\prime}(x) + \frac{1}{2s^2} P^{\prime \prime}(x) + O(s^{-3}),
\end{equation}
where $P^{\prime}(x)$ and $P^{\prime \prime}(x)$ denote,
respectively, the first and second derivatives of $P(x)$ with respect to
$x$. As a result, the eigenvalue equation \eqref{eq3_appA}, up to $O(s^{-1})$ terms, is equal to
\begin{equation}\label{eq5_appA}
a P(x) = - \frac{1}{s^2} (s(s+1) - s^2 x^2) P^{\prime \prime} (x) + \frac{2sx}{s} P^{\prime} (x)
\end{equation}
whereby, by taking the limit $s \rightarrow \infty$, we finally get
\begin{equation}\label{eq6_appA}
(1 - x^2) P^{\prime \prime} (x) - 2x P^{\prime} (x) + a P(x) = 0\,.
\end{equation}
Eq.\,\eqref{eq6_appA} is the well-known Legendre equation. In order to
have normalizable solutions of the Legendre equation in the interval
$x \in [-1,1]$, one has to require that the eigenvalue $a$ belong to the
set $\{a_k\}$ with $a_k=k(k+1)$ and $k$ an integer $\geq 0$. Thus, in this case, the eigenfunctions are proportional to the $k$-th order Legendre polynomials $P_k(x)$. In conclusion, by enforcing the normalization condition, we find:
\begin{equation}\label{eq7_appA}
v_k(m) = \sqrt{\frac{2k+1}{2s}} P_k \left( \frac{m}{s} \right)
\end{equation}
where the factor $s$ in the denominator of the normalization constant is required to pass from the normalization in $x$ to that in $m$.
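As a consistency check of this limit, the finite-$s$ eigenvalue problem can also be solved numerically. The following sketch (assuming \texttt{numpy} is available; the value of $s$ is an arbitrary illustrative choice) builds the tridiagonal matrix corresponding to Eq.\,(\ref{eq3_appA}), verifies that its eigenvalues reproduce $a_k=k(k+1)$, and compares the low-lying eigenvectors with the normalized Legendre polynomials of Eq.\,(\ref{eq7_appA}).
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as L

s = 40                                   # illustrative (integer) spin value
m = np.arange(-s, s + 1)
S2 = s * (s + 1)

# tridiagonal matrix realizing the eigenvalue relation above
A = np.diag(2.0 * (S2 - m**2))
A += np.diag(-(S2 - m[:-1] * (m[:-1] + 1.0)), k=1)    # couples m to m+1
A += np.diag(-(S2 - m[1:] * (m[1:] - 1.0)), k=-1)     # couples m to m-1

a, v = np.linalg.eigh(A)
k = np.arange(2 * s + 1)
print(np.allclose(a, k * (k + 1)))       # eigenvalues a_k = k(k+1)

# low-lying eigenvectors vs the normalized Legendre polynomials
for kk in (1, 2, 3):
    pk = np.sqrt((2 * kk + 1) / (2.0 * s)) * L.legval(m / s, np.eye(kk + 1)[kk])
    print(kk, round(abs(v[:, kk] @ pk), 3))            # ~ 1 for s >> k
\end{verbatim}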
\section{Introduction}
It has been known since the early eighties \cite{CQ:1982} that the orthogonal projector $S^0_N$ mapping $\mathrm{L}^2(-1,1)$ onto the space of univariate polynomials of degree less than or equal to $N$ (equivalently, $S^0_N$ is the operation consisting in truncating the Fourier--Legendre series of its argument at degree $N$) satisfies the bound
\begin{equation}\label{LegendreBound}
(\forall\,u\in\mathrm{H}^l(-1,1)) \quad \norm{u - S^0_N(u)}_{\mathrm{H}^1(-1,1)} \leq C N^{3/2-l} \norm{u}_{\mathrm{H}^l(-1,1)},
\end{equation}
where $C > 0$ depends only on $l$ and $\mathrm{H}^1(-1,1)$ and $\mathrm{H}^l(-1,1)$ denote standard Sobolev spaces (see \cite[Ch.~5]{CHQZ-I} for a detailed proof of \eqref{LegendreBound} and its Chebyshev weight and periodic unweighted analogues and \cite{Guo:2000a} for its general Gegenbauer weight analogue).
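As a quick numerical illustration of \eqref{LegendreBound} (a sketch only, with an arbitrarily chosen test function, quadrature order and truncation degrees; \texttt{numpy} is assumed to be available), one can truncate the Fourier--Legendre series of a function of limited Sobolev regularity and record the $\mathrm{H}^1(-1,1)$ error as the degree grows; for $u(x)=\abs{x}^{5/2}$, which belongs to $\mathrm{H}^l(-1,1)$ for $l \leq 2$ but not for $l = 3$, the error should decay roughly like $N^{-3/2}$.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as L

u  = lambda x: np.abs(x) ** 2.5                  # limited Sobolev regularity
du = lambda x: 2.5 * np.sign(x) * np.abs(x) ** 1.5

x, w = L.leggauss(400)                           # quadrature for all integrals

for N in (8, 16, 32, 64):
    k = np.arange(N + 1)
    Lk = L.legvander(x, N)                       # columns: L_0 .. L_N at nodes
    c = (2 * k + 1) / 2 * (Lk.T @ (w * u(x)))    # Fourier-Legendre coefficients
    err  = u(x)  - L.legval(x, c)                # residual u - S_N^0(u)
    derr = du(x) - L.legval(x, L.legder(c))      # and its derivative
    h1 = np.sqrt(np.sum(w * (err ** 2 + derr ** 2)))
    print(N, h1)                                 # roughly ~ C * N**(-3/2)
\end{verbatim}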
Recently \cite{Figueroa:arXiv2015} this result was extended to the unit disk for Gegenbauer-type weights.
The purpose of this work is to prove a weighted analogue of \eqref{LegendreBound} in the case of the unit ball of any dimension; in order to state it, we now introduce the minimal necessary notation.
Let $B^d$ be the unit ball of $\mathbb{R}^d$, $\alpha > -1$ and let the weight function $W_\alpha \colon B^d \to \mathbb{R}$ be defined by $W_\alpha(x) = (1-\norm{x}^2)^\alpha$ with $\norm{\cdot}$ being the Euclidean norm.
We denote by $\mathrm{L}^2_\alpha$ the weighted Lebesgue space $W_\alpha^{-1/2} \mathrm{L}^2(B^d)$, whose natural squared norm is $\norm{u}_\alpha^2 := \int_{B^d} \abs{u}^2 W_\alpha$, and, given an integer $l \geq 0$, by $\mathrm{H}^l_\alpha$ the weighted Sobolev space whose squared norm is $\norm{u}_{\mathrm{H}^l_\alpha}^2 := \sum_{k=0}^l \norm{\nabla_k u}_\alpha^2$.
Let $S^\alpha_N$ be the orthogonal projector mapping $\mathrm{L}^2_\alpha$ onto $\Pi^d_N$, where $\Pi^d_N$ is the space of $d$-variate polynomials of degree less than or equal to $N$.
Our main result is
\begin{thm}\label{thm:lossy}
For all integers $1 \leq r \leq l$ there exists $C = C(d,\alpha,l,r) > 0$ such that
\begin{equation}\label{lossyPreview}
(\forall\,u\in\mathrm{H}^l_\alpha) \quad \norm{u - S^\alpha_N(u)}_{\mathrm{H}^r_\alpha} \leq C N^{-1/2 + 2r - l} \norm{u}_{\mathrm{H}^l_\alpha}.
\end{equation}
\end{thm}
There are two application domains of our main result that we are aware of.
One lies in the analysis of polynomial interpolation operators (cf.\ \cite{CQ:1982} and \cite[Ch.~5]{CHQZ-I}), themselves important in the analysis of spectral methods.
The other, which is the one that led us into this pursuit in the first place, lies in the characterization of approximability spaces relevant to the analysis of nonlinear iterative methods for the numerical solution of high-dimensional PDE; we remit the interested reader to \cite[Ch.~4]{Figueroa} where the one-dimensional case of \autoref{thm:lossy} and the fact that the $S\sp{\alpha}_N$ projectors tensorize in a very straightforward way are exploited for such task.
As noted ever since \cite{CQ:1982}, \autoref{thm:lossy} compares unfavorably with the situation for trigonometric polynomials in unweighted periodic Sobolev spaces, where the power on $N$ is simply $r-l$.
The origin of this difference in behavior is that in the trigonometric case differentiation and projection commute, something which is impossible in the algebraic case \cite[\S~2.3.2]{CHQZ-I}.
We emphasize that the case $r = 0$ is explicitly excluded from consideration in \autoref{thm:lossy}, for in such a case the provably optimal power on $N$ is $-l$ (cf.\ \autoref{lem:L2-projection-error} below), outside the pattern set in \eqref{lossyPreview}.
We also note that if $2r \geq l+1/2$ in \eqref{lossyPreview}, $S^\alpha_N(u)$ need not converge to $u$ in $\mathrm{H}^r_\alpha$ as $N$ tends to infinity.
We further remark that \autoref{thm:lossy} is not a best or quasi-best approximation result (for those see \cite[Ch.~5]{CHQZ-I}, \cite{Guo:2000a}, \cite[\S~4]{LiXu:2014} and \cite[\S~5]{DaiXu:2011}), because in general the orthogonal projection of $\mathrm{H}^r_\alpha$ onto $\Pi^d_N$ need not coincide with the restriction of $S^\alpha_N$ to $\mathrm{H}^r_\alpha$.
In every proof of a particular instance of \autoref{thm:lossy} that we are aware of, an important role was played by spectral differentiation formulas, which connect the orthogonal expansion coefficients of a function and one of its derivatives; e.g., \cite[Eq.~(2.3.18)]{CHQZ-I}
\begin{equation*}
(\forall\,k \in \{0, 1, 2, \dotsc \}) \quad \hat u\sp{(1)}_k = (2k+1) \sum_{q=0}^\infty \hat u_{k+1+2q},
\end{equation*}
where $u = \sum_{k=0}^\infty \hat u_k \, L_k$ and $u' = \sum_{k=0}^\infty \hat u\sp{(1)}_k \, L_k$ are the orthogonal expansions of $u \in \mathrm{H}^1(-1,1)$ and its weak derivative with respect to the basis $(L_k)_{k=0}^\infty$ of Legendre polynomials.
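For instance, the displayed Legendre spectral differentiation formula can be checked numerically against the Legendre-series derivative routine of \texttt{numpy} (assumed to be available); the coefficient vector below is random and purely illustrative.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)
c = rng.standard_normal(12)                 # coefficients u_hat_k, k = 0..11

d_numpy = L.legder(c)                       # derivative coefficients from numpy
# spectral differentiation: u_hat^(1)_k = (2k+1) * sum_q u_hat_{k+1+2q}
d_formula = np.array([(2 * k + 1) * c[k + 1::2].sum()
                      for k in range(len(c) - 1)])

print(np.allclose(d_numpy, d_formula))      # True
\end{verbatim}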
See \cite[Eq.~(2.4.22)]{CHQZ-I}---the first plus sign there is a typo---, \cite[Eq.~(2.13)]{Guo:2000a} and \cite[Lem.~3.4]{Figueroa:arXiv2015} for spectral differentiation formulas for Chebyshev, Gegenbauer and Zernike orthogonal polynomial expansions.
Whereas in one and two dimensions these particular bases of orthogonal polynomials are known to satisfy a wealth of simple identities so as to make spectral differentiation formulas simple to derive, that might not be the case for known explicit orthogonal polynomial bases of $\mathrm{L}^2_\alpha$ with $d \geq 3$ (cf.\ the example bases in \cite[\S~5.2]{DunklXu:2014}).
In this work we introduce a streamlined technique to prove \autoref{thm:lossy} which circumvents the need for spectral differentiation formulas and actually dispenses with the usage of bases of orthogonal polynomials altogether, focusing instead on orthogonal polynomial spaces; that is, spaces of polynomials of a certain degree orthogonal to all polynomials of lower degree (cf.\ \eqref{OPS} and the opening remarks of \cite[Ch.~3]{DunklXu:2014}).
In this way we can settle our main result seamlessly for any dimension.
The outline of this article is as follows.
In \autoref{sec:OP-WSS} we introduce some necessary additional notation, orthogonal polynomial spaces and some known properties of their members and their associated projectors.
The core of this work lies in \autoref{sec:id-diff}, in which we prove preliminary results concerning orthogonal polynomial spaces and their projectors.
Finally, in \autoref{sec:main} we bound a differentiation-projection commutator, prove our main result \autoref{thm:lossy} and an interpolation corollary and wrap up with some general remarks and a brief conclusion.
We finish this introductory section noting that we have omitted the dimension $d$ from the notation of $W_\alpha$, $\mathrm{L}^2_\alpha$, etc.\ and will mostly continue to do so in order to avoid cluttering and because all of our arguments will be dimension-independent.
\section{Orthogonal polynomials and weighted Sobolev spaces}\label{sec:OP-WSS}
We denote by $\natural$ the set of strictly positive integers and $\natural_0 := \{0\} \cup \natural$.
Members of $[\natural_0]^d$ will be called multi-indices and for every multi-index $\gamma \in [\natural_0]^d$, point $x \in \mathbb{R}^d$ and (strongly or weakly) differentiable enough complex-valued function $f$ defined on some open set of $\mathbb{R}^d$ we shall write $\abs{\gamma} = \sum_{i=1}^d \gamma_i$, $x^\gamma = \prod_{i=1}^d x_i^{\gamma_i}$ and $\partial_\gamma f = \partial^{\abs{\gamma}} f/(\partial x_1^{\gamma_1} \dotsm \partial x_d^{\gamma_d})$.
We will denote by $\abs{\cdot}_{\mathrm{H}^k_\alpha}$ the seminorm defined as the square root of $u \mapsto \norm{\nabla_k u}_\alpha^2 = \sum_{\abs{\gamma} = k} \binom{k}{\gamma} \norm{\partial_\gamma u}_\alpha^2$, where $\binom{k}{\gamma} = k!/(\gamma_1! \dotsm \gamma_d!)$ is the number of times the multi-index $\gamma$ of order $k$ appears in the $k$-dimensional array-valued $\nabla_k u$.
This seminorm is of course equivalent to the common choice in which the $\binom{k}{\gamma}$ are all replaced by $1$ yet better suits some induction arguments on the order of differentiation we make below.
Let $\mathcal{V}\sp{\alpha}_k$ be the space of orthogonal polynomials of degree $k$ with respect to the weight $W_\alpha$ (cf.\ \cite[Def.~3.1.1]{DunklXu:2014}); i.e.,
\begin{equation}\label{OPS}
\mathcal{V}\sp{\alpha}_k := \left\{ p \in \Pi^d_k \mid (\forall\,q \in \Pi^d_{k-1})\ \langle p, q\rangle_\alpha = 0 \right\}.
\end{equation}
If $k < 0$ we adopt the convention $\Pi^d_k = \{0\}$ and so $\mathcal{V}\sp{\alpha}_k = \{0\}$.
As $W_\alpha$ is centrally symmetric, it transpires from \cite[Th.~3.3.11]{DunklXu:2014} that for all $k \in \natural_0 = \{0, 1, 2, \dotsc\}$ there holds the following parity relation:
\begin{equation}\label{parity}
(\forall \, p_k \in \mathcal{V}\sp{\alpha}_k)\ (\forall \, x \in B^d) \quad p_k(-x) = (-1)^k p_k(x).
\end{equation}
Let $\proj\sp{\alpha}_k$ denote the orthogonal projection from $\mathrm{L}^2_\alpha$ onto $\mathcal{V}\sp{\alpha}_k$.
From \cite[Th.~3.2.18]{DunklXu:2014}, $\Pi^d_n = \bigoplus_{k=0}^n \mathcal{V}\sp{\alpha}_k$ and $\mathrm{L}^2_\alpha = \bigoplus_{k=0}^\infty \mathcal{V}\sp{\alpha}_k$, whence
\begin{equation}\label{spanning-consequence}
(\forall\,n\in\natural_0) \quad S\sp{\alpha}_n = \sum_{k=0}^n \proj\sp{\alpha}_k
\qquad\text{and}\qquad
(\forall\,u\in \mathrm{L}^2_\alpha) \quad u = \sum_{k=0}^\infty \proj\sp{\alpha}_k(u).
\end{equation}
We will denote the entrywise application of $S\sp{\alpha}_n$ to $\mathrm{L}^2_\alpha$-valued vectors and higher-order tensors by $S\sp{\alpha}_n$ as well (cf.\ \autoref{cor:diffProjComm-r} below).
From \cite[Eq.~(5.2.3) and Th.~8.1.3]{DunklXu:2014} and straightforward algebraic manipulation it is readily computed that the members of $\mathcal{V}\sp{\alpha}_k$ are eigenfunctions of the second order differential operator $p \mapsto -W_\alpha^{-1} \div\left( W_{\alpha+1} \nabla p \right) - \sum_{1 \leq i<j \leq d} D_{i,j}^2 p$, where $D_{i,j}$ denotes the first order angular differential operator $x_i \partial_j - x_j \partial_i$ \cite[\S~1.8]{DaiXu:2013}, with associated eigenvalue $k(k+d+2\alpha)$.
By integration by parts the following integral form follows:
\begin{multline}\label{weak-EV}
(\forall\,p_k \in \mathcal{V}\sp{\alpha}_k)\ \left(\forall\,q\in\mathrm{C}^1(\overline{B^d})\right)\\
\langle \nabla p_k, \nabla q \rangle_{\alpha+1} + \sum_{1 \leq i < j \leq d} \left\langle D_{i,j} p_k, D_{i,j} q \right\rangle_\alpha
= k(k+d+2\alpha) \langle p_k, q \rangle_\alpha.
\end{multline}
\begin{remark}\label{rem:easy}
Together with appropriate density results, \eqref{weak-EV} implies that a member of $\mathcal{V}\sp{\alpha}_k$ is automatically also a member of an orthogonal polynomial subspace with respect to a Sobolev-type inner product involving the weaker weight $W_{\alpha+1}$ to control the gradient and, if $d \geq 2$, additional control for the angular derivatives.
In the $d = 1$ case, measuring the projection error in this induced non-uniformly weighted Sobolev space and its generalizations to higher degree of weak differentiation turns out to follow the trigonometric case much more closely (cf.\ \cite[Th.~2.1]{GW:2004} in the one-dimensional case with not necessarily symmetric Jacobi weights).
\end{remark}
\begin{lemma}\label{lem:density}
Let $d \in \natural$, $\alpha > -1$ and $m \in \natural_0$.
Then, $\mathrm{C}^\infty(\overline{B^d})$ is dense in $\mathrm{H}^m_\alpha$.
\begin{proof}
This follows from \cite[Rem.~11.12.(iii)]{Kufner:1985} upon the realization that $W_\alpha$ is bounded from above and below by positive multiples of $\operatorname{dist}(\cdot,\partial B^d)$.
\end{proof}
\end{lemma}
We cite from \cite[Cor.~2.7 and Lem.~2.11]{Figueroa:arXiv2015} the following $\mathrm{L}^2_\alpha$ bound on the $S\sp{\alpha}_n$ projection error and an inverse or Markov-type inequality:
\begin{lemma}\label{lem:L2-projection-error}
For all $\alpha > -1$, $d \in \natural$ and $l \in \natural_0$ there exists a positive constant $C = C(\alpha,d,l)$ such that
\begin{equation*}
(\forall\,n\in\natural_0) \ (\forall\,u\in\mathrm{H}^l_\alpha) \quad
\norm{u - S\sp{\alpha}_n(u)}_\alpha \leq C (n+1)^{-l} \norm{u}_{\mathrm{H}^l_\alpha}.
\end{equation*}
\end{lemma}
\begin{lemma}\label{lem:Markov}
For $\alpha > -1$ and $d \in \natural$ there exists a positive constant $C = C(\alpha,d) > 0$ such that
\begin{equation*}
(\forall\,n\in\natural_0) \ (\forall\,p_n\in\Pi^d_n) \quad
\norm{\nabla p_n}_\alpha \leq C n^2 \norm{p_n}_\alpha.
\end{equation*}
\end{lemma}
\section{Connections between orthogonal polynomials spaces and their projectors}\label{sec:id-diff}
The following proposition collects results concerning relations between spaces of orthogonal polynomials and their associated projectors not involving differentiation.
\begin{proposition}\label{pro:id-shift}
Let $\alpha > -1$ and $d \in \natural$.
\begin{enumerate}
\item\label{it:weighted-id-downshift} Let $p_k \in \mathcal{V}\sp{\alpha+1}_k$.
Then, $(1-\norm{\cdot}^2) p_k \in \mathcal{V}\sp{\alpha}_k \oplus \mathcal{V}\sp{\alpha}_{k+2}$.
\item\label{it:unnamed} Let $q_k \in \mathcal{V}\sp{\alpha}_k$.
Then, $q_k = \proj\sp{\alpha+1}_{k-2}(q_k) + \proj\sp{\alpha+1}_k(q_k)$.
\item\label{it:proto-id-shift} Let $u \in \mathrm{L}^2_\alpha$.
Then, $\proj\sp{\alpha+1}_k(u) = \proj\sp{\alpha+1}_k\left( \proj\sp{\alpha}_k(u) + \proj\sp{\alpha}_{k+2}(u) \right)$.
\item\label{it:id-shift} Let $u \in \mathrm{L}^2_\alpha$.
Then,
\begin{equation*}
\proj\sp{\alpha+1}_k(u)
= \proj\sp{\alpha}_k(u) + \proj\sp{\alpha+1}_k \circ \proj\sp{\alpha}_{k+2}(u) - \proj\sp{\alpha+1}_{k-2} \circ \proj\sp{\alpha}_k(u).
\end{equation*}
\end{enumerate}
\begin{proof}
Given $q \in \Pi^d_{k-1}$, $\langle (1-\norm{\cdot}^2) p_k, q \rangle_\alpha = \langle p_k, q \rangle_{\alpha+1} = 0$ by definition \eqref{OPS}.
Also, by the parity relation \eqref{parity}, $(1-\norm{\cdot}^2) p_k \perp_\alpha \mathcal{V}\sp{\alpha}_{k+1}$.
Therefore part \ref{it:weighted-id-downshift} stems from \eqref{spanning-consequence}.
An analogous argument accounts for part \ref{it:unnamed}.
Part \ref{it:proto-id-shift} comes from the fact that given $p_k \in \mathcal{V}\sp{\alpha+1}_k$,
\begin{multline*}
\langle \proj\sp{\alpha+1}_k(u), p_k \rangle_{\alpha+1}
= \langle u, p_k \rangle_{\alpha+1}
= \langle u, (1-\norm{\cdot}^2) p_k \rangle_\alpha\\
\stackrel{\text{\ref{it:weighted-id-downshift}}}{=} \langle \proj\sp{\alpha}_k(u) + \proj\sp{\alpha}_{k+2}(u), (1-\norm{\cdot}^2) p_k \rangle_\alpha
= \langle \proj\sp{\alpha}_k(u) + \proj\sp{\alpha}_{k+2}(u), p_k \rangle_{\alpha+1}.
\end{multline*}
Part \ref{it:id-shift} is obtained from adding and subtracting $\proj\sp{\alpha+1}_{k-2}(\proj\sp{\alpha}_k(u))$ to the right hand side of part \ref{it:proto-id-shift} and using part \ref{it:unnamed}.
\end{proof}
\end{proposition}
We will now present another collection of results, this time involving differentiation.
To this end we introduce the first order differentiation operator $d\sp{\alpha}_j$, $\alpha > -1$ and $j \in \{1, \dotsc, d\}$, by
\begin{equation*}
d\sp{\alpha}_j q(x)
:= -W_\alpha(x)^{-1} \frac{\partial}{\partial x_j} \left( W_{\alpha+1}(x) \, q(x) \right)
= -(1-\norm{x}^2) \, \partial_j q(x) + 2 (\alpha+1) \, x_j \, q(x).
\end{equation*}
\begin{proposition}\label{pro:diff-shift}
Let $\alpha > -1$, $d \in \natural$ and $j \in \{1, \dotsc, d\}$.
\begin{enumerate}
\item\label{it:DARD} $d\sp{\alpha}_j$ maps $\Pi^d_k$ into $\Pi^d_{k+1}$.
\item\label{it:differentiationAdjoint} Given $p, q \in \mathrm{C}^1(\overline{B^d})$, $\langle \partial_j p, q \rangle_{\alpha+1} = \langle p, d\sp{\alpha}_j q \rangle_\alpha$.
\item\label{it:DALP} Let $r_k \in \mathcal{V}\sp{\alpha+1}_k$.
Then, $d\sp{\alpha}_j(r_k) \in \mathcal{V}\sp{\alpha}_{k+1}$.
\item\label{it:proto-diff-shift} Let $p_k \in \mathcal{V}\sp{\alpha}_k$.
Then, $\partial_j p_k \in \mathcal{V}\sp{\alpha+1}_{k-1}$.
\item\label{it:diff-shift} Let $u \in \mathrm{C}^1(\overline{B^d})$.
Then, $\partial_j\proj\sp{\alpha}_k(u) = \proj\sp{\alpha+1}_{k-1}(\partial_j u)$.
\end{enumerate}
\begin{proof}
Part \ref{it:DARD} is straightforward.
Part \ref{it:differentiationAdjoint} is obtained by integration by parts and noticing that no boundary term appears on account of $(1-\norm{\cdot}^2)^{\alpha+1}$ vanishing on the boundary of $B^d$.
Given $r_k \in \mathcal{V}\sp{\alpha+1}_k$, by part \ref{it:DARD}, $d\sp{\alpha}_j(r_k) \in \Pi^d_{k+1}$, and, on account of part \ref{it:differentiationAdjoint}, it is $\mathrm{L}^2_\alpha$-orthogonal to $\Pi^d_k$, whence part \ref{it:DALP}.
An analogous argument accounts for part \ref{it:proto-diff-shift}.
Given $u \in \mathrm{C}^1(\overline{B^d})$, by part \ref{it:proto-diff-shift}, $\partial_j \proj\sp{\alpha}_k(u) \in \mathcal{V}\sp{\alpha+1}_{k-1}$.
Part \ref{it:diff-shift} then comes about from the fact that for all $r \in \mathcal{V}\sp{\alpha+1}_{k-1}$,
\begin{equation*}
\langle \partial_j\proj\sp{\alpha}_k(u), r \rangle_{\alpha+1}
\stackrel{\text{\ref{it:differentiationAdjoint}}}{=} \langle \proj\sp{\alpha}_k(u), d\sp{\alpha}_j r \rangle_\alpha
\stackrel{\text{\ref{it:DALP}}}{=} \langle u, d\sp{\alpha}_j r \rangle_\alpha
\stackrel{\text{\ref{it:differentiationAdjoint}}}{=} \langle \partial_j u, r \rangle_{\alpha+1}.
\end{equation*}
\end{proof}
\end{proposition}
\begin{remark}[Shift operators]\label{rem:shift}
Part \ref{it:DALP} of \autoref{pro:diff-shift} means that $d\sp{\alpha}_j$ is a \emph{backward shift/degree raising} operator in the sense of \cite{KLS:2010}.
Similarly, by part \ref{it:proto-diff-shift}, $\partial_j$ is a \emph{forward shift/degree lowering} operator (see also \eqref{Jacobi-diff-shift} below).
\end{remark}
Inasmuch as it allows for quantifying a ``wrong'' ($\mathrm{L}^2_\alpha$) norm of a member of a space of orthogonal polynomials ($\mathcal{V}\sp{\alpha+1}_k$), the following result is distantly related to \cite[Eq.~(4.43)]{Figueroa} and \cite[Prop.~3.12]{Figueroa:arXiv2015} in the $d =1$ and $d = 2$ cases, respectively.
\begin{proposition}\label{pro:extendedOrthogonality}
Let $\alpha > -1$, $d \in \natural$ and $k \in \natural_0$.
Then, for all $p, q \in \mathcal{V}\sp{\alpha+1}_k$,
\begin{equation*}
\langle p, q \rangle_\alpha = \left(\frac{k+d/2}{\alpha+1} + 1\right)\langle p, q \rangle_{\alpha+1}.
\end{equation*}
\begin{proof}
We start with the observation that if $s$ is a homogeneous polynomial of degree $k$---that is, of the form $s(x) = \sum_{\abs{\gamma}=k} c_\gamma x^\gamma$---, it satisfies $x \cdot \nabla s(x) = k\, s(x)$, which also goes on to show that the $x \cdot \nabla$ operator exactly preserves the degree of any $d$-variate polynomial.
Let $p, q \in \mathcal{V}\sp{\alpha+1}_k$.
As every member of $\mathcal{V}\sp{\alpha+1}_k$ is a linear combination of homogeneous polynomials of degree ranging from $0$ to $k$, there exists a homogeneous polynomial $s_p$ of degree $k$ such that $p - s_p \in \Pi^d_{k-1}$ and hence $x \cdot \nabla p - x \cdot \nabla s_p \in \Pi^d_{k-1}$.
Thus,
\begin{equation}\label{origin-of-k}
\langle x \cdot \nabla p, q\rangle_{\alpha+1}
= \langle x \cdot \nabla s_p, q \rangle_{\alpha+1}
= k \langle s_p, q \rangle_{\alpha+1}
= k \langle p, q \rangle_{\alpha+1}.
\end{equation}
Using the facts that $\nabla (1-\norm{x}^2)^{\alpha+1} = -2(\alpha+1) (1-\norm{x}^2)^\alpha x$, $\operatorname{div}(x) = d$, integration by parts and \eqref{origin-of-k}, which of course is still valid if the roles of $p$ and $q$ are interchanged,
\begin{multline*}
2(\alpha+1)\int_{B^d} p(x) \overline{q(x)} \norm{x}^2 (1-\norm{x}^2)^\alpha \dd x
= \int_{B^d} \mathrm{div}\left(p(x) \overline{q(x)} x\right) (1-\norm{x}^2)^{\alpha+1} \dd x\\
= \left( \langle x \cdot \nabla p, q\rangle_{\alpha+1} + \langle p, x \cdot \nabla q \rangle_{\alpha+1} + d \langle p, q \rangle_{\alpha+1} \right)
= (2k+d) \langle p, q \rangle_{\alpha+1}.
\end{multline*}
The desired result then follows from the fact that $(1-\norm{x}^2)^\alpha = \norm{x}^2 (1-\norm{x}^2)^\alpha + (1-\norm{x}^2)^{\alpha+1}$.
\end{proof}
\end{proposition}
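In the one-dimensional case, where $\mathcal{V}^{\alpha+1}_k$ is spanned by the Jacobi polynomial $P^{(\alpha+1,\alpha+1)}_k$ (see the remark below), \autoref{pro:extendedOrthogonality} can be spot-checked with Gauss--Jacobi quadrature. The following sketch assumes \texttt{numpy} and \texttt{scipy} are available and uses an arbitrary illustrative value of $\alpha$.
\begin{verbatim}
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi

alpha, d = 0.3, 1
for k in range(6):
    # Gauss-Jacobi rules for the weights (1-x^2)^alpha and (1-x^2)^(alpha+1)
    x0, w0 = roots_jacobi(k + 5, alpha, alpha)
    x1, w1 = roots_jacobi(k + 5, alpha + 1, alpha + 1)
    p = lambda x: eval_jacobi(k, alpha + 1, alpha + 1, x)
    ip_a  = np.sum(w0 * p(x0) ** 2)              # <p, p>_alpha
    ip_a1 = np.sum(w1 * p(x1) ** 2)              # <p, p>_{alpha+1}
    ratio = (k + d / 2) / (alpha + 1) + 1
    print(k, np.isclose(ip_a, ratio * ip_a1))    # True for every k
\end{verbatim}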
\begin{remark}[Relations with identities satisfied by bases]\label{rem:variants-with-bases}
In the one-dimensional case ($d = 1$), $\mathcal{V}\sp{\alpha}_k = \operatorname{span}(\{P\sp{(\alpha,\alpha)}_k\})$, where the $P\sp{(\alpha,\alpha)}_k$ are Jacobi polynomials \cite[Ch.~4]{Szego:1975}.
Then, from the ``id-shift'' identity (a combination of (6.4.21) and (6.4.23) of \cite{AAR:1999}; it must be slightly modified if $\alpha = -1/2$ and $k = 0$)
\begin{equation}\label{Jacobi-id-shift}
P\sp{(\alpha,\alpha)}_k = \frac{(k+2\alpha+1)(k+2\alpha+2)}{(2k+2\alpha+1)(2k+2\alpha+2)} P\sp{(\alpha+1,\alpha+1)}_k - \frac{k+\alpha}{2(2k+2\alpha+1)} P\sp{(\alpha+1,\alpha+1)}_{k-2},
\end{equation}
it is possible to furnish alternative proofs of parts \ref{it:unnamed} and \ref{it:proto-id-shift} of \autoref{pro:id-shift} and hence of its part \ref{it:id-shift}.
In that rough sense \autoref{pro:id-shift} corresponds to \eqref{Jacobi-id-shift}.
Similarly \cite[Eq.~(4.21.7)]{Szego:1975},
\begin{equation}\label{Jacobi-diff-shift}
{P\sp{(\alpha,\alpha)}_k}' = \frac{k+2\alpha+1}{2} P\sp{(\alpha+1,\alpha+1)}_{k-1},
\end{equation}
allows for proving part \ref{it:diff-shift} of \autoref{pro:diff-shift} and so, again in a rough sense, \autoref{pro:diff-shift} corresponds to \eqref{Jacobi-diff-shift}.
Using \eqref{Jacobi-id-shift} and explicit formulas for the norms of Jacobi polynomials (cf.\ \cite[Eq.~(4.3.3)]{Szego:1975}) it is possible to reconstruct \autoref{pro:extendedOrthogonality}, although the necessary computations are not short.
In the two-dimensional case, $\mathcal{V}\sp{\alpha}_k = \operatorname{span}(\{ P\sp{(\alpha)}_{m,n} \mid m+n = k \})$, where each $P\sp{(\alpha)}_{m,n}$ is a Zernike polynomial \cite{Wunsche:2005}.
Then, the identities \eqref{Jacobi-id-shift} and \eqref{Jacobi-diff-shift} find appropriate analogues in \cite[Eq.~(3.12)]{Figueroa:arXiv2015} and \cite[Eq.~(5.3)]{Wunsche:2005}, respectively.
\end{remark}
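Both quoted Jacobi identities are also easy to spot-check numerically (again assuming \texttt{scipy} is available; the value of $\alpha$, the degrees and the evaluation points below are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np
from scipy.special import eval_jacobi

x = np.linspace(-0.9, 0.9, 7)
alpha = 0.4
for k in range(2, 7):
    # id-shift identity
    lhs = eval_jacobi(k, alpha, alpha, x)
    rhs = ((k + 2*alpha + 1) * (k + 2*alpha + 2)
           / ((2*k + 2*alpha + 1) * (2*k + 2*alpha + 2))
           * eval_jacobi(k, alpha + 1, alpha + 1, x)
           - (k + alpha) / (2 * (2*k + 2*alpha + 1))
           * eval_jacobi(k - 2, alpha + 1, alpha + 1, x))
    assert np.allclose(lhs, rhs)

    # diff-shift identity, with the derivative taken by central differences
    h = 1e-6
    dP = (eval_jacobi(k, alpha, alpha, x + h)
          - eval_jacobi(k, alpha, alpha, x - h)) / (2 * h)
    assert np.allclose(dP, (k + 2*alpha + 1) / 2
                       * eval_jacobi(k - 1, alpha + 1, alpha + 1, x),
                       atol=1e-5)
\end{verbatim}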
\section{Proof of the main result and an interpolation corollary}\label{sec:main}
We can now bound a differentiation-projection commutator.
\begin{lemma}\label{lem:diffProjComm-1}
Let $\alpha > -1$, $d \in \natural$ and $l \in \natural$.
Then, there exists $C = C(\alpha,d,l) > 0$ such that for all $u \in \mathrm{H}^l_\alpha$, $n \in \natural_0$ and $j \in \{1, \dotsc, d\}$,
\begin{equation*}
\norm{\partial_j S\sp{\alpha}_n(u) - S\sp{\alpha}_n(\partial_j u)}_\alpha
\leq C (n+1)^{3/2-l} \norm{\partial_j u}_{\mathrm{H}^{l-1}_\alpha}.
\end{equation*}
\begin{proof}
Let us first assume that $u \in \mathrm{C}^\infty(\overline{B^d})$.
Combining part \ref{it:id-shift} of \autoref{pro:id-shift} and part \ref{it:diff-shift} of \autoref{pro:diff-shift}, we obtain
\begin{equation}\label{id-diff}
\partial_j\proj\sp{\alpha}_{k+1}(u) - \proj\sp{\alpha}_k(\partial_j u)
= \proj\sp{\alpha+1}_k \circ \proj\sp{\alpha}_{k+2}(\partial_j u) - \proj\sp{\alpha+1}_{k-2} \circ \proj\sp{\alpha}_k(\partial_j u).
\end{equation}
Using \eqref{spanning-consequence} to express $S\sp{\alpha}_n$ in terms of the $\proj\sp{\alpha}_k$, using \eqref{id-diff}, noticing that a telescoping sum results and using part \ref{it:unnamed} of \autoref{pro:id-shift} to expand an appearance of $\proj\sp{\alpha}_n(\partial_j u) \in \mathcal{V}\sp{\alpha}_n$,
\begin{multline}\label{commutator}
\partial_j S\sp{\alpha}_n(u) - S\sp{\alpha}_n(\partial_j u)
= \sum_{k=0}^n \partial_j \proj\sp{\alpha}_k(u) - \sum_{k=0}^n \proj\sp{\alpha}_k(\partial_j u)\\
= \sum_{k=0}^{n-1} \left( \partial_j \proj\sp{\alpha}_{k+1}(u) - \proj\sp{\alpha}_k(\partial_j u) \right) - \proj\sp{\alpha}_n(\partial_j u)\\
= \proj\sp{\alpha+1}_{n-2} \circ \proj\sp{\alpha}_n(\partial_j u) + \proj\sp{\alpha+1}_{n-1} \circ \proj\sp{\alpha}_{n+1}(\partial_j u) - \proj\sp{\alpha}_n(\partial_j u)\\
= \proj\sp{\alpha+1}_{n-1} \circ \proj\sp{\alpha}_{n+1}(\partial_j u) - \proj\sp{\alpha+1}_n \circ \proj\sp{\alpha}_n(\partial_j u).
\end{multline}
Now, by \autoref{pro:extendedOrthogonality}, the fact that $\norm[n]{\proj\sp{\alpha+1}_{n-1}}_{\mathcal{L}(\mathrm{L}^2_{\alpha+1})} \leq 1$ and the fact that $\norm{\cdot}_{\alpha+1} \leq \norm{\cdot}_\alpha$ in $\mathrm{L}^2_\alpha$ (because $W_{\alpha+1} \leq W_\alpha$) we have that for all $n \geq 1$,
\begin{equation}\label{good-enough-1}
\norm{\proj\sp{\alpha+1}_{n-1} \circ \proj\sp{\alpha}_{n+1}(\partial_j u)}_\alpha^2
\leq \frac{n+d/2+\alpha}{\alpha+1} \norm{\proj\sp{\alpha}_{n+1}(\partial_j u)}_\alpha^2.
\end{equation}
Of course, if $n = 0$, our conventions imply that $\norm{\proj\sp{\alpha+1}_{n-1} \circ \proj\sp{\alpha}_{n+1}(\partial_j u)}_\alpha^2 = 0$.
Analogous arguments show that for all $n \in \natural_0$,
\begin{equation}\label{good-enough-2}
\norm{\proj\sp{\alpha+1}_n \circ \proj\sp{\alpha}_n(\partial_j u)}_\alpha^2
\leq \frac{n+1+d/2+\alpha}{\alpha+1} \norm{\proj\sp{\alpha}_n(\partial_j u)}_\alpha^2.
\end{equation}
Taking the squared $\mathrm{L}^2_\alpha$ norm of both ends of \eqref{commutator}, exploiting the $\mathrm{L}^2_\alpha$ orthogonality of $\mathcal{V}\sp{\alpha+1}_{n-1}$ and $\mathcal{V}\sp{\alpha+1}_n$ (a consequence of the parity relation \eqref{parity}) and the bounds \eqref{good-enough-1} and \eqref{good-enough-2} we observe that
\begin{equation*}
\norm{\partial_j S\sp{\alpha}_n(u) - S\sp{\alpha}_n(\partial_j u)}_\alpha^2
\leq \frac{n+1+d/2+\alpha}{\alpha+1} \norm{\partial_j u - S\sp{\alpha}_{n+2}(\partial_j u)}_\alpha^2.
\end{equation*}
As $\partial_j u \in \mathrm{H}^{l-1}_\alpha$, we can appeal to \autoref{lem:L2-projection-error} to obtain the desired result for $u \in \mathrm{C}^\infty(\overline{B^d})$ after realizing that there exists a constant $\tilde C$ depending only on $\alpha$, $d$ and $l$ such that $\frac{n+1+d/2+\alpha}{\alpha+1} ((n+3)^{-(l-1)})^2 \leq \tilde C (n+1)^{3-2l}$ for all $n \in \natural_0$.
The general result then follows via the density result in \autoref{lem:density}.
\end{proof}
\end{lemma}
\begin{corollary}\label{cor:diffProjComm-r}
Let $\alpha > -1$, $d \in \natural$ and $r, l \in \natural$ with $r \leq l$.
Then, there exists $C = C(\alpha,d,l,r) > 0$ such that for all $u \in \mathrm{H}^l_\alpha$ and $n \in \natural_0$,
\begin{equation*}
\norm{\nabla_r S\sp{\alpha}_n(u) - S\sp{\alpha}_n(\nabla_r u)}_\alpha \leq C (n+1)^{2r-1/2-l} \norm{u}_{\mathrm{H}^l_\alpha}.
\end{equation*}
\begin{proof}
Let us first note that iterating \autoref{lem:Markov} we find that for all $r \in \natural$ there exists $C > 0$ depending on $\alpha$, $d$ and $r$ such that
\begin{equation}\label{iteratedMarkov}
(\forall\,n\in\natural_0)\ (\forall\,p\in\Pi^d_n) \quad \abs{p}_{\mathrm{H}^r_\alpha} \leq C n^{2 r} \norm{p}_\alpha.
\end{equation}
We will now operate by induction on $r$.
Taking the square root of the sum with respect to $j$ of the square of both sides of the inequality in \autoref{lem:diffProjComm-1} the case $r = 1$ follows almost immediately.
Let us suppose now that our desired result holds for some $r \in \{1, \dotsc, l\}$ and that $r+1 \leq l$.
Then, for all $j \in \{1, \dotsc, d\}$, by the triangle inequality,
\begin{equation*}
\norm{\nabla_r \partial_j S\sp{\alpha}_n(u) - S\sp{\alpha}_n(\nabla_r \partial_j u)}_\alpha
\leq \abs{\partial_j S\sp{\alpha}_n(u) - S\sp{\alpha}_n(\partial_j u)}_{\mathrm{H}^r_\alpha} + \norm{\nabla_r S\sp{\alpha}_n(\partial_j u) - S\sp{\alpha}_n(\nabla_r \partial_j u)}_\alpha.
\end{equation*}
By \eqref{iteratedMarkov} and \autoref{lem:diffProjComm-1} the first term is bounded by an appropriate constant times $n^{2r} (n+1)^{3/2-l} \norm{\partial_j u}_{\mathrm{H}^{l-1}_\alpha}$.
By the induction hypothesis and the fact that $\partial_j u \in \mathrm{H}^{l-1}_\alpha$ the second term is bounded by an appropriate constant times $(n+1)^{2r-1/2-(l-1)} \norm{\partial_j u}_{\mathrm{H}^{l-1}_\alpha}$.
Then the desired result in the $r+1$ case follows from summing up with respect to $j$ and standard inequalities connecting vector $1$- and $2$-norms.
\end{proof}
\end{corollary}
We are now in a position to prove our main result, \autoref{thm:lossy}, and the interpolation \autoref{cor:interpolatedLossy}.
As those proofs are almost completely analogous to those of Theorem~3.9 and Corollary~3.10 of \cite{Figueroa:arXiv2015} we only sketch them here.
\begin{proof}[Proof of \autoref{thm:lossy}] For every $k \in \{1, \dotsc, r\}$,
\begin{equation*}
\abs{u - S\sp{\alpha}_N(u)}_{\mathrm{H}^k_\alpha}^2
\leq 2 \norm{\nabla_k u - S\sp{\alpha}_N(\nabla_k u)}_\alpha^2 + 2 \norm{S\sp{\alpha}_N(\nabla_k u) - \nabla_k S\sp{\alpha}_N(u)}_\alpha^2.
\end{equation*}
We bound the first term using \autoref{lem:L2-projection-error} and the second using \autoref{cor:diffProjComm-r} and the desired result follows upon summing up with respect to $k$ and taking the square root.
\end{proof}
Given $m \in \natural_0$ and $\theta \in (0,1)$ we define $\mathrm{H}^{m+\theta}_\alpha$ by complex interpolation \cite[\P7.51--52]{AF:2003}:
\begin{equation}\label{InterpolatedSobolev}
\mathrm{H}^{m+\theta}_\alpha := \left[\mathrm{H}^m_\alpha, \mathrm{H}^{m+1}_\alpha\right]_\theta.
\end{equation}
\begin{corollary}\label{cor:interpolatedLossy}
Let $\alpha > -1$, $d \in \natural$ and $r, l \geq 0$ with $r \leq l$.
Then, there exists $C = C(\alpha,d,l,r) > 0$ such that for all $u \in \mathrm{H}^l_\alpha$ and $n \in \natural_0$,
\begin{equation*}
\norm{u - S\sp{\alpha}_n(u)}_{\mathrm{H}^r_\alpha} \leq C n^{e(l,r)} \norm{u}_{\mathrm{H}^l_\alpha}
\quad\text{where}\quad
e(l,r) = \begin{cases}
3/2 \, r - l & \text{if } 0 \leq r \leq 1,\\
2\, r - 1/2 - l & \text{if } r \geq 1.
\end{cases}
\end{equation*}
\begin{proof}
The desired bound on the operator norm of $T\sp{\alpha}_{n,l,r} \colon \mathrm{H}^l_\alpha \to \mathrm{H}^r_\alpha$ defined by $T\sp{\alpha}_{n,l,r} := I - S\sp{\alpha}_n$ (with $I$ being the identity operator) holds when $r$ and $l$ are integers from \autoref{lem:L2-projection-error} in the $r = 0$ case and \autoref{thm:lossy} in the $r \in \natural$ case.
The non-integer cases then follow by using the exact interpolation and reiteration theorems.
\end{proof}
\end{corollary}
\begin{remark}[Real interpolation]
Just as it was remarked upon in the $d = 2$ case in \cite{Figueroa:arXiv2015}, essentially the same argument used in \autoref{cor:interpolatedLossy} would work if we used real instead of complex interpolation to define the weighted Sobolev spaces with non-integer differentiation parameter in \eqref{InterpolatedSobolev}.
\end{remark}
\begin{remark}[On the optimality of the main result]
There are four parameters in our main result, \autoref{thm:lossy}: The dimension $d \in \natural$, the weight parameter $\alpha \in (-1, \infty)$, the regularity parameter of the function being approximated $l \in \natural$ and the regularity parameter of the norm measuring the residual $r \in \{1, \dotsc, l\}$.
We will say that \autoref{thm:lossy} is optimal if the power on $N$ in \eqref{lossyPreview} cannot be lowered.
We are aware of optimality proofs in the cases $(d,\alpha,l,r) = (1,-1/2,1,1)$ \cite[pp.~76, 78]{CQ:1982}, $(d,\alpha,l,r) = (1,0,1,1)$ \cite[p.~285]{CHQZ-I}, $(d,\alpha,l,r) \in \{2\} \times (-1,\infty) \times \natural \times \{1\}$ \cite[Th.~3.13]{Figueroa:arXiv2015} (the latter can be adapted to $(d,\alpha,l,r) \in \{1\} \times (-1,\infty) \times \natural \times \{1\}$).
All those proofs exploit a number of simple identities satisfied by particular bases of orthogonal polynomials.
Notice also that all those parameter regimes have $r = 1$, arguably the most important $r$ in \autoref{thm:lossy} because of its connection with the analysis of weak forms of second order PDE.
In \cite{Figueroa:arXiv2015} numerical experiments were used to support the conjecture that \autoref{thm:lossy} is also optimal for $(d,\alpha,l,r) \in \{2\} \times (-1,\infty) \times \{(l,r) \in \natural \times \natural \mid r \leq l\}$.
For general $d$ we do not know of bases of $\mathcal{V}\sp{\alpha}_k$ satisfying identities (particularly regarding differentiation) simple enough so as to enable us to completely extend the optimality proofs mentioned above.
Nevertheless, always in the $r = 1$ case, we managed to generalize the techniques used in \cite{Figueroa:arXiv2015} for $(\alpha,l)$ in a certain proper subset of its natural range $(-1,\infty) \times \natural$.
The arguments behind this partial result being rather involved, depending on explicit identities satisfied by Jacobi polynomials and thus out of character with the rest of this work, we decided against including them here.
\end{remark}
\subsection*{Conclusion}
We have proved our desired ``lossy'' (as compared to the unweighted trigonometric case) bound \autoref{thm:lossy} and did so without recourse to special identities satisfied by particular bases of orthogonal polynomials, arguing instead in terms of orthogonal polynomial spaces.
We certainly expect the main sequence of results in \autoref{sec:id-diff} and \autoref{sec:main} to extend to a wider class of reflection-invariant weights.
If we focused on Gegenbauer-type weights it was mostly on account of their importance in applications and the ready availability of \autoref{lem:density}, \autoref{lem:L2-projection-error} and \autoref{lem:Markov}.
\bibliographystyle{abbrvnat}
\small
\section{\sectionfont Introduction}
Each electronically charged elementary particle has a counterpart with the opposite electronic charge which is known as its \textit{antiparticle} (antiparticles are also referred to as \textit{antimatter}) and just like normal particles, antiparticles do combine, forming atoms of antimatter which some call antiatoms -- albeit, unlike atoms, these do not live long. Paul Dirac's brilliant theory proposed in 1928 predicted the existence of antimatter (Dirac $1928a,b$). It [Dirac's Theory] is one of the most successful Theories \textit{of} Physics. This theory suggested that the Laws \textit{of} Nature are exactly the same for matter and antimatter; so given this symmetry, the Universe must contain matter and antimatter in equal proportions everywhere and at every time -- that is, across all of spacetime. Unfortunately (or maybe fortunately -- as will be argued soon), when we look into our immediate vicinity, we see that this is not the case -- our terrestrial habitat seems to be dominated exclusively by matter; so the question \textsl{``Why is our measurable Universe made up chiefly of matter with no significant quantities of antimatter?''} has always been hanging in limbo since Dirac's theory was set forth -- for a good review of the origins and possible solutions to the problem of matter-antimatter asymmetry see e.g. Due \& Kusenko ($2004$).
While we may wonder why the Universe is formed this way, \textit{viz} with a matter-antimatter imbalance, we must be very thankful that the Universe is formed this way, because if it [the Universe] did really have equal proportions of matter and antimatter uniformly distributed throughout all of space and time, you the reader would not be reading this: the Universe would be nothing but a hot bath of radiation, since matter and antimatter would annihilate to form radiation. Despite its great success, Dirac's theory offers no resolution to this puzzle. The search for an answer to this great cosmic mystery -- \textit{why we are so lucky to have a Universe chiefly made-up of matter} -- is the main theme of the present reading and it is important to mention that our adventure of seeking an answer to this great cosmic mystery will take us to other areas of physical enquiry and new discoveries. Though it shall prove difficult, we shall try not to veer too much off the main road but keep as much as we can to what we want to achieve here.
It is worthwhile to mention here that the first and probably current-best attempt at an answer to this question is that by the Russian Physicist, father of the hydrogen bomb and $1975$ Nobel Peace Prize winner, Andrei Sakharov ($1924-1987$). The attempt by Andrei Sakharov ($1967$) is the widely accepted explanation as to why there exists this matter-antimatter asymmetry -- we offer an asymptotically different solution! He [Andrei Sakharov] argued that, to create an imbalance between matter and antimatter from an initial condition of balance, certain conditions must be met, and these conditions have come to be called the Sakharov conditions; $\textrm{CP}$\textit{-violation} is one of the conditions. $\textrm{CP}$\textit{-violation} is a violation of the symmetry where the Laws \textit{of} Nature are expected to act the same when we simultaneously interchange the electronic charge ($\textrm{C}$-\textit{symmetry}, known as charge conjugation symmetry) of a particle and invert the space coordinates ($\textrm{P}$-\textit{symmetry}, known as parity symmetry).
Given the need for $\rm{CP}$\textit{-asymmetric} equations in physics, much to the dismay of the physicist, the Fundamental Equations \textit{of} Physics, in their bare form, do not exhibit $\textrm{CP}$\textit{-violation} (or $\rm{P}$\textit{-violation}) and this -- sadly and against the desiderata -- has to be inserted by hand into the equations. For example, in the Standard Model \textit{of} Particle Physics, the Cabibbo-Kobayashi-Maskawa matrix (Cabibbo $1963$; Kobayashi \& Maskawa $1973$) is employed and a complex phase factor is artificially injected into this matrix to bring about $\textrm{CP}$\textit{-violation} in order to explain the observed $\rm{CP}$\textit{-violation} in the Kaon system. $\textrm{CP}$\textit{-violation} is observed in Kaons -- for a good read on the history of the discovery of this see e.g. Lacoste-Julien ($2003$). $\textrm{CP}$\textit{-violation} has been observed in the B-meson system as well (see e.g. Aubert \textit{et. al.} $2001$). Given Sakharov's thesis, this $\rm{CP}$\textit{-violation} (in the B-meson and Kaon systems) is thought of as holding the key to unlocking the mystery of matter-antimatter asymmetry albeit some researchers (e.g. Rodgers $2001$; Sinha $2009$) feel it [the observed $\textrm{CP}$\textit{-violation} in the B-meson and Kaon systems] is not enough to explain the observed matter-antimatter asymmetry.
In this reading, we make a modification to the Dirac Equation by the addition of a \textit{4-Vector Cosmological Field} and from this, we demonstrate that this modification leads to an equation that clearly points to the fact that a stable Universe can only have one form of matter; either it is filled with matter or antimatter. Further, the emergent Universe from our equations is such that if antimatter is to exist in a Universe of matter, then, it will not be stable thus it has to decay.
The possibility of the existence of a cosmological field has sound justification and it is inferred from cosmological observations such as the apparent accelerated expansion of the Universe and the indication from the rotation curves of galaxies that there must exist a form of unseen matter or energy. This unseen matter or energy is popularly known as Darkmatter/Darkenergy respectively. If our equations are correct, then, this dark matter/energy can be identified with the cosmological field (which is better here referred to as a cosmic fluid) and it is seen that this cosmic fluid is a fluid composed of pure-point particles, that is -- particles with no breadth, height nor length, which are thus permeable and all-pervading. These particles making up the cosmic fluid travel at the speed of light. It is also seen that this cosmic fluid acts as a barrier forbidding positive energy particles from falling into negative energy states. Actually, a new vacuum model is set forth.
I would like to say here, that this reading is written with the full knowledge of the results of our other readings (Nyambuya $2008a,b$) where we have presented, what we believe is a viable curved spacetime version of the Dirac Equation. Our focus here is the original Dirac Equation which is a subset of the proposed Curved Spacetime Dirac Equations (Nyambuya $2008a,b$). This has been done for the sole reason that we would like to show from the well and universally accepted Dirac Equation that matter and antimatter asymmetry can be explained on the basis of an all-pervading and permeating cosmic fluid (cosmological field). Having shown that matter-antimatter asymmetry can be explained on this basis of an all-pervading and permeating cosmic fluid, we hope in the very near future to use this to further modify the equations presented in (Nyambuya $2008a,b$).
\section{\sectionfont The Dirac Equation and its Symmetries}
Without being too presumptuous, let us, for instructive purposes, browse through the thesis leading to the Dirac Equation. For a particle of rest-mass $m_{0}$, momentum $p$ and energy $E$, Albert Einstein, from his $1905$ Special Theory \textit{of} Relativity (STR), derived the basic equation:
\begin{equation}
E^{2}=p^{2}c^{2}+m_{0}^{2}c^{4}.\label{Emc2}
\end{equation}
This equation formed the basis of the Klein-Gordon Theory upon which the Dirac
Theory is founded. Using the already established canonical quantisation procedures, $\vec{\textbf{p}}_{\mu}\longrightarrow i\hbar\partial_{\mu}$, Klein and Gordon proposed the equation:
\begin{equation}
\square\Psi=\left(\frac{m_{0}c}{\hbar}\right)^{2}\Psi,\label{Klein-Gordon 1}
\end{equation}
where $\hbar$ and $c$ are the reduced Planck constant and the speed of light respectively, $\Psi$ is a scalar wavefunction, and $ \square=\partial^{2}/c^{2}\partial t^{2}-\nabla^{2}$. This equation describes a spin-$0$ quantum mechanical scalar particle and allows for negative probabilities which, from a physical standpoint, are meaningless; for this reason, Dirac was not satisfied with the Klein-Gordon Theory. He noted that the Klein-Gordon equation is a second order differential equation and his suspicion was that the origin of the negative probability solutions may have something to do with this very fact.
He sought an equation linear in both the time and spatial derivatives that would upon ``squaring" reproduce the Klein-Gordon equation. The equation he found was:
\begin{equation}
\left[i\hbar\gamma^{\mu}\partial_{\mu}-m_{0}c\right]\psi=0,\label{Dirac}
\end{equation}
where:
\begin{equation}
\begin{array}{c c}
\gamma^{0}=
\left(\begin{array}{c c}
\textbf{I} & \boldsymbol{0}\\
\boldsymbol{0} & -\textbf{I} \\
\end{array}\right)
,\,\,\,\,
\gamma^{i}=
\left(\begin{array}{c c}
\textbf{0} & \boldsymbol{\sigma}^{i}\\
-\boldsymbol{\sigma}^{i} & \textbf{0} \\
\end{array}\right)
\end{array},
\end{equation}
are the $4\times4$ Dirac gamma matrices ($\textbf{I}$ and $\textbf{0}$ are the 2$\times$2 identity and null matrices respectively) and $\psi$ is the four component Dirac wave-function, namely:
\begin{equation}
\psi=\left(\begin{array}{c}
\psi_{0}\\
\psi_{1}\\
\psi_{2}\\
\psi_{3}\end{array}\right)=\left(\begin{array}{c}
\Phi\\
\chi\\
\end{array}\right),\,\rm{where:}\,\begin{array}{c}
\Phi=\left(\begin{array}{c}
\psi_{0}\\
\psi_{1}\\
\end{array}\right)\\
\\
\chi=\left(\begin{array}{c}
\psi_{2}\\
\psi_{3}\end{array}\right)
\end{array} \label{4spinor}.
\end{equation}
Throughout this reading -- unless otherwise specified, the Greek indices will be understood to mean $\mu,\nu, ... = 0,1,2\,\rm{or}\,3$ and the lower case English alphabet $i,j, ... = 1,2\, \rm{or}\, 3$. In (\ref{4spinor}), the first representation of $\psi$ is known as the four component representation and the second in which this spinor is written in-terms of $\Phi$ and $\chi$, is the bi-spinor representation.
The Dirac Equation is perfectly symmetric equation -- \textit{viz}, it obeys the following symmetries:
\begin{enumerate}
\item Obeys C-\textit{symmetry}. This is symmetry under the interchange of the electronic charge of the particle.
\item Obeys $\rm{T}$-\textit{symmetry}. This is symmetry under the interchange of the hand of time in the Dirac Equation, that is $t\longmapsto -t$.
\item Obeys P-\textit{symmetry}. This is symmetry under the interchange of the space coordinates in the Dirac Equation, that is $x^{\mu}\longmapsto -x^{\mu}$.
\item Obeys $\rm{CT}$-\textit{symmetry}. This is symmetry under both $\rm{C}$ and $\rm{T}$.
\item Obeys $\rm{CP}$-\textit{symmetry}. This is symmetric under both $\rm{C}$ and $\rm{P}$.
\item Obeys $\rm{PT}$-\textit{symmetry}. This is symmetry under both $\rm{P}$ and $\rm{T}$.
\item Obeys $\rm{CPT}$-\textit{symmetry}. This is symmetry under all the operations $\rm{C}$, $\rm{P}$ and $\rm{T}$.
\item Obeys Lorentz invariance. This is symmetry under the change of the inertial frame of reference.
\end{enumerate}
We shall not demonstrate these symmetries but direct the reader to Nyambuya ($2008a,b$) or any good book of Quantum Mechanics (QM) that deals with Dirac's Equation.
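For the reader who wishes to verify the matrix algebra underlying these symmetries, the following short numerical sketch -- assuming the \texttt{numpy} library and the representation of the $\gamma$-matrices given above -- checks the Clifford algebra $\{\gamma^{\mu},\gamma^{\nu}\}=2\eta^{\mu\nu}\textbf{I}$ together with the complex-conjugation and anticommutation properties of $\gamma^{2}$ that enter the charge-conjugation argument further below.
\begin{verbatim}
import numpy as np

I2, O2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),      # Pauli matrices
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

g = [np.block([[I2, O2], [O2, -I2]])]                   # gamma^0
g += [np.block([[O2, s], [-s, O2]]) for s in sig]       # gamma^1,2,3
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu} I
for mu in range(4):
    for nu in range(4):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

# facts used in the charge-conjugation argument
assert np.allclose(g[0].conj(), g[0]) and np.allclose(g[1].conj(), g[1])
assert np.allclose(g[2].conj(), -g[2]) and np.allclose(g[3].conj(), g[3])
for mu in (0, 1, 3):
    assert np.allclose(g[2] @ g[mu], -g[mu] @ g[2])
\end{verbatim}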
Now if we are to add a cosmological field, $\pm\Lambda_{0}$ (this constant has the dimensions of inverse length and we shall assume it to be real, $\Lambda_{0}>0$), to the energy ($E$) of a particle -- where the cosmological field is assumed to be an all-pervading and permeating form of energy that fills all of space at all times -- we will have to make the transformation $E\longrightarrow \mathcal{E}\pm\Lambda_{0} \hbar c$ because this energy will add to the existing energy $E$. This modification, $E\longrightarrow \mathcal{E}\pm\Lambda_{0} \hbar c$, leads to equation (\ref{Emc2}) transforming to:
\begin{equation}
\mathcal{E}=\pm\Lambda_{0} \hbar c\pm\sqrt{p^{2}c^{2}+m_{0}^{2}c^{4}},\label{Emc3}
\end{equation}
hence thus the modification we seek to make to the Dirac Equation must, as the Dirac Equation upon ``squaring'', lead us to this equation. For most of the time, we shall consider the case $+\Lambda_{0}$ without considering the case $-\Lambda_{0}$, as considering one case is as good as considering the other; this is because of the symmetric nature of the equations for these two cases.
If, as in the Dirac formulation, a particle of negative energy is the antiparticle, then, according to (\ref{Emc3}), it would mean that antiparticles and particles must -- unlike in the Dirac theory -- have unequal energies hence unequal masses, since $\mathcal{E}_{+}\neq |\mathcal{E}_{-}|$ where $\mathcal{E}_{+}$ and $\mathcal{E}_{-}$ are the positive and negative energy solutions of (\ref{Emc3}) respectively. This is clearly contrary to observations. What does this mean for the theory we wish to set forth? Is it stillborn? How would one explain the observed equality in mass of particles and antiparticles? We shall address this question in the next section.
For the sake of completeness, we shall veer a little on to the side of history. The idea of a cosmological field originally dates back to Einstein, albeit Einstein's cosmological field is a scalar whereas the present is a four vector. After discovering his [Einstein] now famous Law \textit{of} Gravitation (initially with $\Lambda=0$), namely:
\begin{equation}
R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=\kappa T_{\mu\nu}+\Lambda g_{\mu\nu},\label{Einstein's Field eqn}\end{equation}
where: $R_{\mu\nu}$ is the Ricci curvature tensor (the contracted Riemann curvature tensor) and $R$ is its trace, and:
\begin{equation}
T_{\mu\nu}=\varrho v_{\mu}v_{\nu}+pg_{\mu\nu},\label{stress-tensor}
\end{equation}
is the stress and energy tensor where $\varrho$ is the density of matter, $p$ is the pressure and $v_{\mu}$ the four velocity, and $\kappa=8\pi G/c^{4}$ is the Einstein constant of Gravitation with $G$ being Newton's Universal Constant of Gravitation. Einstein added the controversial scalar cosmological field term $\Lambda$ (with SI units of $\textrm{m}^{-2}$). He did this so as to ``stop'' the Universe from expanding (Einstein $1917$) and he was motivated to do so because of the strong influence from the astronomical wisdom of his day that the Universe appeared to be static and thus was assumed to be so. Besides, the cosmological field fulfilled Mach's Principle (Mach $1893$), a principle that had inspired Einstein to search for the GTR, and thus he thought that the GTR would have this naturally embedded in it. Mach's principle forbids the existence of a truly empty space and at the same time supposes that the inertia of an object is due to the induction effect(s) of the totality of all-matter in the Universe.
We introduce here the cosmological field for a reason asymptotically different from that of Einstein, namely that we wish to explain the asymmetry between matter and antimatter and not to ``stop'' the Universe from expanding. As will be seen -- in the presence of an ambient electromagnetic field, our introduction of the cosmological field will induce an asymmetry in the time dimension and this asymmetry naturally leads to a modified Dirac Equation that violates $\rm{C}$-\textit{symmetry} and also the combined charge and time symmetry, i.e $\rm{CT}$-\textit{symmetry}.
\section{\sectionfont Connection Between Electronic Charge \& Rest-mass }
As already pointed out, if, as in the Dirac formulation, the particle with positive energy $\mathcal{E}_{+}$ is considered to be the particle and the one with negative energy $\mathcal{E}_{-}$ is considered to be the antiparticle -- then, according to the present ideas, it would mean that antiparticles and particles must have unequal energies since $\mathcal{E}_{+}\neq|\mathcal{E}_{-}|$, unlike in the case of the Dirac Theory. According to the Einstein mass-energy equivalence ($\mathcal{E}=mc^{2}$), the masses of particles and antiparticles will not be equal either. This is clearly contrary to observations as particles and antiparticles have been observed to have equal masses.
In the reading Nyambuya ($2008b$), an attempt at this question has been made, where the electronic charge ($Q$) of a fundamental particle has been related to its rest-mass, that is $m_{0}\propto Q\Longrightarrow m_{0}c=\epsilon Q$ where $Q$ is the electronic charge of the fundamental particle and $\epsilon$ some numerical constant. This means if the rest-mass of a particle is $+m_{0}$, its antiparticle's rest-mass will be $-m_{0}$, and these will have the same energy solution $\mathcal{E}$ since the particle's energy will be given by $\mathcal{E}=\Lambda_{0} \hbar c+\sqrt{p^{2}c^{2}+(+m_{0})^{2}c^{4}}$ and its antiparticle's energy will be $\mathcal{E}=\Lambda_{0} \hbar c+\sqrt{p^{2}c^{2}+(-m_{0})^{2}c^{4}}$ which are exactly the same hence they will have the same mass.
This conclusion, that $m_{0}c=\epsilon Q$, was reached after consideration of the derived curved spacetime Dirac Equation there-in Nyambuya ($2008b$), where it was required that this derived curved spacetime Dirac Equation, remain invariant under the reversal of the rest-mass, that is $m_{0}\longmapsto-m_{0}$. The Dirac Equation is symmetric under a reversal of the rest-mass of the particle but this symmetry is of little significance in the Dirac Theory. In Nyambuya ($2008b$), we found out that the transformation $m_{0}\longmapsto-m_{0}$ entails the reversal of the particle's electromagnetic field and from this we argued that the rest-mass must have an intimate connection with the electronic charge of the particle. We thus direct the reader to this reading Nyambuya ($2008b$) for the full argument on this.
Taking as given that, $m_{0}c=\epsilon Q$, it means we can write equation (\ref{Emc3}) as:
\begin{equation}
\mathcal{E}=\pm\Lambda_{0} \hbar c\pm\sqrt{p^{2}c^{2}+\epsilon^{2}Q^{2}c^{2}}.\label{Emc4}
\end{equation}
It is clear from this equation that under this relationship -- $m_{0}c=\epsilon Q$, a particle and its antiparticle will have the same energy ($\mathcal{E}$) and hence the same mass ($m=\mathcal{E}/c^{2}$).
One may want to argue that if the electronic charge of a fundamental particle is related to its rest-mass in the manner suggested here, then, the magnitude of the Electron and the Proton's electronic charge when at ``rest'' must be equal. But we know from the uncertainty principle of QM ($\Delta p\cdot\Delta x\sim \hbar$) that for a particle of finite dimensions ($\Delta x$) like the Electron and Proton, the concept of rest is without meaning since bringing these particles into a ``true state of rest'' ($\Delta p=0$) would mean the dimensions these particles have would be infinite ($\Delta x=\hbar/\Delta p=\hbar/0=\infty$). In this way, QM informs us that a particle of finite dimensions can not be at rest, hence thus the thinking that an Electron and a Proton can be brought to rest is null and void. It may appear to us to be at rest -- that is, the Electron and Proton may appear to us to be stationed at the same position, but if we had a way to magnify this to a magnification of our liking, then, according to QM, this particle will be seen to be darting up and down randomly, meaning it has momentum, hence thus the whole thinking of an Electron and/or Proton at rest is obsolete.
Neglecting the negative energy solution, equation (\ref{Emc2}) may be written $E=|m_{0}|c^{2}(1+p^{2}/m_{0}^{2}c^{2})^{1/2}$ and in the case where the momentum of the particle is small (as in the above case of the Electron and the Proton), that is $p^{2}\lll m_{0}^{2}c^{2}$, to first order approximation this reduces to $E\simeq p^{2}/2|m_{0}|+|m_{0}|c^{2}$; thus for the Electron and Proton, we will have $\Delta m=(E_{p}-E_{e})/c^{2}=(p^{2}_{p}-p^{2}_{e})/2|m_{0}|c^{2}$ where $E_{p}, E_{e},p_{p}$ and $p_{e}$ are the Proton's and Electron's energies and momenta respectively. Hence thus, the difference in the Electron and the Proton's mass is a measure of the difference in the square of their momentum.
\section{\sectionfont Partial Cosmological Dirac Equation}
There exist two avenues by which to arrive at equation (\ref{Emc4}). We prefer the form (\ref{Emc4}) to (\ref{Emc3}) because (\ref{Emc4}) makes it clear where the electronic charge ($Q$) of the particle fits in explicitly, hence it will be easy to investigate its symmetries under the interchange of the particle's electronic charge. Thus from here-on, we shall understand that: $m_{0}c=\epsilon Q$, hence thus a reversal of the particle's electronic charge is a reversal of the particle's rest-mass.
As will be seen, we shall have to investigate the symmetries of the equations that we shall derive here and in the cases where charge reversal symmetry is concerned, we shall have to deal with the electromagnetic properties of the particle by reversing these properties. In Nyambuya (2008a), we derived three Dirac Equations for curved spacetime and amongst these is the equation:
\begin{equation}
\left[i\hbar\Gamma^{\mu}\partial_{\mu}-m_{0}c\right]\psi=0,\label{Cdirac}
\end{equation}
where: $\Gamma^{\mu}=\gamma^{\mu}A^{\mu}$ and $A^{\mu}$ is the electromagnetic field of the particle (see Nyambuya $2007, 2008a,b$). So the question is: ``Given that $m_{0}c=\epsilon Q$ and that $A^{\mu}$ is the electromagnetic field of the particle, and that the Dirac Equation is a special case of (\ref{Cdirac}) where $|A^{\mu}|=1$; what then, is the complete package of transformation for the Dirac Equation that comes along with the reversal of the particle's electromagnetic properties?'' A reversal of the particle's electromagnetic properties means we have to reverse the field $A^{\mu}$ and the particle's electronic charge $Q$, i.e. $A^{\mu}\longmapsto -A^{\mu}$ and $Q\longmapsto -Q$. The transformation: $A^{\mu}\longmapsto -A^{\mu}\Longrightarrow \Gamma^{\mu}\longmapsto -\Gamma^{\mu}$. In flat spacetime as in the case of the Dirac Equation, $|A^{\mu}|=1$ $\Longrightarrow$ $\Gamma^{\mu}=\gamma^{\mu}$. What all this means for the Dirac Equation is that, in the event that we reverse the electromagnetic properties of the particle, the $\gamma$-matrices will transform: $\gamma^{\mu}\longmapsto -\gamma^{\mu}$, that is:
\begin{equation}
Q\longmapsto-Q\Longrightarrow \gamma^{\mu}\longmapsto -\gamma^{\mu},\label{ctrans}
\end{equation}
is the complete package of transformation for the Dirac Equation that comes along with the reversal of the particle's electromagnetic properties.
Now, we proceed to derive the sought-for equation in which the Dirac Equation is endowed with a cosmological field in the time dimension. We shall consider the two avenues by which to arrive at equation (\ref{Emc4}) separately and, in the end, put forward a reason for rejecting one over the other.
\subsection{\subsectionfont Case I}
Given the Dirac Equation, the transformation:
\begin{equation}
\frac{\partial}{\partial t}\longrightarrow \frac{\partial}{\partial t} \pm i\Lambda_{0} c,
\end{equation}
leads us to the modified Dirac Equation, namely:
\begin{equation}
i\hbar\gamma^{\mu}\partial_{\mu}\psi\pm\Lambda_{0} \hbar \gamma^{0}\psi=\epsilon Q\psi.\label{cdirac}
\end{equation}
Let us call this the Partial Cosmological Dirac Equation. The reason for the name is that this equation, unlike the original Dirac Equation, contains a term ($\Lambda_{0}$) of cosmological significance, and it is ``partial'' because the cosmological field is confined to the time dimension ($x^{0}$) and not the space dimensions ($x^{1}, x^{2}, x^{3}$) -- that is, it does not cover all the $4$-dimensions of spacetime ($x^{0}, x^{1}, x^{2}, x^{3}$).
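For the reader's convenience, the way the shift generates the extra term may be seen in one line; writing $x^{0}=ct$ (so that $\partial_{0}=\partial/c\partial t$), each sign of the shift produces one of the two signs of the $\Lambda_{0}$-term:
\begin{equation}
i\hbar\gamma^{0}\frac{1}{c}\left(\frac{\partial}{\partial t}\pm i\Lambda_{0}c\right)\psi+i\hbar\gamma^{k}\partial_{k}\psi=\epsilon Q\psi
\;\Longrightarrow\;
i\hbar\gamma^{\mu}\partial_{\mu}\psi\mp\Lambda_{0}\hbar\gamma^{0}\psi=\epsilon Q\psi.
\end{equation}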
Unlike the original Dirac Equation, this equation possesses no perfect symmetry but:
\begin{enumerate}
\item Violates $\rm{C}$-\textit{symmetry}.
\item Obeys $\rm{T}$-\textit{symmetry}.
\item Obeys $\rm{P}$-\textit{symmetry}.
\item Violates $\rm{CT}$-\textit{symmetry}.
\item Violates $\rm{CP}$-\textit{symmetry}.
\item Obeys $\rm{PT}$-\textit{symmetry}.
\item Violates $\rm{CPT}$-\textit{symmetry}.
\item Obeys Lorentz invariance.
\end{enumerate}
We shall demonstrate these symmetries for equation (\ref{cdirac}) and, as a word of caution, we hope the reader does not get distracted from the main theme of this reading, namely that we would like to show that the inclusion of a $4$-vector cosmological field does, in principle, explain the existing asymmetry between matter and antimatter. Actually, this $4$-vector cosmological field explains more than this, as will be seen. In this exercise, we shall consider the case $+\Lambda_{0}$; the case $-\Lambda_{0}$ follows in exactly the same manner.
\subsubsection{\subsubsectionfont C-Symmetry\label{c}}
To show invariance under charge conjugation (or lack thereof), we proceed as usual -- by bringing the particle under the influence of an external electromagnetic field $A_{\mu}^{ex}$ (a real function; this is the usual four-vector electromagnetic potential), which leads to the transformation: $\partial_{\mu} \longrightarrow \textrm{D}_{\mu}=\partial_{\mu}+iA_{\mu}^{ex}$, hence equation (\ref{cdirac}) will now be given by:
\begin{equation}
i\hbar\gamma^{\mu}\rm{D}_{\mu}\psi+\Lambda_{0} \hbar \gamma^{0}\psi=\epsilon Q\psi.\label{cinv1}
\end{equation}
Now, if the equation is invariant under charge conjugation, then equation (\ref{cinv1}) must, under a reversal of the external electromagnetic field ($A_{\mu}^{ex}\longmapsto-A_{\mu}^{ex}\Longrightarrow\rm{D}_{\mu}\longmapsto\rm{D}^{*}_{\mu}$, where the asterisk represents, as usual, the complex conjugate) and that of the particle ($Q\longmapsto-Q$ \& $\gamma^{\mu}\longmapsto-\gamma^{\mu}$ -- remember \ref{ctrans}), revert to an equation of the same form as (\ref{cinv1}), and this after a set of transformations of the spinor field $\psi$. The reversal of the ambient electromagnetic field and that of the particle leads equation (\ref{cinv1}) to be given by: $-i\hbar\gamma^{\mu}\rm{D}_{\mu}^{*}\psi-\Lambda_{0} \hbar \gamma^{0}\psi=-\epsilon Q\psi$. Now, to revert to the original form, we begin by taking the complex conjugate on both sides of this equation, that is: $+i\hbar\gamma^{\mu*}\rm{D}_{\mu}\psi^{*}-\Lambda_{0} \hbar \gamma^{0*}\psi^{*}=-\epsilon Q\psi^{*}$. Further, we multiply both sides of this equation by $\gamma^{2}$, and we are led to: $+i\hbar\gamma^{2}\gamma^{\mu*}\rm{D}_{\mu}\psi^{*}-\Lambda_{0} \hbar \gamma^{2}\gamma^{0*}\psi^{*}=-\epsilon Q\gamma^{2}\psi^{*}$, and using the fact that: $\gamma^{2}\gamma^{\mu}=-\gamma^{\mu}\gamma^{2}$ for $\,\mu\neq2$, and that: $\gamma^{1*}=\gamma^{1}$, $\gamma^{2*}=-\gamma^{2}$ and: $\gamma^{3*}=\gamma^{3}$, we are led to: $-i\hbar\gamma^{\mu}\rm{D}_{\mu}\psi_{c}+\Lambda_{0} \hbar\gamma^{0}\psi_{c}=-\epsilon Q\psi_{c}$, and multiplying this throughout by $-1$, we will have:
\begin{equation}
i\hbar\gamma^{\mu}\rm{D}_{\mu}\psi_{c}-\Lambda_{0} \hbar\gamma^{0}\psi_{c}=\epsilon Q\psi_{c},\label{cinv2}
\end{equation}
where: $\psi_{c}=\gamma^{2}\psi^{*}$. To revert to the form of equation (\ref{cinv1}), the sign of the second term on the left-hand side must be positive, and for this to be so, one would need a $4\times4$ matrix $M$ such that: $M\gamma^{0}=-\gamma^{0}M$ and: $M\gamma^{\mu}=\gamma^{\mu}M$, and the only matrix satisfying these conditions is the null matrix. Clearly, multiplying by a null matrix is meaningless, hence this equation \textbf{\underline{violates $\rm{C}$-\textit{symmetry}}}.
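As an illustrative aside (not part of the argument), the $\gamma$-matrix identities invoked above, and the statement that only the null matrix satisfies $M\gamma^{0}=-\gamma^{0}M$ together with $M\gamma^{\mu}=\gamma^{\mu}M$ (read here as holding for all $\mu=0,1,2,3$), can be checked numerically. The Python/numpy sketch below uses the standard Dirac representation of the $\gamma$-matrices and is merely a verification aid.
\begin{verbatim}
import numpy as np

# Standard (Dirac) representation of the gamma matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
O, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
g0 = np.block([[I2, O], [O, -I2]])
g1, g2, g3 = [np.block([[O, s], [-s, O]]) for s in (sx, sy, sz)]
gammas = [g0, g1, g2, g3]

# Identities quoted in the text.
assert np.allclose(g1.conj(), g1)            # gamma^{1*} = +gamma^1
assert np.allclose(g2.conj(), -g2)           # gamma^{2*} = -gamma^2
assert np.allclose(g3.conj(), g3)            # gamma^{3*} = +gamma^3
for mu, g in enumerate(gammas):
    if mu != 2:                              # gamma^2 gamma^mu = -gamma^mu gamma^2
        assert np.allclose(g2 @ g, -g @ g2)

# Conditions on M, written as one linear system in the 16 entries of M.
cols = []
for n in range(16):
    E = np.zeros((4, 4), dtype=complex)
    E[divmod(n, 4)] = 1.0
    constraints = [E @ g0 + g0 @ E] + [E @ g - g @ E for g in gammas]
    cols.append(np.concatenate([c.ravel() for c in constraints]))
A = np.array(cols).T
print("dimension of solution space for M:", 16 - np.linalg.matrix_rank(A))  # 0
\end{verbatim}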
\subsubsection{\subsubsectionfont T-Symmetry\label{t}}
To show invariance (or lack thereof) under time reversal, we proceed as usual -- by making the transformation: $t\longmapsto-t$ ($\Longrightarrow \partial_{0}\longmapsto-\partial_{0}$) in (\ref{cdirac}), resulting in this equation reducing to: $-i\hbar\gamma^{0}\partial_{0}\psi+i\hbar\gamma^{k}\partial_{k}\psi+\Lambda_{0} \hbar \gamma^{0}\psi=\epsilon Q\psi$ (NB, $k=1,2,3$). Taking the complex conjugate and then multiplying this equation by $\gamma^{1}\gamma^{3}$ (the usual time-reversal matrix in this representation), and using the facts that $\gamma^{1}\gamma^{3}$ commutes with $\gamma^{0}$ and $\gamma^{2}$ while anticommuting with $\gamma^{1}$ and $\gamma^{3}$, together with: $\gamma^{1*}=\gamma^{1}$, $\gamma^{2*}=-\gamma^{2}$ and $\gamma^{3*}=\gamma^{3}$, one is led back to the original equation (\ref{cdirac}): $i\hbar\gamma^{\mu}\partial_{\mu}\psi_{c}+\Lambda_{0} \hbar \gamma^{0}\psi_{c}=\epsilon Q\psi_{c}$ where: $\psi_{c}=\gamma^{1}\gamma^{3}\psi^{*}$, hence equation (\ref{cdirac}) \textbf{\underline{is $\rm{T}$\textit{-symmetric}}}.
\subsubsection{\subsubsectionfont P-Symmetry\label{p}}
To show invariance under space reversal (or lack thereof), we proceed as usual -- by making the transformation: $x^{k}\longmapsto-x^{k}$ ($\Longrightarrow \partial_{k}\longmapsto-\partial_{k}$) in (\ref{cdirac}), resulting in this equation reducing to: $i\hbar\gamma^{0}\partial_{0}\psi-i\hbar\gamma^{k}\partial_{k}\psi+\Lambda_{0} \hbar \gamma^{0}\psi=\epsilon Q\psi$. Now, to revert to the original equation, we simply multiply both sides of this equation by $\gamma^{0}$ and then use the fact that: $\gamma^{0}\gamma^{k}=-\gamma^{k}\gamma^{0}$; one is led to: $i\hbar\gamma^{0}\partial_{0}\psi_{c}+i\hbar\gamma^{k}\partial_{k}\psi_{c}+\Lambda_{0} \hbar \gamma^{0}\psi_{c}=\epsilon Q\psi_{c}$ where: $\psi_{c}=\gamma^{0}\psi$. This equation is the same as (\ref{cdirac}), hence equation (\ref{cdirac}) \textbf{\underline{is $\rm{P}$\textit{-symmetric}}}.
\subsubsection{\subsubsectionfont CT-Symmetry\label{ct}}
To show invariance under charge conjugation and time reversal (or lack thereof), we proceed by making the transformations: $\partial_{\mu} \longrightarrow \textrm{D}_{\mu}$ (for the introduction of the external electromagnetic field), $Q\longmapsto-Q$ \& $\gamma^{\mu}\longmapsto-\gamma^{\mu}$ (for the reversal of the particle's electromagnetic properties -- remember \ref{ctrans}), and $t\longmapsto-t$ ($\Longrightarrow \partial_{0}\longmapsto-\partial_{0}$) (for the reversal of time) in (\ref{cdirac}), and this results in this equation reducing to: $-i\hbar\gamma^{0}\partial_{0}\psi-\hbar\gamma^{0}A_{0}^{ex}\psi+i\hbar\gamma^{k}\rm{D}_{k}\psi+\Lambda_{0} \hbar \gamma^{0}\psi=-\epsilon Q\psi$. Now, reversing the external electromagnetic field and taking the complex conjugate on both sides, we are led to: $i\hbar\gamma^{0*}\partial_{0}\psi^{*}+\hbar\gamma^{0*}A_{0}^{ex}\psi-i\hbar\gamma^{k*}\rm{D}_{k}\psi^{*}+\Lambda_{0} \hbar \gamma^{0*}\psi^{*}=-\epsilon Q\psi^{*}$. Multiplying both sides of this equation by $\gamma^{2}$, and then using the fact that: $\gamma^{2}\gamma^{\mu}=-\gamma^{\mu}\gamma^{2}$ for $\,\mu\neq2$ and that: $\gamma^{1*}=\gamma^{1}$, $\gamma^{2*}=-\gamma^{2}$ and $\gamma^{3*}=\gamma^{3}$, we obtain: $-i\hbar\gamma^{0}\partial_{0}\psi_{c}-\hbar\gamma^{0}A_{0}^{ex}\psi_{c}+i\hbar\gamma^{k}\rm{D}_{k}\psi_{c}-\Lambda_{0} \hbar \gamma^{0}\psi_{c}=-\epsilon Q\psi_{c}$, and now multiplying this by $\gamma^{0}$, and using the fact that: $\gamma^{0}\gamma^{k}=-\gamma^{k}\gamma^{0}$, we will have: $-i\hbar\gamma^{0}\partial_{0}\psi_{c}-\hbar\gamma^{0}A_{0}^{ex}\psi_{c}-i\hbar\gamma^{k}\rm{D}_{k}\psi_{c}-\Lambda_{0} \hbar \gamma^{0}\psi_{c}=-\epsilon Q\psi_{c}$, where: $\psi_{c}=\gamma^{0}\gamma^{2}\psi^{*}$. Multiplying this throughout by $-1$, one is led to: $i\hbar\gamma^{0}\partial_{0}\psi_{c}+\hbar\gamma^{0}A_{0}^{ex}\psi_{c}+i\hbar\gamma^{k}\rm{D}_{k}\psi_{c}+\Lambda_{0} \hbar \gamma^{0}\psi_{c}=\epsilon Q\psi_{c}$. Now, for this equation to be of the same form as equation (\ref{cinv1}), we would need the second term on the left-hand side to change its sign while all other terms retain theirs -- there is no operation in existence that can do this, hence equation (\ref{cdirac}) \textbf{\underline{violates $\rm{CT}$-\textit{symmetry}}}.
\subsubsection{\subsubsectionfont CP-Symmetry\label{cp}}
To show invariance under charge conjugation and space reversal (or lack thereof), we proceed by making the transformations: $\partial_{\mu} \longrightarrow \textrm{D}_{\mu}$ (for the introduction of the electromagnetic field), $Q\longmapsto-Q$ \& $\gamma^{\mu}\longmapsto-\gamma^{\mu}$ (for the reversal of the electromagnetic properties of the particle -- remember \ref{ctrans}) and $x^{k}\longmapsto-x^{k}$ ($\Longrightarrow\partial_{k}\longmapsto-\partial_{k}$) (for the reversal of the space coordinates) in (\ref{cdirac}); one is led to: $-i\hbar\gamma^{0}\partial_{0}\psi+\hbar\gamma^{0}A_{0}\psi-i\hbar\gamma^{k}\rm{D}_{k}^{*}\psi-\Lambda_{0} \hbar \gamma^{0}\psi=-\epsilon Q\psi$. Now, reversing the external electromagnetic field and taking the complex conjugate on both sides, we are led to: $+i\hbar\gamma^{0}\partial_{0}\psi-\hbar\gamma^{0}A_{0}\psi+i\hbar\gamma^{k*}\rm{D}_{k}\psi^{*}-\Lambda_{0} \hbar \gamma^{0*}\psi^{*}=-\epsilon Q\psi^{*}$. Multiplying both sides of this equation by $\gamma^{2}$, and then using the facts that: $\gamma^{2}\gamma^{\mu}=-\gamma^{\mu}\gamma^{2}$ for $\,\mu\neq2$, that: $\gamma^{1*}=\gamma^{1}$, $\gamma^{2*}=-\gamma^{2}$ and $\gamma^{3*}=\gamma^{3}$, and that: $\gamma^{0}\gamma^{k}=-\gamma^{k}\gamma^{0}$, we obtain: $+i\hbar\gamma^{0}\partial_{0}\psi_{c}-\hbar\gamma^{0}A_{0}\psi_{c}+i\hbar\gamma^{k}\rm{D}_{k}\psi_{c}-\Lambda_{0} \hbar \gamma^{0}\psi_{c}=\epsilon Q\psi_{c}$ where $\psi_{c}=-\gamma^{2}\psi^{*}$. Now, for this equation to be of the same form as equation (\ref{cinv1}), we would need the last term on the left-hand side to change its sign while all other terms retain theirs -- there is no operation in existence that can do this, hence equation (\ref{cdirac}) \textbf{\underline{violates $\rm{CP}$-\textit{symmetry}}}.
\subsubsection{\subsubsectionfont PT-Symmetry\label{pt}}
To show invariance under a combined space and time reversal (or lack thereof), we proceed by making the transformation: $x^{\mu}\longmapsto-x^{\mu}$ ($\Longrightarrow\partial_{\mu}\longmapsto-\partial_{\mu}$, and this is for the reversal of the spacetime coordinates) in (\ref{cdirac}); we are led to: $-i\hbar\gamma^{\mu}\partial_{\mu}\psi+\Lambda_{0} \hbar \gamma^{0}\psi=\epsilon Q\psi$. To revert to the original equation (\ref{cdirac}), we take the complex conjugate on both sides and then multiply throughout by $\gamma^{0}\gamma^{1}\gamma^{3}$, and we are led to: $i\hbar\gamma^{\mu}\partial_{\mu}\psi_{c}+\Lambda_{0} \hbar \gamma^{0}\psi_{c}=\epsilon Q\psi_{c}$, where $\psi_{c}=\gamma^{0}\gamma^{1}\gamma^{3}\psi^{*}$, hence equation (\ref{cdirac}) \textbf{\underline{is $\rm{PT}$-\textit{symmetric}}}.
\subsubsection{\subsubsectionfont CPT-Symmetry\label{cpt}}
To show invariance under a combined charge, space and time reversal (or lack thereof), we proceed by making the transformations: $\partial_{\mu} \longrightarrow \textrm{D}_{\mu}$ (for the introduction of the external electromagnetic field), $Q\longmapsto-Q$ \& $\gamma^{\mu}\longmapsto-\gamma^{\mu}$ (for the reversal of the electromagnetic properties of the particle -- remember \ref{ctrans}) and $x^{\mu}\longmapsto-x^{\mu}$ ($\Longrightarrow\partial_{\mu}~\longrightarrow~-\partial_{\mu}$, for the reversal of the space and time coordinates) in (\ref{cdirac}), and this leads us to: $+i\hbar\gamma^{\mu}\partial_{\mu}\psi-i\hbar\gamma^{\mu}A_{\mu}^{ex}\psi-\Lambda_{0} \hbar \gamma^{0}\psi=-\epsilon Q\psi$. Now, reversing the external electromagnetic field, we are led to: $i\hbar\gamma^{\mu}\rm{D}_{\mu}\psi-\Lambda_{0} \hbar\gamma^{0}\psi=-\epsilon Q\psi$. Just as in the case of the calculation of $\rm{C}$-\textit{symmetry} in \S (\ref{c}), to revert to the original equation (\ref{cdirac}), the sign of the first term on the right-hand side must be negative, so that the new equation would read: $i\hbar\gamma^{\mu}\rm{D}_{\mu}\psi_{c}+\Lambda_{0} \hbar\gamma^{0}\psi_{c}=\epsilon Q\psi_{c}$ where $\psi_{c}=-\psi$; and for this to be so, one would need a $4\times4$ matrix $M$ such that: $M\gamma^{0}=\gamma^{0}M$ \& $M\gamma^{\mu}=-\gamma^{\mu}M$, and the only matrix satisfying these conditions is the null matrix, hence this equation \textbf{\underline{violates $\rm{CPT}$-\textit{symmetry}}}.
\subsubsection{\subsubsectionfont Lorentz Invariance\label{l}}
To prove Lorentz invariance for (\ref{cdirac}) (which we shall write as: $[i\hbar\gamma^{\mu}\partial_{\mu}-\Lambda_{0} \hbar \gamma^{0}-\epsilon Q]\psi=0$), two conditions must be satisfied:
1. Given any two inertial observers $\textrm{O}$ and $\textrm{O}^\prime$ anywhere in spacetime, if in the frame $\textrm{O}$ we have $[i\hbar\gamma^{\mu}\partial_{\mu}-\Lambda_{0} \hbar \gamma^{0}-\epsilon Q]\psi(x)=0$, then: $[i\hbar\gamma^{\mu\prime}\partial_{\mu}^{\prime}-\Lambda_{0}^{\prime} \hbar \gamma^{0\prime}-\epsilon Q]\psi^\prime(x^\prime)=0$, is the equation describing the same state but in the frame $\textrm{O}^\prime$.
2. Given that $\psi(x)$ is the wavefunction as measured by observer $\textrm{O}$, there must be a prescription for observer $\textrm{O}^\prime$ to compute $\psi^\prime(x^\prime)$ from $\psi(x)$ and this describes to $\textrm{O}^\prime$ the same physical state as that measured by $\textrm{O}$.
Now, since the Lorentz transformations are linear, it is to be expected that the transformations between $\psi(x)$ and $\psi^\prime(x^\prime)$ be linear too, that is:
\begin{equation}
\psi^\prime(x^\prime) = \psi^\prime(\Gamma x) = S(\Gamma) \psi(x) = S(\Gamma) \psi(\Gamma^{-1}x^\prime)\label{inverse1}
\end{equation}
\\
where $S(\Gamma)$ is a $4\times 4$ matrix which depends only on the relative velocities of $\textrm{O}$ and $\textrm{O}^\prime$, and $\Gamma$ is the Lorentz transformation matrix. $S(\Gamma)$ must have an inverse, since one can pass from $\textrm{O}$ to $\textrm{O}^\prime$ and equally from $\textrm{O}^\prime$ back to $\textrm{O}$. The inverse is:
\begin{equation}
\psi(x) = S^{-1}(\Gamma)\psi^\prime(x^\prime) = S^{-1}(\Gamma)\psi^\prime(\Gamma x) \label{inverse2}
\end{equation}
or we could write:
\begin{equation}
\psi(x)=S(\Gamma^{-1})\psi^\prime(\Gamma x)\Longrightarrow S(\Gamma^{-1}) = S^{-1}(\Gamma)
\end{equation}
We can now write: $[i\hbar\gamma^{\mu}\partial_{\mu}-\Lambda_{0} \hbar \gamma^{0}-\epsilon Q]\psi(x)=0$,
as: $[i\hbar\gamma^{\mu}\partial_{\mu}-\Lambda_{0} \hbar \gamma^{0}-\epsilon Q]S^{-1}(\Gamma)\psi^\prime(x^\prime)=0$, and multiplying this from the left by $S(\Gamma)$, we have: $S(\Gamma)[i\hbar\gamma^{\mu}\partial_{\mu}-\Lambda_{0} \hbar \gamma^{0}-\epsilon Q] S^{-1}(\Gamma)\psi^\prime(x^{\prime})=0$,
and hence:
\begin{equation}
\left[i\hbar S(\Gamma)\gamma^{\mu} S^{-1}(\Gamma)\partial_\mu -\Lambda_{0} \hbar S(\Gamma)\gamma^{0} S^{-1}(\Gamma)-\epsilon Q\right]\psi^\prime(x^\prime)=0.\label{lorenz}
\end{equation}
Now, the reader must take note of the fact that the cosmological field is a vector quantity and transforms as:
\begin{equation}
\Lambda_{0}=\left(\frac{\partial x^{0\prime}}{\partial x^{0}}\right)\Lambda_{0\prime}. \label{l-trans}
\end{equation}
The reason for this is that it is a property of time and thus it will have to transform in the same manner as time does. For better clarity: $(\partial_{0}+\Lambda_{0})=(\partial x^{0\prime}/\partial x^{0})(\partial_{0}+\Lambda_{0})^{\prime}$, and from this equation (\ref{l-trans}) smoothly flows. Given this, and also that: $\partial_{\mu}=(\partial x^{\mu\prime}/\partial x^{\mu})\partial_{\mu\prime}$, and putting all this into (\ref{lorenz}), we are led to:
\begin{widetext}
\begin{equation}
\left[i\hbar \left(\frac{\partial x^{\mu\prime}}{\partial x^{\mu}}\right) S(\Gamma)\gamma^{\mu} S^{-1}(\Gamma)\partial_\mu^\prime -\Lambda_{0}^\prime\hbar\left(\frac{\partial x^{0\prime}}{\partial x^{0}}\right) S(\Gamma)\gamma^{0} S^{-1}(\Gamma)-\epsilon Q\right]\psi^\prime(x^\prime)=0.
\label{lorenz2}
\end{equation}
\end{widetext}
Now, setting:
\begin{equation}
\gamma^{\mu\prime}=\left(\frac{\partial x^{\mu\prime}}{\partial x^{\mu}}\right)S(\Gamma)\gamma^{\mu} S^{-1}(\Gamma),
\end{equation}
equation (\ref{lorenz2}) can now be written as:
\begin{equation}
\left[i\hbar \gamma^{\mu\prime}\partial_\mu^\prime -\Lambda_{0}^\prime\hbar\gamma^{0\prime}-\epsilon Q\right]\psi^\prime(x^\prime)=0,
\end{equation}
hence equation (\ref{cdirac}) \textbf{\underline{is Lorentz invariant}}, thus satisfying one of the necessary requirements for it to be physically meaningful. Now -- the question is: does this equation have all it takes to have correspondence with physical reality? To answer this question, we shall inspect its Hamiltonian.
\subsubsection{\subsubsectionfont Hamiltonian}
Now that we have investigated the symmetries of equation (\ref{cdirac}), the question is: \textit{``Does this equation qualify -- in principle, to describe physical phenomena?''} The answer is yes: first and foremost, despite its violation of $\rm{CPT}$-\textit{symmetry}, it is Lorentz invariant; and second, its Hamiltonian, namely:
\begin{equation}
\mathcal{H}=-iI\hbar\frac{\partial }{c\partial t}=i\hbar\gamma^{0}\gamma^{k}\partial_{k}\pm \Lambda_{0}\hbar I - \epsilon Q\gamma^{0},
\end{equation}
(where $I$ is, here and hereafter, the $4\times4$ identity matrix) is hermitian (one can easily verify this for oneself; hermiticity means: $\mathcal{H}^{\dagger}=\mathcal{H}$), which means its energy eigenvalues are real. One could argue that this equation should be rejected on the grounds that it violates a cornerstone theorem of Quantum Field Theory (QFT), namely the Schwinger-L\"uder-Pauli theorem, also known as the $\rm{CPT}$ theorem (see e.g. Greaves $2007$ for a good exposition). The L\"uder-Pauli theorem states that any Lorentz invariant local QFT with a Hermitian Hamiltonian must obey $\rm{CPT}$-\textit{symmetry}. This theorem is derived for a symmetric spacetime and not a non-symmetric spacetime such as the present one, hence it does not apply here. We shall address this issue fully in \S (\ref{cptv_s}).
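As a small numerical aside (not part of the argument), the hermiticity of this Hamiltonian can be illustrated for a plane wave, for which $\partial_{k}\longmapsto ik_{k}$. The sketch below (Python/numpy) uses units in which $\hbar=1$ and arbitrarily chosen sample values for $\Lambda_{0}$, $\epsilon Q$ and the wave-vector; the conclusion is independent of these choices and of the sign of the $\Lambda_{0}$-term.
\begin{verbatim}
import numpy as np

# Dirac-representation gamma matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
O, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
g0 = np.block([[I2, O], [O, -I2]])
gk = [np.block([[O, s], [-s, O]]) for s in (sx, sy, sz)]
I4 = np.eye(4, dtype=complex)

hbar, L0, eQ = 1.0, 0.5, 0.7          # sample values (hbar = 1 units)
k = np.array([0.3, -1.1, 2.0])        # sample wave-vector: d_k -> i*k_k

# H = i*hbar*g0*g^k*(i*k_k) + L0*hbar*I - eQ*g0   (upper sign of the text)
H = sum(1j * hbar * (g0 @ gk[j]) * (1j * k[j]) for j in range(3)) \
    + L0 * hbar * I4 - eQ * g0
print(np.allclose(H, H.conj().T))                     # True: Hermitian
print(np.allclose(np.linalg.eigvals(H).imag, 0.0))    # True: real eigenvalues
\end{verbatim}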
\subsection{\subsectionfont Case II}
Proceeding to the second case, we make the following transformation:
\begin{equation}
\frac{\partial}{\partial t}\longrightarrow \frac{\partial}{\partial t} \pm\Lambda_{0} c,
\end{equation}
and this leads us to:
\begin{equation}
i\hbar\gamma^{\mu}\partial_{\mu}\psi\pm i\Lambda_{0}\hbar \gamma^{0}\psi=\epsilon Q\psi.\label{cdirac2}
\end{equation}
As one can verify for oneself, this equation possesses the following symmetries:
\begin{enumerate}
\item Violates $\rm{C}$-\textit{symmetry}.
\item Violates $\rm{T}$-\textit{symmetry}.
\item Obeys P-\textit{symmetry}.
\item Violates $\rm{CT}$-\textit{symmetry}.
\item Violates $\rm{CP}$-\textit{symmetry}.
\item Violates $\rm{PT}$-\textit{symmetry}.
\item Violates $\rm{CPT}$-\textit{symmetry}.
\item Obeys Lorentz invariance.
\end{enumerate}
Using the same methods as shown in \S (\ref{c}) to \S (\ref{l}), one can demonstrate and verify that (\ref{cdirac2}) indeed exhibits the above-said symmetries. Now the question is: what is the relationship of this equation with physical reality? To answer this question, we shall inspect this equation's Hamiltonian.
\subsubsection{\subsubsectionfont Hamiltonian}
Clearly -- at the very least, equation (\ref{cdirac2}) is in contempt of physical reality, since its Hamiltonian, namely:
\begin{equation}
\mathcal{H}=i\hbar\gamma^{0}\gamma^{k}\partial_{k}-i\Lambda_{0} \hbar I-\epsilon Q\gamma^{0},
\end{equation}
is (as one can verify) not hermitian and thus leads to complex energy eigenvalues. Complex energy eigenvalues are physically meaningless, hence this equation ought to be rejected outright, with the simple remark that ``it has no bearing on physical reality as we know it.''
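For contrast with Case I, the same plane-wave sketch (Python/numpy, $\hbar=1$, arbitrary sample values) shows that this Hamiltonian fails the hermiticity test: every eigenvalue acquires the constant imaginary part $-\Lambda_{0}\hbar$.
\begin{verbatim}
import numpy as np

# Same gamma set-up as in the Case I sketch.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
O, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
g0 = np.block([[I2, O], [O, -I2]])
gk = [np.block([[O, s], [-s, O]]) for s in (sx, sy, sz)]
I4 = np.eye(4, dtype=complex)

hbar, L0, eQ = 1.0, 0.5, 0.7
k = np.array([0.3, -1.1, 2.0])
H = sum(-hbar * k[j] * (g0 @ gk[j]) for j in range(3)) \
    - 1j * L0 * hbar * I4 - eQ * g0
print(np.allclose(H, H.conj().T))        # False: not Hermitian
print(np.linalg.eigvals(H).imag)         # all equal to -L0*hbar = -0.5
\end{verbatim}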
\section{\sectionfont The Arrow of Time\label{at}}
Now, having gone through the symmetries of equation (\ref{cdirac}), and having chosen this equation and rejected equation (\ref{cdirac2}), we come to the question of the arrow of time; the results obtained here extend to the equation that we will derive in the next section. We note here that we have two Universes, one described by $-\Lambda_{0}$ and the other by $+\Lambda_{0}$. The question is: what is the arrow of time in these two Universes? Is it directed in the forward or in the backward direction? Within the provinces of the present theory, this question can be answered if we combine the present theory with one of the basic principles of QM -- namely, that the wavefunction ought to be normalizable. To reach this end, we shall solve for the free particle solution of equation (\ref{cdirac}) and inspect these solutions, accepting only those that ``behave''. Let -- as is usual: $\psi=u_{p}e^{ip_{\mu}x^{\mu}/\hbar}$, where:
\begin{equation}
u_{p} =
\left(\begin{array}{c}
\Phi\\
\\
\chi
\end{array}
\right),
\,\,
\rm{and \, where:}
\,\,
\begin{array}{c c}
\Phi = \left(\begin{array}{c}
\Phi_{1}\\
\\
\Phi_{2}
\end{array}
\right) & \,\,\rm{and}\,\,\chi = \left(\begin{array}{c}
\chi_{1}\\
\\
\chi_{2}
\end{array}
\right)
\end{array}.
\end{equation}
At this moment, a very important point to remember is that $p_{0}$ now contains the cosmological field, the meaning of which is that we must write: $p_{0}\longmapsto p_{0}\pm i\Lambda_{0}\hbar c$, thus: $\psi=u_{p}e^{\pm\Lambda_{0}\hbar ct}e^{ip_{\mu}x^{\mu}/\hbar}$. Now, if the probability density function: $\rho(t)=\psi^{\dagger}\psi=u^{\dagger}_{p}u_{p}e^{\pm2\Lambda_{0}\hbar ct}$, is to be finite as: $t\longmapsto+\infty$ or $t\longmapsto-\infty$ (according to the basic principles of QM, this is a prerequisite for any wavefunction), it is evidently clear that for the case $-\Lambda_{0}$, this will only be so if: $t>0$ [$\rho(t~\longmapsto~\infty)\longmapsto0$], and for the case $+\Lambda_{0}$, we must have: $t<0$ [$\rho(t~\longmapsto~-~\infty)~\longmapsto0$]. Hence, the arrow of time in the two Universes is different, moving in opposite directions.
From the above simple calculation, we conclude that in the Universe for which: $\Lambda_{0}<0$, time moves in the forward direction and, likewise, in the Universe for which: $\Lambda_{0}>0$, the arrow of time moves in the backward direction; hence we have two different Universes. Let these two Universes be $\mathcal{U}^{+}$ and $\mathcal{U}^{-}$, where $\mathcal{U}^{+}$ is the Universe in which time moves forward ($\Lambda_{0}<0$) and, likewise, $\mathcal{U}^{-}$ is the Universe in which time moves backwards ($\Lambda_{0}>0$). This result is independent of the fact that we have here considered a free particle, as it will hold true for all conditions of experience.
\section{\sectionfont Full Cosmological Dirac Equation}
Given the unquenchable thirst to generalise Laws \textit{of} Nature, it is most natural to wonder: ``Why should the cosmological field be confined to the time dimension alone? Why not the space dimensions as well?'' To introduce a cosmological field into a particular dimension, one simply needs to add it to the partial derivative of that dimension; thus to have this in all four dimensions, we have to perform the transformation:
\begin{equation}
\partial_{\mu}\longmapsto\partial_{\mu}+\Lambda_{\mu},
\end{equation}
where:
\begin{equation}
\Lambda_{\mu}\equiv[\omega_{0}i\Lambda_{0}, \omega_{1}\Lambda_{1}, \omega_{2}\Lambda_{2}, \omega_{3}\Lambda_{3}],\label{cosm_const}
\end{equation}
and: $\omega_{\mu}=\pm1$ ($\omega_{\mu}$ is not a four-vector but a simple number). In this way, i.e. by the addition of the cosmological field in the space dimensions, we have endowed the vacuum of space with some all-pervading and permeating momentum. This modification (equation \ref{cosm_const}) automatically leads to the $4$-momentum transforming as: $p^{k}\longrightarrow \mathcal{P}_{k}= p_{k}+\omega_{k}\Lambda_{k}\hbar$, and plugging this into (\ref{Emc3}), one is led to:
\begin{equation}
\mathcal{E}=\omega_{0}\Lambda_{0} \hbar c\pm\sqrt{\mathcal{P}^{k}\mathcal{P}_{k}c^{2}+\epsilon ^{2}Q^{2}c^{2}}.\label{cosm_eqn}
\end{equation}
Considering the case $+\Lambda_{0}$, this new energy equation (\ref{cosm_eqn}) interestingly has $\textbf{16}+1=17$ energy solutions! Let these energy solutions be: $\mathcal{E}_{j}$ where $j=1,2,3\, ...\, 16,17$. Explicitly, these solutions are given by:
\begin{widetext}
\begin{equation}
\begin{array}{c c l l l}
\mathcal{E}_{1}& = &+\Lambda_{0} \hbar c+\sqrt{[( p_{1}+\Lambda_{1}\hbar)\cdot(p^{1}+\Lambda^{1}\hbar)+( p_{2}+\Lambda_{2}\hbar)\cdot(p^{2}+\Lambda^{2}\hbar)+( p_{3}+\Lambda_{3}\hbar)\cdot(p^{3}+\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{a})\\
\\
\mathcal{E}_{2}& = &+\Lambda_{0} \hbar c+\sqrt{[( p_{1}-\Lambda_{1}\hbar)\cdot(p^{1}-\Lambda^{1}\hbar)+( p_{2}+\Lambda_{2}\hbar)\cdot(p^{2}+\Lambda^{2}\hbar)+( p_{3}+\Lambda_{3}\hbar)\cdot(p^{3}+\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{b})\\
\\
\mathcal{E}_{3}& = &+\Lambda_{0} \hbar c+\sqrt{[( p_{1}+\Lambda_{1}\hbar)\cdot(p^{1}+\Lambda^{1}\hbar)+( p_{2}-\Lambda_{2}\hbar)\cdot(p^{2}-\Lambda^{2}\hbar)+( p_{3}+\Lambda_{3}\hbar)\cdot(p^{3}+\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{d})\\
\\
\mathcal{E}_{4}& = &+\Lambda_{0} \hbar c+\sqrt{[( p_{1}+\Lambda_{1}\hbar)\cdot(p^{1}+\Lambda^{1}\hbar)+( p_{2}+\Lambda_{2}\hbar)\cdot(p^{2}+\Lambda^{2}\hbar)+( p_{3}-\Lambda_{3}\hbar)\cdot(p^{3}-\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{e})\\
\\
\mathcal{E}_{5}& = &+\Lambda_{0} \hbar c+\sqrt{[( p_{1}+\Lambda_{1}\hbar)\cdot(p^{1}+\Lambda^{1}\hbar)+( p_{2}-\Lambda_{2}\hbar)\cdot(p^{2}-\Lambda^{2}\hbar)+( p_{3}-\Lambda_{3}\hbar)\cdot(p^{3}-\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{g})\\
\\
\mathcal{E}_{6}& = &+\Lambda_{0} \hbar c+\sqrt{[( p_{1}-\Lambda_{1}\hbar)\cdot(p^{1}-\Lambda^{1}\hbar)+( p_{2}+\Lambda_{2}\hbar)\cdot(p^{2}+\Lambda^{2}\hbar)+( p_{3}-\Lambda_{3}\hbar)\cdot(p^{3}-\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{h})\\
\\
\mathcal{E}_{7}& = &+\Lambda_{0} \hbar c+\sqrt{[( p_{1}-\Lambda_{1}\hbar)\cdot(p^{1}-\Lambda^{1}\hbar)+( p_{2}-\Lambda_{2}\hbar)\cdot(p^{2}-\Lambda^{2}\hbar)+( p_{3}+\Lambda_{3}\hbar)\cdot(p^{3}+\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{i})\\
\\
\mathcal{E}_{8}& = &+\Lambda_{0} \hbar c+\sqrt{[( p_{1}-\Lambda_{1}\hbar)\cdot(p^{1}-\Lambda^{1}\hbar)+( p_{2}-\Lambda_{2}\hbar)\cdot(p^{2}-\Lambda^{2}\hbar)+( p_{3}-\Lambda_{3}\hbar)\cdot(p^{3}-\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{j})\\
\\
\mathcal{E}_{9}& = &+\Lambda_{0} \hbar c = \mathcal{E}_{vac} & ... & (\textbf{f})\\
\end{array}
\end{equation}
and the negative solutions are:
\begin{equation}
\begin{array}{c c l l l}
\mathcal{E}_{10}& = &+\Lambda_{0} \hbar c-\sqrt{[( p_{1}-\Lambda_{1}\hbar)\cdot(p^{1}-\Lambda^{1}\hbar)+( p_{2}-\Lambda_{2}\hbar)\cdot(p^{2}-\Lambda^{2}\hbar)+( p_{3}-\Lambda_{3}\hbar)\cdot(p^{3}-\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{j})\\
\\
\mathcal{E}_{11}& = &+\Lambda_{0} \hbar c-\sqrt{[( p_{1}-\Lambda_{1}\hbar)\cdot(p^{1}-\Lambda^{1}\hbar)+( p_{2}-\Lambda_{2}\hbar)\cdot(p^{2}-\Lambda^{2}\hbar)+( p_{3}+\Lambda_{3}\hbar)\cdot(p^{3}+\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{i})\\
\\
\mathcal{E}_{12}& = &+\Lambda_{0} \hbar c-\sqrt{[( p_{1}-\Lambda_{1}\hbar)\cdot(p^{1}-\Lambda^{1}\hbar)+( p_{2}+\Lambda_{2}\hbar)\cdot(p^{2}+\Lambda^{2}\hbar)+( p_{3}-\Lambda_{3}\hbar)\cdot(p^{3}-\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{h})\\
\\
\mathcal{E}_{13}& = &+\Lambda_{0} \hbar c-\sqrt{[( p_{1}+\Lambda_{1}\hbar)\cdot(p^{1}+\Lambda^{1}\hbar)+( p_{2}-\Lambda_{2}\hbar)\cdot(p^{2}-\Lambda^{2}\hbar)+( p_{3}-\Lambda_{3}\hbar)\cdot(p^{3}-\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{g})\\
\\
\mathcal{E}_{14}& = &+\Lambda_{0} \hbar c-\sqrt{[( p_{1}+\Lambda_{1}\hbar)\cdot(p^{1}+\Lambda^{1}\hbar)+( p_{2}+\Lambda_{2}\hbar)\cdot(p^{2}+\Lambda^{2}\hbar)+( p_{3}-\Lambda_{3}\hbar)\cdot(p^{3}-\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{e})\\
\\
\mathcal{E}_{15}& = &+\Lambda_{0} \hbar c-\sqrt{[( p_{1}+\Lambda_{1}\hbar)\cdot(p^{1}+\Lambda^{1}\hbar)+( p_{2}-\Lambda_{2}\hbar)\cdot(p^{2}-\Lambda^{2}\hbar)+( p_{3}+\Lambda_{3}\hbar)\cdot(p^{3}+\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{d})\\
\\
\mathcal{E}_{16}& = &+\Lambda_{0} \hbar c-\sqrt{[( p_{1}-\Lambda_{1}\hbar)\cdot(p^{1}-\Lambda^{1}\hbar)+( p_{2}+\Lambda_{2}\hbar)\cdot(p^{2}+\Lambda^{2}\hbar)+( p_{3}+\Lambda_{3}\hbar)\cdot(p^{3}+\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{b})\\
\\
\mathcal{E}_{17}& = &+\Lambda_{0} \hbar c-\sqrt{[( p_{1}+\Lambda_{1}\hbar)\cdot(p^{1}+\Lambda^{1}\hbar)+( p_{2}+\Lambda_{2}\hbar)\cdot(p^{2}+\Lambda^{2}\hbar)+( p_{3}+\Lambda_{3}\hbar)\cdot(p^{3}+\Lambda^{3}\hbar)]c^{2}+\epsilon ^{2}Q^{2}c^{2}}& ... & (\textbf{a})
\end{array}
\end{equation}
\textbf{NB}: The negative and positive energy solutions are symmetric about the vacuum energy level $\mathcal{E}_{vac}=\Lambda_{0}\hbar c$.
\\
\end{widetext}
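As a small numerical illustration of the counting and of the symmetry noted above (not part of the derivation), the snippet below (Python/numpy, units with $\hbar=c=1$, arbitrarily chosen sample values for the fields, momenta and $\epsilon Q$) enumerates the $2^{3}=8$ sign choices, builds the $8+1+8=17$ energies, and checks that the positive and negative branches sit symmetrically about $\mathcal{E}_{vac}=\Lambda_{0}\hbar c$.
\begin{verbatim}
import numpy as np
from itertools import product

hbar = c = 1.0
L0, L1, L2, L3 = 0.5, 0.1, 0.2, 0.3     # sample cosmological-field components
p1, p2, p3 = 1.0, 2.0, 3.0              # sample momenta with p1 <= p2 <= p3
eQ = 0.7                                # epsilon*Q, the rest-mass term

def root(s1, s2, s3):
    P2 = (p1 + s1*L1*hbar)**2 + (p2 + s2*L2*hbar)**2 + (p3 + s3*L3*hbar)**2
    return np.sqrt(P2*c**2 + (eQ*c)**2)

E_vac = L0*hbar*c
pos = sorted((E_vac + root(*s) for s in product([+1, -1], repeat=3)), reverse=True)
neg = sorted((E_vac - root(*s) for s in product([+1, -1], repeat=3)), reverse=True)
energies = pos + [E_vac] + neg          # 8 + 1 + 8 = 17 solutions
print(len(energies))                                           # 17
print(all(a >= b for a, b in zip(energies, energies[1:])))     # sorted: E_1 >= ... >= E_17
print(np.allclose([e - E_vac for e in pos],
                  [E_vac - e for e in reversed(neg)]))          # symmetric about E_vac
\end{verbatim}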
These energies are such that: $\mathcal{E}_{1}\geq\mathcal{E}_{2}\geq ... \geq\mathcal{E}_{16}\geq\mathcal{E}_{17}$, and this is for the setting: $p_{1}\leq p_{2}\leq p_{3}$. The order in which one $p$ is greater than another does not really matter, because one can always rearrange the $p$'s in ascending order, as has been done here, and then proceed to calculate the energies as above. That said, it is not difficult to see that the spinor field equation describing the energy equation (\ref{cosm_eqn}) is:
\begin{equation}
i\hbar\gamma^{\mu}\partial_{\mu}\psi + i\hbar \gamma^{\mu}\Lambda_{\mu}\psi=\epsilon Q\psi.\label{fsol}
\end{equation}
Let us call this equation the \textsl{\textbf{Full Cosmological Dirac Equation}}. We say ``full'' because all four dimensions -- the $x^{0}$, $x^{1}$, $x^{2}$ and $x^{3}$-axes -- are endowed with the cosmological vector field.
This equation has the following properties:
\begin{enumerate}
\item Violates $\rm{C}$-\textit{symmetry}.
\item Obeys $\rm{T}$-\textit{symmetry}.
\item Violates P-\textit{symmetry}.
\item Violates $\rm{CT}$-\textit{symmetry}.
\item Violates $\rm{CP}$-\textit{symmetry}.
\item Violates $\rm{PT}$-\textit{symmetry}.
\item Violates $\rm{CPT}$-\textit{symmetry}.
\item Obeys Lorentz invariance.
\end{enumerate}
With the aid of the presentations made in \S (\ref{c}) down to \S (\ref{l}), one can verify these symmetries for oneself; showing them here would be nothing but a reproduction of the same exercise, albeit with slight and straightforward modifications.
\section{\sectionfont CPT-violation \& a New Spacetime Model\label{cptv_s}}
Now we come to the problem of $\rm{CPT}$\textit{-violation}. The Schwinger-L\"uder-Pauli theorem, which is a cornerstone of QFT, states that any local Lorentz invariant field theory \textbf{\textit{must}} obey $\rm{CPT}$-\textit{symmetry}. $\rm{CPT}$-\textit{symmetry} is thus considered a perfect Symmetry \textit{of} Nature, so much so that any theory that violates it is not thought to be correct. Despite the strong belief in the preservation of $\rm{CPT}$-\textit{symmetry}, some researchers seeking quantum theories of gravity take $\rm{CPT}$\textit{-violation} as their point of departure (see e.g. Mavromatos). The equations derived herein are clearly in violation of this \textit{``sacrosanct''} symmetry. Does this mean our ideas are fundamentally incorrect? The answer to this question is -- in our opinion, \textbf{no}! This is because the theorem applies only to a perfectly symmetric spacetime, whereas the spacetime on which the present theory is built is not a symmetric spacetime.
To arrive at the Full Cosmological Dirac Equation (\ref{fsol}), what we have done is to add a constant to the $4$-momentum, that is: $p_{\mu}\longmapsto p_{\mu}+\Lambda_{\mu}\hbar=P_{\mu}$. This suggests that the original spacetime continuum has -- in order to have the Full Cosmological Dirac Equation (\ref{fsol}) -- been modified clandestinely, i.e.: $x_{\mu}\longmapsto X_{\mu}$. For a particle whose relativistic mass is denoted $m$ and whose four-velocity is $\dot{x}_{\mu}$, we know that the four-momentum is: $p_{\mu}=m\dot{x}_{\mu}$, and in the light of the clandestine modification: $x_{\mu}\longmapsto X_{\mu}$, this implies: $P_{\mu}=m\dot{X}_{\mu}=m\dot{x}_{\mu}+\Lambda_{\mu}\hbar$. Further, this implies: $X_{\mu}=x_{\mu}+(\hbar/m)\int \Lambda_{\mu}\, d\tau$, where $\tau$ is the proper time. Taking $\hbar$ as an absolute fundamental constant and $m$ as having no explicit variation with time, we are left with $\Lambda_{\mu}$ as the only function to which we can assign a time dependence.
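Spelling out the intermediate step (with $\hbar$ and $m$ treated as constants along the world-line, as stated):
\begin{equation}
P_{\mu}=m\dot{X}_{\mu}=m\dot{x}_{\mu}+\Lambda_{\mu}\hbar
\;\Longrightarrow\;
\dot{X}_{\mu}=\dot{x}_{\mu}+\frac{\hbar}{m}\Lambda_{\mu}
\;\Longrightarrow\;
X_{\mu}=x_{\mu}+\frac{\hbar}{m}\int\Lambda_{\mu}\, d\tau .
\end{equation}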
Given: $X_{\mu}=x_{\mu}+(\hbar/m)\int (\Lambda_{\mu}) d\tau$, we propose that the modified spacetime continuum ($X_{\mu}$) be related to the ordinary spacetime continuum [$x_{\mu}$ -- let us call this spacetime the Classical Spacetime (CST)] by the relationship:
\begin{equation}
X_{\mu}(x)=x_{\mu}+\ell_{p}\delta_{\mu}(t),
\end{equation}
where $\ell_{p}$, is a fundamental absolute constant with the dimensions of length and:
\begin{equation}
\left|\delta_{\mu}(t)\right|\geq1,\label{dirich}
\end{equation}
and $\delta_{\mu}(t)$ is defined on the $x^{\mu}$\textit{-axis}. What is this new function $\delta_{\mu}(t)$, what is its role, and why does it have the limits it has? We shall give the justification for the limits in \S \ref{stl}.
We propose that this function be a $4$-vector, differentiable, non-smooth, dynamic random function that takes any numerical value within the set limits as defined in (\ref{dirich}). By differentiable and non-smooth we mean it is not continuous at any point but is differentiable at every point. Such functions do exist and were first investigated by the German mathematician Johann P. G. L. Dirichlet (1805--1859). We shall here refer to these functions as Dirichlet Type Functions (DTF). Because the present Dirichlet function has to be random and dynamic, let us call it the Dynamic Random Dirichlet Function (DRDF), and let the spacetime on which it is defined simply be called Quantum Spacetime (QST), because it has the desired features of QST, namely:
\begin{enumerate}
\item This spacetime possesses randomness -- randomness is an intrinsic feature of QM. Potentially and very much likely, this randomness may explain the bewildering and mysterious randomness we see in QM.
\item This spacetime is not itself quantized but it literally quantizes the CST from which it is derived. It quantizes the CST such that any two points (no matter how close to each other they may be) on the CST, when transformed or cast onto the QST, will never be closer than the separation $\ell_{p}$ on the space axes and $t_{p}$ on the time axis of the QST.
\end{enumerate}
This function -- the DTF, is what makes the QST intrinsically and inherently $\rm{P}$\textit{-asymmetric}, and this is because the DTF is itself not spatially symmetric, that is to say: $\delta_{k}(-x_{k};t)\neq\delta_{k}(+x_{k};t)$ and $\delta_{k}(-x_{k};t)\neq-\delta_{k}(+x_{k};t)$, and this implies: $X_{k}(-x_{k})\neq-X_{k}(x_{k})$. If: $X_{k}(-x_{k})=-X_{k}(x_{k})$, then the QST would result in $\rm{P}$\textit{-symmetric} laws on this spacetime.
Said in another manner: in ordinary $3$-space, the mirror image of the point $x_{k}$ is $-x_{k}$; these points, $x_{k}$ and $-x_{k}$, when cast onto the QST, do not transform into mirror images of each other on the QST, because the DTF $\delta_{k}(x_{k},t)$ is, for every moment in time, unique at every point. If they were mirror images of each other, then: $X_{k}(-x_{k})=-X_{k}(x_{k})$, and as already argued, the nature of $\delta_{k}(x_{k};t)$ makes this not hold.
All said and done, the L\"uder-Pauli $\rm{CPT}$ theorem does not hold in the present case, as it has been derived on the CST continuum. The coordinate system of the QST is not symmetric as is the coordinate system $x_{\mu}$ of the CST, hence any Physical Laws dependent on the coordinates of this coordinate system will be intrinsically and inherently $\rm{P}$\textit{-asymmetric}. The fact that Laws \textit{of} Nature are $\rm{T}$-symmetric on the QST implies: $\delta_{0}(x_{k};t)=\delta_{0}(x_{k};-t)$, and this means the past is preserved by $\delta_{0}$ but not by $\delta_{k}$. Put another way, knowing the present value of $\delta_{0}(x_{k};t)$ means that not only can one tell its past value, but one can also foretell its future value. This is interesting and suggests a more rigorous study of the QST.
\subsection{\subsectionfont Lower Space and Time Limits\label{stl}}
In this part of the reading, we establish lower space and time limits on spacetime. To achieve this, we use the simple and well accepted Law \textit{of} Nature that the speed of light, $c$, is an upper absolute speed limit for all material bodies and energy in the Universe. Considering the case of motion in one dimension, say along the $x$-axis: if a particle happens to be at a point $x_{1}$ at time $t_{1}$, and at a later time $t_{2}>t_{1}$ this particle is located at $x_{2}$, we know that the speed $V$ of this particle is given by:
\begin{equation}
V=\left|\frac{\Delta x}{\Delta t}\right|=\left|\frac{x_{2}-x_{1}}{t_{2}-t_{1}}\right|.
\end{equation}
\noindent It is clear from the above that if there exist no limits on the intervals: $\Delta x=x_{2}-x_{1}$ and $\Delta t= t_{2}-t_{1}$, then the particle's speed can range from zero to infinity. That is, for any finite duration: $\Delta t>0$, for which: $\Delta x=x_{2}-x_{1}=0$, we will have $V=0$, and for any finite separation: $\Delta x>0$, for which $\Delta t= t_{2}-t_{1}=0$, we will have $V=\infty$, hence: $0\leq V \leq \infty$. So far, so good -- no problem; let us proceed.\\
If we set a minimum time interval, say $t_{p}$, such that for all $t_{2}>t_{1}$, $\Delta t\geq t_{p}$, where $t_{p}$ is the smallest possible interval of time, then for any space interval: $\Delta x= x_{2}-x_{1}$, there will exist a maximum speed for that particular space interval; let us write this as $V_{max}(x_{2},x_{1})$, and it will be given by:
\begin{equation}
V_{max}(x_{2},x_{1})=\frac{\left|x_{2}-x_{1}\right|}{t_{p}}.
\end{equation}
Additionally, if there exists a minimum distance by which any two points can ever be separated -- that is, the points $x_{2}$ and $x_{1}$ can be brought closer together only up until a certain minimum, call this minimum $\ell_{p}$ -- then we can talk of an absolute maximum speed, $V_{amax}$, between any two points of space. This absolute maximum speed, call it $c$, is, unlike $V_{max}(x_{2},x_{1})$, independent of the coordinates, hence for any object moving in such a spacetime endowed with space and time limits:
\begin{equation}
V\leq c=\frac{\ell_{p}}{t_{p}}.\label{v}
\end{equation}
Any object that travels at this speed $c$ is basically travelling the minimum possible distance in the least possible time duration, or it travels an integral multiple ($n\ell_{p}: n=1,2,3\, ...$) of this distance in the corresponding integral multiple of the least time ($nt_{p}: n=1,2,3\, ...$). From the above thesis, what this means is that spacetime must have space and time limits if it is to have a universal and absolute maximum speed; that is, for any two points $x_{2}$ and $x_{1}$, and any two points on the time axis, $t_{2}$ and $t_{1}$, the following must hold: $x_{2}-x_{1}\geq \ell_{p}$ and $t_{2}-t_{1}\geq t_{p}$. From this, the justification of the limits placed on the function $\delta_{\mu}(t)$ [see (\ref{dirich})] follows smoothly.
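If one identifies $\ell_{p}$ and $t_{p}$ with the Planck length and the Planck time (as the next paragraph does), their ratio indeed reproduces the speed of light; a trivial numerical check (CODATA values):
\begin{verbatim}
l_p = 1.616255e-35        # Planck length in metres (CODATA 2018)
t_p = 5.391247e-44        # Planck time in seconds  (CODATA 2018)
print(l_p / t_p)          # ~ 2.9979e8 m/s, i.e. the speed of light c
\end{verbatim}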
The above simple reasoning, which has led us to conclude the implied existence of a minimum length and a minimum time, has a deeper meaning, for it tells us that the STR itself implies a minimum possible time and a minimum possible length! The meaning is deep given the efforts of Giovanni Amelino-Camelia ($2002$), who has proposed a theory -- known as Doubly Special Relativity (DSR) -- that seeks to extend the STR. This theory's hope (should it be confirmed by experiments) is to supersede the STR. The DSR proposes a new observer-independent scale-length. Giovanni Amelino-Camelia proposed this new observer-independent scale-length unaware of the fact just presented, that the STR implies a minimum length. As discussed below, he did this [i.e. proposed the existence of a minimum length] to solve a problem to do with the Quantum Gravity (QG) regime.
Basing their arguments on logical intuition and the known Laws \textit{of} Physics, researchers in the field of QG generally agree that at a special scale-length known as the Planck scale-length, $\ell_{p}$, a full theory of QG is needed to describe the physics. However -- according to the STR, different observers (depending on their state of motion) will measure different lengths, thus they will (or may) not agree on whether or not a particle has reached its Planck length. This presents a ``puzzle/paradox'' for the STR. If they agreed on the Planck scale, then their motions must be similar. If their motions are dissimilar and they agreed on the Planck scale, it would mean the Laws \textit{of} Physics must be different for different observers. This goes against the very foundations of the STR -- clearly, this is unacceptable! To solve this, Giovanni Amelino-Camelia proposed his DSR theory, which has been welcomed by a significant number of researchers (see e.g. Kowalski-Glikman $2003$; Magueijo \& Smolin $2002a,b$).
Now, given that we have shown that the STR implies a minimum length, this clearly puts the effort of Giovanni Amelino-Camelia into question. Actually, it renders the DSR an unnecessary effort, as it tries to address something already implied by the STR.
In closing this section, we should mention that in the preceding three paragraphs we have digressed from the main theme of this section; so -- to keep the reader on track, we remind ourselves that all we wanted to do was to show that the existence of an upper cosmic speed limit implies the existence of a minimum length and a minimum time interval. With this reminder, we proceed to the next section, where we hint at the uncertainty principle of QM.
\subsection{\subsectionfont Spacetime Fluctuations/Uncertainty}
An important point to take note of is that, for a point $x_{k}$ on the CST, we are $100\%$ sure of where to locate this point: we simply go directly to the point $x_{k}$ and find it there. This is not true for the QST, because $\delta_{k}(t)$ is a random and dynamic function -- no one knows what its value at any given time will be. Given the quantum mechanical uncertainty principle and its intrinsic random nature, could it be that the QST set forth here is the answer to the randomness and the probabilistic nature of QM? We see here that the point $x_{\mu}$, when cast onto the QST as $X_{\mu}$, will be uncertain by a minimum limit of $\ell_{p}$ -- that is to say, this point is cast randomly onto the QST. The Laws \textit{of} Nature must be written on the QST and not on the CST. From this, it is clear that there will be an uncertainty ($\Delta x_{\mu}$) in the point $x_{\mu}$, and this uncertainty is:
\begin{equation}
\Delta x_{\mu}\geq\ell_{p}\delta_{\mu}(t).\label{x-delta}
\end{equation}
The deeper meaning of this is that any point on the CST -- $x_{\mu}$, can be cast anywhere on the QST -- $X_{\mu}$, except inside a small hyper-volume sphere of radius $\mathcal{R}_{\mu}=\ell_{p}$ on the QST. This is very interesting, and its meaning -- I should admit, is beyond me at present. I would like to set this as an area for further research.
\subsection{\subsectionfont Energy-Momentum Fluctuations/Uncertainty}
Taking the time derivative of the coordinates of the QST, we have: $\dot{X}_{\mu}(x)=\dot{x}_{\mu}+\ell_{p}\dot{\delta}_{\mu}(x)$, and this implies: $\mathcal{P}_{\mu}=p_{\mu}+m\ell_{p}\dot{\delta}_{\mu}(x)$, where $m$ is the mass of the particle whose momentum is $\mathcal{P}_{\mu}$; and comparing this with: $\mathcal{P}_{\mu}=p_{\mu}+\Lambda_{\mu}\hbar$, this means: $m\ell_{p}\dot{\delta}_{\mu}(x)=\Lambda_{\mu}\hbar$, hence the $4$-vector cosmological field can be written:
\begin{equation}
\Lambda_{\mu}=\left(\frac{m\ell_{p}}{\hbar}\right)\dot{\delta}_{\mu}(t),
\end{equation}
hence the field: $\Lambda_{\mu}=\Lambda_{\mu}(t)$, will have the properties of the field $\dot{\delta}_{\mu}(t)$, the meaning of which is that $\Lambda_{\mu}$ will be a random, dynamic field, since $\dot{\delta}_{\mu}(t)$ is a random and dynamic field. This means the fluctuation of the energy of the particle will be $\Delta\mathcal{E}_{\mu}=\Lambda_{\mu}\hbar$. We are not going to determine here the limits of the fluctuation in the energy, as this would require us to go deeper into the nature of the QST -- we shall reserve this for a separate reading.
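As a quick dimensional check (with $\delta_{\mu}$ dimensionless, as implied by $X_{\mu}=x_{\mu}+\ell_{p}\delta_{\mu}$):
\begin{equation}
\left[\frac{m\ell_{p}}{\hbar}\,\dot{\delta}_{\mu}\right]
=\frac{\rm{kg}\cdot \rm{m}}{\rm{J}\cdot \rm{s}}\cdot\frac{1}{\rm{s}}
=\frac{1}{\rm{m}},
\end{equation}
which is the dimension $\Lambda_{\mu}$ carries elsewhere in the text (so that, e.g., $\Lambda_{0}\hbar c$ is an energy and $\Lambda_{k}\hbar$ a momentum).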
\section{\sectionfont C-Violation: Possible Reason for Matter-Antimatter Asymmetry\label{ctv_s}}
In the Dirac Theory, antiparticles are negative energy particles which travel back in time, and these have the opposite electronic charge of their particle counterparts. In this theory [Dirac's], the Dirac Equation is symmetric under electronic charge conjugation. What this means is that the same law -- or more clearly, the same equation -- that governs particles also governs antiparticles. In the present theory, we see that $\rm{C}$-\textit{symmetry} is violated, the meaning of which is that antiparticles are not governed by the same equation that governs particles, and vice-versa.
In the case where particles and antiparticles are governed by different laws, if these laws are defined on the same spacetime continuum, then nothing stops particles and antiparticles from existing in the same spacetime continuum. In this kind of set-up, we would expect to observe particles and antiparticles in equal proportions in the same region of space at all times. If -- as in the present set-up, these laws are defined on separate spacetime continuums (one in which $\Lambda_{0}>0$ and the other in which $\Lambda_{0}<0$; we have advanced this in \S \ref{at}), then matter and antimatter will exist in separate regions of space at all times. Simply put -- there will exist an apparent asymmetry in the distribution of matter and antimatter.
It must be clear that, for any given spacetime continuum, the corresponding cosmological field along a given axis can only be either positive or negative and never both -- remember, the cosmological field can be either positive or negative along each of the axes; see equation (\ref{cosm_const}). For example, in the case of the time component of the cosmological field, either $\Lambda_{0}>0$ or $\Lambda_{0}<0$, and never can we have both fields in the same region of the spacetime continuum -- this would be plainly meaningless, because they would cancel each other, rendering the field's inclusion purposeless. So, what this means is that we are going to have two physically separate spacetime continuums, one in which $\Lambda_{0}>0$ and the other in which $\Lambda_{0}<0$.
\begin{quote}
\textbf{\textsl{Hence thus, it must be clear from this that the inclusion of the cosmological field in the time dimension leads to an explanation of why matter and antimatter will not be seen to exist in the same region of spacetime.}}
\end{quote}
\begin{table}[h!]
\caption{\tabletitlefont Signature of the New Spacetime\\}
\label{tspace}
\begin{tabular}{|c |c c c | c c c| c c |}
\hline
$\Lambda_{k}\longrightarrow$ & $\omega_{1}$ & $\omega_{2}$ & $\omega_{3}$ & \multicolumn{3}{c|}{\textbf{Space Coordinates}} & \multicolumn{2}{c|}{\textbf{\underline{Particle Energy}}} \\
& & & & \multicolumn{3}{c|}{\textbf{\underline{}}} & $\mathcal{E}>0$ & $\mathcal{E}<0$\\
\hline\hline\hline
\textbf{Region $1$} & $+1$ & $+1$ & $+1$ & ($x>0$, & $y>0$, & $z>0$) & ($\mathcal{E}_{1},|\mathcal{E}_{17}|$) & ($-\mathcal{E}_{1},\mathcal{E}_{17}$)\\
\textbf{Region $2$} & $-1$ & $+1$ & $+1$ & ($x<0$, & $y>0$, & $z>0$) & ($\mathcal{E}_{2},|\mathcal{E}_{16}|$) & ($-\mathcal{E}_{2},\mathcal{E}_{16}$)\\
\textbf{Region $3$} & $+1$ & $-1$ & $+1$ & ($x>0$, & $y<0$, & $z>0$) & ($\mathcal{E}_{3},|\mathcal{E}_{15}|$) & ($-\mathcal{E}_{3},\mathcal{E}_{15}$)\\
\textbf{Region $4$} & $+1$ & $+1$ & $-1$ & ($x>0$, & $y>0$, & $z<0$) & ($\mathcal{E}_{4},|\mathcal{E}_{14}|$) & ($-\mathcal{E}_{4},\mathcal{E}_{14}$)\\
\textbf{Region $5$} & $-1$ & $-1$ & $+1$ & ($x<0$, & $y<0$, & $z>0$) & ($\mathcal{E}_{5},|\mathcal{E}_{13}|$) & ($-\mathcal{E}_{5},\mathcal{E}_{13}$)\\
\textbf{Region $6$} & $-1$ & $+1$ & $-1$ & ($x<0$, & $y>0$, & $z<0$) & ($\mathcal{E}_{6},|\mathcal{E}_{12}|$) & ($-\mathcal{E}_{6},\mathcal{E}_{12}$)\\
\textbf{Region $7$} & $+1$ & $-1$ & $-1$ & ($x>0$, & $y<0$, & $z<0$) & ($\mathcal{E}_{7},|\mathcal{E}_{11}|$) & ($-\mathcal{E}_{7},\mathcal{E}_{11}$)\\
\textbf{Region $8$} & $-1$ & $-1$ & $-1$ & ($x<0$, & $y<0$, & $z<0$) & ($\mathcal{E}_{8},|\mathcal{E}_{10}|$) & ($-\mathcal{E}_{8},\mathcal{E}_{10}$)\\
\hline
\end{tabular}
\end{table}
Further, the fact that the corresponding cosmological field along a given axis can only be either positive or negative and never both implies that each of these spacetime continuums will have to be sub-divided into $8$ different sections, since we have $3$ axes and along each of these we have two choices for $\Lambda_{k}$ -- that is, either $\Lambda_{k}>0$ or $\Lambda_{k}<0$. The laws of permutations and combinations dictate that we must have $8$ different combinations, and these are shown in table (\ref{tspace}). In this table, column $1$ gives the section or sector of space; columns $2,3,4$ give the corresponding sign of the cosmological field along the $x$, $y$ and $z$-axes; columns $5,6,7$ list the segment into which a given point falls on this space continuum; and lastly, columns $8,9$ give the energies of the particles that will be found in that sector of space. We shall explain in \S (\ref{t_s}) why these particles fall in the space segments in which we have placed them.
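The eight combinations can be enumerated mechanically; the following small Python snippet (purely illustrative) lists the $2^{3}$ sign choices of $(\omega_{1},\omega_{2},\omega_{3})$ together with the octant each one labels (the row order differs from that of table \ref{tspace}, but the same eight combinations appear):
\begin{verbatim}
from itertools import product

for n, w in enumerate(product([+1, -1], repeat=3), start=1):
    coords = ", ".join(f"{ax}{'>' if s > 0 else '<'}0" for ax, s in zip("xyz", w))
    print(f"Region {n}: (omega_1, omega_2, omega_3) = {w}   ({coords})")
\end{verbatim}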
\begin{figure}[ht!]
\begin{center}
{\tabletitlefont Eight Segment Sectioning of the $3\rm{D}$ Space}
\shadowbox{\epsfysize=6.0cm \epsfbox{ds_spacetime.ps}}
\end{center}
\caption{\figtextfont A schematic diagram illustrating the $8$-segment sectioning of the $3\rm{D}$ space. For the case $\Lambda_{0}>0$, this entire $3\rm{D}$-space will be filled at each and every point with the cosmological field $\Lambda_{0}>0$, and likewise for the case $\Lambda_{0}<0$, this entire $3\rm{D}$-space will be filled at each and every point with the cosmological field $\Lambda_{0}<0$.}
\label{ds}
\end{figure}
The ``picture'' of the structure of the emergent space is shown in figure (\ref{ds}). This picture illustrates the $8$-segment sectioning of the $3\rm{D}$ space. Each axis is endowed with the corresponding field $\delta_{k}$, and we have chosen the sign of this field to be the same as the sign of the corresponding coordinate. If we consider here only positive energy particles, then in each of the regions we would expect two particles of different masses, much like what we see with the Electron and the Muon. The masses of these particles will be different for the different regions. We will not go any further into this subject of the particles to be found in the different regions of the $8$-segment space, but leave this for a fresh reading. All we hope is that the reader sees the potency and the hidden veracity of the ideas propagated herein.
\section{\sectionfont Darkmatter and Darkenergy}
The subject of \textit{Darkmatter} and \textit{Darkenergy} is a hot topic of intense theoretical and experimental research (see e.g. Sofua $1997$; Sofua \textit{et al.} $1997$). We find that our search -- for a solution to the problem of why the Universe appears to be composed of matter with no significant quantities of antimatter -- has, unexpectedly, led us also to the problem of darkmatter and darkenergy. The presence of darkmatter/darkenergy first came to light in 1933, when the Swiss astronomer Fritz Zwicky conducted observations of galaxies in the Coma Cluster -- one of the massive clusters of galaxies closest to the Milky Way Galaxy -- where he was able to convincingly demonstrate that, given their masses, these galaxies were moving unexpectedly fast relative to one another -- so much so that they should have escaped each other's gravitational influence -- but, for some strange and unknown reason, they had not done so. This presented a puzzle, because Newtonian gravitational theory does not agree with this.
There are two possible solutions to this puzzle: either the stars that we observe are merely a tracer (about $1\%$) of the total amount of matter in the cluster and the rest is in the form of some kind of exotic ``dark'' matter; or gravity is much stronger on million light-year scales than the expected Newtonian $r^{-2}$ force law. However, neither Zwicky's observations nor those made in the intervening years have allowed researchers to distinguish conclusively between these solutions. Actually, the presence of darkmatter/darkenergy is so widespread through the cosmos that, clearly, something is missing in our understanding of the Laws \textit{of} Nature.
As to the question of what this darkmatter/darkenergy is, there exists a plethora of ideas. Our ideas join the rank and file of these ideas -- all in the effort of finding answers to this great cosmic mystery. It should be said here that we find no reason to discuss the other ideas that have been proposed, because we believe the present ideas stand on their own.
We have already suggested that the vacuum can be assigned an energy $\mathcal{E}_{vac}=\Lambda_{0}\hbar c$. Within the framework of equation (\ref{fsol}), this energy assignment corresponds to the momentum solution:
\begin{equation}
\mathcal{P}^{k}\mathcal{P}_{k}c^{2}+\epsilon ^{2}Q^{2}c^{2}\equiv0,\label{tach1}
\end{equation}
and -- further, this corresponds to a wavefunction ($\psi_{D}$) for a particle that satisfies the following wave equation with decoupled energy and momentum fields:
\begin{equation}
\begin{array}{c c c c}
i\hbar\gamma^{0}\partial_{0}\psi_{D}+i\hbar \gamma^{0}\Lambda_{0}\psi_{D}=0 & ...
& (\textbf{a})\\
\\
i\hbar\gamma^{k}\partial_{k}\psi_{D}+i\hbar\gamma^{k}\Lambda_{k}\psi_{D}=\epsilon Q\psi_{D} & ... & (\textbf{b})
\\
\end{array}.\label{tach2}
\end{equation}
It is not difficult to see that equation (\ref{tach1}) implies that the particle's momentum will be imaginary! This momentum is given by: $p_{k}=-\omega_{k} \Lambda_{k}\hbar \pm i \epsilon Q/\sqrt{3}$. We know from the STR that a particle with imaginary momentum will have to travel at speeds greater than the speed of light. These particles that travel faster than the speed of light are known as Tachyons and are at present nothing but hypothetical particles born out of deep theoretical curiosity; they [Tachyons] have never been directly or indirectly observed. Discarding Tachyons as unphysical means we have to consider neutral particles, i.e. $Q=0$, in which case the particle $\psi_{D}$ will have real momentum. Thus this particle -- $\psi_{D}$, moves at the speed of light and has no electronic charge (rest-mass). We would like to think of this particle -- $\psi_{D}$, as a darkparticle.
The reason for suggesting that the particle $\psi_{D}$ be a darkparticle is a simple one. This particle's four-momentum, which is given by $p_{\mu}=\Lambda_{\mu}\hbar$, coincides with the $4$-vector cosmological field that we have just added. We added this $4$-vector cosmological field with the hope that it would be a property of the vacuum. Now, we realize that this $4$-vector cosmological field describes a particle. This particle -- like the vacuum, must be all-pervading and permeating; thus it must be elusive, and its physical presence will most certainly be observed via the effects of its energy and momentum field. The $4$-vector cosmological field is a well behaved random field, and this property (of randomness) fits well in describing random fluctuations of the vacuum. This description leads one to the idea that this particle must be a darkparticle.
\textbf{\underline{Proposal}:} \textsl{This description above of $\psi_{D}$ -- in our view, suits the description of a darkparticle -- hence, we propose that the dark-energy-momentum that is thought to fill all of space is comprised of this particle.}
The inclusion of darkmatter and darkenergy has implications on the way gravitation works. In a separate reading, whose work is currently underway, we shall address this problem. For now, we hope the reader will be content with what we have presented.
\section{\sectionfont A New Model of the Vacuum\label{vac_m}}
In 1930, two years after he proposed his relativistic wave equation, Dirac had to face head-on the inevitable fate of the Dirac Electron foretold by his theory. The intrinsic and inherent spacetime symmetries embodied in the usual mundane CST on which the STR is built, which extend to Dirac's Theory, meant negative energies would exist and these extended downwards with no limit in a sort of mirror image of the positive energy levels. As aforementioned -- in the formulation of his theory, these negative energies are what Dirac had hoped to deracinate. He thought the negative probabilities exhibited by the Klein-Gordon theory are what led to the negative energies, thus he reasoned that eliminating these would, in one fell swoop, eliminate the negative energies -- he was wrong! Much to his own chagrin and that of others, the negative energies reared their head in the new Theory \textit{of} Dirac, thus thwarting any effort to rid ourselves of them.
As is now bona-fide knowledge, what the negative energies of the Dirac Electron really meant is that the usual ground state of, say, the Hydrogen atom is not really the true ground state at all but has beneath it a bottomless pit of negative states. If this were the case, the Electron would have to fall forever toward the bottom of the bottomless pit and in the process endlessly emit energy. This could mean the Electron in the Hydrogen atom must be a source of new matter and energy, as it could create more and more energy as it journeys forever down toward the bottom of the baseless pit. Seen from the other side of the veil, this in actual fact meant that matter should be inherently unstable. By any stretch of the imagination, this is a fact not supported by observations, thus Dirac had no choice but to face this problem head-on. To solve this problem, Dirac had to redefine the vacuum, otherwise nothing of the beautiful equation he had discovered could remain, as it would have been found in serious contempt of physical and natural reality!
At the time Dirac had to redefine the vacuum, it was thought and taken as a self-evident, most logical and rational truth beyond question that the vacuum contained complete nothingness -- thus, this effort by Dirac to give the vacuum physical properties was nothing short of a revolution in thinking. He defined the vacuum to consist of unfilled positive and filled negative energy states of the Electron. According to the Pauli exclusion principle, an Electron would be prevented from making a downward transition if all the negative energy states are occupied, hence the positive energy Electrons were spared their doomed fate -- that of falling into the endless pit of negative energies. The ``picture'' of this vacuum model is shown in figure (\ref{vacm}) (a).
\begin{widetext}
\begin{figure}[h]
\begin{center}
\shadowbox{
$\begin{array}{c c}
\multicolumn{1}{l}{\mbox{}} &
\multicolumn{1}{l}{\mbox{}} \\ [-0.53cm]
\epsfysize=7.0cm
\epsfbox{dirac_vac.ps} &
\epsfysize=7.0cm
\epsfbox{new_vac.ps} \\ [0.4cm]
\mbox{\bf (a)} & \mbox{\bf (b)}
\end{array}$}
\end{center}
\caption{\figtextfont (\textbf{a}) Dirac's model of the vacuum with a sea of negative energy electrons occupying all the negative energy states and (\textbf{b}) the proposed new model of the vacuum with unfilled negative energy states and having a finite energy composed of pairs of tachyons. These pairs of tachyons have opposite electronic charges, thus the vacuum has net zero electronic charge and at the same time its energy is finite. These tachyons are in a constant state of annihilation and creation.}
\label{vacm}
\end{figure}
\end{widetext}
The only problem with the Dirac vacuum (also known as the Dirac sea) is that it must have an infinite negative energy and infinite electronic charge! As to why we do not ``see'' the infinite electronic charge and energy of this vacuum, Dirac proposed that these would have to be invisible and beyond the realm of measurement. Although this model of the vacuum has great predictive powers -- in that the existence of antiparticles is implied, as are some of the predictions of QED that have given it widespread acceptance as one of the best theories we have -- it has some problems. Actually, the vacuum of QED is different from that of Dirac but retains some features of the Dirac vacuum.
The vacuum of QED contains what is known as the zero-point energy, which is a finite intrinsic minimum non-zero energy contained in the vacuum. This energy continuously transforms some of itself into mass, causing the random appearance and disappearance of electronically charged particle-antiparticle pairs, and these are known as virtual particles. The virtual particles cannot be observed in real life, just as in the Dirac vacuum.
We do not object but say -- \textit{yes}, the Dirac vacuum has had much success, \textit{but} despite this -- the truth is that the idea of the Dirac vacuum [\textit{viz}, its \textbf{\textit{infinite}} electronic charge and energy together with the fact that this (charge and energy) must be unmeasurable and invisible] tends not to strike the layman, nor the esoteric, as very elegant. Another problem is: why should the negative energy Electrons be invisible and unmeasurable? As long as this electronic charge is unmeasurable, it is permanently safe as an idea since it cannot be refuted! This serious shortcoming goes against the true and noble spirit of science, since science concerns itself with physical phenomena that can be measured and falsified, and anything beyond the realm of measurement and falsification is beyond the realm of science as well; it is something else, not science.
I will say nothing further, \textit{viz} the Dirac vacuum, lest I appear to be discrediting this vacuum in the hope of replacing it with a new one -- no! I am simply stating the natural mystification one feels, derived from this fact of unmeasurable and invisible Dirac charge and energy. I shall leave the Dirac vacuum here and proceed to give the vacuum model that emerges from the present theory.
In complete harmony and resonance with the idea of the QED vacuum model where we have a zero-point energy, the modification we have made to the Dirac Equation enables us to assign a vacuum energy, that is: $\mathcal{E}_{vac}=\mathcal{E}_{9}=\Lambda_{0}\hbar c$. As already pointed out in the closing part of the last section, the particle with this energy solution is a darkparticle. Unlike the QED vacuum, these darkparticles are not virtual but real. These darkparticles act as a seal to prevent positive energy particles from falling into the negative energy states. To see this, suppose we have an Electron in the energy state: $\mathcal{E}_{j}>\mathcal{E}_{vac}: j=1,2, ..., 8$; if this Electron is to make a downward transition to a negative energy state, it would first have to make a transition to the vacuum energy state $\mathcal{E}_{vac}$. Now here is the catch. For any particle of finite rest-mass to make the transition: $\mathcal{E}_{j}\longrightarrow\mathcal{E}_{vac}:j=1,2, ..., 8$, it would have to travel at the speed of light. As we already know from the STR, a material particle of finite rest-mass, initially moving at sub-luminal speeds, is forbidden from passing the light-speed barrier because this would require an infinite amount of energy! Thus the transition: $\mathcal{E}_{j}\longrightarrow\mathcal{E}_{vac}:j=1,2, ..., 8$, is impossible, hence particles will be forbidden by the light-speed barrier from entering the negative energy states!
A second and much stronger reason is that, if say by sheer chance the Electron manages to get this infinite energy and reaches the light-speed, it would have -- against the Law \textit{of} Conservation of electronic charge -- to lose its electronic charge, because the vacuum state is only occupied by particles whose electronic charge is zero (or zero rest-mass). Accepting the above thesis means that, within the framework of the present ideas, we have solved the problem of why positive energy particles will be forbidden from making a transition to negative energy states and also how the negative energy states of the vacuum can stay permanently empty without any problem whatsoever.
The light-speed barrier, $v<c$, and the conservation of electronic charge save the day, as the dilemma faced by Dirac is no longer present -- thanks to these two Physical Laws. We can now have empty negative energy states and these cannot be occupied, because having just one negative energy state occupied would mean this Electron would have to suffer the same fate as the Dirac Electron and fall forever toward the bottom of the bottomless pit of the negative energy well, in the process creating energy endlessly. With the negative energy states empty and the vacuum filled by the darkparticles -- in my modest opinion, we have the perfect vacuum! The picture of this vacuum model is shown in figure (\ref{vacm}) (b) and this [vacuum] needs no infinite energy and no infinite electronic charge, but simply has the light-speed barrier and the conservation of electronic charge to take care of the troubles that bedeviled Dirac.
\section{\sectionfont T-Symmetry \label{t_s}}
The fact that the Full Cosmological Dirac Equation (\ref{fsol}) is $\rm{T}$-symmetric means that we can flip the positive and negative energy solutions about the vacuum energy: $\mathcal{E}_{vac}=\Lambda_{0}\hbar c$, that is to say, the energy solutions: $\mathcal{E}_{1}, \mathcal{E}_{2}, \mathcal{E}_{3}, ..., \mathcal{E}_{9}, \mathcal{E}_{10},\mathcal{E}_{11},\mathcal{E}_{12}, ..., \mathcal{E}_{17}$, lead -- after $t\longmapsto -t$ or $\mathcal{E}\longmapsto-\mathcal{E}$ -- to the energy solutions: $|\mathcal{E}_{17}|, |\mathcal{E}_{16}|, |\mathcal{E}_{15}|, ..., \mathcal{E}_{9}, \mathcal{E}_{8},-\mathcal{E}_{7},-\mathcal{E}_{6}, ..., -\mathcal{E}_{1}$, where $|\cdot|$ is the usual absolute value operator which gives the absolute value of the quantity inside it. From a symmetry view-point, the latter set of energy solutions is obtained after a flipping of these energies about the vacuum energy.
This means doublets emerging from equation (\ref{fsol}) will have energies: ($\mathcal{E}_{1},|\mathcal{E}_{17}|$), ($\mathcal{E}_{2},|\mathcal{E}_{16}|$), ($\mathcal{E}_{3},|\mathcal{E}_{15}|$), ($\mathcal{E}_{4},|\mathcal{E}_{14}|$), ($\mathcal{E}_{5},|\mathcal{E}_{13}|$), ($\mathcal{E}_{6},|\mathcal{E}_{12}|$), ($\mathcal{E}_{7},|\mathcal{E}_{11}|$) and ($\mathcal{E}_{8},|\mathcal{E}_{10}|$); these can be generalized as ($\mathcal{E}_{j},|\mathcal{E}_{18-j}|$) and for the negative energies, we will have ($-\mathcal{E}_{j},\mathcal{E}_{18-j}$). As has been argued in \S (\ref{vac_m}) above, each of the different Universes, $\mathcal{U}^{+}$ and $\mathcal{U}^{-}$, is filled exclusively with positive and negative energy particles respectively. These particles will belong to the different sections of the two Universes $\mathcal{U}^{+}$ and $\mathcal{U}^{-}$ and this is shown in the fourth block of table (\ref{tspace}).
In each of the regions of spacetime, a particle must have its doublet partner, which does not have the same mass as itself but has all other properties the same. This brings to mind the Electron and Muon, which appear similar in all respects except their mass. We simply want to point out that the present theory contains such interesting information, and that now is not the time to delve fully into this, as it is better done in further readings that expand on the present.
\section{\sectionfont Discussion and Conclusions}
If the reader has gone through this reading up until the present point, I sincerely believe they [the reader] will agree with me if I say that the intent of this reading -- to address the matter/antimatter asymmetry -- has been dwarfed, or pretty much appears much less significant, when compared to what we have actually discovered along the way. The initial intent was a rather modest ambition -- to use the inclusion of the cosmological field in the time operator to explain the apparent asymmetry as to why the Universe seems to be predominantly composed of matter instead of equal portions of matter and antimatter as predicted by the Dirac Equation. Along the way on the voyage, we found that: (1) we had to propose a new model of spacetime that has the potency to explain the quantum mechanical uncertainty and randomness; (2) we had to set forth a new model of the vacuum radically different from that of Dirac; (3) the modified Dirac Equation allows for the existence of a spectrum of eight particles with unique masses and these reside in pairs in the $8$ different segments of space as laid out in section \S (\ref{ctv_s}); (4) the vacuum predicted by the present theory is composed of all-pervading and permeating particles and these particles may explain why there appears to be darkmatter and darkenergy in the Universe.
Further, allow me to say that, given the aforesaid, and if the present theory is anything to go by -- that is, if it corresponds to natural reality -- then the modification made here to the Dirac Equation is so simple and trivial yet very deep. I should say, in my perusal through the literature that I have so far been able to lay my hands on, I have not come across a modification of this kind. Concurrently, I have not come across an approach where the inclusion of a cosmological $4$-vector field in the spacetime operator (as has been done here) is used to probe space and time asymmetries/symmetries. It may so happen that someone has already done this kind of work given its simplicity and trivial nature. If this is the case, I wonder why, for example, it is considered a big mystery that the Universe appears to be made up chiefly of matter and that the best explanation we have of this are the ideas laid down by Andrei Sakharov in $1967$, where Nature must adhere to a strict prescription in order to explain this mystery.
One may ask: if Andrei Sakharov had in $1967$ set conditions to explain the apparent matter/antimatter asymmetry -- why the present work to champion the same endeavor? We feel that these conditions are far too many (four) and that if a simpler solution can be found, then it must be found and set into motion as a contender to prevailing wisdom. On another level, there is a general feeling amongst a good number of researchers that the currently measured $\rm{CP}$\textit{-violation}s fall far too short to explain this apparent matter/antimatter asymmetry (see e.g. Rodger $2001$; Sinha $2009$). Because of the said reasons and others not mentioned here, I felt it was time we sought alternative ideas -- hence the present.
Clearly, the bedrock or the very foundations of our new theory rest entirely on the modification we have made to the CST, which has been to transform it into a new spacetime endowed with a $4$-vector cosmological field, and we have called this new spacetime -- the QST. This $4$-vector cosmological field has the property of pure randomness (unpredictability). In this QST, it is not possible to know exactly where a particular point from the CST will be located upon a transformation to the QST as this attribute is random -- thanks to the function $\delta_{\mu}(t)$. The QST clearly defies the \textit{sacrosanct} $\rm{CPT}$ theorem of L\"uders, Bell and Pauli, that every Lorentz invariant theory must observe $\rm{CPT}$-\textit{symmetry}. Clearly and without any doubt, the derived Cosmological Dirac equation is Lorentz invariant yet it violates this sacrosanct $\rm{CPT}$ theorem. Not only is this equation in violation of $\rm{CPT}$, it is in violation of all the symmetries with the exception of the $\rm{T}$-\textit{symmetry}.
The new vacuum model set forth in this reading is radically different from that set forth by Dirac in that a particle's negative energy levels -- just like the positive energy levels, need not be filled. The vacuum has an energy level $\mathcal{E}=\Lambda_{0}\hbar c$ (consisting of electrically neutral darkparticles that travel at the speed of light) and this energy level acts as a barrier that can prevent positive energy particles from falling into the negative energy well and vice-versa. If a positive energy particle were to make a transition to a negative energy state, it would need to first enter this darkparticle state, meaning to say it must at some point travel at the speed of light. From the STR, we know that particles traveling at sub-luminal speeds would need an infinite amount of energy to be propelled to the light-speed -- the meaning of which is that it must be impossible for a particle to be propelled to light-speed, hence, in this way, the light-speed barrier plays a very important role in stabilizing the vacuum.
We also saw that these darkparticles can actually carry some electrical charge, but for this to be so, they would have to travel at superluminal speeds. It may well be possible that these charged darkparticles are real. For no other reason than that we wanted to keep matters as simple as possible, we chose to discard these, and this may not be the correct thing to do.
Furthermore, we saw that, excluding the vacuum particles, the modified Dirac Equation predicts a total of $16$ energy levels, which invariably means $16$ particles, with $8$ of these energy levels being positive while the other energy levels are negative. But given that the vacuum set forth here separates the positive and negative energy particles, one can only talk of $8$ particles and these occupy the eight different regions of space. If one combined the present ideas with the three curved spacetime equations derived in the reading Nyambuya ($2008a$) -- in my view, this positions us on a pedestal to perhaps explain why fundamental particles have varying masses and why the generations of leptons come in pairs.
Insofar as symmetry is concerned, we have in the present reading destroyed the beautiful Dirac-Symmetries and amongst these, at the top of the list, is the cornerstone and sacrosanct $\rm{CPT}$-\textit{symmetry}. Dirac-Symmetries have inspired physicists to seek highly symmetric theories and amongst these is the Supersymmetry Theory known as SUSY, whose endeavor is to unite QM with the GTR. In light of what we have presented here, if it is correct, then clearly highly symmetric theories may not be the desired thing for a final unified theory. This is just my opinion and obviously this may change as more understanding of reality and nature comes to light.
\textit{\textbf{In closing}}, allow me to say that writing this reading and my other readings Nyambuya ($2007,\, 2008a,b$) has been such a pain because in all these readings I find myself having to introduce new ideas, terms and concepts, and this by any measure is no easy task -- because of this, I ask the reader to bear with me.
{\tabletextfont \underline{\tabletextfont \textbf{Acknowledgments:}} This work was completed under the kind hospitality of my brother -- George, and his wife -- Sarmatha. I am grateful for this and as well to Donald Ngobeni for his support during the drafting of this manuscript. I dedicate this reading to the Pine o\textit{f} Lilyrose.}
\newpage
\section{Architecture and Implementation Details}
\subsection{Model architecture.}
We implement the convolutional encoder and decoder architectures following~\cite{liu2021fuseformer, li2022towards}, where the channel dimension $C$ is set to $128$ for our standard model and $64$ for the small version. The optical flow completion network $\mathcal{F}$ is implemented with MaskFlowNetS~\cite{zhao2020maskflownet}, and we initialize it with weights pretrained on the FlyingChairs~\cite{dosovitskiy2015flownet} and FlyingThings~\cite{mayer2016large} flow datasets to take advantage of the rich prior knowledge of optical flow. For FlowLens-s, we adopt the pretrained MaskFlowNetS to supervise SpyNet~\cite{ranjan2017optical} for speed considerations. The clip-recurrent transformer includes a mix focal transformer block embedded with a clip-recurrent hub, plus $8$ other transformer blocks for FlowLens and $4$ blocks for FlowLens-s. The head number $d$ of multi-head focal attention is set to $4$. The hidden
dimension $C_{e}$ of the transformer is set to $512$, and $256$ for the small model. The input features of the transformer are split into $7{\times}7$ overlapping patches with $3{\times}3$ strides. The architecture of the T-PatchGAN is the same as~\cite{chang2019free,zeng2020learning,liu2021fuseformer,li2022towards}.
Fig.~\ref{fig:FLOPs} shows that the proposed Clip-Recurrent Hub only requires negligible cost.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{imgs/FLOPs.pdf}
\caption{Computational cost of the proposed Clip-Recurrent Hub.}
\vspace{-1.0em}
\label{fig:FLOPs}
\end{figure}
\subsection{Training details.}
For the video inpainting task, we set the batch size to $8$ and the learning rate to $1{\times}10^{-4}$ to train for $500k$ iterations following~\cite{li2022towards}, and the training image size is $432{\times}240$. The loss weight $\lambda_{rec}$ is set to $1$, and $\lambda_{adv}$ is set to $10^{-2}$. For beyond-FoV estimation, we use a single NVIDIA RTX3090 with a batch size of $2$ and a learning rate of $2.5 {\times} 10^{-5}$ to train all video inpainting models~\cite{zeng2020learning,liu2021fuseformer,li2022towards} and FlowLens for $500k$ iterations. The training pinhole image size is $432{\times}240$ and the size of the spherical image is $336{\times}336$. During training, the number of local frames $T_{lf}$ is $5$, and the number of past reference frames $T_{pf}$ is $3$. For FlowLens, we load the training videos in a serialized manner and empty the clip buffer when switching to a new video to avoid interference from irrelevant clips. Ablations are performed on the spherical inward expansion with a batch size of $2$ for $250k$ iterations.
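To make the serialized loading and clip-buffer handling concrete, a minimal PyTorch-style sketch of the training loop is given below; the loader fields, the \texttt{clear\_cache} method, and the use of a plain L1 objective are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
# Sketch of serialized training: batches arrive ordered per video, and
# the cached clip keys/values are emptied whenever a new video starts,
# so the clip-recurrent hub never attends to an unrelated sequence.
import torch

def train_one_epoch(model, video_loader, optimizer):
    prev_video_id = None
    for batch in video_loader:                  # batches ordered per video
        if batch["video_id"] != prev_video_id:
            model.clear_cache()                 # empty cached clip keys/values
            prev_video_id = batch["video_id"]
        frames, masks, gt = batch["frames"], batch["masks"], batch["gt"]
        pred = model(frames, masks)             # current clip; past clip via cache
        loss = torch.nn.functional.l1_loss(pred, gt)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
\end{verbatim}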
\subsection{Evaluation details.}
Following~\cite{li2022towards}, we test our models at a resolution of $432{\times}240$ on YouTube-VOS~\cite{xu2018youtube} and DAVIS~\cite{perazzi2016benchmark} with the same test masks. For video inpainting, we adopt the same evaluation pipeline as the previous work~\cite{li2022towards}, which uses a sliding window of size $10$ and samples the reference frames from the entire video with a stride of $10$. For beyond-FoV estimation, we consider that only past reference frames can be used, and the sliding window size is set to $5$ with $3$ past reference frames. Note that the sliding window ends at $t+5$ for video inpainting but at $t$ for the expansion at time $t$, since future information cannot be accessed in this track (see Fig.~\ref{fig:sup_test_logic}). The FLOPs are computed using input images of $432{\times}240$ with a temporal length of $8$. The speed test is performed on a single RTX3090.
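As a rough sketch of this evaluation logic (see also Fig.~\ref{fig:sup_test_logic}), the index generation for beyond-FoV estimation at time $t$ can be written as below; the past-reference stride used here is an illustrative assumption, as only the window size and the number of past references are fixed above.
\begin{verbatim}
# Sketch of beyond-FoV evaluation sampling: the non-overlapping window
# at time t ends at t (never t+5), and references are drawn only from
# the past. The reference stride below is an assumed value.
def beyond_fov_indices(t, num_local=5, num_past_ref=3, ref_stride=10):
    local = list(range(max(0, t - num_local + 1), t + 1))
    past_ref = [max(0, t - (i + 1) * ref_stride) for i in range(num_past_ref)]
    return local, sorted(set(past_ref))

# Example: t = 40 -> local frames [36..40], past references [10, 20, 30].
print(beyond_fov_indices(40))
\end{verbatim}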
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{imgs/sliding-window.pdf}
\caption{Comparison of the evaluation logic of video inpainting and beyond-FoV estimation. (a) Video inpainting is regarded as a video editing technique and adopts overlapping sliding windows to obtain latent output. (b) Future frames cannot be accessed in beyond-FoV estimation, thus the sliding windows are non-overlapping and the output is online.}
\label{fig:sup_test_logic}
\end{figure*}
\section{More Qualitative Results}
We visualize more results of LaMa~\cite{suvorov2022resolution}, FuseFormer~\cite{liu2021fuseformer}, E2FGVI~\cite{li2022towards}, and FlowLens on KITTI360-EX to further demonstrate the effectiveness of our method. More results including object removal and video completion on the public DAVIS and YouTube-VOS are also presented, and we compare FlowLens against the VI-Trans STTN~\cite{zeng2020learning} instead of LaMa on this benchmark.
As shown in the figures, FlowLens produces a more spatio-temporal coherent and visually plausible output than existing methods.
\begin{figure}[!t]
\centering
\includegraphics[width=1\columnwidth]{imgs/user_study_v2.pdf}
\caption{User study results. The vertical axis indicates the percentage of first-place rankings among ${28}$ viewers of ${40}$ videos on the (a) video inpainting and (b) beyond-FoV estimation tasks.}
\vspace{-1.0em}
\label{fig:sup_user_study}
\end{figure}
\section{User Study}
For a comprehensive comparison, a user study was conducted with top-performing methods~\cite{zeng2020learning,liu2021fuseformer,li2022towards}.
To be specific, we randomly sample $20$ videos from DAVIS~\cite{perazzi2016benchmark} to evaluate video inpainting ($10$ for video completion, and $10$ for object removal). $20$ videos are also randomly sampled from KITTI360-EX for evaluating beyond-FoV estimation ($10$ for outward estimation, and $10$ for inner estimation). $28$ volunteers are invited to participate in the survey. Each volunteer simultaneously watches the outputs of the $4$ methods, as well as the original video input. Each volunteer conducts a total of $40$ trials.
For a fair comparison, there is no time limit for the trials, and each video can be paused and replayed at any time. The statistics are shown in Fig.~\ref{fig:sup_user_study}: our method clearly performs better on these two types of tasks, demonstrating that the proposed method can produce more visually pleasing results. Interestingly, we find that STTN outperforms FuseFormer on the beyond-FoV task, suggesting an essential difference in the model requirements of the two tasks. We consider that the Clip-Recurrent Transformer of FlowLens is able to mine potential visual cues from past iterations, thus gaining advantages on both tasks at the same time.
\section{Discussion}
\subsection{Limitations}
The outward expansion results for pinhole cameras are not as satisfactory as the inward expansion for spherical cameras at the same FoV expansion rate. This shows that outward estimation beyond the FoV remains a challenging task, considering that it has only unidirectional constraints, a larger practical area to be filled brought by the $f{-}{tan}\theta$ camera model, and a larger-displacement motion field compared with the inner situation.
Another limitation lies in the time interval between the sliding windows of the clip. However, it is a common issue of existing multi-to-multi video inpainting models~\cite{zeng2020learning,liu2021fuseformer,li2022towards}, and our approach is carefully designed to explore the camera's past visual cues over time, which gives it an advantage in the case of one-shot completion and produces results with better spatio-temporal consistency at faster speeds.
We consider two possible solutions to alleviate this problem: 1) keeping the multi-to-multi model, but narrowing the time window and only collecting results from the last frame to improve the real-time performance, which will obviously damage the accuracy at the same time; 2) adopting an image-processing model with a large memory cache to further improve the speed, but relying solely on the memory may reduce the fineness of local details beyond the FoV. In any case, we are willing to further explore beyond-FoV estimation in the future, as well as study the feasibility and practical value of beyond-FoV in 3D vision (\eg point clouds of LiDAR and event cameras), making algorithms and information-collecting sensors more complementary to each other.
\subsection{Potential negative impact}
In this paper, we propose FlowLens, which aims to see the world outside the physical FoV of information-collecting sensors. However, if this technique is used by people with ulterior motives for military purposes, such as battlefield perception systems on armored vehicles, terminal visual guidance systems for missiles, \etc, it may exacerbate military conflicts in disputed areas and affect world peace and stability. In the meantime, the results of expanding the FoV outward are not yet perfect, which may become a safety hazard in applications such as autonomous driving and mobile robots. In the future, we will continue to explore beyond-FoV estimation to further improve the reliability and robustness of the algorithm.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{imgs/quality-supp-kexo-fov10-v3.pdf}
\caption{More visualization results compared with LaMa~\cite{suvorov2022resolution}, FuseFormer~\cite{liu2021fuseformer}, and E2FGVI~\cite{li2022towards} on \textsc{KITTI360-EX} outer pinhole camera Beyond-FoV estimation.}
\label{fig:sup_qualitative_kexo_fov10}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{imgs/quality-supp-kexi-fov20-v3.pdf}
\caption{More visualization results compared with LaMa~\cite{suvorov2022resolution}, FuseFormer~\cite{liu2021fuseformer}, and E2FGVI~\cite{li2022towards} on \textsc{KITTI360-EX} inner spherical camera Beyond-FoV estimation.}
\label{fig:sup_qualitative_kexi_fov20}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{imgs/supp_davis_or_compare.pdf}
\caption{Visualization results compared with STTN~\cite{zeng2020learning}, FuseFormer~\cite{liu2021fuseformer}, and E2FGVI~\cite{li2022towards} on DAVIS~\cite{perazzi2016benchmark} object removal.}
\label{fig:sup_qualitative_davis_or}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{imgs/supp_davis_vi_compare_v2.pdf}
\caption{More visualization results compared with STTN~\cite{zeng2020learning}, FuseFormer~\cite{liu2021fuseformer}, and E2FGVI~\cite{li2022towards} on DAVIS~\cite{perazzi2016benchmark} video inpainting.}
\label{fig:sup_qualitative_davis}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{imgs/supp_youtube_vi_compare.pdf}
\caption{More visualization results compared with STTN~\cite{zeng2020learning}, FuseFormer~\cite{liu2021fuseformer}, and E2FGVI~\cite{li2022towards} on YouTube-VOS~\cite{xu2018youtube} video inpainting.}
\label{fig:sup_qualitative_youtube}
\end{figure*}
\section{Introduction}
\label{sec:intro}
Vision sensors, such as pinhole cameras and spherical cameras, are widely used for visual information acquisition. Due to hardware size and cost constraints, the physical Field-of-View (FoV) of cameras is not always satisfactory.
However, from a spatio-temporal perspective, features beyond the current FoV are actually readily available in the past information stream.
Therefore, we raise an appealing question: \textbf{Can we go beyond the limits of optics and see the world beyond the camera's FoV?}
To address this question, we propose a novel task termed \emph{Beyond-FoV Estimation}, aiming to empower camera sensors to capture visual cues beyond the physical limitation of optical systems.
To this end, we propose a propagation-based approach for extending camera's FoV.
The core intuition is that optical flow as a motion vector field can explicitly guide the propagation of temporal features, while vision transformers~\cite{dosovitskiy2020image} are able to align features implicitly with the help of self-attention mechanisms~\cite{shi2022rethinking}.
To implement this idea, we build \emph{FlowLens}, a novel \emph{Flow-guided Clip-Recurrent Transformer}, to see beyond the FoV.
As shown in Fig.~\ref{fig:teaser}(a), FlowLens can leverage the past and current clips by clip-recurrent propagation,
which works like a set of virtual lenses to recurrently focus a video with a limited FoV into a new video with a complete view.
\newpage
As a video editing technique, video inpainting~\cite{kim2019deep,xu2019deep} aims to fill in the missing part of the video in an offline mode.
Compared with the recent video inpainting transformer (VI-Trans)~\cite{liu2021fuseformer,li2022towards}, the main differences of FlowLens are three-fold (see Fig.~\ref{fig:teaser}(b-c)):
(1)~\textbf{\emph{Online Output.}} VI-Trans relies on future reference frame inputs and its outputs are offline, whereas the output of FlowLens is immediate and online, which serves as a critical prerequisite for real-world applications like autonomous driving.
(2)~\textbf{\emph{Past Reference Sampling.}} VI-Trans samples reference frames from the entire video, but when extending FoV we can only sample from past streams and cannot access future information.
(3)~\textbf{\emph{Clip-Recurrent Propagation.}} FlowLens can propagate the past clip features to the current iteration to fully exploit the potential of past reference frames.
Specifically, FlowLens runs on the current clip, but the ``queries'' additionally attend to the ``keys'' and ``values'' encoded and cached in the previous clip, thus recurrently passing valuable encoded features from the past.
To this end, we introduce a \emph{Clip-Recurrent Hub},
and propose a novel \emph{3D-Decoupled Cross Attention (DDCA)}
to implement the query operation.
By decoupling spatio-temporal features in the 3D space,
our model can efficiently propagate relevant clip features for extending the FoV.
To further enhance the ability of FlowLens to extract multi-scale local information, we introduce a new \emph{Mix Fusion Feed Forward Network (MixF3N)} by splitting two depth-wise convolution branches with different kernel sizes in the feed forward network.
Furthermore, to facilitate training and evaluation, we establish the \emph{KITTI360-EX} dataset, which contains $76k$ frames of pinhole images and spherical images, as well as bidirectional FoV expansion masks. For pinhole cameras, the limitation
can be found on the outer boundaries of the image plane, while for spherical cameras with large FoV, this limitation usually appears as a loss of central FoV (see Fig.~\ref{fig:teaser}(a)) introduced by reflective or catadioptric optics~\cite{gao2022review,zhang2020design,niu2007design}.
We benchmark published representative image inpainting~\cite{wang2019wide,suvorov2022resolution,liu2022reduce}
and video inpainting~\cite{zeng2020learning,liu2021fuseformer,li2022towards} models on KITTI360-EX. Experimental results on both Video Inpainting and Beyond-FoV Estimation reveal that FlowLens achieves state-of-the-art performance.
In summary, we deliver the following contributions:
\begin{compactitem}
\item[(1)]
We propose \emph{FlowLens}, a novel flow-guided clip-recurrent transformer framework for bidirectional expansion of the FoV.
\item[(2)]
The newly introduced \emph{3D-Decoupled Cross Attention (DDCA)} and \emph{Mix Fusion Feed Forward Network (MixF3N)} are seamlessly integrated into the FlowLens architecture, which further enhances the performance.
\item[(3)] We raise a new Beyond-FoV estimation task to encourage exploiting the past spatio-temporal stream for FoV expansion. We establish \emph{KITTI360-EX} and benchmark existing models on the new track.
\item[(4)] Extensive experiments demonstrate that the proposed FlowLens outperforms state-of-the-art video inpainting approaches on both video inpainting and Beyond-FoV estimation tasks.
\end{compactitem}
We hope FlowLens can serve as a powerful baseline for the Beyond-FoV Estimation task and arouse community's interest in surpassing the limits of the camera's optical FoV.
\begin{figure*}[!t]
\centering
\includegraphics[width=1.0\linewidth]{imgs/overview-v4.pdf}
\vspace{-2em}
\caption{\emph{Illustrations of our proposed FlowLens.} (a) An overview. From left to right, it consists of 1) a convolution stem to extract shallow features, 2) an explicit flow-guided feature propagation module, 3) a clip-recurrent transformer to implicitly propagate features and fuse the past information stream, 4) output convolution layers to restore the completed frames. (b)-(c) Our proposed Mix Focal Transformer Block and Mix Fusion Feed Forward Network (MixF3N).}
\label{fig:overview}
\vspace{-2.0em}
\end{figure*}
\section{Related Work}
\input{tex_content/related_work}
\section{Methodology}
\noindent\textbf{Preliminary.}
We are given a FoV-limited video sequence $\mathbf{X}^t=\{X^t \in \mathbb{R}^{H \times W \times 3} |t=1...T \}$
and a corresponding binary mask $M \in \mathbb{R}^{H \times W \times 1}$ representing the missing FoV, whose values are either $0$, denoting the original image plane, or $1$, referring to the regions that require filling.
Note that $M$ can be a sequence of different FoVs during training, but it stays the same during testing considering the actual limitation of FoV-limited cameras.
The goal of our FoV expansion task is to propagate plausible and spatio-temporally coherent content from $\mathbf{X}^t$ to the complete frames $\mathbf{\hat{Y}}^t=\{\hat{Y}^t \in \mathbb{R}^{H \times W \times 3} |t=1...T \}$ with a larger FoV.
\noindent\textbf{Overview.}
Fig.~\ref{fig:overview} shows the entire pipeline of the proposed \emph{FlowLens} for FoV expansion.
We first use a convolutional stem
to encode the input Local Frames (LF) and Past Reference Frames (PRF) sampled from $\mathbf{X}^t$.
Then, the LF features are fed into the \emph{Explicit Flow-guided Feature Propagation} module (Sec.~\ref{sec:explicit_propagation}) to complete the feature under the motion prior.
Next, the \emph{Clip-Recurrent Transformer} (Sec.~\ref{sec:clip_recurrent_transformer}), with our proposed \emph{Mix Fusion Feed Forward Network (MixF3N)}, queries and implicitly aligns spatio-temporally related features in the PRF with the LF, and retrieves the coherence values from the last iteration via the \emph{Clip-Recurrent Hub} that is equipped with our \emph{3D-Decoupled Cross Attention (DDCA)}.
Finally, we adopt the output convolutional layers to up-sample the complete features back to the original scale and reconstruct the FoV-expanded sequence $\mathbf{\hat{Y}}^t$.
\subsection{Explicit Propagation with Optical Flow}
\label{sec:explicit_propagation}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth]{imgs/histogram-v2.pdf}
\vspace{-1em}
\caption{\emph{The histogram of the movement for the KITTI360-EX.} The distribution of pixel displacement exhibits a long-tailed feature and contains large motion, especially for the pinhole camera.}
\label{fig:kitti_flow}
\vspace{-1.5em}
\end{figure}
Our proposed \emph{Explicit Flow-guided Feature Propagation} module consists of three stages (see Fig.~\ref{fig:overview}(a)): optical flow completion, warping layer, and deformable compensation.
As previously shown in~\cite{xu2019deep}, it is simpler to complete the corrupted optical flow than directly estimating the missing pixels.
Therefore, we first estimate the beyond-FoV optical flow for two adjacent FoV-limited frames $X^i$ and $X^j$ by a flow network $\mathcal{F}$:
\vspace{-0.5em}
\begin{equation}
\label{equ:flow_complete}
\begin{aligned}
\hat{\mathbf{V}}_{i \rightarrow j}=\mathcal{F}(d_{4}(X^{i}),d_{4}(X^{j})),
\end{aligned}
\vspace{-0.5em}
\end{equation}
where $d_4(\cdot)$ denotes downsampling by $\frac{1}{4} \times$.
As shown in Fig.~\ref{fig:kitti_flow}, the distribution of pixel movement on KITTI360-EX exhibits a long tail and contains large displacements, especially for the pinhole camera's outward estimation.
\newpage
\noindent According to the observed motion distribution, we consider that an accurate flow prior is important for feature propagation, and thus we propose to incorporate a strong flow completion network that generates a $6$-level shared feature pyramid
and makes predictions from level $6$ to $2$
in a coarse-to-fine fashion to output a high-quality flow field.
Next, we exploit
the motion prior to warp the input local frames in the feature space to mitigate the inaccurate flow estimation and
frame-level occlusion.
Specifically, given the feature maps $f_{i}, f_{j} \in \mathbb{R}^{\frac{H}{4} \times \frac{W}{4} \times C}$ of local frames, we have:
\vspace{-0.5em}
\begin{equation}
\label{equ:flow_warp}
\begin{aligned}
\tilde{f}_{i}(\mathbf{x})=f_{j}(\mathbf{x}+\hat{\mathbf{V}}_{i \rightarrow j}(\mathbf{x})),
\end{aligned}
\vspace{-0.5em}
\end{equation}
where $\tilde{f}_{i}$ is the first-order propagation feature and $\mathbf{x}$ is the pixel index.
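A minimal PyTorch sketch of this backward warping step is given below; it illustrates Eq.~(\ref{equ:flow_warp}) with \texttt{grid\_sample} and assumes the flow is stored in $(\Delta x, \Delta y)$ channel order, which is a convention choice rather than a detail fixed by the text.
\begin{verbatim}
# Sketch of flow-guided feature warping: sample f_j at the positions
# displaced by the completed flow V_{i->j}. Tensors follow (N, C, H, W).
import torch
import torch.nn.functional as F

def flow_warp(feat_j, flow_i2j):
    # feat_j:   (N, C, H, W) features of frame j
    # flow_i2j: (N, 2, H, W) flow from frame i to j, in pixels, (dx, dy)
    n, _, h, w = feat_j.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(feat_j.device)  # (2, H, W)
    coords = base.unsqueeze(0) + flow_i2j                          # x + V(x)
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0            # to [-1, 1]
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)               # (N, H, W, 2)
    return F.grid_sample(feat_j, grid, align_corners=True)
\end{verbatim}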
In explicit feature propagation, an important challenge lies in aligning multiple frames. Advanced video processing models, such as state-of-the-art video super-resolution transformers~\cite{lin2021fdan,liang2022vrt,chan2022basicvsr++} and frame interpolation networks~\cite{lee2020adacof,ding2021cdfi}, are generally equipped with well-designed alignment modules combining optical flow and deformable convolution networks (DCN). FlowLens is no exception: by introducing a DCN
after the warping layer to sample from diverse spatial locations, the error accumulation caused by flow-guided feature propagation can be further compensated.
To be concrete, we compute the offset $\mathbf{o}_{i \rightarrow j}$ and the modulation
weight $m_{i \rightarrow j}$ of modulated deformable convolution~\cite{zhu2019deformable} based on the flow prediction:
\vspace{-0.5em}
\begin{equation}
\label{equ:dcn_compensate_1}
\begin{aligned}
& \mathbf{o}_{i \rightarrow j}=\hat{\mathbf{V}}_{i \rightarrow j} + r_{max} \cdot \tanh(\mathcal{C}_{off}(\tilde{f}_{i}, f_{i}, \hat{\mathbf{V}}_{i \rightarrow j})),\\
& m_{i \rightarrow j}=\sigma(\mathcal{C}_{mod}(\tilde{f}_{i}, f_{i}, \hat{\mathbf{V}}_{i \rightarrow j})),
\end{aligned}
\vspace{-0.5em}
\end{equation}
where $\mathcal{C}_{off}$ and $\mathcal{C}_{mod}$ are sets of convolutional layers and $\sigma$ represents the Sigmoid activation function.
We introduce $r_{max}$ as the max compensate residue magnification, which is set to $10$ in all experiments.
The DCN is then applied as:
\vspace{-0.5em}
\begin{equation}
\label{equ:dcn_compensate_2}
\begin{aligned}
\hat{f}_{i} = \mathcal{C}_{prop}(f_{i}, \mathcal{D}(f_{j} | \mathbf{o}_{i \rightarrow j}, m_{i \rightarrow j})),
\end{aligned}
\vspace{-0.5em}
\end{equation}
where $\hat{f}_{i}$ is the second-order propagation feature, $\mathcal{C}_{prop}$ is several stacked convolutional layers, and $\mathcal{D}$ denotes the DCN.
The above propagation is performed between adjacent local frames in a bidirectional manner.
We finally apply $\mathcal{C}_{fuse}$ as a $1 {\times} 1$ convolutional layer to adaptively fuse the forward- and backward propagation feature $\hat{f}^{f}_{i},\hat{f}^{b}_{i}$:
\vspace{-0.5em}
\begin{equation}
\label{equ:dcn_compensate_3}
\begin{aligned}
\hat{F}_{i} = \mathcal{C}_{fuse}(\hat{f}^{f}_{i}, \hat{f}^{b}_{i}).
\end{aligned}
\end{equation}
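For illustration, one possible PyTorch sketch of this flow-guided deformable compensation is given below, based on \texttt{torchvision}'s modulated deformable convolution. The single $3{\times}3$ kernel, the layer shapes, and the offset channel ordering are simplifying assumptions, not the exact design.
\begin{verbatim}
# Sketch of flow-guided deformable compensation: the offset is the
# completed flow plus a bounded learned residual, and the modulation
# mask comes from a small conv head; a single 3x3 kernel is assumed.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class FlowGuidedDCN(nn.Module):
    def __init__(self, channels, r_max=10.0, k=3):
        super().__init__()
        self.r_max, self.k = r_max, k
        self.conv_off = nn.Conv2d(2 * channels + 2, 2 * k * k, 3, padding=1)
        self.conv_mod = nn.Conv2d(2 * channels + 2, k * k, 3, padding=1)
        self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)

    def forward(self, f_i, f_i_warped, f_j, flow_i2j):
        x = torch.cat([f_i_warped, f_i, flow_i2j], dim=1)
        res = self.r_max * torch.tanh(self.conv_off(x))   # bounded residual
        # repeat the flow for every kernel point; (dy, dx) ordering is
        # assumed to match torchvision's offset convention
        offset = res + flow_i2j.flip(1).repeat(1, self.k * self.k, 1, 1)
        mask = torch.sigmoid(self.conv_mod(x))            # modulation weights
        return deform_conv2d(f_j, offset, self.weight, padding=1, mask=mask)
\end{verbatim}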
\subsection{Clip-Recurrent Transformer}
\label{sec:clip_recurrent_transformer}
For beyond-FoV estimation, it is not sufficient to rely only on the current clip. Since future frames are not available, we must consider how to further explore past information streams. To this end, we propose a novel \emph{Clip-Recurrent Transformer} framework, with a \emph{Clip-Recurrent Hub} enhanced transformer architecture, and use \emph{3D-Decoupled Cross Attention (DDCA)} to query spatio-temporally coherent features from the previous iteration. Another challenge comes from local fine-grained detail.
As previously shown in MiT~\cite{xie2021segformer}, convolutions can introduce local information in a direct way while maintaining the performance.
Therefore, we introduce the \emph{Mix Fusion Feed Forward Network (MixF3N)} to further facilitate the flow of local features between soft split tokens in a multi-scale manner for performing the FoV expansion.
As shown in Fig.~\ref{fig:overview}(a), the Clip-Recurrent Transformer consists of a Clip-Recurrent Hub and $N$ Mix Focal Transformer blocks.
The Mix Focal Transformer blocks are the same as the Temporal Focal Transformer~\cite{yang2021focal,li2022towards} blocks except that the fusion feed forward network is replaced with our proposed MixF3N.
Suppose $\hat{F}_{lf} \in \mathbb{R}^{T_{lf} \times \frac{H}{4} \times \frac{W}{4} \times C}$ and $F_{pf} \in \mathbb{R}^{T_{pf} \times \frac{H}{4} \times \frac{W}{4} \times C}$ are the encoded local and past reference frames, respectively; we use soft split~\cite{liu2021fuseformer} to embed them into overlapped patches $X \in \mathbb{R}^{(T_{lf}+T_{pf}) \times W_{h} \times W_{w} \times C_{e}}$:
\vspace{-0.5em}
\begin{equation}
\label{equ:soft_split}
\begin{aligned}
X = {\rm SS}(\hat{F}_{lf} \oplus F_{pf}),
\end{aligned}
\vspace{-0.5em}
\end{equation}
where ${\rm SS}(\cdot)$ denotes the soft split operation. $T_{lf}$ and $T_{pf}$ are the time dimension of LF and PRF, and $W_{h}$ and $W_{w}$ are the spatial dimension of embedded tokens. $\oplus$ denotes the feature concatenation.
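For intuition, the soft split in Eq.~(\ref{equ:soft_split}) can be sketched with \texttt{nn.Unfold}, using the $7{\times}7$ patches and $3{\times}3$ strides mentioned in the implementation details; the padding and the linear embedding below are illustrative assumptions.
\begin{verbatim}
# Sketch of soft split: overlapping 7x7 patches with stride 3 are
# unfolded and linearly embedded into tokens.
import torch
import torch.nn as nn

class SoftSplit(nn.Module):
    def __init__(self, channels=128, hidden=512, kernel=7, stride=3, pad=3):
        super().__init__()
        self.unfold = nn.Unfold(kernel_size=kernel, stride=stride, padding=pad)
        self.embed = nn.Linear(channels * kernel * kernel, hidden)

    def forward(self, feat):
        # feat: (B*T, C, H/4, W/4) -> tokens: (B*T, Wh*Ww, hidden)
        patches = self.unfold(feat)             # (B*T, C*k*k, Wh*Ww)
        return self.embed(patches.transpose(1, 2))

tokens = SoftSplit()(torch.randn(2, 128, 60, 108))   # 240/4 x 432/4 features
print(tokens.shape)
\end{verbatim}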
Then, $X$ is linearly projected to queries $Q$, keys $K$, and values $V$ for computing the focal attention and MixF3N to obtain the output tensor $Z \in \mathbb{R}^{(T_{lf}+T_{pf}) \times W_{h} \times W_{w} \times C_{out}}$:
\vspace{-0.5em}
\begin{equation}
\label{equ:block}
\begin{aligned}
& Q, K, V = \mathcal{P}_{qkv}({\rm LN_{1}}(X)), \\
& Z' = {\rm MHFA}(Q, K, V) + X, \\
& Z = {\rm MixF3N}({\rm LN_{2}}(Z')) + Z',
\end{aligned}
\vspace{-0.5em}
\end{equation}
where ${\rm LN}$ and ${\rm MHFA}$ denote the layer normalization and multi-head focal attention, respectively, $\mathcal{P}_{qkv}$ is the linear projection layer, and the key difference of our Mix Focal Transformer compared with the previous VI-Trans~\cite{li2022towards} lies in the newly-designed Mix Fusion Feed Forward Network (MixF3N). We omit the time dimension for simplicity.
\newpage
\noindent\textbf{Clip-Recurrent Hub.}
To realize the progressive propagation of spatio-temporally correlated features along the iterations, we implement the Clip-Recurrent Hub as a crucial transit center for information exchange.
Algorithm~\ref{alg:recurrent} presents the pseudo code for our design in a PyTorch-like style.
Concretely, we first cache the keys and values $\mathbf{K}^{t}_{i},\mathbf{V}^{t}_{i}=\{K^{t}_{i},V^{t}_{i} \in \mathbb{R}^{W_{h} \times W_{w} \times C_{e}} |t \in T_{lf} \cup T_{pf} \}$ of the $i$-th iteration:
\vspace{-0.5em}
\begin{equation}
\label{equ:cache}
\begin{aligned}
\bar{\mathbf{K}}^{t}_{i}, \bar{\mathbf{V}}^{t}_{i} := {\rm SG}(\mathbf{K}^{t}_{i}, \mathbf{V}^{t}_{i}),
\end{aligned}
\vspace{-0.5em}
\end{equation}
where ${\rm SG}(\cdot)$ is the stop gradient operator to avoid past backward propagation. $\bar{\mathbf{K}}^{t}_{i}$ and $\bar{\mathbf{V}}^{t}_{i}$ denote the cached clip keys and values.
Whenever the camera is initiated (\ie $i=0$), $\bar{\mathbf{K}}^{t}_{0}$ and $\bar{\mathbf{V}}^{t}_{0}$ are initialized to the same values as $\mathbf{K}^{t}_{1}$ and $\mathbf{V}^{t}_{1}$, and they are iteratively updated at further time stamps $i>0$.
Then, we introduce the 3D-Decoupled Cross Attention to query from the clip buffer at the next iteration:
\vspace{-0.5em}
\begin{equation}
\label{equ:clip_recurrent_hub}
\begin{aligned}
& \mathbf{\bar{Z}}'_{i+1} = {\rm DDCA}(\mathbf{Q}_{i+1}, \mathcal{P}_{kv}(\bar{\mathbf{K}}_{i}, \bar{\mathbf{V}}_{i})), \\
& \mathbf{\hat{Z}}'_{i+1} = \mathbf{Z}'_{i+1} + \mathcal{P}_{fuse}(\mathbf{\bar{Z}}'_{i+1} \oplus \mathbf{Z}'_{i+1}),
\end{aligned}
\vspace{-0.5em}
\end{equation}
where ${\rm DDCA}$ denotes our proposed 3D-Decoupled Cross Attention, which will be described in detail below.
$\mathcal{P}_{kv}$ and $\mathcal{P}_{fuse}$ are linear projections for cache updating and token fusion, respectively.
$\mathbf{\bar{Z}}'_{i+1}$ is the spatio-temporal coherent feature queried from the previous clip, whereas $\mathbf{\hat{Z}}'_{i+1}$ is the clip-recurrent enhanced feature of the $(i+1)$-th iteration.
The Clip-Recurrent Hub constitutes a core design of FlowLens.
Given an arbitrary frame with a missing FoV, FlowLens can either exploit the global information accumulated in the
temporal dimension with the help of Clip-Recurrent Hub, or extract fine-grained local information between adjacent frames by the Mix Focal Transformer, which is essentially different from previous VI-Trans methods~\cite{zeng2020learning,liu2021fuseformer,li2022towards} that only extract features from local windows and limited reference frames.
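To make the caching and querying of Eq.~(\ref{equ:cache}) and Eq.~(\ref{equ:clip_recurrent_hub}) concrete, a minimal PyTorch-style sketch is given below; plain scaled dot-product attention stands in for DDCA, and the module layout is an illustrative assumption rather than the exact implementation.
\begin{verbatim}
# Sketch of the Clip-Recurrent Hub: keys/values of the previous clip
# are cached with stop-gradient, queried by the current clip, and the
# retrieved tokens are fused back into the current features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClipRecurrentHub(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.proj_k = nn.Linear(dim, dim)        # re-projects cached keys
        self.proj_v = nn.Linear(dim, dim)        # re-projects cached values
        self.proj_fuse = nn.Linear(2 * dim, dim) # fuses retrieved and current tokens
        self.cache = None                        # (K, V) of the previous clip

    def clear_cache(self):
        self.cache = None                        # reset when a new video starts

    def forward(self, q, k, v, z):
        # q, k, v, z: (B, N, dim) tokens of the current clip
        if self.cache is not None:
            k_prev = self.proj_k(self.cache[0])
            v_prev = self.proj_v(self.cache[1])
            attn = F.softmax(q @ k_prev.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
            z_bar = attn @ v_prev                                  # cross-clip query
            z = z + self.proj_fuse(torch.cat([z_bar, z], dim=-1))  # token fusion
        self.cache = (k.detach(), v.detach())                      # stop-gradient cache
        return z
\end{verbatim}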
\noindent\textbf{3D-Decoupled Cross Attention.}
An intuitive idea is to use vanilla multi-head self-attention (MSA)~\cite{dosovitskiy2020image} directly for cross attention implementation.
However, a recent study~\cite{park2022vision} on the nature of ViT suggests that the purely global receptive field of MSA may introduce unnecessary degrees of freedom and thus lack focus on locally relevant features.
Therefore, we propose 3D-Decoupled Cross Attention (DDCA) to search from both local and non-local spatio-temporal neighborhoods.
Specifically, DDCA first performs a vanilla attention along the time axis:
\vspace{-0.5em}
\begin{equation}
\label{equ:time_att}
\begin{aligned}
Z_{t} := {\rm Attn}(Q_{t}, K_{t}, V_{t}) = {\rm Softmax}(\frac{Q_{t} K_{t}^{\top}}{\sqrt{d}})V_{t},
\end{aligned}
\vspace{-0.5em}
\end{equation}
where $Q_{t},K_{t},V_{t} \in \mathbb{R}^{(W_h \times W_w) \times T \times C_{e}}$ are respectively reshaped queries, keys and values.
Considering the computational cost of transformer models, we argue that DDCA should be embedded into the Clip-Recurrent Hub in a lightweight way to avoid affecting the propagation ability of fine-grained local features from the current clip.
\begin{algorithm}[H]
\caption{Pseudo code of FlowLens Clip-Recurrent Hub in a PyTorch-like style.}\label{alg:recurrent}
\input{codes/recurrent-code}
\vspace{-0.5em}
\end{algorithm}
\vspace{-1.0em}
Hence, we subsequently present the 2D-decoupled window-based attention with the strip pooling strategy to effectively achieve local-global interactions.
The horizontal keys $K_{h}$ can be naturally formulated as:
\vspace{-0.5em}
\begin{equation}
\label{equ:2D_att_1}
\begin{aligned}
& K_{h}^{l}=[K_{h}^{1}, K_{h}^{2},..., K_{h}^{W}], \\
& K_{h}^{g}={\rm Unfold}(\mathcal{P}_{h}(K_{h}^{l})), \\
& K_{h} = [K_{h}^{l}, K_{h}^{g}],
\end{aligned}
\vspace{-0.5em}
\end{equation}
where $K_{h}^{l}$ denotes evenly partitioned non-overlapping horizontal strips, $\mathcal{P}_{h}$ denotes the horizontal strip pooling layer, and ${\rm Unfold}(\cdot)$ represents the unfold function along the strip.
The horizontal values $V_{h}$ can be similarly derived.
For the horizontal attention, we have:
\vspace{-0.5em}
\begin{equation}
\label{equ:2D_att_2}
\begin{aligned}
& Z_{h}^{i} = {\rm Attn}(Q_{h}^{i}, K_{h}^{i}, V_{h}^{i}), \\
& Z_{h} = [Z_{h}^{1}, Z_{h}^{2},..., Z_{h}^{W}].
\end{aligned}
\vspace{-0.5em}
\end{equation}
The vertically decoupled attention is similar. Finally, the output of these three parallel dimensions will be gathered:
\vspace{-0.5em}
\begin{equation}
\label{equ:3D_att}
\begin{aligned}
Z = \mathcal{P}_{t}(Z_t) + \mathcal{P}_{h,w}([Z_h, Z_v]),
\end{aligned}
\vspace{-0.5em}
\end{equation}
where $\mathcal{P}_{t}$ and $\mathcal{P}_{h,w}$ are linear projections.
Note that the above formulas omit the head dimension for simplicity. Therefore, our DDCA covers both global and local receptive fields by the cross-strip attention
and the unfolded coarse-grained pooling-strip in a single layer.
Different from the recent CSWin~\cite{dong2022cswin} which introduced a cross-shaped window attention, our DDCA is put forward to process sequence data and bridge the connection between windows to enhance the local-global interaction via strip pooling mechanism.
Our strategy also fundamentally differs from the axial attention~\cite{ho2019axial}, because it performs horizontal and vertical attention sequentially, while we compute the attention map in parallel in a decoupled 3D space.
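To sketch one possible reading of the horizontal branch (the vertical branch is symmetric), the local strip keys and the pooled global keys can be assembled as below; the strip height, pooling width, and tensor layout are illustrative assumptions rather than the exact design.
\begin{verbatim}
# Sketch of horizontal key construction for the decoupled spatial
# attention: local keys are the horizontal strips themselves, global
# keys are a pooled version of each strip appended to it.
import torch
import torch.nn as nn

def horizontal_keys(k, strip_h=1, pool_w=8):
    # k: (B, H, W, C) spatial key map of one frame
    b, h, w, c = k.shape
    k_local = k.reshape(b, h // strip_h, strip_h * w, c)            # K_h^l
    pooled = nn.functional.adaptive_avg_pool1d(
        k_local.transpose(2, 3).reshape(-1, c, strip_h * w), pool_w)
    k_global = pooled.reshape(b, h // strip_h, c, pool_w).transpose(2, 3)  # K_h^g
    return torch.cat([k_local, k_global], dim=2)                    # [K_h^l, K_h^g]

print(horizontal_keys(torch.randn(2, 60, 108, 64)).shape)           # (2, 60, 116, 64)
\end{verbatim}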
\newpage
\noindent\textbf{Mix Fusion Feed Forward Network (MixF3N).}
The previous state-of-the-art VI-Trans~\cite{liu2021fuseformer,li2022towards} are equipped with an overlapped-patch strategy (\ie soft split \& composite) to aggregate information from
patches.
However, this is insufficient for the beyond-FoV estimation task, as it needs more realistic and fine-grained local details.
As previously shown in MiT~\cite{xie2021segformer}, convolutions could introduce local inductive bias in a direct way for transformer backbone.
Thus, we introduce two-branch depth-wise convolutions with kernels of different sizes into the FFN to further enhance the free flow of sub-token information.
Suppose $A$ denotes the soft composite token vectors; the MixF3N is formulated as:
\vspace{-0.5em}
\begin{equation}
\label{equ:mix_f3n}
\begin{aligned}
Z = {\rm MLP}({\rm GELU}({\rm SS}([\mathcal{C}_{3 \times 3}(A_{:\frac{C}{2}}),\mathcal{C}_{5 \times 5}(A_{\frac{C}{2}:})]))),
\end{aligned}
\vspace{-0.5em}
\end{equation}
where $\mathcal{C}_{3 \times 3}$ and $\mathcal{C}_{5 \times 5}$ denote depth-wise convolutions, $A_{:\frac{C}{2}}$ and $A_{\frac{C}{2}:}$ are parallel features.
The MixF3N mixes a $3{\times}3$ and a $5{\times}5$ convolution into each FFN to model the spatial relationship among tokens in a multi-scale manner and reinforce the fine-grained local information extraction.
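A minimal PyTorch sketch of MixF3N in Eq.~(\ref{equ:mix_f3n}) is given below; the token-to-map reshaping is simplified (the soft split after the convolutions is folded into a plain reshape) and the hidden width is illustrative.
\begin{verbatim}
# Sketch of MixF3N: token channels are split into two halves, processed
# by 3x3 and 5x5 depth-wise convolutions on the spatial grid, then
# passed through GELU and an MLP.
import torch
import torch.nn as nn

class MixF3N(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        half = dim // 2
        self.dw3 = nn.Conv2d(half, half, 3, padding=1, groups=half)
        self.dw5 = nn.Conv2d(half, half, 5, padding=2, groups=half)
        self.mlp = nn.Sequential(nn.GELU(), nn.Linear(dim, dim))

    def forward(self, tokens, wh, ww):
        # tokens: (B, Wh*Ww, dim) soft-composited token vectors
        b, n, c = tokens.shape
        a = tokens.transpose(1, 2).reshape(b, c, wh, ww)
        a1, a2 = a.chunk(2, dim=1)
        a = torch.cat([self.dw3(a1), self.dw5(a2)], dim=1)
        return self.mlp(a.reshape(b, c, n).transpose(1, 2))

out = MixF3N()(torch.randn(2, 20 * 36, 512), 20, 36)
print(out.shape)   # torch.Size([2, 720, 512])
\end{verbatim}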
\begin{figure}[!t]
\centering
\includegraphics[width=0.75\linewidth]{imgs/cross-att-v4.pdf}
\vspace{-1em}
\caption{\emph{3D-Decoupled Cross Attention.} By decoupling in the dimensions of time, width, and height, we are able to efficiently query the most correlated features from the past. With an additional non-local strip pooling window, the information flows flexibly in intersecting directions during the spatial query.}
\label{fig:cross_att}
\vspace{-1.5em}
\end{figure}
\subsection{Training Objective}
\label{sec:training_objective}
Following~\cite{zeng2020learning,liu2021fuseformer,li2022towards}, our FlowLens generator is optimized with reconstruction loss and adversarial loss:
\vspace{-0.5em}
\begin{equation}
\label{equ:loss_rec}
\begin{aligned}
\mathcal{L} = \lambda_{rec} \cdot \mathcal{L}_{rec} + \lambda_{adv} \cdot \mathcal{L}_{adv},
\end{aligned}
\vspace{-0.5em}
\end{equation}
where $\lambda_{rec}$ and $\lambda_{adv}$ are hyperparameters to balance different losses. The reconstruction loss measures the $L1$ distance between the FoV-expanded sequence and the ground truth:
\vspace{-0.5em}
\begin{equation}
\label{equ:loss_l1}
\begin{aligned}
\mathcal{L}_{rec} = \Vert \mathbf{\hat{Y}}^{t} - \mathbf{Y}^{t} \Vert_{1}.
\end{aligned}
\vspace{-0.5em}
\end{equation}
A discriminator D~\cite{chang2019free} is also employed
to assist the training of FlowLens for more realistic and structurally-consistent FoV expansion results. The adversarial loss and the loss of discriminator D are formulated as:
\vspace{-0.5em}
\begin{equation}
\label{equ:loss_adv}
\begin{aligned}
\mathcal{L}_{adv} & = -E_{z \sim P_{\mathbf{\hat{Y}}^{t}}(z)}[D(z)], \\
\mathcal{L}_{D} & = E_{x \sim P_{\mathbf{Y}^{t}}(x)}[{\rm ReLU}(1-D(x))] \\
& + E_{z \sim P_{\mathbf{\hat{Y}}^{t}}(z)}[{\rm ReLU}(1+D(z))].
\end{aligned}
\vspace{-0.5em}
\end{equation}
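For clarity, the hinge-style objectives above can be sketched as follows; the discriminator interface is an assumption, and only the loss computation of Eq.~(\ref{equ:loss_adv}) is shown.
\begin{verbatim}
# Sketch of the generator / discriminator hinge losses. `disc` is any
# T-PatchGAN-like discriminator returning a score map per video.
import torch
import torch.nn.functional as F

def generator_adv_loss(disc, fake_video):
    return -disc(fake_video).mean()

def discriminator_loss(disc, real_video, fake_video):
    loss_real = F.relu(1.0 - disc(real_video)).mean()
    loss_fake = F.relu(1.0 + disc(fake_video.detach())).mean()
    return loss_real + loss_fake
\end{verbatim}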
An L1 loss is used for supervising the optical flow completion:
\vspace{-0.5em}
\begin{equation}
\label{equ:loss_flow}
\begin{aligned}
\mathcal{L}_{flow} = \Vert \hat{\mathbf{V}} - \mathbf{V}_{gt} \Vert_{1},
\end{aligned}
\vspace{-0.5em}
\end{equation}
where $\mathbf{V}_{gt}$ is the ground-truth flow calculated from the video with a complete FoV.
\section{Experiments}
\label{sec:experiments}
\subsection{Settings}
\label{sec:settings}
\noindent\textbf{Datasets.}
We experiment with three datasets, including two video inpainting datasets and one newly presented KITTI360-EX dataset for beyond-FoV estimation.
\begin{compactitem}
\item[(1)] \textbf{YouTube-VOS}~\cite{xu2018youtube} contains $3471$, $474$, and $508$ videos for training, validation, and test. We follow the original split for training and testing.
\item[(2)] \textbf{DAVIS}~\cite{perazzi2016benchmark} provides $90$ videos for training and $60$ for testing.
Following~\cite{liu2021fuseformer,li2022towards}, we use $50$ videos for testing the model that is trained on the YouTube-VOS.
\item[(3)] \textbf{KITTI360-EX} is used for beyond-FoV estimation. Derived from KITTI360~\cite{liao2022kitti}, it contains $76k$ pinhole images as well as $76k$ spherical images, which will be detailed in Sec.~\ref{sec:dataset}. We use ``seq10'' for testing.
\end{compactitem}
\noindent\textbf{Metrics.}
We adopt PSNR, SSIM, VFID, and $E_{warp}$ to comprehensively evaluate the performance. PSNR and SSIM are widely used for measuring the reconstructed image quality.
VFID~\cite{wang2018video} is used to compare the visual perceptual similarities between two input videos.
$E_{warp}$~\cite{lai2018learning} is used to measure the stability of the reconstructed video.
Training details can be found in the supplementary material.
\subsection{KITTI360-EX Dataset}
\label{sec:dataset}
Different from the inpainting tasks, the goal of Beyond-FoV estimation is to overcome the physical FoV limitations of the camera itself, whether it is a pinhole camera or a spherical camera.
However, existing video inpainting datasets provide neither camera intrinsics for each video nor spherical images.
To address this issue, we exploit KITTI360~\cite{liao2022kitti} to derive the KITTI360-EX dataset to facilitate Beyond-FoV training and evaluation. Based on the calibrated camera intrinsics, we use the $f{-}\tan\theta$ camera model and the $f{-}\theta$ camera model to build $5\%$, $10\%$, and $20\%$ FoV masks for pinhole camera outward expansion and spherical camera inward expansion, respectively.
Besides, when evaluating on KITTI360-EX, only past frames can be used for FoV expansion, reflecting real-world online applications.
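As a rough illustration of the mask construction, the sketch below builds a ring mask covering the outer fraction of the radial FoV under either camera model; the focal length, image size, and the way the ring is used (region to be synthesized for outward expansion versus observed context for inward expansion) are assumptions of the sketch rather than the exact KITTI360-EX generation procedure.
\begin{verbatim}
import numpy as np

def radial_fov_mask(h, w, f, model="f-theta", expand=0.10):
    """Binary ring mask covering the outer `expand` fraction of the radial FoV.
    model: "f-theta" -> viewing angle theta = r / f (equidistant fisheye)
           "f-tan"   -> viewing angle theta = arctan(r / f) (pinhole)
    How the ring is used (missing region vs. observed context) depends on the
    expansion direction and is left as an implementation choice."""
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - (w - 1) / 2.0, yy - (h - 1) / 2.0)
    theta = r / f if model == "f-theta" else np.arctan(r / f)
    return (theta > (1.0 - expand) * theta.max()).astype(np.uint8)
\end{verbatim}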
\begin{table*}[!t]
\renewcommand{\thetable}{2}
\begin{center}
\caption{\emph{Quantitative comparisons on KITTI360-EX beyond-FoV estimation.} Inner FoV expansion results are not reported for SRN, which is an image outpainting method. $E_{warp}^{*}$ denotes $E_{warp} \times 10^{-2}$. The best are shown in \textbf{bold}, and the second best are \underline{underlined}.}
\label{tab:fov_expansion}
\vspace{-1.0em}
\input{tables/fov-expansion-v2}
\vspace{-2.25em}
\end{center}
\end{table*}
\begin{table}[!t]
\renewcommand{\thetable}{1}
\begin{center}
\caption{\emph{Quantitative comparisons on video inpainting}.}
\label{tab:video_inpainting}
\vspace{-1.0em}
\input{tables/video-inpainting-v2}
\vspace{-3.0em}
\end{center}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{imgs/quality-v2.pdf}
\vspace{-2.25em}
\caption{\emph{Qualitative comparison on KITTI360-EX beyond-FoV estimation} with LaMa\cite{suvorov2022resolution}, FuseFormer\cite{liu2021fuseformer}, and E2FGVI\cite{li2022towards}.}
\label{fig:compare}
\vspace{-1.75em}
\end{figure}
\subsection{Comparisons}
\label{sec:comparison}
\vspace{-0.5em}
\noindent\textbf{Quantitative results.}
We report quantitative comparison results on both video inpainting and Beyond-FoV estimation. Here, ``-s'' indicates the small version of our method, and ``+'' denotes that horizontal and vertical flip augmentation is applied during inference.
For the video inpainting task, we compare with previous top-performing video inpainting models~\cite{kim2019deep,xu2019deep,chang2019learnable,lee2019copy,gao2020flow,zeng2020learning,liu2021fuseformer,li2022towards} using the same test setting and mask following~\cite{liu2021fuseformer,li2022towards}. As shown in Tab.~\ref{tab:video_inpainting}, FlowLens achieves state-of-the-art performance, especially in terms of quality and structural similarity of reconstructed videos, indicating the superiority of the proposed method.
For the beyond-FoV estimation task, all methods are only allowed to use past video frames as references.
We compare with recent image inpainting methods~\cite{suvorov2022resolution,liu2022reduce}, the representative image outpainting method SRN~\cite{wang2019wide}, and state-of-the-art video inpainting models~\cite{zeng2020learning,liu2021fuseformer,li2022towards}.
All video inpainting models and FlowLens use the same training setting for a fair comparison.
More details are provided in the appendix.
\newpage
\noindent As shown in Tab.~\ref{tab:fov_expansion}, our method clearly outperforms previous methods on bidirectional beyond-FoV estimation, setting a promising baseline for the new track. Note that the small version of FlowLens surpasses all other VI-Trans
while consuming only $37\%$ of the FLOPs of E2FGVI~\cite{li2022towards}. The results on both tasks verify the superiority of the proposed clip-recurrent transformer.
\noindent\textbf{Qualitative results.}
We conduct qualitative comparisons with the competitive LaMa~\cite{suvorov2022resolution} and advanced video inpainting models~\cite{zeng2020learning,liu2021fuseformer,li2022towards}.
Fig.~\ref{fig:compare} and Fig.~\ref{fig:compare_vi} show the results of beyond-FoV estimation and video inpainting, respectively.
FlowLens is able to propagate more faithful textures and structures to the filling area and achieves competitive qualitative performance on both estimation tasks, demonstrating the effectiveness of our approach.
More visualizations can be found in the supplementary.
\begin{table}[!t]
\begin{center}
\caption{\emph{Ablation studies} on clip-recurrent hub.}
\label{tab:ablation-recurrent}
\vspace{-1.0em}
\input{tables/ablation-recurrent}
\vspace{-2.0em}
\end{center}
\end{table}
\begin{table}[!t]
\begin{center}
\caption{\emph{Ablation studies} on various cross-attention mechanism.}
\label{tab:ablation-cross}
\vspace{-1.0em}
\input{tables/ablation-cross}
\vspace{-2.0em}
\end{center}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\linewidth]{imgs/davis_youtube_vi_compare-v3.pdf}
\vspace{-2.25em}
\caption{\emph{Qualitative comparison for video inpainting} against recent VI-Trans~\cite{zeng2020learning,liu2021fuseformer,li2022towards} on DAVIS~\cite{perazzi2016benchmark} and YouTube-VOS~\cite{xu2018youtube}.}
\label{fig:compare_vi}
\vspace{-1.75em}
\end{figure}
\vspace{-0.25em}
\subsection{Ablations}
\label{sec:ablations}
We conduct ablations on the clip-recurrent hub, cross attention, MixF3N, and flow completion on the KITTI360-EX Spherical track with $250k$ training iterations, and report results averaged across metrics.
\noindent\textbf{Study of clip-recurrent hub.}
We explore whether all layers need to be equipped with clip-recurrent hubs, and if not, which positions are most efficient. Interestingly, Tab.~\ref{tab:ablation-recurrent} shows that equipping all layers with cross attention is not necessary, and that introducing the hub at an early stage of the transformer works better. We consider that early fusion enables the subsequent mix focal transformer to better handle relevant visual cues without causing confusion for the feature extraction of the current clip. Besides, the proposed clip-recurrent hub only incurs a negligible cost of $1.4\%$ of the FLOPs and $2.9\%$ of the parameters of the entire model.
\noindent\textbf{Study of cross attention mechanism.}
Tab.~\ref{tab:ablation-cross} compares the effects of different cross-attention mechanisms on beyond-FoV estimation. We observe that the proposed DDCA works satisfactorily with the hub, especially considering its moderate computational complexity.
\noindent\textbf{Study of flow-guided feature propagation.}
Tab.~\ref{tab:ablation-flow} shows that performance drops dramatically when optical flow guidance or DCN compensation is removed, presumably because an accurate motion field and sampling-error compensation are both important for utilizing multi-frame information, especially for beyond-FoV estimation. The results also confirm that performance is further improved by incorporating the stronger flow completion network~\cite{zhao2020maskflownet} instead of the previously used SpyNet~\cite{ranjan2017optical}.
\begin{table}[!t]
\begin{center}
\caption{\emph{Ablation studies} on flow-guided feature propagation.}
\label{tab:ablation-flow}
\vspace{-0.975em}
\input{tables/ablation-flow}
\vspace{-1.95em}
\end{center}
\end{table}
\noindent\textbf{The effectiveness of MixF3N in FlowLens.}
In Tab.~\ref{tab:ablation-mix}, we ablate the variants of FFN. Previous VI-Trans works~\cite{liu2021fuseformer,li2022towards} propose to use F3N instead of FFN; however, we find that it brings only a minor improvement for beyond-FoV estimation.
Our results also show that Mix-FFN~\cite{xie2021segformer} does improve the transformer's ability to propagate features implicitly.
The newly introduced MixF3N, which combines the patch-overlapping strategy with the dual-branch mix convolution, further boosts the state-of-the-art score.
\begin{table}[!t]
\begin{center}
\caption{\emph{Ablation studies} on Mix Fusion Feed Forward Network.}
\label{tab:ablation-mix}
\vspace{-1.0em}
\input{tables/ablation-mix}
\vspace{-2.75em}
\end{center}
\end{table}
\vspace{-0.5em}
\section{Conclusion}
In this paper, we explore beyond-FoV estimation, a new task which aims to exploit past spatio-temporal information to see the world beyond the camera's physical FoV. To this end, we propose FlowLens, a novel clip-recurrent transformer architecture that is capable of efficiently querying previous 3D coherent visual cues. Experimental results show that our approach achieves state-of-the-art
quantitative and qualitative
performance on both
classical video inpainting and beyond-FoV estimation.
We hope that FlowLens can pave the way on the new track and arouse community interest in breaking the physical limitations of information-collecting sensors.
\clearpage
{\small
\bibliographystyle{ieee_fullname}
|
1,314,259,995,590 | arxiv | \section{#2
\newcommand{\Sec}[1]{{Sec.~\ref{sec:#1}}}
\renewcommand{\subsection}{\@startsection{subsection}{2}{0pt}%
{-3.25ex plus -1ex minus -.2ex}%
{1.5ex plus .2ex}%
{\centering\normalsize\itshape}}
\newcommand{\startappendices}{%
\setcounter{equation}{0}%
\setcounter{section}{1}%
\setcounter{subsection}{1}%
\renewcommand{\thesection}{\Alph{section}}}
\newcommand\fakesection{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\centering\normalsize\bfseries}}
\newcounter{appendixcount}%
\setcounter{appendixcount}{0}%
\renewcommand{ |
1,314,259,995,591 | arxiv | \section{Introduction}
\label{intro}
The problem of packing of spheres plays a major role in the modeling of
many physical systems and has been studied for more than four
decades. Some of the early examples~\cite{alder1,alder2,hoover1} of
the computer simulations of hard sphere liquids suggest the existence
of a first order freezing transition. The problem of
packing of spheres in two and three dimensions is of great
interest. Recent investigations of such systems have focused on the
study of the statistical geometry of the dense sphere
packing. Such studies are important in the understanding of physical
properties of many systems, composed of a large number of
particles~\cite{speedy1,reiss1,torqu1,torqu2,rintoul1,reiss2,speedy2,sastri1,sastri2,sastri3}.
In this context, motivated by the study of transport across a two
dimensional structure of packed circular disks (a {\it membrane}),
we pose the question: how does the packing change when the membrane is
doped with objects of various shapes and sizes (e.g. spheres
arranged rigidly in the form of rods of different lengths, or L, T and X
shapes; see Fig. 1)? In particular we investigate the effect of these
shapes on the distribution of {\it ``voids''}. The {\it ``anisotropy''} in
the interaction potential appears to play a key role in the
induction of large voids.
As pointed out by Sastri et al.~\cite{sastri1}, no
algorithm is available to compute void statistics for the packing of
shapes other than spheres. In this paper we propose a simple numerical
algorithm to compute void statistics. Unlike a probabilistic algorithm
(Monte Carlo), our algorithm is based on digitization and cell counting.
The paper is organized as follows. In Sec.~\ref{The-model-system}, we
describe the model system. A definition of ``void'' and an algorithm
to compute void statistics is given in Sec.~\ref{Voids}. The
results of numerical simulations and their relevance in lipid
biomembranes is discussed in Sec.~\ref{Results-and-Discussion} We
summarize the paper in Sec.~\ref{Summary}.
\section{The model system}
\label{The-model-system}
The configuration space of the model system (membrane)
is considered as a two dimensional space with periodic (toroidal) boundary
conditions. The constituents of the membrane are disks and dopants.
\subsection{ The basic model }
We consider a membrane made up of only circular disks interacting
pairwise via the {\it Lennard-Jones} potential:
\begin{eqnarray}
V_{LJ} = 4 \epsilon \sum_{i=1}^N \sum_{j = i+1}^N \Big( ({\sigma
\over r_{ij}} )^{12} - ( { \sigma \over r_{ij}})^6 \Big)\nonumber
\end{eqnarray}
where, $r_{ij}$ is the distance between the centers of the $i^{\rm
th}$ and $j^{\rm th}$ disks, $\sigma$ determines the range of hard
core part in the potential and $\epsilon$ signifies the depth of the
attractive part. We choose the number of disks such that
the area occupied by these disks is around $70\%$, which is less than
that of the close-packed structure but still large enough to produce
some closed voids.
\subsection { The model with impurities }
Further, we consider different {\it shape anisotropic} combinations
(dopants) consisting of $\kappa$ number of circular disks. We treat
each of these
combinations as a single rigid cluster. Several such dopants
(impurities) are considered. Fig. 1 shows some of these
impurities. The interaction between impurities and disks or other
impurities is obtained by superposing the {\it Lennard-Jones}
potential corresponding to each of the constituent disk in impurity.
We consider a membrane with circular disks and impurities amounting to
$10\%$ of the total number of circular disks, such that the area occupied
is still
$70\%$.
These membranes are brought to an equilibrium configuration by the Monte
Carlo method~\cite{MC} at a fixed temperature. Fig. 2 and Fig. 3 show
typical
equilibrium configurations of the membrane without and with impurities,
respectively (the impurity in Fig. 3 is a rod shaped structure made up
of five disks, Rod$_5$; in general, Rod$_\kappa$ denotes a rod made up of
$\kappa$ disks). In the simulation the temperature is so chosen
that
$k_B T < 4 \epsilon$, where $k_B$ is the {\it Boltzmann}
constant. The equilibrium is confirmed by simulated annealing.
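As an illustration of the equilibration step, a minimal Metropolis Monte Carlo sweep for the pure-disk membrane could look as follows; the step size, temperature, and box size are placeholder values, and the rigid dopant clusters of the full simulation, which require collective translation and rotation moves, are omitted here.
\begin{verbatim}
import numpy as np

def lj_energy(pos, L, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy of disks at positions pos (N x 2)
    in a periodic box of side L (minimum-image convention)."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                      # periodic boundary conditions
    r2 = (d ** 2).sum(-1)[np.triu_indices(len(pos), k=1)]
    s6 = (sigma ** 2 / r2) ** 3
    return 4.0 * eps * np.sum(s6 ** 2 - s6)

def metropolis_sweep(pos, L, kT=1.0, step=0.1, rng=None):
    """One Monte Carlo sweep: attempt a small random move of every disk."""
    rng = rng or np.random.default_rng()
    for i in rng.permutation(len(pos)):
        old, e_old = pos[i].copy(), lj_energy(pos, L)
        pos[i] = (pos[i] + rng.uniform(-step, step, 2)) % L
        if rng.random() >= np.exp(-(lj_energy(pos, L) - e_old) / kT):
            pos[i] = old                          # reject the move
    return pos
\end{verbatim}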
\section{Voids and an algorithm for void statistics}
\label{Voids}
Now, we introduce the notion of an ``$r$-void'' in a membrane which is
suitable for the description of transport across the membrane and, further,
propose an algorithm to compute statistical quantities such as
the number of voids in the membrane, the void size distribution
etc.
We define an $r$-void as a closed area in a membrane devoid of disks or
impurities, and big enough to accommodate a circular disk of
radius $r$. Of course an $r$-void is also an $r^\prime$-void if
$r^\prime < r$.
\subsection{The algorithm to compute void statistics}
To compute the void statistics for $r$-voids, we increase the
radii of the disks forming the membrane (including the disks in the
impurities, without altering the positions of the centers) by an
amount $r$ (See Fig. 4). Then we digitize the entire membrane on
a suitably chosen grid. The choice of grid size depends on the
required accuracy and the typical sizes of the voids. The digitization
of circular disks is carried out by the Bresenham circle drawing
algorithm~\cite{Schaum}, modified to incorporate periodic boundary
conditions. The number of voids in the membrane are
computed by flood filling~\cite{Schaum} every closed void with a different
color and then counting the number of colors. The sizes of various
voids can be obtained by counting the number of grid-cells filled by the
corresponding color. The termination of flood fill algorithm is
ensured since the voids are closed. In our case this condition is
automatically fulfilled in view of periodic boundary conditions.
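The following Python sketch illustrates the digitization and cell-counting idea; it replaces the explicit Bresenham rasterization and flood fill by a brute-force grid test and connected-component labelling, and, for brevity, does not merge voids that wrap around the periodic boundary.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def r_void_stats(centers, disk_radius, r, box, grid=1024):
    """Count r-voids and their areas for disks of radius disk_radius
    centred at `centers` (N x 2) in a periodic square box of side `box`.
    Each disk is inflated by r before digitisation (see text)."""
    occ = np.zeros((grid, grid), dtype=bool)
    yy, xx = np.mgrid[0:grid, 0:grid] * (box / grid)
    R = disk_radius + r
    for cx, cy in centers:
        dx, dy = xx - cx, yy - cy
        dx -= box * np.round(dx / box)            # minimum-image distances
        dy -= box * np.round(dy / box)
        occ |= dx * dx + dy * dy <= R * R
    labels, n_voids = ndimage.label(~occ)         # connected empty regions
    sizes = np.bincount(labels.ravel())[1:] * (box / grid) ** 2
    # NOTE: voids wrapping across the periodic boundary are not merged here.
    return n_voids, sizes
\end{verbatim}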
The geometric algorithms involving Voronoi
polygons~\cite{sastri1,sastri2,sastri3} are mathematically satisfying
and are expected to be accurate but would take much more computation
time. On the other hand, as pointed out in~\cite{sastri1}, the
probabilistic algorithm is time efficient but requires a very large
sample
size while dealing with small voids.
Our algorithm is quite efficient as well as suitable even when
there are small voids in the membrane. We further note that the
algorithm can be easily generalized to higher dimensions. We expect
that the efficiency of this algorithm can be further enhanced by the use
of a multi-resolution adaptive grid.
\section{Results and Discussions}
\label{Results-and-Discussion}
The simulations were carried out for membranes of
different compositions. Fig. 5 shows the graphs of the number of
$r$-voids as a function of $r$ measured in units of the radius of the constituent
disks. Curve (a) shows the void distribution in the absence of impurities.
Curve (b) represents the void distribution in a membrane
with rod shaped impurities made up of two disks (Rod$_2$).
Curves (c) and (d) show the void distribution with L shaped
impurities made up of four disks (L$_4$) and rod like impurities made
up of four disks (Rod$_4$) respectively. It is clear from the graph
that the number of large voids increases with an increase in the
anisotropy of the impurity. Even though L$_4$ and Rod$_4$ occupy the
same area, Rod$_4$ being more anisotropic induces a larger number
of big voids than L$_4$. This fact can be clearly seen in Fig. 5, curves (c) and
(d). Moreover, the Fig. 2 and Fig. 3 demonstrate the fact that the
voids are mostly found in the neighborhood of the centers of anisotropy.
Further, to strengthen our claim that the shape anisotropy induces
voids, we compared two membranes. In one case we added rod impurities
made up of two disks (Rod$_2$) in the assembly of circular disks, and
in the other case we added circular impurities of larger size, which
occupied the same area as that of Rod$_2$. We found that the former,
being more anisotropic, induced larger and more
numerous voids as compared to the latter, though they occupied the same area.
Thus, reduced to the bare essentials, the anisotropy in the
interaction potential of the constituents is seen to be responsible for the
induction of large voids. From the perspective of energy
minimization, as the potential becomes direction dependent, some
positions of the constituents are preferred over others,
and this induces large voids.
These features show a remarkable similarity with the observations reported
in certain biological experiments~\cite{john}. These experiments deal
with the size-dependent permeation of non-electrolytes across
biological membranes. The effect of doping on the permeation of large
molecules was studied in these experiments. The liposome-membrane used
in these experiments was made up of mixture of two types of lipids
(cardiolipins and phosphatidylcholine) in a proportion
1:10. The understanding of the enhancement of transport in doped
membranes
needed an algorithmic statement. The ingredients at the algorithmic
level involved:
\begin{enumerate}
\item consideration of the structure as a strictly 2--dimensional assembly
\item the cross sections of molecules being considered as constituents
\item interactions of the constituents via the Lennard Jones potential
\item permeating particles being considered as hard disks.
\end{enumerate}
The features reported in~\cite{john} bear a similarity with the
simulation carried out with Rod$_2$ as dopants. We have already seen
in numerical simulations (See Fig. 5, curves (a) and (b)) that the Rod$_2$
type of impurities induced large voids in the membrane. The appearance
of larger voids naturally enhances the transport of large
particles. Thus an enhancement in the transport of large
non-electrolytes like glucose, which was observed in the lipid mixture
~\cite{john} can possibly be understood using our simple approach.
Further, apart from the biological implications, the model discussed is
general enough to incorporate the studies of transport in various weakly
bound granular media.
\section{Summary}
\label{Summary}
We have presented a numerical algorithm to compute the entire void
statistics in a two dimensional membrane consisting of circular disks
and dopants. We found that our simple two dimensional model has shown
results consistent with features observed in a complex biological
system. The biological justification of the model and implications are
discussed elsewhere~\cite{gauri}. Nevertheless, our model and
the proposed numerical algorithm which finds out the void statistics
in the model system are quite general and use no specific features of
any particular system. Therefore it is possible to use this method
effectively in various systems from diverse disciplines. The result
that the shape anisotropy induces large voids in mixtures
may be used as a tool for achieving controlled selective permeability
across such a system by merely changing the shape of the constituents of
the mixture.
{\bf Acknowledgments}
We thank N.V. Joshi, Deepak Dhar, H. E. Stanley and S.S. Manna for fruitful
discussions.
|
1,314,259,995,592 | arxiv | \section{Introduction}
Antiferromagnets (AFs) are considered promising materials for spintronic applications: they exhibit fast magnetic dynamics with excitations in the THz range, are fundamentally insensitive to external magnetic fields, and produce no stray fields.\cite{MacDonald2011, Gomonay2014, Jungwirth2016} One of the further advantages of AFs, important for fast switching between different states, is related to the motion of domain walls (DWs). In contrast to ferromagnets (FM), the dynamics of AF DWs shows no Walker breakdown.
Thus, the DW velocity is only limited by the group velocity of spin waves, which is of the order of tens of km/s (e.g., 40 km/s for NiO). This is orders of magnitude larger than the typical velocities in FM, where the Walker breakdown limits the achievable velocities, and also larger than velocities in synthetic AFs. \cite{Yang2015a}
However, the manipulation of AF DWs faces significant difficulties. In particular, 180$^\circ$ AF domains are indistinguishable even in the presence of a constant homogeneous magnetic field. So, in contrast to FMs, an applied external field cannot move the 180$^\circ$ AF DWs at all. In addition, coupling between the external magnetic field and the AF order parameter (N\'eel vector) is suppressed due to the strong exchange coupling between the magnetic sublattices. In this case, the typical fields necessary to produce any noticeable shift of the DW are of the order of the spin-flop field and range from 1 to 10 T. \cite{Barthem2016}
Recently the possibility to move DW in an AF with the help of a staggered N\'eel spin-orbit torque was demonstrated in Ref. \onlinecite{Gomonay2016}. While this mechanism can be very effective, its application is restricted to metals that have a broken local inversion symmetry, which the vast majority of the AF systems do not have. Furthermore, manipulation using regular spin-orbit torques has been shown to be
restricted to specific DW types, sample geometry and AF spin structure configuration, which narrows the applicability of these torques.\cite{Shiino2016}
Finally, recent calculations predict that temperature gradients can move the AF DWs in metals and isolators as well.\cite{Kim2015d,Selzer2016} However, manipulation of the DWs using this mechanism is restricted to one-directional motion and is yet to be observed.
Hence, at present there is no broadly applicable approach to manipulate AF DWs.
In this Letter we develop such a broadly applicable approach to manipulate AF DWs based on the use of asymmetric magnetic field pulses. We show that this approach is highly efficient for devices as it enables to attain high DW mobilities, to induce synchronous motion of multiple DWs and to control the DW displacement through a ratchet effect.
We compare the dynamics of AF DWs induced by static and by time-dependent magnetic fields and show that a time-dependent field produces a larger effective force than its static counterpart. This difference originates from the strong exchange field which reduces the magnetic static susceptibility.
Our results show that the force produced by the rate of change of the magnetic field will move DWs with similar structure in the same direction.
In contrast, the force produced by a static magnetic field is independent of the DW structure and induces
a shrinking and disappearance of unfavourable domains.
We find the conditions for ratchet-like motion by calculating the critical rate of magnetic field that overcomes a static friction force.
We also propose an optimal configuration to implement controlled DW motion for the archetypical AFs Mn$_2$Au and NiO.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig_stripe_3}
\caption[fig_stripe]{Evolution of the stripe AF domain structure induced by a constant, $\mathbf{H}_\mathrm{dc}$, (a) and a time-dependent, $\mathbf{H}_\mathrm{ac}$, (b) magnetic field. Black arrows show the directions of the ponderomotive, $\mathbf{F}^\mathrm{pond}$, and the dissipative, $\mathbf{F}^\mathrm{diss}$, forces. The grey dashed lines in (b) mark the previous position of the DWs.}
\vspace{-0.2 cm}
\label{fig_stripe}
\end{figure}
We consider the generic case of a compensated AF with two magnetic sublattices with magnetizations $\mathbf{M}_1$ and $\mathbf{M}_2$ ($|\mathbf{M}_1|=|\mathbf{M}_2|=M_s/2$). The metallic Mn$_2$Au and the isolating NiO are good examples of such AFs.
The magnetic structure of an AF texture can be explicitly described in terms of the AF (N\'eel) vector $\mathbf{L}=\mathbf{M}_1-\mathbf{M}_2$, which is considered as a field variable, $\mathbf{L}(\mathbf{r},t)$. The closed equations of motion for the AF vector \cite{Haldane1983a, Kosevich1990, Ivanov1995} have the following form:\cite{Gomonay2010}
\begin{equation}\label{eq_motion_AF_ideal}
\mathbf{L}\times\left[\ddot{\mathbf{L}}-c^2\Delta\mathbf{L} +\gamma^2H_\mathrm{ex}M_s\frac{\partial w_\mathrm{AF}}{\partial\mathbf{L}}\right]=\mathbf{T}-\gamma\alpha_GH_\mathrm{ex}\mathbf{L}\times\dot{\mathbf{L}}.
\end{equation}
In Eq.~(\ref{eq_motion_AF_ideal}) we introduced the magnon velocity $c$, which coincides with the limiting velocity for the DW motion; $\gamma$ is the gyromagnetic ratio, and $w_\mathrm{AF}$ is the density of magnetic anisotropy energy, which depends upon the crystal structure. The effective field $H_\mathrm{ex}$ parametrizes the exchange coupling between the magnetic sublattices. The last term in the r.h.s. of Eq.~(\ref{eq_motion_AF_ideal}) describes viscous damping parametrized by the Gilbert constant $\alpha_G$.
The vector $\mathbf{T}$ in the r.h.s. of Eq.~(\ref{eq_motion_AF_ideal}) describes the effective forces (torques) induced by the external magnetic field $\mathbf{H}$:
\begin{eqnarray}
\mathbf{T}=
\gamma\mathbf{L}\times\dot{\mathbf{H}}\times\mathbf{L}-2\gamma \dot{\mathbf{L}}(\mathbf{H}\mathbf{L})-\gamma^2\mathbf{L}\times \mathbf{H} \left (\mathbf{L}
\cdot \mathbf{H} \right ).
\end{eqnarray}
In many practical cases the shape of the moving DW does not change or changes slightly. So, the DW can be considered as a point particle, whose dynamics is described by only two vectors: the generalized momentum $\mathbf{P}$ and its canonically conjugated coordinate $\mathbf{R}$ (position of the DW center). The dynamics of Eq.~(\ref{eq_motion_AF_ideal}) can then be reduced to a standard equation for a point mass: \cite{Kosevich1990}
\begin{equation}\label{eq_point-mass_equation}
\frac{d\mathbf{P}}{dt}=-\gamma\alpha_GH_\mathrm{ex}\mathbf{P}+\mathbf{F},
\end{equation}
where $\mathbf{F}$ is the resulting external force, and the first term in the r.h.s. is analogous to viscous damping with relaxation time $\tau_\mathrm{relax}=1/(\gamma\alpha_GH_\mathrm{ex})$.
Equation (\ref{eq_point-mass_equation}) is derived from the original Eq.~(\ref{eq_motion_AF_ideal}) in the following way. First, we define the DW momentum $\mathbf{P}$ as an integral of motion related with homogeneity of space:
\begin{equation}\label{eq_canonical_momentum}
P_j=-\frac{1}{\gamma^2M_sH_\mathrm{ex}}\int \dot{\mathbf{L}}^{(0)}\partial_j{\mathbf{L}^{(0)}}dV,\,j=x,y,z.
\end{equation}
Here $\mathbf{L}^{(0)}(\mathbf{r},t)$ is a solution of Eq.~(\ref{eq_motion_AF_ideal}) in the absence of a field ($\mathbf{T}=0$) and damping ($\alpha_G=0$). Second, we assume that $\mathbf{L}(t,\mathbf{r})=\mathbf{L}^{(0)}(t,\mathbf{r}-\mathbf{R})$. Finally, calculating explicitly the time derivative of Eq.~(\ref{eq_canonical_momentum}) and taking into account Eq.~(\ref{eq_motion_AF_ideal}) we obtain Eq.~(\ref{eq_point-mass_equation}).
Among the forces acting on the DW, we specify three types essential for our consideration, $\mathbf{F}=\mathbf{F}^\mathrm{pond}+\mathbf{F}^\mathrm{diss}+\mathbf{F}^\mathrm{fric}$. The first one is the ponderomotive force
\begin{equation}\label{eq_ponderomotive force}
\mathbf{F}^\mathrm{pond}=\frac{\mathbf{n}S}{2 M_{s} H_\mathrm{ex}} \left[\left (\mathbf{L}_2
\mathbf{H}\right )^{2}-\left (\mathbf{L}_1
\mathbf{H}\right )^{2}\right].
\end{equation}
It stems from the difference in energy density between the left (N\'eel vector $\mathbf{L}_1$) and right (N\'eel vector $\mathbf{L}_2$) AF domains and is directed along the
normal to the DW plane, $\mathbf{n}$ (see, e.g. Fig.~\ref{fig_stripe}). The ponderomotive force is proportional to the square of the magnetic field. Its value is weakened due to the strong exchange coupling between the magnetic sublattices. In addition, this force is insensitive to the structure of the DW itself and acts equally on the Bloch-like and N\'eel-like DW.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{fig_sample}
\caption{Sample with a single N\'eel (a) or Bloch (b) domain wall and optimal orientation of static ($\mathbf{H}_\mathrm{dc}$) and time-dependent ($\mathbf{H}_\mathrm{ac}$) field. Curved arrows show the direction in which the AF vector rotates within the DW.}
\label{fig_sample}
\end{figure}
The second force is dissipative and it is given by
\begin{equation}\label{eq_dissipative}
\mathbf{F}^\mathrm{diss}\cdot\mathbf{n}=- \frac{1}{\gamma M_sH_\mathrm{ex}}\int\dot{\mathbf{H}}\cdot\mathbf{L}^{(0)}\times(\mathbf{n}\cdot\nabla)\mathbf{L}^{(0)}dV.
\end{equation}
It is induced by the time-dependent component of the magnetic field and is sensitive to the relative orientation of the external field and the AF vectors inside the DW, i.e., to the DW structure. This force is maximal if the magnetic field is perpendicular to the $(\mathbf{L}_1,\mathbf{L}_2)$ plane, see, e.g., Fig.~\ref{fig_sample}.
In spite of the small factor $1/H_\mathrm{ex}$, the dissipative force can be larger than $\mathbf{F}^\mathrm{pond}$, especially for high frequencies. Moreover, in contrast to $\mathbf{F}^\mathrm{pond}$, the dissipative force can move 180$^\circ$ domain walls.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\columnwidth]{fig_steady_velocity}
\caption[fig3]{Velocity of a $90^\circ$ AF DW vs static field (magenta line) and an amplitude of steadily increasing time-dependent field (blue line) $H_\mathrm{ac}(t)=H^{(0)}_\mathrm{ac}t/\tau_\mathrm{raise}$ calculated according to Eq.~(\ref{eq_velocity of steady motion_parallel}) for Mn$_2$Au, $\tau_\mathrm{raise}=5$ ps. The vertical dashed line separates regions with linear and log-scale. }
\label{fig_steady_velocity}
\end{figure}
Lastly, the third force is a friction force, $\mathbf{F}^\mathrm{fric}$. It is related to the magnetic defect distribution within the crystal and the pinning strength of the defects. We consider this force as a static friction force which defines a threshold for the dynamics. This force is sample-dependent and can be estimated from the coercivity. In the calculations below we
take it to be 10\% of the spin-flop field.
The important difference between the ponderomotive and dissipative forces is illustrated in Fig.~\ref{fig_stripe}, where we consider a stripe AF domain structure. The ponderomotive force is directed from the favourable domain (with lower energy density) to the unfavourable. So, in a stripe structure adjacent DWs move in opposite directions, thus shrinking the fraction of unfavourable domains. Contrary to this, the orientation of the dissipative force depends upon the AF DW structure. So, all the DWs with the same chirality move in the same direction. Thus, the application of a time-dependent field provides an effective tool for manipulating AF-domains in an AF-based race-track type memory.
We next analyse the dynamics of AF DWs through Eq.~(\ref{eq_point-mass_equation}).
We consider the simple case of a tetragonal AF (e.g. Mn$_2$Au) and 90$^\circ$ domain structure with orthogonal AF vectors in neighboring domains, $\mathbf{L}_1\perp\mathbf{L}_2$. In this case the optimal orientation of the static magnetic field, $\mathbf{H}_\mathrm{dc}$, which produces the ponderomotive force, is parallel to one of the N\'eel vectors, e.g. $\mathbf{H}_\mathrm{dc}\|\mathbf{L}_1$ (see Fig.~\ref{fig_sample}). On the other hand, the optimal orientation of the time dependent field, $\mathbf{H}_\mathrm{ac}$, is related to the DW type.
In a thin film the AF vectors inside the DW rotate within the film plane and the DW is of a N\'eel type. For this case, the most efficient $\mathbf{H}_\mathrm{ac}$ is perpendicular to the film plane (Fig. ~\ref{fig_sample}a). In a bulk sample, a Bloch wall is also possible and the most efficient $\mathbf{H}_\mathrm{ac}$ is perpendicular to the DW plane (Fig. ~\ref{fig_sample}b). In both geometries $\mathbf{H}_\mathrm{ac}\perp\mathbf{H}_\mathrm{dc}$ and thus the time-dependent component does not contribute to the ponderomotive force.
Although $\mathbf{H}_\mathrm{dc}$ and $\mathbf{H}_\mathrm{ac}$ fields have different orientations, the corresponding forces, $\mathbf{F}^\mathrm{pond}$ and $\mathbf{F}^\mathrm{diss}$, are both parallel to the DW normal.
The time dependent component of the magnetic field allows one to manipulate the AF DW motion in a very effective way. To illustrate this fact, we start from the constant (time-independent) forces produced by $\mathbf{H}_\mathrm{dc}$ and steadily increasing/decreasing $\mathbf{H}_\mathrm{ac}$. The velocity of the steady motion is
\begin{equation}
v_\mathrm{steady}=c\frac{(\pi\dot{H}_\mathrm{ac}/\gamma +H_\mathrm{dc}^{2})/(2H_{\mathrm{ex}})}{\sqrt{\alpha _{G}^{2}H_{\mathrm{an}}H_{%
\mathrm{ex}}+(\pi\dot{H}_\mathrm{ac}/\gamma-H_\mathrm{dc}^{2})^2/(2H_{\mathrm{ex}})^{2}}},
\label{eq_velocity of steady motion_parallel}
\end{equation}%
as can be obtained from Eq.~(\ref{eq_point-mass_equation}). Here $H_{\mathrm{an}}$ is the anisotropy field.
Contributions of the time-dependent and the static component to $v_\mathrm{steady}$ are compared in Fig.~\ref{fig_steady_velocity}. For the calculations we use field values $H_\mathrm{ex}$=1400 T, $H_\mathrm{an}$=30 mT typical for AFs with high N\'eel temperature (like Mn$_{2}$Au \cite{Wu2012, Shick2010} and NiO \cite{Hutchings1972}). We set the AF magnon velocity $c=30$ km/s. As the damping parameters of metals and insulators are different, we take $\alpha_G=10^{-4}$ for insulating NiO \cite{Kampfrath2010} and $\alpha_G=10^{-3}$ for metallic Mn$_{2}$Au. These values correspond to relaxation times $\tau_\mathrm{relax}=$ 50 ps and 5 ps, respectively. The friction force per unit DW area is taken to be 9 N/m$^2$, which corresponds to an effective coercive field of 0.1 T.
Fig.~\ref{fig_steady_velocity} shows that the mobility ($=dv/dH$) of the DW in an ac field is much higher than in the static field, and an amplitude of ${H}_\mathrm{ac}=1$~T is enough to reach the limiting velocity.
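For orientation, Eq.~(\ref{eq_velocity of steady motion_parallel}) can be evaluated numerically with the parameter values quoted above; the following sketch is illustrative only, and in particular the numerical value and convention used for the gyromagnetic ratio $\gamma$ are assumptions of the sketch.
\begin{verbatim}
import numpy as np

# illustrative parameter values (gamma convention is an assumption here)
gamma   = 1.76e11      # gyromagnetic ratio, rad s^-1 T^-1
H_ex    = 1400.0       # exchange field, T
H_an    = 30e-3        # anisotropy field, T
alpha_G = 1e-3         # Gilbert damping (Mn2Au)
c       = 30e3         # magnon velocity, m/s

def v_steady(H_dc=0.0, dHac_dt=0.0):
    """Steady DW velocity for a static field H_dc [T] and a field
    ramped at a constant rate dHac_dt [T/s] (steady-velocity formula)."""
    num = (np.pi * dHac_dt / gamma + H_dc**2) / (2.0 * H_ex)
    den = np.sqrt(alpha_G**2 * H_an * H_ex
                  + ((np.pi * dHac_dt / gamma - H_dc**2) / (2.0 * H_ex))**2)
    return c * num / den

# e.g. a 1 T static field vs. a field ramped to 1 T within 5 ps
print(v_steady(H_dc=1.0), v_steady(dHac_dt=1.0 / 5e-12))
\end{verbatim}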
However, a practical fast increase of the magnetic field is only possible on short time scales and up to a limited amplitude of $H_\mathrm{ac}$. These facts exclude monotonously varying $\mathbf{H}_\mathrm{ac}$ as a useful tool for DW manipulation.
A more experimentally realistic alternating (cos-like) field $H_\mathrm{ac}\propto\cos(\omega t)$ can only induce oscillations of the DW with zero permanent displacement, i.e., no drift. The green line in Fig.~\ref{fig_asymmetric_pulse} (left axis) shows the displacement of the DW induced by a symmetric field pulse. For all pulses we have taken the time between rise and fall to be 700 ps and the field amplitude $H_\mathrm{ac}=10$ mT.
During the rising edge and falling edge periods of the pulses, the DW moves in opposite directions with exactly the same velocity (Fig.\ref{fig_asymmetric_pulse}(right axis)),
resulting in zero displacement.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{fig_asymmetric_pulse_2}
\caption[fig_single_pulse]{Time dependence of DW displacement (left axis, solid lines) and velocity (right axis, dashed lines) induced by field pulses with different fall times. $\tau_\mathrm{raise}=$50~ps. Fall time is 50 ps (green line), 100 ps (magenta line), and 200 ps (blue line). Relaxation time $\tau_\mathrm{relax}=$50 ps. The grey dotted line shows the pulse shape with exponential rise/fall $\propto \exp(-t/\tau)$. The rising/falling interval is shown with shaded area.}
\vspace{-0.4 cm}
\label{fig_asymmetric_pulse}
\end{figure}
Nonzero displacement can be achieved with an asymmetric pulse, as illustrated by the magenta (fall time 100 ps) and blue (fall time 200 ps) lines in Fig.~\ref{fig_asymmetric_pulse}. The corresponding asymmetry of the velocity during the rising and falling intervals (Fig.~\ref{fig_asymmetric_pulse}, right axis) is due to the frictional force. Friction sets a threshold for the DW depinning and prevents DW motion for small field rates $\dot{H}_\mathrm{ac}$. As a result, the velocity of backward motion diminishes with increasing fall time. At some critical value of the fall time the backward motion of the DW is blocked (blue lines in Fig.~\ref{fig_asymmetric_pulse}) and the displacement of the DW is maximal.
The maximal DW displacement during the pulse depends upon the relation between risetime $\tau_\mathrm{raise}$ and relaxation time of the DW, $\tau_\mathrm{relax}$.
For a given material (fixed relaxation time) the optimal rising time is close to $\tau_{\rm relax}$. For a given experimental technique (fixed raise time) a longer relaxation time is preferable (magenta vs blue lines). Note, that a small relaxation time is typical for the metallic systems like Mn$_2$Au, while a large $\tau_\mathrm{relax}$ is more typical for insulators like NiO.
Although the displacement of a DW during one pulse is limited by the internal damping, the friction, and the attainable pulse parameters, a DW can be moved to any distance by a periodic set of pulses, as shown in
Fig.~\ref{fig_rachet}. The average velocity of such ratchet-like motion ($0.44$ m/s in this example) can be controlled by a proper choice of the pulse duration and the interval between the pulses.
To attain maximal velocity, the time between rise and fall times should be minimized (white range in Fig.~\ref{fig_rachet}).
We also note that this type of ratchet force is different from its counterpart in FM materials, where an oscillating motion is induced instead.\cite{Kruger2014}
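The pulse-driven dynamics of Eq.~(\ref{eq_point-mass_equation}) can be mimicked by a simple explicit integration with viscous damping, a dry-friction threshold, and a drive proportional to $\dot{H}_\mathrm{ac}$; the sketch below is schematic (forces are expressed in units of the friction force, the critical field rate is a placeholder, and the DW velocity is taken simply proportional to the momentum, which holds only well below the limiting velocity).
\begin{verbatim}
import numpy as np

def ratchet_displacement(t, H, tau_relax=50e-12, Hdot_c=5e7):
    """Integrate a dimensionless form of the point-mass equation: the drive is
    (dH/dt)/Hdot_c, where Hdot_c is the critical field rate that just overcomes
    static friction (unit friction force); displacement is in arbitrary units."""
    dHdt = np.gradient(H, t)
    p, x = 0.0, np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        p += dt * (-p / tau_relax + dHdt[i] / Hdot_c)   # damping + drive
        p = np.sign(p) * max(abs(p) - dt * 1.0, 0.0)    # dry friction impulse
        x[i] = x[i - 1] + dt * p                        # v taken proportional to p
    return x

# sawtooth pulses: fast 50 ps rise, slower 100 ps fall, 10 mT amplitude
t = np.linspace(0.0, 2e-9, 200001)
rise, fall, amp = 50e-12, 100e-12, 10e-3
phase = t % (rise + fall)
H = np.where(phase < rise, amp * phase / rise, amp * (1.0 - (phase - rise) / fall))
x = ratchet_displacement(t, H)   # net drift accumulates pulse after pulse
\end{verbatim}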
\begin{figure}[h]
\centering
\includegraphics[width=0.7\columnwidth]{fig_rachet_3}
\caption[fig_single_pulse]{Ratchet-like displacement (magenta line) of the DW under the sawtooth-shaped pulses (grey dotted line). Blue line shows the effective displacement as a function of time, the average velocity being 0.44 m/s. $\tau_\mathrm{relax}=\tau_\mathrm{raise}$=50 ps.
The fall time is 100 ps.}
\label{fig_rachet}
\end{figure}
In summary, we exploit the use of asymmetric field pulses to displace
AF DWs. We ascertain that asymmetric sawtooth-shaped pulses of the magnetic field in combination with the natural defect-induced static friction enable unidirectional
controlled ratchet-like motion of an AF DW. This mechanism is broadly applicable to many different types of AF materials and can induce synchronous motion of multiple domain walls, as required for applications.
We acknowledge support from the Humboldt Foundation, the EU (Wall PEOPLE-2013-ITN 608031; MultiRev ERC-2014-PoC 665672) as well as the Center of Innovative and Emerging Materials at Johannes Gutenberg University Mainz, the Graduate School
of Excellence Materials Science in Mainz (CSC 266),
the DFG (in particular SFB TRR 173 Spin+X), the Ministry of Education of the Czech Republic Grant No. LM2011026, and from the Grant Agency of the Czech Republic Grant no. 14-37427
|
1,314,259,995,593 | arxiv | \section{Introduction}
It is straightforward that the distribution of a homogeneous Poisson point process on $\mathbb{R}^d$ is preserved
by isometries. In the literature, various \emph{translation-equivariant} and \emph{isometry-equivariant}
operations on Poisson process have been considered:
\begin{itemize}
\item{\textbf{Poisson thinning: } A (deterministic) \emph{Poisson-thinning} is a rule for selecting a
subset of the points in the Poisson process which are equal in distribution to
a lower intensity homogenous Poisson process.
Ball \cite{ball_thinning} demonstrated a deterministic
Poisson-thinning on $\mathbb{R}$ which was \emph{translation equivariant} -
that is, if a translation is applied to the original process,
the new points selected are translates of the original ones by the same vector.
This was extended and refined by Holroyd, Lyons and Soo \cite{holroyd_lyons_soo_poisson_splitting_2011}
to show that for any $d \ge 1$,
there is an \emph{isometry-equivariant} Poisson-thinning
on $\mathbb{R}^d$.
}
\item{\textbf{Poisson allocation:}
Given a realization $\omega$
of a Poisson process on $\mathbb{R}^d$, a \emph{Poisson allocation} partitions $\mathbb{R}^d$ up to measure $0$
by assigning to each point in $\omega$ a \emph{cell} which is a finite-measure subset of $\mathbb{R}^d$.
Hoffman, Holroyd and Peres \cite{hoffman_holroyd_peres_stable_allocation}
constructed an isometry-equivariant allocation scheme for any stationary point process of finite intensity.
The above allocation scheme had the characteristic property of being ``stable''.
Subsequent work demonstrated isometry-equivariant Poisson allocations with other nice properties
such as connectedness of the allocated cells \cite{krikun_allocation}
or good stochastic bounds on the diameter of the cells \cite{chatterjee_peled_romik_gravitational_allocation}.}
\item{\textbf{Poisson matching:} A \emph{Poisson matching} is a deterministic scheme which finds a perfect
matching of two identically distributed independent Poisson processes.
Different isometry-equivariant Poisson matching schemes have been constructed
\cite{holrodyd_matching2011,holroyd_pemantle_peres_schramm_poisson_matching}.
}
\end{itemize}
Consider a transformation
of $\mathbb{R}^d$ which preserves Lebesgue measure. Does there
exist a Poisson thinning which is equivariant with respect to the
given transformation? What about an equivariant Poisson allocation
or matching?
To have a couple of examples in mind, consider the following
transformations $T_{RW},T_{Boole}:\mathbb{R}\to\mathbb{R}$ of the
real line given by
\begin{equation}
T_{RW}(x)= \lfloor x \rfloor + (2x \mod 1) -1 + 2\cdot 1_{(0,\frac{1}{2}]}(x \mod 1)
\end{equation}
and
\begin{equation}
T_{Boole}(x)= x - \frac{1}{x}
\end{equation}
$T_{Boole}$ is known as Boole's transformation. It is a classical example of an ergodic transformation preserving Lebesgue measure.
See \cite{adler_weiss73} for a proof of ergodicity and discussions of this transformation.
You may notice that $T_{RW}$ is isomorphic to the shift map on the space of forward trajectories of the simple random walk
on $\mathbb{Z}$.
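Both maps are easy to implement; the following sketch (floating-point arithmetic, hence only approximate near the discontinuities and the singularity at the origin) also makes the random-walk picture explicit: the integer parts along an orbit of $T_{RW}$ perform a walk whose $\pm 1$ steps are read off the binary digits of the fractional part of the starting point.
\begin{verbatim}
import math

def T_RW(x):
    f = x % 1.0
    step = 1 if 0.0 < f <= 0.5 else -1   # first binary "digit" decides the step
    return math.floor(x) + step + (2.0 * x) % 1.0

def T_Boole(x):
    return x - 1.0 / x

# integer parts of the orbit of T_RW follow a +-1 random-walk trajectory
x, walk = math.pi / 10.0, []
for _ in range(10):
    x = T_RW(x)
    walk.append(math.floor(x))
print(walk)
\end{verbatim}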
From our perspective, it is natural (although mathematically equivalent) to consider an abstract standard $\sigma$-finite measure space $(X,\mathcal{B},\mu)$,
instead of $\mathbb{R}^d$ with Lebesgue measure.
We consider a Poisson point process on this space,
which we denote by $(X^*,\mathcal{B}^*,\mu^*)$.
Any measure preserving transformation $T:X \to X$ naturally induces a map
$T_*:X^* \to X^*$ on the Poisson process.
This transformation $T_*$ is the \emph{Poisson suspension} of $T$ \cite{roy_2009}.
We prove:
\begin{theorem}\label{thm:no_poisson_thin}
Let $T:X \to X$ be any conservative and ergodic measure preserving
transformation of $(X,\mathcal{B},\mu)$ with $\mu(X)=\infty$. There does not exist a $T$-equivariant Poisson thinning, allocation or matching.
\end{theorem}
We prove theorem \ref{thm:no_poisson_thin} by studying ergodic properties of the map
$T \times T_*$, which acts on the product space $(X \times X^*, \mathcal{B}\otimes \mathcal{B}^*, \mu \times \mu^*)$.
We refer to this system as the \emph{Poisson-product} associated with $T$.
The space $X\times X^*$ can be considered as a countable set of ``indistinguishable'' points in $X$, with a unique ``distinguished'' point.
The Poisson-product $T\times T_*$ acts on this by applying the same map $T$ to each point, including the distinguished point.
Our main result about Poisson-products is the following theorem:
\begin{theorem}
\label{thm:poisson_product_ergodic} Let $(X,\mathcal{B},\mu,T)$ be a
conservative, measure preserving transformation with
$\mu(X)=\infty$. Then the Poisson-product $T\times T_*$ is ergodic
if and only if $T$ is ergodic.
\end{theorem}
Before concluding the introduction and proceeding with the details, we recall a couple of results regarding non-existence of certain equivariant operations on Poisson processes. Evans proved in
\cite{evens2010} that with respect to any non-compact group of
\emph{linear} transformations there is no invariant Poisson-thinning on
$\mathbb{R}^d$. Gurel-Gurevich and Peled proved the non-existence of translation equivariant \emph{Poisson thickening} on the real line \cite{gurel_peled}, which means that there is no measurable function on realizations of the a homogenous Poisson process that sends a Poisson process to a higher intensity homogenous Poisson process.
This paper is organized as follows: In section \ref{sec:prelim} we briefly provide some terminology and necessary
background. Section \ref{sec:proof_poisson_product_ergodic}
contains a short proof of theorem \ref{thm:poisson_product_ergodic}
stated above, based on previous work in
ergodic theory.
In section \ref{sec:thinnings}
we prove that any $T$-equivariant thinning is trivial, assuming $T \times T_*$ is ergodic.
In section \ref{sec:allocation_matching} we show that under the same assumptions
there are no $T$-equivariant Poisson allocations or Poisson matchings, using an intermediate result about non-existence of positive equivariant maps into $L^1$.
Section \ref{sec:FROL} discusses the ``leftmost position transformation'' and contains a proof of ergodicity,
yet another application of theorem \ref{thm:poisson_product_ergodic}. Section \ref{sec:group_actions} is a discussion of ergodicity of Poisson products for measure preserving group actions.
\emph{Acknowledgments}: Thanks to Emmanuel Roy for inspiring
conversations and in particular for suggesting the ``leftmost position transformation'' and asking about its
ergodicity. This work is indebted to Jon Aaronson for numerous
contributions, in particular for recalling the paper
\cite{aaro_nadkarni_1987}, which contains key points of the main
result. To Omer Angel and Ori Gurel-Gurevich, thanks for helpful discussions
about equivariant operations on Poisson processes.
\section{Preliminaries}\label{sec:prelim}
In this section we briefly recall some definitions and background from ergodic theory required for the rest of the paper. We also recall some
properties of the Poisson point process on a $\sigma$-finite measure
space.
\subsection{Ergodicity, conservative transformations, and induced transformations}
Throughout this paper
$(X,\mathcal{B},\mu)$ is a standard $\sigma$-finite measure space. We will mostly be interested in the case where $\mu(X)=\infty$.
Also throughout the paper, $T:X \to X$ is a measure preserving transformation,
unless explicitly stated otherwise,
in which case $T$ denotes an action of a group by measure preserving transformations of $(X,\mathcal{B},\mu)$.
The collection of measurable
sets of positive measure will be denoted by $\mathcal{B}^+ := \{B \in \mathcal{B}~:~ \mu(B)>0\}$.
Recall that $T$ is \emph{ergodic} if any set $A \in \mathcal{B}$ which is $T$-invariant has either $\mu(A)=0$ or $\mu(A^c)=0$. Equivalently, $T$ is ergodic if any measurable function $f:X \to \mathbb{R}$ satisfying $f \circ T= f$ $\mu$-almost everywhere is constant on a set of full measure.
A set $W \in \mathcal{B}$ is called a \emph{wandering set} if $\mu(T^{-n}W \cap W)=0$ for all $n > 0$.
The transformation $T$ is called \emph{conservative} if there are no wandering sets in $\mathcal{B}^+$.
The \emph{Poincar\'{e} Recurrence Theorem} asserts that any $T$ which preserves a \emph{finite} measure is conservative.
For a conservative $T$ and $A \in \mathcal{B}^+$, the \emph{first return time function} is
defined for $x \in A$ by $\varphi_A(x)= \min\{ n \ge 1~:~ T^{n}(x) \in A\}$. $\varphi_A$ is finite $\mu$-a.e.\ on $A$; this is a direct consequence of $T$ being conservative.
The \emph{induced transformation} on $A$ is defined by $T_A(x) := T^{\varphi_A(x)}(x)$.
If $T$ is conservative and ergodic and $A \in \mathcal{B}^+$, $T_A:A \to A$ is a conservative,
ergodic transformation of $(A,\mathcal{B} \cap A,\mu\mid_A)$.
See \cite{aaro_book} for a comprehensive introduction to ergodic theory of infinite measure preserving transformations.
\subsection{Cartesian product transformations}
Suppose $T$ is conservative, and $S:Y \to Y$ is a probability preserving transformation
of $(Y,\mathcal{C},\nu)$, namely $\nu(Y)=1$.
It follows (as in proposition $1.2.4$ in \cite{aaro_book})
that the \emph{cartesian product transformation} $T\times S:X\times Y \to X\times Y$ is a conservative,
measure-preserving transforation of the cartesian product measure-space
$(X\times Y,\mathcal{B} \otimes \mathcal{C},\mu \times \nu)$.
\subsection{$L^\infty$-Eigenvalues of measure preserving transformations}
A function $f \in L^{\infty}(X,\mathcal{B},\mu)$ is an
\emph{$L^\infty$-eigenfunction} of $T$ if $f\ne 0$ and $Tf=\lambda
f$ for some $\lambda \in \mathbb{C}$. The corresponding $\lambda$ is
called an $L^\infty$-\emph{eigenvalue} of $T$.
We briefly recall some well known results.
If $T$ is ergodic and $f$ is an
$L^\infty$-eigenfunction, it follows that $|f|$ is constant
almost-everywhere. The $L^\infty$-eigenvalues of $T$ are
$$e(T):= \{ \lambda \in \mathbb{C} ~:~ \exists~ f\in L^{\infty}(X,\mathcal{B},\mu)~ f\ne0 \mbox{ and } Tf=\lambda f\}.$$
If $T$ is conservative, then $|\lambda| = 1$ for any eigenvalue $\lambda$: otherwise, if $|\lambda|>1$,
the set
$$\{x
\in X~:~ |f(x)| \in (|\lambda|^{k},|\lambda|^{k+1}]\}$$ would be a
non-trivial wandering set for some $k \in \mathbb{Z}$ (the case $|\lambda|<1$ is
analogous). Thus, for any conservative transformation $T$, $e(T)$ is a subset of the unit sphere
$$\mathbb{S}^{1}= \{ x
\in \mathbb{C} ~:~ |x|=1\}.$$
$e(T)$ is a group with respect to multiplication, and
carries a natural polish topology, with respect to which the natural
embedding in
$\mathbb{S}^{1}$ is continuous.
When $T$ preserves a finite measure, $e(T)$ is at most countable.
For a general infinite-measure preserving $T$ however, $e(T)$
can be uncountable, and quite ``large'': for instance, it can have arbitrary Hausdorff dimension
$\alpha \in (0,1)$. Importantly for us however, there are limitations on how ``large'' $e(T)$ can be. For instance, $e(T)$ is a \emph{weak Dirichlet} set. This means that
$$\liminf_{n \to \infty}\int |1- \chi_n(s)|dp(s)=0$$
whenever $p$ is a
probability measure on $\mathbb{S}^{1}$ with $p(e(T))=1$, and $\chi_n(s):=\exp\left(2\pi i n s\right)$.
In particular the set $e(T)$ has measure zero with respect to Haar measure on $\mathbb{S}^1$.
We refer the reader to existing literature for further details
\cite{aaro_book,aaro_nadkarni_1987,nadkarni_spectral_ds_book,schmidt_spectra_1982}.
\subsection{The $L^2$-spectrum}
Let
$U_T:L^2(\mu)\to L^2(\mu)$ denote the unitary operator defined by $U_T(f):=f \circ T$.
The \emph{spectral type} of a unitary operator $U$ on a Hilbert
space $H$, denoted $\sigma_U$, is a positive measure on $\mathbb{S}^1$
satisfying:
\begin{enumerate}
\item[(a)]{
$$\left< U^nf,g\right> =
\int_{\mathbb{S}^1}\chi_n(s)h(f,g)(s)d\sigma_U(s),$$ where $h:H\times H \to
L^1(\sigma_U)$ is a sesquilinear map.}
\item[(b)]{$\sigma_U$ is minimal with that property, in the sense that
it satisfies $\sigma_U \ll \sigma$ for any measure $\sigma$ on $\mathbb{S}^1$
satisfying $(a)$.}
\end{enumerate}
In $(b)$ above and throughout the paper, we write $\mu_1 \ll \mu_2$ to indicate that the measure $\mu_1$ is absolutely continuous with respect to $\mu_2$. If $\mu_1 \ll \mu_2$ and $\mu_2 \ll \mu_1$, we say they are in the same measure class.
The spectral type $\sigma_U$ is defined only up to measure class.
Existence of $\sigma_U$ is a formulation of \emph{the scalar spectral
theorem}.
For a measure-preserving transformation $T$, the \emph{spectral
type} of $T$, denoted $\sigma_T$, is the spectral type of the associated
unitary operator $U_T$ on $L^2(\mu)$. For a probability preserving
transformation $S$, the \emph{restricted spectral type} is the
spectral type of the unitary operator $U_S$ restricted to
$L^2$-functions with integral zero.
Our brief exposition here follows $\S2.5$ of \cite{aaro_book}.
\subsection{Poisson processes and the Poisson suspension}\label{subsec:poisson_processes}
For a standard $\sigma$-finite measure space $(X,\mathcal{B},\mu)$, $(X^*,\mathcal{B}^*,\mu^*)$ denotes the
associated \emph{Poisson point process}, which we now describe. $X^*$ is the space of countable subsets of $X$. We will typically denote an element of $X^*$ by $\omega$, $\omega_1$, $\omega_2$ and so on. The $\sigma$-algebra $\mathcal{B}^*$ is generated by sets of the form
\begin{equation}
\label{eq:gen_sigma_algebra_B_star}
\left[ |\omega \cap B\right|=n] := \{\omega \in X^* ~:~ |\omega \cap B| = n\},
\end{equation}
for $n \ge 0$ and $B \in \mathcal{B}$.
The probability measure $\mu^*$
is uniquely defined by requiring that for any
pairwise disjoint $A_1,A_2,\ldots,A_n \in \mathcal{B}$,
if $\omega \in X^*$ is sampled according to $\mu^*$, then
$|\omega \cap A_i|$ are jointly independent random variables
individually distributed Poisson with expectation $\mu(A_i)$:
\begin{equation}\label{eq:poisson_def}
\mu^*\left(|\omega \cap A|=k \right)=e^{-\mu(A)}\frac{\mu(A)^k}{k!}.
\end{equation}
The underlying measure $\mu$ is called the \emph{intensity} of the Poisson process. We will assume that the measure $\mu$ has no atoms, namely $\mu(\{x\})=0$ for any $x \in X$. This is a necessary and sufficient condition to avoid multiplicity of points almost surely with respect to $\mu^*$.
A Poisson point process can be defined on very general measure spaces, under milder assumptions than ``standard''.
Details of the construction and general properties of Poisson processes can be found for instance in \cite{kingman_poisson_book,kingman_poisson_process_revisted}.
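As a concrete illustration, the restriction of the process to a set of finite measure can be sampled exactly: the number of points is Poisson distributed with mean equal to the measure of the set, as in \eqref{eq:poisson_def}, and, given the count, the points are i.i.d.\ with the normalized measure. A minimal sketch for Lebesgue measure on an interval (an illustration only, not tied to any particular construction used later in the paper):
\begin{verbatim}
import numpy as np

def sample_poisson_window(a, b, rng=None):
    """Sample the restriction to [a, b] of a Poisson process on (R, Lebesgue):
    the number of points is Poisson(b - a); given the count, the points are
    i.i.d. uniform on [a, b]."""
    rng = rng or np.random.default_rng()
    n = rng.poisson(b - a)
    return np.sort(rng.uniform(a, b, size=n))

# counts in disjoint sets are independent Poisson random variables
omega = sample_poisson_window(0.0, 10.0)
print(len(omega), np.sum(omega < 5.0), np.sum(omega >= 5.0))
\end{verbatim}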
To make various measurability statements in the following sections more transparent, we assume the following technical condition:
There is a fixed sequence $\{\beta_n\}_{n=1}^\infty$ of countable partitions of $X$ into $\mathcal{B}$-measurable sets,
such that $\beta_{n+1}$ refines $\beta_n$, with the additional property that the mesh of these partitions goes to
$0$, namely:
$$\lambda(\beta_n):=\sup\{ \mu(B) ~:~ B \in \beta_n\} \to 0 \mbox{ as } n \to \infty.$$
We assume that $\mathcal{B} = \bigvee_{n=1}^\infty \sigma(\beta_n)$ is the $\sigma$-algebra generated by the union of these partitions. For instance, if $(X,\mathcal{B},\mu)$ is the real line with Lebesgue measure on the Borel sets, we can take $\beta_n$ to be the partition into half-open intervals with endpoints on the lattice $\frac{1}{2^n}\mathbb{Z}$.
The $\sigma$-algebra $\mathcal{B}^*$ can now be defined by
$$\mathcal{B}^* = \bigvee_{n=1}^\infty \beta_n^*,$$
where $\beta_n^*$ is the $\sigma$-algebra generated by sets of the form \eqref{eq:gen_sigma_algebra_B_star} with $B \in \beta_n$ and $n \in \{0,1,2,\ldots\}$.
The \emph{Poisson suspension} of a measure preserving map $T:X \to
X$, is the natural map obtained by applying $T$ on $X^*$. As in \cite{roy_2009}, we denote it by $T_*:X^* \to X^*$. This transformation is formally defined by: $$T_*(\omega)=\{T(x)~:~ x
\in\omega\}.$$ $T_*$ is a probability-preserving transformation of $(X^*,\mathcal{B}^*,\mu^*)$.
The following proposition relates the spectral measures of $T$ and $T_*$ \cite{roy_2009}:
\begin{prop}
\label{prop_poisson_spectral}
If $\sigma$ is the spectral-type of $T$.
The restricted spectral type of $T_*$ is given by:
$$\sigma_{T_*} = \sum_{n \ge 1} \frac{1}{n!}\sigma^{\otimes n}.$$
\end{prop}
It is a classical result that a probability-preserving transformation is ergodic iff
its restricted spectral type has no atom at $\lambda=1$, and is
\emph{weakly mixing} iff its restricted spectral type has no atoms in
$\mathbb{S}^1$ (this property is also equivalent to ergodicity of $T\times T$).
It follows that $T_*$ is ergodic
iff $T_*$ is weakly mixing iff there are no $T$-invariant sets of
finite measure in $\mathcal{B}^+$ \cite{roy_2009}.
In the following sections we will use the map $\pi:X \times X^* \to X^*$ given by
\begin{equation}
\label{eq:pi_factor_def}
\pi(x,\omega) = \{x\} \cup \omega.
\end{equation}
The map $\pi$ defined by \eqref{eq:pi_factor_def}
is a measurable map from between the measure spaces $(X\times X^*, \mathcal{B} \otimes \mathcal{B}^*)$ and
$(X^*,\mathcal{B}^*)$.
This can be verified directly using the following set identities:
$$\pi^{-1}\left[ |\omega \cap A| = 0\right] = (X\setminus A) \times [ |\omega \cap A| = 0],$$
and
$$\pi^{-1}\left[ |\omega \cap A | = n\right] = \left((X \setminus A) \times [ |\omega \cap A| = n]
\right) \cup \left(A \times \left[ |\omega \cap A| \in\{n-1,n\}\right]\right),$$
for $A \in \mathcal{B}$ and $n \in \mathbb{N}$.
In fact, $\pi$ is an \emph{$\infty$-factor map} between the measure preserving maps $T \times T_*$ and $T_*$,
in the sense of chapter $3$ of \cite{aaro_book}: This means that $\pi \circ (T \times T_*) = T_* \circ \pi$ and for $A \in \mathcal{B}^*$
$$(\mu \times \mu^*) \circ \pi^{-1} (A)=
\begin{cases}
0 & \mbox{ if } \mu^*(A)=0\\
\infty & \mbox{otherwise}
\end{cases}
$$
\section{Ergodicity of Poisson product for conservative transformations}
\label{sec:proof_poisson_product_ergodic}
We now provide a proof of Theorem \ref{thm:poisson_product_ergodic}.
The argument we use is an adaptation of \cite{aaro_nadkarni_1987}.
To prove our result, we invoke the following condition for
ergodicity of cartesian products, due to M. Keane:
\begin{theorem*}\textbf{(The Ergodic Multiplier Theorem)}
Let $S$ be a probability preserving transformation and $T$ a conservative, ergodic, non-singular transformation.
$S \times T$ is ergodic iff $\sigma_S(e(T))=0$, where:
\begin{itemize}
\item{$\sigma_S$ is the restricted spectral type of $S$,}
\item{$e(T)$ is the group of $L^\infty$-eigenvalues of $T$.}
\end{itemize}
\end{theorem*}
A proof of this result is provided for instance in section $2.7$ of \cite{aaro_book}.
By proposition \ref{prop_poisson_spectral}, the restricted spectral-type of the Poisson suspension $T_*$
is a linear combination of convolution powers of the spectral type of $T$.
We make use of the following basic lemma about convolution of
measures and equivalence of measure classes. A short proof is provided here for the
sake of completeness:
\begin{lemma}\label{lem:convolution_respects_measure_class}
Let $\mu_1$ and $\mu_2$ be Borel probability measures on
$\mathbb{S}^1$ with the same
null-sets. For any Borel probability measure $\nu$ on $\mathbb{S}^1$,
the measures $\mu_1 * \nu$ and $\mu_2 * \nu$ have the same
null-sets.
\end{lemma}
\begin{proof}
We will prove that $\mu_1 \ll \mu_2$ implies that $\mu_1 * \nu \ll \mu_2 * \nu$ which suffices by symmetry.
We assume $\mu_1 \ll \mu_2$, and show that for any $\epsilon>0$ there exists $\delta>0$ so that any set $A \in \mathcal{B}(\mathbb{S}^1)$ with $(\mu_1 * \nu)(A) \ge \epsilon$ has $(\mu_2 * \nu)(A) \ge \delta$.
Fix $\epsilon >0$ and choose any $A \in \mathcal{B}(\mathbb{S}^1)$ with $(\mu_1 * \nu)(A) \ge \epsilon$. Since $\mu_1$ is a probability measure, it follows that
$$\nu\left(\{ x \in \mathbb{S}^1~:~ \mu_1(A\cdot x) \ge \frac{\epsilon}{2} \}\right) \ge \frac{\epsilon}{2},$$
for otherwise $(\mu_1 * \nu)(A)$ would be strictly smaller than $\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$.
Since $\mu_1 \ll \mu_2$, there exists $\delta'>0$ so that
$\mu_1(B) \ge \frac{\epsilon}{2}$ implies $\mu_2(B) \ge \delta'$.
Thus,
$$\nu\left(\{ x \in \mathbb{S}^1~:~ \mu_2(A\cdot x) \ge \delta' \}\right) \ge \frac{\epsilon}{2}.$$
It follows that
$(\mu_2 * \nu)(A) \ge \delta'\cdot\frac{\epsilon}{2}$, which establishes the claim with $\delta = \delta'\cdot\frac{\epsilon}{2}$.
\end{proof}
From this we deduce the following lemma:
\begin{lemma}\label{lem:eignvalues_act_non_singularly}
Let $T$ be a conservative,
measure-preserving transformation.
For any $n \ge 1$, the group $e(T)$ acts non-singularly on
$\sigma_{T}^{\otimes n}$, the $n$'th convolution power of the restricted spectral type of $T$.
\end{lemma}
\begin{proof}
Our claim is that
\begin{equation}
\label{eq:eig_non_singular}
\forall t \in e(T), ~\sigma_{T}^{\otimes n} \sim \delta_t * \sigma_{T}^{\otimes n},
\end{equation}
where $\delta_t$ denotes the Dirac measure at $t$, and $\sim$ denotes equivalence of measure classes.
For $n=1$, a proof can be found in \cite{aaro_nadkarni_1987,hann_79}.
Equation \eqref{eq:eig_non_singular} follows
for $n >1$
by induction using lemma \ref{lem:convolution_respects_measure_class},
with $t \in e(T)$, $\sigma_T$ and $\delta_t * \sigma_T$
substituting for $\mu_1$ and $\mu_2$ respectively, and $\sigma_T^{\otimes (n-1)}$ substituting for $\nu$.
\end{proof}
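Spelling out the first inductive step ($n=2$) for concreteness: for $t \in e(T)$, the case $n=1$ gives $\sigma_T \sim \delta_t * \sigma_T$, and lemma \ref{lem:convolution_respects_measure_class}, applied with $\mu_1=\sigma_T$, $\mu_2=\delta_t * \sigma_T$ and $\nu=\sigma_T$, yields
$$\sigma_T^{\otimes 2} = \sigma_T * \sigma_T \sim (\delta_t * \sigma_T) * \sigma_T = \delta_t * \sigma_T^{\otimes 2}.$$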
\noindent\textbf{Completing the proof of theorem
\ref{thm:poisson_product_ergodic}:}
By the ergodic multiplier Theorem above, proving ergodicity of
the Poisson-product amounts to proving $\sigma_{T_*}(e(T))=0$.
Since $\sigma_{T_*}=\sum_{n \ge 1}\frac{1}{n!}\sigma_{T}^{\otimes
n}$, it is sufficient to prove that for all $n \ge 1$,
\begin{equation}
\label{eq:eignvalues_spectral_null}
\sigma_{T}^{\otimes n}(e(T))=0.
\end{equation}
A proof that $\sigma_T(e(T))=0$ is provided in \cite{hann_79} (see also \cite{aaro_nadkarni_1987}). This is the case $n=1$ of equation \eqref{eq:eignvalues_spectral_null}. We also refer to the discussion in chapter $9$ of \cite{nadkarni_spectral_ds_book}.
For convenience of the reader and in preparation for the discussion in section \ref{sec:group_actions}, we briefly recall the arguments leading to this result:
Suppose the contrary: $\sigma_{T}(e(T)) >0$. Since $e(T)$ acts non-singularly on $\sigma_T$, it follows that
$\sigma_{T}\mid_{e(T)}$ is a quasi-invariant measure on
$e(T)$. Thus,
$e(T)$
can be furnished with a locally-compact second-countable topology,
respecting the Borel structure inherited from $\mathbb{S}^1$. Haar
measure on $e(T)$ must then be equivalent to $\sigma_{T}\mid_{e(T)}$.
With respect to this topology, we have that $e(T)$ is a locally compact group,
continuously embedded in $\mathbb{S}^{1}$, where the topological embedding is also a group embedding.
In this situation, it follows as in \cite{aaro_nadkarni_1987} that $e(T)$ is either discrete or $e(T)=\mathbb{S}^1$.
The possibility that $e(T)$ is discrete is ruled out since this would imply $\sigma_{T}$ has atoms, which
means $T$ has $L^2(\mu)$ eigenfunctions. This is impossible since $T$ is an ergodic transformation preserving an infinite measure.
The alternative is that $e(T)=\mathbb{S}^1$. This is impossible since $e(T)$
is a weak Dirichlet set, and thus must be a null set with respect to Haar measure on $\mathbb{S}^1$ \cite{schmidt_spectra_1982}.
To prove the equality in \eqref{eq:eignvalues_spectral_null} for $n >1$, note that the convolution power of an atom-free measure is itself atom-free and that by lemma \ref{lem:eignvalues_act_non_singularly} above $e(T)$ also acts non-singularly on $\sigma_T^{\otimes n}$. The result now follows using the same arguments outlined above for the case $n=1$.
This completes the proof of theorem \ref{thm:poisson_product_ergodic}.
\section{Non-existence of equivariant thinning}
\label{sec:thinnings}
Here is a formalization of the notion of a (deterministic) \emph{thinning}. This is a
$\mathcal{B}^*$-measurable map $\Psi:X^* \to X^*$, satisfying
$$\mu^*([|\Psi(\omega) \cap B| \le |\omega \cap B|])=1 ~ \forall B \in \mathcal{B}.$$
This essentially means that $\Psi$ is a measurable map on the space $X^*$ of countable sets of $X$,
for which almost-surely $\Psi(\omega) \subset \omega$.
A \emph{Poisson thinning} satisfies the extra condition that $\mu^*\circ \Psi^{-1} =
(\theta \mu)^*$ for some $\theta \in (0,1)$.
By $(\theta \mu)^*$ we mean the measure on $(X^*,\mathcal{B}^*)$ which corresponds to a Poisson process with intensity given by $\theta \cdot \mu$. In other words, the law of the countable set $\Psi(\omega)$ is that of a lower-intensity Poisson process.
Given a measure preserving transformation $T:X \to X$, a thinning
$\Psi$ is called \emph{$T$-equivariant} if $\Psi \circ T_* = T_* \circ \Psi$.
A thinning $\Psi$ is \emph{trivial} if
$$\mu^*( [\Psi(\omega)= \emptyset])=1 \mbox{ or } \mu^*( [\Psi(\omega)= \omega])=1.$$
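As an illustrative aside (not used in what follows), a simple deterministic thinning on $X=\mathbb{R}_+$ with Lebesgue intensity is the map that keeps every second point of $\omega$ in increasing order; it satisfies $\Psi(\omega)\subset\omega$ but is not a Poisson thinning, since the gaps between retained points are sums of two independent exponential variables. The following Python sketch checks this numerically.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def keep_every_second(omega):
    # a deterministic thinning: keep the 2nd, 4th, 6th, ... smallest points
    return np.sort(omega)[1::2]

T = 10000.0
omega = np.sort(rng.uniform(0.0, T, rng.poisson(T)))   # rate-1 Poisson on (0, T]
gaps = np.diff(keep_every_second(omega))

# for an exponential gap distribution var/mean^2 = 1; here the retained
# gaps are sums of two exponentials, so the ratio is close to 1/2
print(np.var(gaps) / np.mean(gaps) ** 2)
\end{verbatim}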
\begin{prop}\label{prop:no_poisson_thinning}
Let $T$ be a group-action by measure preserving transformations.
If ~$T \times T_*$ is ergodic,
there does not exist a non-trivial $T$-equivariant thinning.
\end{prop}
\begin{proof}
Suppose by contradiction that $\Psi$ is a non-trivial $T$-equivariant thinning.
Consider the set
\begin{equation}
A = \{ (x,\omega) \in X \times X^*~:~ x \in \Psi(\omega \cup \{x\})\}.
\end{equation}
Measurability of the set $A$ is verified by the following:
$$A = \bigcap_{n=1}^\infty\bigcup_{B \in \beta_n}\left(B \times X^* \right)\cap \left((\Psi \circ \pi)^{-1}[|\omega \cap B| >0] \right) \mod \mu \times \mu^*,$$
where $\{\beta_n\}_{n=1}^\infty$ is a ``decreasing net'' of countable partitions as in section \ref{sec:prelim}.
Since $\Psi$ is $T$-equivariant, the set A is a $T \times T_*$ invariant set. By ergodicity of $T \times T_*$, either $(\mu \times \mu^*)(A)=0$ or $(\mu \times \mu^*)(A^c)=0$.
Intuitively, $A$ is the subset of $X\times X^*$ where applying the thinning $\Psi$ to the union of the ``indistinguishable points'' with the ``distinguished point'' does not delete the distinguished point. We will complete the proof by showing that this
dichotomy is incompatible with $\Psi$ being non-trivial.
For $j \in \mathbb{N}$, define $\pi_{(j)}:\overbrace{X\times\ldots \times X}^j \times X^* \to X^*$ by $$\pi_{(j)}(x_1,\ldots ,x_j,\omega)=\bigcup_{k=1}^j\{x_k\}\cup \omega.$$
$\pi_{(j)}$ is $\mathcal{B}^{\otimes j} \otimes \mathcal{B}^*$-measurable.
This follows from measurability of the map $\pi$ given by \eqref{eq:pi_factor_def}, which coincides with $\pi_{(1)}$.
For any $B \in \mathcal{B}$ with $0<\mu(B)< \infty$, and $j \in \mathbb{N}$, we consider the following probability measures:
\begin{enumerate}
\item[(i)]{$$\mu^*_{B,j}(\cdot) := \mu^*\left(\cdot~\mid [|\omega \cap B| =j]\right).$$
This is a probability measure on $(X^*,\mathcal{B}^*)$ corresponding to a Poisson process with intensity $\mu$, conditioned to have exactly $j$ points in the set $B$.
}
\item[(ii)]{
$$\hat{\mu}_{B,j}(\cdot) :=\frac{ (\mu \times \mu^*)\mid_{B \times [|\omega \cap B| =j]} }{\mu(B)\cdot\mu^*([|\omega \cap B| =j])}(\cdot).$$
$\hat{\mu}_{B,j}$ is a probability measure on $X\times X^*$ given by the product of a random point in $B$, distributed according to $\mu\mid_B$ and an independent Poisson process with intensity $\mu$, conditioned to have exactly $j$ points inside the set $B$.
}
\item[(iii)]{
$$\tilde{\mu}_{B,j}(\cdot) := \frac{\overbrace{\mu\mid_B\times \ldots \times \mu\mid_B}^j\times (\mu\mid_{B^c})^* }{\mu(B)^j}(\cdot)$$
This is the probability on $(X^j \times X^*,\mathcal{B}^{\otimes j} \otimes \mathcal{B}^*)$ which corresponds to $j$ independent random points identically distributed according to $\mu\mid_B$ and an independent Poisson process of intensity $\mu\mid_{B^c}$.
}
\end{enumerate}
From the conditional-uniformity property of the Poisson process (given $|\omega \cap B|=k$, the $k$ points of $\omega$ in $B$ are i.i.d.\ with law $\mu\mid_B/\mu(B)$, independently of $\omega \cap B^c$), it directly follows that the probability measures defined above are related as follows:
\begin{equation}\label{eq:poisson_cond_iid}
\hat{\mu}_{B,j}\circ \pi^{-1}= \tilde{\mu}_{B,j+1} \circ \pi_{(j+1)}^{-1}= \mu^*_{B,j+1},
\end{equation}
and
\begin{equation}\label{eq:poisson_cond_iid2}
\hat{\mu}_{B,j}= \tilde{\mu}_{B,j+1} \circ \pi_{[2,j+1]}^{-1},
\end{equation}
where, for $j \ge 2$, $\pi_{[2,j]}:\overbrace{X\times\ldots \times X}^j \times X^* \to X\times X^*$ is given by $$\pi_{[2,j]}(x_1,\ldots ,x_j,\omega)=\left(x_1,\bigcup_{k=2}^j\{x_k\}\cup \omega\right).$$
In particular, it follows that $\pi_{(j)}$ is a nonsingular map for all $j \ge 1$, in the sense that
the inverse image of a $\mu^*$-null set is always $\overbrace{\mu \times \ldots \times \mu}^j \times \mu^*$-null.
Assuming $\Psi$ is not a trivial thinning implies that
there exist $B \in \mathcal{B}$ with $0<\mu(B)<\infty$
so that
$$\mu^*\left( 0 < | \Psi(\omega) \cap B| < | \omega \cap B| \right)>0.$$
It follows that for some $j \ge 1$,
\begin{equation}\label{eq:prob_delete}
\mu^*_{B,j+1}\left( 0 < \frac{| \Psi(\omega) \cap B|}{ | \omega \cap B|} < 1\right)>0.
\end{equation}
Now by \eqref{eq:poisson_cond_iid} and \eqref{eq:poisson_cond_iid2}, using symmetry of $\tilde{\mu}_{B,j+1}$ with respect to the variables $(x_1,\ldots,x_{j+1})$, it follows that the probability
$\hat{\mu}_{B,j}\left( x \in \Psi(\pi(x,\omega))\right)$ is equal to the expectation of $\frac{| \Psi(\omega) \cap B|}{ | \omega \cap B|}$ under $\mu^*_{B,j+1}$. By \eqref{eq:prob_delete} this expectation must be strictly positive and smaller than one. This contradicts triviality of the set $A$: Either $(\mu\times \mu^*)(A)=0$, in which case $\hat{\mu}_{B,j}\left( x \in \Psi(\pi(x,\omega))\right)=0$, or $(\mu\times \mu^*)(A^c)=0$, in which case $\hat{\mu}_{B,j}\left( x \in \Psi(\pi(x,\omega))\right)=1$.
\end{proof}
\section{Non-existence of equivariant allocation and matching}\label{sec:allocation_matching}
The aim of this section is to establish
the non-existence of $T$-equivariant Poisson allocation and Poisson matching, under an ergodicity assumption of a certain extension of $T$.
Combined with theorem \ref{thm:poisson_product_ergodic}, this will establish the last part of theorem \ref{thm:no_poisson_thin}.
We begin with an intermediate result about measure-preserving systems.
Consider
a measurable function $ \Phi:X \to L^1(\mu)$, sending $x \in X$ to $\Phi_x \in L^1(\mu)$,
which is $T$-equivariant in the sense that $\Phi_{Tx} \circ T= \Phi_x $. Such a function $\Phi$ can be interpreted as a $T$-equivariant ``mass allocation'' scheme.
For instance, on $X=\mathbb{R}^d$ with Lebesgue measure, $\Phi_x(y) = 1_{B_1(x)}(y)$ and $\Phi_x(y) = \exp(-\|x-y\|)$ both define isometry-equivariant ``mass allocations''. The latter can be considered a ``fractional allocation'', in the sense that it takes values in the interval $(0,1)$. Non-existence of $T$-equivariant Poisson allocation and Poisson matching will be a consequence of the following:
\begin{prop}\label{prop:no_mass_allocation}
Let $T$ be a measure-preserving group action on $(X,\mathcal{B},\mu)$.
If $T \times T_*$ is ergodic, and $\mu(X)=\infty$,
any $T$-equivariant
measurable function
$ \Phi:X \to L^1(\mu)$
must be equal to $0$ $\mu$-a.e.
\end{prop}
\begin{proof}
Suppose $\Phi:X \to L^1(\mu)$ satisfies $\Phi_{Tx} \circ T= \Phi_x$, and suppose, for the sake of contradiction, that $\Phi$ is not equal to $0$ $\mu$-a.e.
Note that $\|\Phi_x\|_{L^1(\mu)}$ is a $T$-invariant function of $x$ (since $T$ preserves $\mu$, $\|\Phi_{Tx}\|_{L^1(\mu)} = \|\Phi_{Tx}\circ T\|_{L^1(\mu)} = \|\Phi_x\|_{L^1(\mu)}$), so ergodicity of $T$ implies that it is $\mu$-a.e.\ equal to a constant, which by our assumption is non-zero.
Consider the function $F: X \times X^* \to \mathbb{R}$ given by:
$$ F(x,\omega) = \sum_{y \in \omega} |\Phi_x(y)| .$$
We verify that $F$ indeed coincides with a $\mathcal{B}\otimes \mathcal{B}^*$-measurable function on a set of full $\mu\times\mu^*$-measure:
Indeed, $$F(x,\omega) = \sum_{B \in \beta_1} \sum_{ y \in \omega \cap B} | \Phi_x(y)|,$$
and by Martingale convergence,
$$\sum_{ y \in \omega \cap B} | \Phi_x(y)| = \lim_{n \to \infty} E_{\mu^*}\left(\sum_{ y \in \omega \cap B} | \Phi_x(y)| \mid \beta_n^*\right),$$
for $\mu \times \mu^*$-almost-every $(x,\omega)$.
For $B \in \beta_1$ and $n \ge 1$ we have
$$ E_{\mu^*}\left(\sum_{ y \in \omega \cap B} | \Phi_x(y)| \mid \beta_n^*\right) = \sum_{\substack{D \in \beta_n \\ D \subset B}}E_{\mu^*}\left(\sum_{y \in (\omega \cap D)}|\Phi_x(y)| \mid \beta_n^*\right) = \sum_{\substack{D \in \beta_n \\ D \subset B}} \frac{|\omega \cap D|}{\mu(D)}\int_D |\Phi_x(y)|\,d\mu(y),$$
and the right hand side is clearly $\mathcal{B} \otimes \beta_n^*$-measurable.
Let
$$ \tilde{F}(x):= \int |F(x,\omega)| d\mu^*(\omega) = \int \sum_{y \in \omega} |\Phi_x(y)| d\mu^*(\omega).$$
It follows from the definition of $\mu^*$ that
$\tilde{F}(x)= \| \Phi_x\|_{L^1(\mu)}$. Thus, by ergodicity of $T$, $\tilde{F}$
is equal to a non-zero (finite) constant $\mu$-almost everywhere. In particular, $F$ is finite $\mu \times \mu^*$-almost everywhere.
Observe that $F$ is $T\times T_*$-invariant (since $\Phi_{Tx}\circ T = \Phi_x$ and $T_*\omega = \{Ty~:~ y \in \omega\}$), so by ergodicity of $T \times T_*$ it must be constant $\mu \times \mu^*$-a.e. On the other hand, for any $\epsilon >0$ and $M >0$, we have $F(x,\omega) > M$ whenever $(x,\omega) \in X \times X^*$ satisfies $|\omega \cap A_{x,\epsilon}| > \frac{M}{\epsilon}$, where $A_{x,\epsilon}:=\{y \in X~:~ |\Phi_x(y)| > \epsilon\}$.
Writing $c$ for the a.e.\ constant value of $\|\Phi_x\|_{L^1(\mu)}$, since $c>0$ there exist $\epsilon,\delta>0$ for which the set $X_{\epsilon,\delta}:=\{x \in X~:~ \mu(A_{x,\epsilon})>\delta\}$ has positive $\mu$-measure; note also that $\mu(A_{x,\epsilon}) \le c/\epsilon$ for $\mu$-a.e.\ $x$.
From the definition of the Poisson process it thus follows that, for every $M>0$, with $k=\lfloor M/\epsilon \rfloor +1$,
$$ (\mu \times \mu^*)\left( [ F > M]\right) \ge \mu(X_{\epsilon,\delta}) \cdot e^{-c/\epsilon}\,\frac{\delta^{k}}{k!}.$$
Because the right hand side is strictly positive for every $M >0$,
it follows that $F$ is not essentially bounded, which contradicts $F$ being almost-everywhere equal to a finite constant.
\end{proof}
Together with Theorem \ref{thm:poisson_product_ergodic}, Proposition \ref{prop:no_mass_allocation} immediately gives the following corollary, which does not seem to involve Poisson processes at all:
\begin{corollary}
Let $T:X\to X$ be a conservative and ergodic measure preserving transformation of $(X,\mathcal{B},\mu)$ with $\mu(X)=\infty$.
Any measurable function
$ \Phi:X \to L^1(\mu)$
satisfying $\Phi_{Tx}\circ T = \Phi_x $ must be equal to $0$ $\mu$-a.e.
\end{corollary}
We now turn to define and establish a non-existence result for equivariant Poisson allocations:
By a \emph{Poisson allocation rule} we mean a $\mathcal{B} \otimes \mathcal{B}^*$-measurable map $\Upsilon:X \times X^* \to L^1(\mu)$ satisfying the following properties:
\begin{enumerate}
\item[(A1)]{\emph{Non-negativity:} $\Upsilon_{(x,\omega)}(y) \ge 0$.}
\item[(A2)]{\emph{Partition of unity:} $ \sum_{x \in \omega} \Upsilon_{(x,\omega)}(y) = 1$ for $\mu$-a.e.\ $y$, for $\mu^*$-a.e.\ $\omega$.}
\item[(A3)]{$\Upsilon_{(x,\omega)} \equiv 0$ if $x \not\in \omega$.}
\end{enumerate}
If $x \in \omega$, we think of $\Upsilon_{(x,\omega)}$ as ``the cell allocated to $x$''. Properties $(A1)$ and $(A2)$ above guarantee that $\Upsilon$ essentially takes values in the interval $[0,1]$.
The three properties above together express the statement that $\Upsilon_{(\cdot,\omega)}$ corresponds to a partition of $X$, up to a null set, between the points in $\omega$, which assigns each $x \in \omega$ finite mass. For a ``proper'' allocation, we would require that $\Upsilon_{(x,\omega)}$ only takes values in $\{0,1\}$, but this extra requirement is not necessary in order to prove our result.
It is often useful to consider a wider class of Poisson allocation rules, where $\Upsilon_{(x,\omega)}$ is undefined for a null set of $(x,\omega)$'s, and $\Upsilon$ is only measurable with respect to the $\mu \times \mu^*$-completion of the $\sigma$-algebra $\mathcal{B} \otimes \mathcal{B}^*$. However, conditions $(A2)$ and $(A3)$ above are sensitive to changes on $\mu \times \mu^*$-null sets, so we need to be careful and restate them as follows:
\begin{enumerate}
\item[(A1)]{\emph{Non-negativity:} $\Upsilon_{(x,\omega)}(y) \ge 0$.}
\item[(A2')]{\emph{Partition of unity:} $ \int_X \Upsilon_{(x,\omega)} d\mu(x) = 1$ $\mu^*$-a.e.}
\item[(A3')]{$\int_A \Upsilon_{(x,\omega)}d\mu(x) \equiv 0$ $\mu^*$-a.e on $\{ \omega \in X^*~:~ \omega \cap A = \emptyset\}$ whenever $A \in \mathcal{B}$.}
\end{enumerate}
A Poisson allocation rule $\Upsilon$ is \emph{$T$-equivariant} if $\Upsilon_{(Tx,T_*\omega)}\circ T = \Upsilon_{(x,\omega)} $.
\begin{prop}
\label{prop:no_poisson_allocation}
Let $T$ be a group-action by measure preserving transformations, and denote
$S:=T \times T_*$.
If $S \times S_*$ is ergodic,
there does not exist a $T$-equivariant Poisson-allocation.
\end{prop}
\begin{proof}
Given a Poisson allocation $\Upsilon:X \times X^* \to L^1(\mu)$, we will define a $T \times T_*$-equivariant
function $\Phi:X\times X^* \to L^1(\mu \times \mu^*)$ which, since $S \times S_*$ is assumed ergodic and $(\mu\times\mu^*)(X\times X^*)=\infty$, contradicts proposition \ref{prop:no_mass_allocation} applied to the action $S=T \times T_*$.
This is given by:
$$ \Phi_{(x,\omega)}(y,\omega_2)= \Upsilon_{(x,\omega \cup \{x\})}(y).$$
It follows directly that:
$$\| \Phi_{(x,\omega)}\|_{L^1(\mu \times \mu^*)} = \| \Upsilon_{(x,\omega \cup \{x\})}\|_{L^1(\mu)},$$
which is positive and finite $\mu \times \mu^*$-a.e.
Measurability of $\Phi$ follows from the measurability assumptions on $\Upsilon$ and from measurability of the map $(x,\omega) \to \{x\} \cup \omega$.
\end{proof}
We now consider the existence of equivariant Poisson matching
schemes:
Given a pair of independent Poisson process realizations, a (deterministic) \emph{Poisson matching}
assigns a perfect matching (that is, a bijection) between the points of the two realizations, almost surely.
To formalize this we define a Poisson matching as a
measurable function $\Psi:X^* \times X^* \to (X \times X)^*$,
satisfying the following:
\begin{enumerate}
\item[(M1)]{$$ \mu^* \left( \left\{ \omega_2\in X^*~:~ |\Psi(\omega_1,\omega_2) \cap (B_1 \times B_2) | \le \min\{|\omega_1 \cap B_1 |,| \omega_2 \cap B_2 |\}\right\}\right) =1,$$
for $\mu^*$-a.e $\omega_1$ and all $B_1,B_2 \in \mathcal{B}$.
}
\item[(M2)]{
$$ \mu^* \left(\left\{\omega_2 \in X^*~:~ |\Psi(\omega_1,\omega_2) \cap (B_1 \times X) | = |\omega_1 \cap B_1 |\right\}\right) =1,$$
for $\mu^*$-a.e $\omega_1$ and all $B_1 \in \mathcal{B}$.
}
\item[(M3)]{
$$ \mu^* \left(\left\{\omega_1 \in X^*~:~ |\Psi(\omega_1,\omega_2) \cap (X \times B_2) | = |\omega_2 \cap B_2 |\right\}\right) =1,$$
for $\mu^*$-a.e $\omega_2$ and all $B_2 \in \mathcal{B}$.
}
\end{enumerate}
\begin{prop}\label{prop:no_poisson_matching}
Under the assumptions of proposition \ref{prop:no_poisson_allocation},
there does not exist a non-trivial $T$-equivariant Poisson matching.
\end{prop}
\begin{proof}
Suppose $\Psi$ is a $T$-equivariant Poisson matching.
We will define a ``fractional'' $T$-equivariant Poisson allocation $\Upsilon:X \times X^* \to L^1(\mu)$, contradicting Proposition \ref{prop:no_poisson_allocation}.
The (implicit) definition of $\Upsilon$ is given by:
\begin{equation}\label{eq:matching_from_allocation}
\int_A \Upsilon_{(x,\omega_1)}(y)d\mu(y) = \mu^*\left(\{ \omega_2 ~:~ |\Psi(\omega_1,\omega_2)\cap(\{x\} \times A)|>0 \}\right).
\end{equation}
for all $A \in \mathcal{B}$, $\omega_1 \in X^*$ and $x \in X$.
In other words, if $x\in \omega_1$, $\Upsilon_{(x,\omega_1)}$ is the density with respect to $\mu$ of the
conditional distribution of the \emph{partner} of $x$ under the matching $\Psi$, given $\omega_1$.
This defines $\Upsilon$ up to a null set.
It follows from the properties of $\Psi$ that $\Upsilon$ satisfies the conditions $(A1)$,$(A2')$ and $(A3')$ above.
Thus, $\Upsilon$ is indeed a Poisson allocation.
Because $\Psi$ is a $T$-equivariant matching, it follows directly that $\Upsilon$ is a $T$-equivariant allocation.
\end{proof}
To conclude the proof of the last part of Theorem \ref{thm:no_poisson_thin}, we note that if $T$ is a conservative and ergodic measure-preserving transformation, $S=T \times T_*$ is also conservative and ergodic by Theorem \ref{thm:poisson_product_ergodic}, and so $S \times S_*$ is also ergodic, again by Theorem \ref{thm:poisson_product_ergodic}.
\section{The Leftmost position transformation}
\label{sec:FROL}
In this section $X=\mathbb{R}_+$
is the set of positive real numbers, $\mathcal{B}$ is the Borel $\sigma$-algebra on $X$, and $\mu$
is Lebesgue measure on the positive real numbers. $T:X \to X$ is an arbitrary conservative, ergodic, Lebesgue-measure-preserving
map of the positive real numbers.
In order to have a concrete example of such a transformation $T$ at hand, the reader may keep in mind the unsigned version of Boole's transformation, given by $T(x)=|x-\frac{1}{x}|$.
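For completeness, let us briefly verify that this map indeed preserves Lebesgue measure (a standard computation, recalled here for the reader's convenience). A point $y>0$ has exactly two preimages $x_{1,2}=\frac{\sqrt{y^2+4}\pm y}{2}$, one for each branch of $T$, and they satisfy $x_1x_2=1$. Since $|T'(x)| = 1+\frac{1}{x^2} = \frac{x^2+1}{x^2}$ on either branch,
$$\sum_{T(x)=y}\frac{1}{|T'(x)|} = \frac{x_1^2}{x_1^2+1}+\frac{x_2^2}{x_2^2+1} = \frac{x_1^2}{x_1^2+1}+\frac{1}{1+x_1^2}=1,$$
so the pushforward of Lebesgue measure under $T$ is again Lebesgue measure.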
We define the following function:
\begin{equation}
\label{eq:t1}
t_1:X^* \to X \mbox{ by } t_1(\omega)=\inf \omega.
\end{equation}
The map $t_1$ is well defined on a set of full $\mu^*$-measure, namely whenever $\omega \ne \emptyset$.
Note that
$t_1(\omega)$ is the leftmost point of $\omega$ whenever $\omega$ is a discrete countable subset of $\mathbb{R}_+$ .
The map $t_1$ is $\mathcal{B}^*$-measurable since
$$t_1^{-1}(a,b) = \{\omega \in X^*~:~ \omega \cap (0,a]=\emptyset \mbox{ and } \omega \cap(a,b) \ne \emptyset\}.$$
From this, it also follows directly that
$$\mu^*\circ t_1^{-1}(a,b) = e^{-\mu(0,a)}\left(1-e^{-\mu(a,b)}\right) = e^{-a}-e^{-b}.$$
In particular it follows that $ \mu^*\circ t_1^{-1} \ll \mu$.
Define the \emph{leftmost return time} $\kappa:X^* \to \mathbb{N}\cup\{+\infty\}$ by:
\begin{equation}
\label{eq:kappa}
\kappa(\omega) = \inf\{k \ge 1 \; :\; t_1(T_*^k(\omega))=T^k(t_1(\omega))\}.
\end{equation}
$\mu^*$-Almost surely, $\kappa(\omega)$ is the smallest positive
number of iterations of $T_*$ which must be applied to $\omega$ in
order for the image of the originally left-most point to again be the
left-most point of the configuration. A
priori, $\kappa$ could be infinite. Nevertheless, we will soon
show that when $T$ is conservative and measure preserving,
$\kappa$ is finite $\mu^*$-almost surely. Finally, the
\emph{leftmost position transformation} associated with $T$,
$T_*^\kappa:X^* \to X^*$, is defined by
$$T_*^\kappa(\omega):=T_*^{\kappa(\omega)}(\omega).$$
This is the map of
$X^*$ obtained by reapplying $T_*$ until once again there are no
points to the left of the image of the point which was originally leftmost.
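As an illustrative aside, the following Python sketch computes $\kappa$ for a sampled configuration, taking $T$ to be Boole's transformation from the example above; restricting the configuration to a finite window is of course only an approximation of the genuine infinite configuration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def boole(x):
    # unsigned Boole transformation T(x) = |x - 1/x|
    return np.abs(x - 1.0 / x)

# approximate a rate-1 Poisson configuration by its restriction to (0, L]
L = 500.0
omega = np.sort(rng.uniform(0.0, L, rng.poisson(L)))

def leftmost_return_time(omega, max_iter=10**6):
    x0 = omega.min()                 # the originally leftmost point
    w = omega.copy()
    for k in range(1, max_iter + 1):
        w, x0 = boole(w), boole(x0)
        if np.isclose(w.min(), x0):  # T^k(x0) is again the leftmost point
            return k
    raise RuntimeError("kappa not reached within max_iter")

print("kappa =", leftmost_return_time(omega))
\end{verbatim}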
The remainder of this section relates the leftmost position transformation associated with $T$ to the Poisson-product $T \times T_*$.
Let
\begin{equation}
\label{eq:X_0}
X_0 = \{(x,\omega) \in X\times X^*~:~ \omega\cap(0,x] = \emptyset\}.
\end{equation}
The set $X_0$ is simply the subset of $X \times X^*$ in which the ``distinguished point'' is strictly to the left of any ``undistinguished point''.
The formula below verifies
measurability of $X_0$:
$$ X_0 = \bigcap_{n \in \mathbb{N}} \bigcup_{q \in \mathbb{Q}}\left((q-\frac{1}{n},q+\frac{1}{n})\times \{\omega \in X^*~:~ \omega \cap (0,q+\frac{2}{n}) = \emptyset \}\right) ~ \mod \mu\times\mu^*.$$
\begin{prop}\label{prop:leftmost_is_induced_product}
Let $T:\mathbb{R}_+\to\mathbb{R}_+$ be conservative and
Lebesgue-measure-preserving. Then the leftmost position
transformation associated with $T$ is well defined and is isomorphic
to the induced map of the Poisson product on the set $X_0$ defined by equation \eqref{eq:X_0}:
$$(X^*,\mathcal{B}^*,\mu^*,T_*^\kappa) \cong
\left(X_0,\mathcal{B}_0,
\mu_0,(T\times T_*)_{X_0}\right)$$
where $\mu_0=(\mu\times\mu^*)\mid_{X_0}$ is the restriction of the
product measure $\mu\times\mu^*$ to the set $X_0$, and
$\mathcal{B}_0 = \left(\mathcal{B}\otimes\mathcal{B}^*\right)\cap
X_0$ is the restriction of the $\sigma$-algebra on the product space
to the subset $X_0$.
In particular,
$\mu_0(X_0)=1$,
so $(X_0,\mathcal{B}_0,\mu_0)$ is a probability space.
\end{prop}
\begin{proof}
Consider the map $\pi_0:X_0 \to X^*$ which is the restriction to $X_0$ of the map $\pi(x,\omega) =
\{x\}\cup \omega$ described in section \ref{subsec:poisson_processes} above.
For a non-empty, discrete $\omega \in X^*$ we have:
$$\pi_0^{-1}(\omega)=(t_1(\omega),\omega\setminus \{t_1(\omega)\}).$$
Thus $\pi_0$ is invertible on a set of full $\mu^*$-measure in $X^*$.
As $T$ is conservative and $T_*$ is a probability preserving
transformation, the Poisson product $T\times T_*$ is also
conservative. We will show below that $\mu\times\mu^*(X_0)>0$.
Therefore, the return time $\varphi_{X_0}$ is finite almost
everywhere on $X_0$.
Since $\kappa \circ \pi_0 = \varphi_{X_0}$, it follows that $\kappa$ is finite $\mu^*$-a.e.
We also have
$$ \pi_0(T^nx,T_*^n\omega)=T_*^n(\pi_0(x,\omega))$$
whenever $(x,\omega)$ and $(T^nx,T_*^n\omega)$ are in $X_0$.
Thus, $$\pi_0 \circ (T \times T_*)_{X_0} = T_*^\kappa \circ \pi_0.$$
It remains to check that $\mu_0 \circ \pi_0^{-1}=\mu^*$.
It is sufficient to
verify that $\mu^*(A)=\mu_0(\pi_0^{-1}(A))$ for sets $A \in
\mathcal{B}^*$ of the form
$$A= \bigcap_{k=1}^N[ |\omega \cap A_k | = n_k],$$
where $A_k=(a_{k-1},a_{k}]$,
$0=a_0 <a_1 < a_2 <\ldots <a_N$ and $n_k \ge 0$ for $k=1,\ldots N$.
Given the definition of $\mu^*$, this amounts to an exercise in elementary calculus.
By
definition of $\mu^*$:
$$\mu^*(A)=\prod_{k=1}^N
\frac{\mu(A_k)^{n_k}}{n_k!}\exp\left(-\mu(A_k)\right)$$
which simplifies to:
\begin{equation}
\label{eq:muA}
\mu^*(A)
=\exp(-a_N)\prod_{k=1}^{N}\frac{(a_{k}-a_{k-1})^{n_k}}{n_k!}
\end{equation}
Assuming the $n_k$'s are not all zero, let $k$ be the smallest index for which $n_k > 0$. We have:
$$\pi_0^{-1}(A) = \bigcap_{j \ne k}\left(X \times [|\omega \cap A_j| = n_j]
\right)\cap \bigcup_{ x \in A_{k}}\{x\}\times
\left([|\omega \cap [a_{k-1},x)| =0] \cap
[|\omega \cap [x,a_{k})| =n_k-1]\right).$$
Thus,
Thus,
$$\mu_0(\pi_0^{-1}(A))=T_0
\int_{A_k}\exp(-(x-a_{k-1}))\exp(-(a_{k}-x))\frac{(a_{k}-x)^{n_{k}-1}}{(n_{k}-1)!}dx,$$
where $$T_0=\prod_{j\ne
k}\frac{(a_{j}-a_{j-1})^{n_j}}{n_j!}\exp\left(-(a_{j}-a_{j-1})\right).$$
Since the two exponentials in $x$ cancel, the integrand is a polynomial in $x$, and evaluating this elementary integral shows that the last expression is equal to the expression
on the right hand side of \eqref{eq:muA}.
In particular, it follows that $\mu_0(X_0)=1$.
It remains to check the case that $n_k=0$ for all $k=1,\ldots N$: In this case $A= [ |\omega \cap (0,a_N]|=0 ]$ and $$\pi_0^{-1}(A)= \{(x,\omega) \in X_0 :~ x > a_N \}.$$ Thus,
$$\mu_0(\pi_0^{-1}(A))=\int_{(a_N,\infty)}
e^{-\mu((0,x])}d\mu(x)= \int_{a_N}^{\infty}e^{-x}dx = \exp(-a_N),$$ which is equal to
$\mu^*(A)$.
\end{proof}
\begin{corollary}\label{cor:leftmost_ergodic}
Let $T:\mathbb{R}_+\to\mathbb{R}_+$ be a conservative and ergodic
Lebesgue-measure-preserving transformation. Then the leftmost position
transformation $T_*^\kappa:(\mathbb{R}_+)^* \to (\mathbb{R}_+)^*$ is an ergodic probability preserving transformation.
\end{corollary}
\begin{proof}
Let $T$ be as above.
By proposition \ref{prop:leftmost_is_induced_product}, $T_*^\kappa$ is isomorphic to the map obtained by inducing the Poisson product $T \times T_*$ onto the set $X_0$. It is well known that inducing a conservative and ergodic transformation on a set of positive measure results in an ergodic transformation. By theorem \ref{thm:poisson_product_ergodic}, $T\times T_*$ is indeed ergodic.
\end{proof}
It would be interesting to establish other ergodic properties of
$T_*^\kappa$. For example, what conditions on $T$ are required for
$T_*^\kappa$ to be weakly mixing?
\section{Poisson-products and measure-preserving group actions}\label{sec:group_actions}
The purpose of this section is to discuss counterparts of our previous
results on ergodicity of Poisson products,
and various equivariant operations in the context of a group
of measure preserving transformations.
Some motivating examples for this are groups of
$\mathbb{R}^n$-isometries, which naturally act on $\mathbb{R}^n$ preserving Lebesgue measure.
Briefly recall the basic setup:
We fix a topological group $\mathbb{G}$ and a $\sigma$-finite measure space $(X,\mathcal{B},\mu)$. A
measure-preserving $\mathbb{G}$-action $T$ on the $\sigma$-finite measure space $(X,\mathcal{B},\mu)$
is a representation $g \mapsto T_g \in \mathit{Aut}(X,\mathcal{B},\mu)$ of $\mathbb{G}$
into the measure preserving automorphisms of $(X,\mathcal{B},\mu)$.
A $\mathbb{G}$-action $T$ is \emph{ergodic} if whenever $A \in \mathcal{B}$ satisfies $\mu(T_g A \setminus A)=0$
for all $g \in \mathbb{G}$, either $\mu(A)=0$ or $\mu(X \setminus A)=0$.
Any measure preserving $\mathbb{G}$-action $T$ induces an action $T_*$ on the Poisson process by probability preserving transformations \cite{roy_poisson_pinsker}. The Poisson-product $\mathbb{G}$-action $T \times T_*$ is thus defined the same way as in the case of a single transformation.
The proofs of propositions \ref{prop:no_poisson_thinning}, \ref{prop:no_mass_allocation}, \ref{prop:no_poisson_allocation} and \ref{prop:no_poisson_matching} above are still valid in this generality.
Let us recall the definition of a conservative $\mathbb{G}$-action:
Say $W \in \mathcal{B}$ is a \emph{wandering set} with respect to the
action $T$ of a locally-compact group $\mathbb{G}$ if
$\mu(T_g W\cap W)=0$ for all $g$ in the complement of some compact
$K \subset \mathbb{G}$. Call a $\mathbb{G}$-action
\emph{conservative} if there are no wandering sets of positive measure.
If in the statement of Theorem \ref{thm:poisson_product_ergodic} we
let $T$ be a conservative ergodic $\mathbb{G}$-action for a group
other than $\mathbb{Z}$, ergodicity of $T\times T_*$ may fail. This
can happen even for conservative and ergodic $\mathbb{Z}^2$-actions,
as we demonstrate in the example below:
Let $a,b \in \mathbb{R}\setminus\{0\}$ with $\frac{a}{b} \not\in \mathbb{Q}$,
and define a $\mathbb{Z}^2$-action $T$ on $\mathbb{R}$ by:
$$T_{(m,n)}(x)=x+am+bn, \mbox{ for } (m,n) \in \mathbb{Z}^2.$$
It is a simple exercise to show that the $\mathbb{Z}^2$-action above
is both conservative and ergodic. Nevertheless, it is easy to see
that $T \times T_*$ is not ergodic, for instance by noting that
$$\{ (x,\omega) \in \mathbb{R}\times \mathbb{R}^* ~:~ (x-1,x+1)\cap \omega = \emptyset \}$$
is a non-trivial $T\times T_*$-invariant set.
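To spell this out: each $T_{(m,n)}$ acts on both coordinates as the translation by $am+bn$, so $(x-1,x+1)\cap\omega=\emptyset$ if and only if $(T_{(m,n)}x-1,\,T_{(m,n)}x+1)\cap T_{*(m,n)}\omega=\emptyset$; moreover, for every $x$ the fibre $\{\omega~:~(x-1,x+1)\cap\omega=\emptyset\}$ has $\mu^*$-probability $e^{-\mu\left((x-1,x+1)\right)}\in(0,1)$, so neither this set nor its complement is $\mu\times\mu^*$-null.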
Since this action $T$ consists of translations, as noted in the introduction, there do exist $T$-equivariant Poisson allocations, Poisson matchings and Poisson thinning.
Although the example above demonstrates that theorem \ref{thm:poisson_product_ergodic} does not generalize, for abelian group actions most components of the proof given in section \ref{sec:proof_poisson_product_ergodic} remain intact. Our next goal is to explain this, and to point out where the proof of theorem \ref{thm:poisson_product_ergodic} breaks down for the example above:
Let $\mathbb{G}$ be a locally compact \emph{abelian} group, and let $\widehat{\mathbb{G}}$ denote its dual.
Generalizing the discussion in section \ref{sec:prelim}, the $L^{\infty}$-\emph{spectrum}
of a $\mathbb{G}$-action $T$, denoted $Sp(T)$,
is the set of homomorphisms $\chi:\mathbb{G} \to \mathbb{C}^*$
such that $f(T_gx)=\chi(g)f(x)$ for
some non-zero $f \in L^{\infty}(X,\mu)$.
In case $\mathbb{G}=\mathbb{Z}$, the spectrum is simply the group of $L^{\infty}$-eigenvalues $e(T)$. As in the case
$\mathbb{G}=\mathbb{Z}$ discussed earlier, the $L^\infty$-spectrum is a weak-Dirichlet set in
$\widehat{\mathbb{G}}$ \cite{schmidt_spectra_1982}.
The $L^{2}$-\emph{spectral type} of $T$ is an equivalence class of Borel measures $\sigma_T$
on $\widehat{\mathbb{G}}$: for any non-zero $f \in L^2(\mu)$ we have $\sigma_f \ll \sigma_T$,
where the measure $\sigma_f$ is given by:
$$\hat\sigma_f(g) = \int f(T_g(x))\overline{f(x)}d\mu(x).$$
The spectral type $\sigma_T$ is the minimal equivalence class of measures on $\widehat{\mathbb{G}}$ with respect to which all the $\sigma_f$'s are absolutely continuous.
With these definitions, Keane's Ergodic Multiplier Theorem above
generalizes as follows:
The product of an ergodic measure preserving $\mathbb{G}$-action $T$ and a probability preserving $\mathbb{G}$-action $S$
is ergodic iff $\sigma_S(Sp(T))=0$, where $\sigma_S$ denotes the restricted spectral type of $S$. The discussion at the end of section \ref{sec:proof_poisson_product_ergodic}, following \cite{aaro_nadkarni_1987,schmidt_spectra_1982}, still shows that in this case $Sp(T)$ must be a locally compact group which embeds continuously in $\widehat{\mathbb{G}}$. However, when $\mathbb{G} \ne \mathbb{Z}$, this does not imply that $Sp(T)$ is either discrete or equal to $\widehat{\mathbb{G}}$:
Getting back to the example of the $\mathbb{Z}^2$-action $T$ above, we
note that for any $\tau \in
\mathbb{R}$, the function $f_\tau \in L^{\infty}(\mathbb{R})$
defined by
$$ f_\tau(x) = \exp(i \tau x),$$
is an $L^\infty$ eigenfunction of $T$, since it satisfies
$$f_\tau(T_{(m,n)}(x)) = \exp(i \tau(x+am+bn))= \chi_{(\tau a,\tau b)}(m,n)\exp (i\tau x),$$
where $\chi_{(\alpha,\beta)}(m,n)=\exp(i( \alpha m+ \beta n))$. The map $\tau \mapsto
\chi_{(\tau a,\tau b)}$ is a continuous group embedding of $\mathbb{R}$ in
$Sp(T) \subsetneq \widehat{\mathbb{Z}^2}$.
\bibliographystyle{abbrv}
\section{Introduction}
Free-electron devices based on field emission played an important role in the early days of electronic systems development. In fact, vacuum tubes constituted the main building blocks of the first electronic computers, including Colossus\cite{randell1982}, used by the British to decipher German encrypted communication during WWII, and ENIAC\cite{hartree1946}, the first general purpose electronic computer, developed by the US Army to calculate ballistic trajectories. Vacuum tubes were then gradually substituted by semiconductor technology that could deliver faster switching times, lower power consumption, improved scalability and integrability, and did not require vacuum packaging.\cite{brinkman1997} The technology survived, but only in a few niche applications.\cite{gilmour2011,qiu2009,symons1998,barbour1998} However, in the last few decades, the advancement in nanofabrication techniques have allowed for the miniaturization of vacuum free-electron devices, which have started to regain interest due to their interesting properties when shrunk to the nanoscale.\cite{han2012}
Nano vacuum channel (NVC) electronics promise fast switching times, and low power-delay product with robust operation in harsh environments \cite{han2017}. Nanoscale vacuum channels allow for true ballistic transport with no phonon and charged impurity scattering, enabling higher electron velocities in the channel. Since these devices do not require vulnerable oxides and free-electrons are effectively insensitive to ionizing radiation and temperature fluctuations, these devices are attractive for applications in harsh environments such as space technology \cite{han2017, gaertner2012}. Field-emitter devices commonly use vertical geometries \cite{ding2000,ding2002,driskill1997,spindt1991} because with this approach it is easier to achieve sharp nanotips. However, it also makes them difficult to integrate with traditional electronics. On the other hand, planar NVC field-emitters could be easily incorporated into integrated circuits on a large scale. Moreover, thanks to their small size and low capacitance (down to tens of attofarads), they can be operated at petahertz-scale bandwidths, which makes them an ideal candidate for femtosecond electronics \cite{karnetzky2018} and other optoelectronic applications that require sub-optical-cycle response times \cite{schotz2019,rybka2016,yang2020,krausz2014}.
Despite these clear advantages, the underlying emission mechanisms of these devices are poorly understood. In the literature, these devices are typically described using a pure Fowler-Nordheim tunneling emission model\cite{forbes2013}. While such a model can be used to fit the measured data, to do so requires the use of field enhancement factors of $\gamma > 100 \times$. This stands in stark contrast to electromagnetic modeling of the tips, which indicates only modest field enhancement factors of $\gamma \sim 10\times$ for tips with nanometer-scale radii of curvature and gaps of a few to tens of nanometers. For instance, Nirantar et al. \cite{nirantar2018} had to assume a $\gamma = 590\times$ to accurately fit their results. Such discrepancies, exceeding more than one order of magnitude, were also noted by De Rose et al. \cite{de2020}, where they had to assume $\gamma = 133\times$ while their electromagnetic simulation would suggest $\gamma = 3.5\times$. This implies that Fowler-Nordheim tunneling is not physically consistent with the observed data, and suggests that some other emission physics was dominant. Understanding the dominant emission mechanisms involved and demonstrating how to reliably determine the proper regime of operation is critical if these devices are going to be used to design and build electronic circuits that operate robustly in extreme environments.
In this work, we compared the emission characteristics of metallic (Au) and refractory (TiN) vacuum-channel bow-tie diodes having few-nm radii of curvature and sub-20-nm air/vacuum gaps. This comparison is of interest as the TiN devices are more resilient, allowing us to reach higher current densities from the emitters and thus transition to different emission regimes.
While prior work has focused on three-terminal devices, we have chosen to focus on two-terminal diodes, which allowed us to simplify the device geometry and focus on the underlying emission physics. We showed that these vacuum nano-diodes can be operated reliably with turn-on voltages of $<$ 10 V and with nA to $\mu$A-level operating currents per device. We demonstrated repeatable behavior over many devices and over several scans per device. We analyzed the measured IV characteristics under variable temperature and atmospheric pressure to reveal the dominant mechanisms responsible for electron emission, and to rule out substrate conduction. In particular, we isolated three distinct emission regimes from single devices for the first time: Schottky, Fowler-Nordheim field emission, and saturation. The transition between these regimes is still under scrutiny from a theoretical perspective, where important effort has been devoted to building a model that would encompass all three.\cite{darr2020} We fitted our results with analytical models to better understand these behaviors, ensuring that the field enhancement factor reasonably matches that from electromagnetic simulations.
\section{Results and Discussion}
\label{S:3}
To analyze the emission behavior of these planar vacuum nanoemitters we fabricated both metallic (Au) and refractory (TiN) planar nano vacuum channel (pNVC) bow-tie diodes having ~10-20 nm vacuum gaps, using the procedures laid out in the methods section. Typical examples of the resulting structures are illustrated in the scanning electron microscope (SEM) micrographs shown in Fig. \ref{Fig1}a and Fig. \ref{Fig1}b. Particularly important was the implementation of an undercut in the fabrication process. In fact, preliminary testing showed that, without this undercut, the device often showed hysteretic behavior, which we attribute to the charging of the insulating layer underneath the devices. We found that introducing an undercut reliably eliminated this effect. An example of the effect of the undercut can be seen in Fig. \ref{Fig1}c where we measured an I-V curve of a metallic emitter with (main figure) and without (inset) the undercut. Moreover, an undercut allows us to ensure that the current we see is actually all due to emission in the vacuum gap and there is not a significant contribution due to surface conduction which can skew the analysis. Additionally, we never imaged the devices before testing since the SEM electron beam causes the deposition of a carbon layer, and we observed that this can contribute to ohmic conduction. For imaging purposes, we always fabricated a twin device next to each device.
\begin{figure}[h!]
\centering
\includegraphics[width = 1\linewidth]{Figures/Fig1b.pdf}
\caption{(a) SEM micrograph of a typical Au device. (b) SEM micrograph of a typical TiN device. We note that the thicknesses of the devices are different: the Au device is 25 nm thick while the TiN one is 50 nm thick. (c) Schematic of an IV measurement (bottom-right inset) and I-V curve of a metallic emitter with (main figure) and without (top-left inset) an undercut. As can be seen from the inset, without the undercut the current is very low and hysteresis is present. With the undercut, the current is much higher and the hysteresis disappears.}
\label{Fig1}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width = 1\linewidth]{Figures/Fig3c.pdf}
\caption{Typical behavior of an IV sweep of Au and TiN devices. (a) Au experimental results with inset $log(I)$ vs $V^{1/2}$ plot, which highlights Schottky behavior. We can see that the Au device exhibits Schottky behavior in this test, which can be identified by a linear dependence in the inset. (b) Au experimental results in a $log(I/V^2)$ vs $1/V$ plot, which highlights Fowler-Nordheim behavior. The inset illustrates a 3D electromagnetic simulation showing the field enhancement factor ($\gamma$) in the region around the two tips, highlighting a $\gamma = 7\times$ at the edge of the tip. From these data, to fit a Fowler-Nordheim emission we would need to assume a $\gamma = 50\times$, which is inconsistent with simulation. (c) TiN device experimental results with inset Schottky plot. In this plot, we can identify a Schottky regime which manifests as a linear dependence in the inset. This is followed by a superlinear regime and then a saturation. (d) TiN experimental results in a Fowler-Nordheim plot. Here, we can identify that the superlinear regime that was visible in the Schottky plot is indeed driven by Fowler-Nordheim emission. In fact, this region can be fitted with a FN model, assuming a $\gamma = 10\times$, which is consistent with the simulation. These tests were performed at $10^{-6}$ mbar.}
\label{Fig3}
\end{figure}
After the fabrication, we proceeded with analyzing the I-V response of tens of fabricated devices to investigate the underlying emission mechanisms. The testing was performed in a vacuum chamber at a pressure of $10^{-6}$ mbar using a Keysight B2912A SMU. Fig. \ref{Fig3}a shows an example of experimental data for a gold bowtie nanoemitter having a gap of $<20$ nm. The device current exhibited an exponential behavior with respect to the applied field, with a turn-on voltage of approximately 5 V, which is consistent with the Au work function of 5.1 eV.
The inset shows the same curve plotted in a $log(I)$ vs $V^{1/2}$ plot, which is useful for identifying emission in Schottky regime. The Schottky emission regime applies to field-enhanced thermionic emission. In the Schottky emission regime, the applied field reduces the work function barrier height, and enhances the thermionic emission. Schottky emission can be modeled as \cite{tomer2015}:
\begin{equation}
I \propto T^2 \mathrm{exp} \left( \frac{q}{2k_B T}\sqrt{ \frac{q\gamma V}{d\pi\epsilon_0}}\right) \mbox{,}
\end{equation}
where $\gamma$ is the field enhancement factor, $T$ is the temperature, $\epsilon_0$ is the vacuum permittivity, $q$ is the electron charge, $k_B$ is the Boltzmann constant, $V$ is the potential, and $d$ is the gap between the tips. Schottky emission therefore would appear linear with a positive slope when plotting $log(I)$ vs $V^{1/2}$. As such, we refer to these plots as ``Schottky plots'' for simplicity throughout the remainder of this work. We can see that the tested gold device exhibits a linear characteristic in this plot over the entire range of bias voltages tested, indicating that Schottky emission could be dominant.
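As an illustration of this diagnostic (with assumed values of $\gamma$, $d$ and $T$ rather than fitted device parameters), the voltage-dependent part of the Schottky exponent can be evaluated numerically and checked for linearity in $\sqrt{V}$:
\begin{verbatim}
import numpy as np

# Sketch of the Schottky-emission diagnostic: evaluate the voltage-dependent
# part of log(I) and check that it is linear in sqrt(V).  The values of
# gamma, d and T below are illustrative assumptions, not fitted parameters.
q, kB, eps0 = 1.602e-19, 1.381e-23, 8.854e-12   # SI units

gamma, d, T = 10.0, 20e-9, 300.0                # enhancement, gap (m), K
V = np.linspace(2.0, 10.0, 50)                  # bias (V)

# barrier lowering (J) for the enhanced field E = gamma*V/d
dW = np.sqrt(q**3 * gamma * V / (4 * np.pi * eps0 * d))
logI = dW / (kB * T)          # V-dependent part of log(I); prefactors dropped

slope = np.polyfit(np.sqrt(V), logI, 1)[0]
print("Schottky-plot slope per sqrt(V):", slope)
\end{verbatim}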
To ensure our interpretation is correct, we also considered emission due to Fowler-Nordheim tunneling. The Fowler-Nordheim regime applies to cold-field-emission where the electrons tunnel through the work-function barrier from the Fermi surface of the material, and is the most commonly used theory for modeling field-induced electron emission in literature. Fowler-Nordheim emission can be modeled analytically as:
\begin{equation}
I \propto \phi^{-1}\left(\gamma \frac{V}{d}\right)^{2} \mathrm{exp} \left( -b\frac{d \phi^{3/2}}{\gamma V} v(y) \right) \mbox{,}
\end{equation}
where $v(y) = 1 - y^2 + y^2\ln(y)/3 $, $y=2\sqrt{\frac{e^2 \gamma V}{16d\pi\epsilon_0}}\frac{1}{\phi}$, $\phi$ is the work function and $b = 6.83$ $\mathrm{eV^{-3/2}\,V\,nm^{-1}}$.
For the FN fit we used a more complete version of this formula, which can be found in Kyritsakis et al. \cite{kyritsakis2015}.
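For the reader's convenience, the following sketch evaluates this expression in practical units ($F$ in V/nm, $\phi$ in eV); the values of $\phi$, $\gamma$ and $d$ are illustrative assumptions, and for $v$ we use Forbes' simple approximation $v(f)=1-f+\frac{f}{6}\ln f$ with $f=y^2$, rather than the fuller form of Kyritsakis et al.\ used for the actual fits:
\begin{verbatim}
import numpy as np

# Sketch of the Fowler-Nordheim expression in practical units
# (F in V/nm, phi in eV, b = 6.83 eV^(-3/2) V/nm).  phi, gamma and d
# are assumed illustrative values; v uses Forbes' simple approximation
# v(f) = 1 - f + (f/6) ln f  with  f = y^2.
b = 6.83
phi, gamma, d_nm = 4.5, 10.0, 20.0

V = np.linspace(8.0, 14.0, 60)        # bias (V)
F = gamma * V / d_nm                  # barrier field (V/nm)

dphi = 1.2 * np.sqrt(F)               # Schottky lowering ~ 1.2*sqrt(F[V/nm]) eV
f = (dphi / phi) ** 2
v = 1.0 - f + (f / 6.0) * np.log(f)

logI = 2 * np.log(F) - np.log(phi) - b * phi**1.5 * v / F   # up to a constant

# FN plot: log(I/V^2) vs 1/V; its slope is set by b*d*phi^(3/2)/gamma
# (up to the v correction), which is how gamma is extracted from fits.
slope = np.polyfit(1.0 / V, logI - 2 * np.log(V), 1)[0]
print("FN-plot slope:", slope)
\end{verbatim}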
Fig. \ref{Fig3}b illustrates the same data as in \ref{Fig3}a, but now plotted in a Fowler-Nordheim plot ($log(I/V^2)$ vs $1/V$). In such a plot, FN emission would appear linear with negative slope. While a linear trend with negative slope does indicate field-emission as likely, it unfortunately does not guarantee that Fowler-Nordheim-like tunneling is truly dominant. We note that Schottky emission can also appear quasi-linear when plotted in this fashion over a given bias voltage range. Indeed, this is the case with the data from our gold devices.
However, when we then fit the curve with an FN model we have to assume $\gamma = 50\times$, which is not consistent with our electromagnetic simulation results. Electromagnetic simulations consistently predict $\gamma \sim 10\times$. An example of such a simulation illustrating $\gamma$ around the two tips is shown in the inset of Fig. \ref{Fig3}b, which shows a peak of $\gamma \approx 7\times$. Taken together with the Schottky plot, this provides strong evidence that across the tested bias range Schottky emission dominated in the gold devices. This is further confirmed by the temperature testing described below. Unfortunately, because the devices degraded, we were not able to run them at higher potentials to determine if there is a point where Fowler-Nordheim emission becomes dominant. This degradation may be due to reshaping of the tips caused by the high current density, or to modification of the work function due to current-assisted adsorption on the tips.
To investigate what impact the emitter material might have on the emission properties, we then performed similar testing on the TiN devices. Thanks to their more resilient nature and a thicker oxide, we were able to run the TiN devices at higher fields and current densities, which allowed us to see transitions between different emission regimes, as can be seen in Figs. \ref{Fig3}c and d. At low voltage (approximately from 4 V to 10 V) we observe what appears to be Schottky emission behavior (see the linear response over this range in the Schottky plot shown in the inset of Fig. \ref{Fig3}c). Unlike the gold devices, at higher potentials (between 10 V and 13 V) we can see a transition away from Schottky emission, where the emission grows at an even higher exponential rate. Fig. \ref{Fig3}d illustrates the same data in a Fowler-Nordheim plot. In this case, we note that the slope is considerably steeper than the slope for Au for bias voltages between 10 and 13 V. Indeed, the data in this region can be fitted using the aforementioned FN model, which predicts $\gamma \approx 10\times$, which is physically consistent with the $\gamma \approx 7\times$ obtained from our electromagnetic simulations. Finally, above 13 V the current reaches a saturation region. We note that this behavior is repeatable in forward and backward scans, so it is not due to damage of the devices. We attribute this saturation behavior to the Child-Langmuir regime\cite{umstattd2005,lau1994}, where the emission current is limited by the space-charge effect.
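For orientation, we recall the classical planar form of the space-charge-limited (Child-Langmuir) current density,
$$ J_{CL} = \frac{4\epsilon_0}{9}\sqrt{\frac{2q}{m_e}}\;\frac{V^{3/2}}{d^{2}}, $$
with $m_e$ the electron mass, quoted here only as a reference point: it depends on neither temperature nor the emitter work function, and grows only weakly with bias compared to the exponential Schottky and Fowler-Nordheim characteristics. The nanoscale, non-planar geometry of our devices modifies the exact form, so we do not use this expression quantitatively.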
\begin{figure}[h!]
\centering
\includegraphics[width = 1\linewidth]{Figures/Fig5d.pdf}
\caption{Temperature dependence of Au (a) and TiN (b) devices. The devices' IV characteristics are recorded while varying the device temperature using a heater placed in thermal contact with the sample. The insets illustrate the same data plotted on $log(I)$ vs $V^{1/2}$ axes. The Au devices exhibit Schottky behavior, which manifests as parallel traces in this plot. On the other hand, the TiN devices exhibit all three regimes (Schottky, Fowler-Nordheim and saturation). While the Schottky regime shows a temperature dependence similar to that of the Au devices, the Fowler-Nordheim regime does not, which is consistent with the model. The saturation regime also shows no temperature dependence. It is worth noticing that in this case the TiN device enters the Fowler-Nordheim and then the saturation regime at a lower voltage than the Au device. This can be due to different gap sizes or a sharper tip, both parameters that can vary with slightly different fabrication conditions. The same Au and TiN data are plotted in FN plots in (c) and (d) respectively.}
\label{Fig5}
\end{figure}
To further investigate our findings, we next tested the temperature dependence of the devices' IV response. Such tests should clearly differentiate between Schottky and Fowler-Nordheim emission, as Schottky emission depends strongly on temperature, while Fowler-Nordheim emission does not. The temperature tests were done by placing a heating stage inside the vacuum chamber in thermal contact with the sample. The results of this test are shown in Fig. \ref{Fig5} for both Au and TiN devices. The Au devices (Fig. \ref{Fig5}a) exhibit a clear temperature dependence over the entire range of applied voltages. Each scan appears in the Schottky plot as a series of spaced, roughly linear traces consistent with the expected behavior for Schottky emission. In this regime the temperature provides the necessary energy to overcome the barrier set by the work function and applied bias voltage.
On the other hand, for the TiN devices, we can clearly see all three regimes. In the Schottky regime, which dominates at the lowest voltages, there is a temperature dependence which manifests as vertically-spaced parallel traces in the Schottky plot in the inset of Fig. \ref{Fig5}b. This dependence gradually shrinks and then disappears when Fowler-Nordheim tunneling begins to dominate at higher voltages. This reduction in temperature dependence is consistent with cold field emission, where tunneling from near the Fermi level dominates. Finally, we also observe no temperature dependence in the saturation regime. This is consistent with Child-Langmuir space-charge saturation, which is a charge-density-induced limitation that does not depend on temperature or material properties.
When the same data are plotted on an FN graph (Fig. \ref{Fig5}c,d), we can see that the TiN device reaches the Fowler-Nordheim regime, approaching the $\gamma \approx 13\times$ curve (fitted using the $20 ^{\circ}$C data), and then saturates similarly to what is predicted by Lau et al. \cite{lau1994} for a Fowler-Nordheim to Child-Langmuir transition.
\begin{figure}[h!]
\centering
\includegraphics[width = 1\linewidth]{Figures/Fig6d.pdf}
\caption{Pressure dependence of Au (a) and TiN (b) devices. The devices are first tested in a vacuum chamber with a $10^{-6}$ mbar vacuum. Then the chamber is vented with air and a series of consecutive traces are recorded at different time intervals: 1, 5, 10, 30 and 60 min. The drop in current is due to adsorption on the tip surface and shows that the conduction is indeed in the vacuum channel with no significant contribution due to substrate conduction, which would not be affected by the pressure.}
\label{Fig6}
\end{figure}
We also investigated the influence of ambient air pressure on the devices. To do this, we initially tested the devices at $10^{-6}$ mbar and then vented the chamber and recorded the IV characteristic at different intervals after exposure to ambient air. The sub-20 nm gaps ensure that even at ambient pressure the conduction through the emitter-to-collector gap happens in an effective vacuum, not mediated by the gas. This is because the mean free path of air molecules at atmospheric pressure is larger than the gap size. However, as can be seen in Fig. \ref{Fig6}, when exposed to the atmosphere, these devices nonetheless experience a strong reduction in current. The current stabilizes after about an hour to a level much lower than that in vacuum conditions, both for the metallic and refractory devices. We attribute this effect to adsorption of molecules (e.g. water) on the tips, which modifies the work function at the emission surfaces. In the case of TiN devices oxidation might also play a role. In both cases, the vacuum emission characteristics can be fully recovered after a few burn-in cycles (i.e. running a few IV sweeps that clean the emission surface of adsorbed molecules) once the devices are placed back in vacuum. It is also noteworthy that the current drop experienced by the refractory devices when exposed to atmospheric pressure is less dramatic than that experienced by the metallic devices. Indeed, the refractory device in Fig. \ref{Fig6} experiences a current drop of $16\times$, while the gold device experiences a current drop of $50\times$. These findings further confirm the ballistic nature of the electron emission and transport through the vacuum channel, as a contribution to the conduction due to leakage through the substrate would not exhibit such strong sensitivity to air exposure.
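To support this statement with a rough number: using the standard kinetic-theory estimate $\lambda = k_B T/(\sqrt{2}\,\pi d_m^2 p)$ with a molecular diameter $d_m \approx 0.37$ nm for air, one obtains $\lambda \approx 70$ nm at room temperature and atmospheric pressure, indeed several times larger than our sub-20-nm gaps.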
\section{Conclusion}
\label{S:4}
We investigated the emission physics of planar NVC diodes. To do so, we analyzed the IV characteristics of two different emitter materials, Au and TiN, under varying temperatures and atmospheric conditions. While past work had primarily used Fowler-Nordheim tunneling to model the emission physics of such devices, a large discrepancy was found between the fitted electric field enhancement factor and that expected from electromagnetic modeling. Upon closer inspection in this work, we found that Schottky emission tends to dominate at lower applied bias values. In particular, we found that for the Au devices we were only able to observe temperature-dependent Schottky emission before the onset of damage. Instead, for TiN, thanks to the possibility of exploring higher potentials given their higher physical robustness, we were able to identify the transition from Schottky to Fowler-Nordheim tunneling before a final transition to saturation. We ascribe the saturation regime to the Child-Langmuir space charge limitation. Depending on the device requirements (e.g. low voltage or high transconductance), devices could be designed to operate in different regimes. These findings mark an important step toward the development of accurate models necessary for the design and realization of high-speed\cite{karnetzky2018}, robust\cite{bhattacharya2021} and radiation-resistant\cite{han2017} vacuum nanoelectronics.
Finally, the pressure analysis revealed a large reduction in the emission rate and an increase in the turn-on voltage of the devices due to a combination of oxidation and adsorption of molecules on the surface. While this sensitivity verifies that tunneling emission and transport through free space dominate over substrate leakage, it unfortunately indicates that such devices should be properly packaged despite the reduced free-space channel width. However, we emphasize that this degradation is reversible once vacuum is restored.
\section{Methods}
\label{S:2}
We explored two different materials for these devices: Au and TiN. Therefore, we developed two different fabrication techniques. The patterning of the Au devices is achieved through a lift-off process, while the patterning of the TiN is achieved through etching using a hard mask. In the following we illustrate the main steps of these fabrication processes.
\subsection{Gold structures}
We developed this process for Au devices but, in general, it can be extended to the patterning of any metallic material that can be e-beam evaporated. Fig. \ref{Fig2}a is a graphical illustration of the different steps of the process:
\begin{enumerate}
\item EBL patterning of PMMA A2 resist on a thermal oxide on Si substrate: this step is performed to pattern the devices;
\item resist development and e-beam evaporation of 5 nm Cr and 20 nm Au;
\item lift off in heated NMP;
\item photolithography of a bilayer PMGI+S1813 resist: this step is performed to pattern the pads for the electrical connections;
\item resist development and e-beam evaporation of 30 nm Cr and 150 nm Au;
\item lift off in NMP;
\item CF4 RIE and 40 s of (9:1) DI:BOE HF: step that creates an undercut at the tip.
\end{enumerate}
We did some preliminary tests without the last step, but we observed a low current and hysteretic IV characteristics of the devices. We concluded that this effect was caused by emitted electrons that get trapped in the oxide, creating a repulsive potential which prevents further electrons from being emitted. Once the etching step used to create the undercut was introduced, the effect disappeared.
An example of a completed structure done with this process is shown in Fig. \ref{Fig1}a.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.8\linewidth]{Figures/Fig1c.pdf}
\caption{(a) Au nanofabrication process. (b) TiN nanofabrication process. (c) Measurement setup.}
\label{Fig2}
\end{figure}
\subsection{TiN structures}
We developed this process for TiN devices but, in general, it can be adapted to the patterning of many hard materials by modifying the chemistry of the first etching step. Fig. \ref{Fig2}b is a graphical illustration of the different steps of the process:
\begin{enumerate}
\item reactive sputtering of TiN
\item EBL patterning of PMMA A2 resist on a thermal oxide on Si substrate: this step is performed to pattern the hard mask;
\item resist development and e-beam evaporation of a bilayer hard mask (15 nm Al and 15 nm Cr). We determined that a Cr hard mask gives the best results in terms of sidewall steepness, but it is hard to remove. Using a bilayer mask allows us to exploit the benefits of Cr and then remove it with an Al lift-off;
\item lift off in heated NMP;
\item CF4/O2 RIE etching: this step is performed to pattern the devices;
\item EBL patterning of ZEP502A resist: this step is performed to pattern a region that will undergo the etching for creating the undercut;
\item CF4/O2 RIE etching;
\item ZEP502A resist stripping in NMP;
\item mask lift off through sonication in TMAH;
\item photolithography of a bilayer PMGI+S1813 resist: this step is performed to pattern the pads for the electrical connections;
\item resist development and e-beam evaporation of 30 nm Cr and 150 nm Au;
\item lift off in NMP;
\item 70 s of (9:1) DI:BOE HF: step that creates an undercut at the tip.
\end{enumerate}
An example of a completed structure done with this process is shown in Fig. \ref{Fig1}b.
Fig. \ref{Fig2}c illustrates the schematic of the testing apparatus.
\section{The Lie algebra rank condition of geometric control theory}
\label{intro} Consider a control system of the form \be{genconsys}
\dot x=f(x,u), \end{equation} where $x$ is the state varying on a compact Lie
group and $u$ the control. The system is said to be {\it right
invariant} if, denoting by $x(t,u,s)$ the solution of
(\ref{genconsys}) corresponding to initial condition $s$ and control
function $u$, we have \be{r1} x(t,u,s)= x(t,u,{\bf 1})\circ s, \end{equation}
where ${\bf 1}$ denotes the identity of the group and $\circ$ is the
multiplication of the group. To be concrete, we shall consider the
case of matrix groups where the group operation is the standard
matrix multiplication, with particular attention to subgroups of
$SU(n)$, given the potential application to quantum systems. In
particular, we shall consider systems of the form \be{d} \dot
X=A(u)X, \qquad X(0)={\bf 1}, \end{equation} where ${\bf 1}$ is the identity
matrix and the matrix $A(u)$ is in the Lie algebra associated with
$G$ for every value of the control $u$. This equation models many
systems of interest. In particular closed (i.e., not interacting
with the environment) finite dimensional quantum systems which are
coherently controlled (i.e., through a variation of their
Hamiltonian) are modeled this way. In this case, equation (\ref{d})
is Schr\"odinger equation. We refer to \cite{Mikobook} and
references therein for several examples and introductory notions on
Lie groups and Lie algebras in the context of quantum control.
If we restrict ourselves to piecewise constant controls, the problem
of control for systems (\ref{d}) can be described as follows.
Assume that we have a linearly independent set of matrices
\be{effematrices} {\cal F}:=\{ A_1,\ldots, A_m \}. \end{equation} To each of
them there corresponds a semigroup \be{semigruppi} {\cal
S}_j:=\{e^{A_j t} | t\geq 0\}, \qquad j=1,\ldots,m. \end{equation} The problem
of control to a matrix $X_f$ is to choose $N$ elements $X_l$,
$l=1,\ldots,N$, with $X_l \in {\cal S}_j$ for some $j=1,\ldots,m$, such
that $\prod_{l=1}^N X_l=X_f$. If such elements exist $X_f$ is said
to be {\it reachable}. The question of the set of reachable
matrices is a standard one in geometric control theory. The result
in the following Theorem \ref{LARC}, known as the {\it Lie algebra
rank condition}, is classical \cite{JS} and provides the answer for
compact Lie groups.
Let ${\cal L}$ be the Lie algebra generated by the elements in
${\cal F}$ defined as the smallest Lie algebra containing ${\cal F}$
and denote by $e^{\cal L}$ the connected Lie group associated with
${\cal L}$. We shall call ${\cal L}$ the {\it dynamical Lie
algebra} associated to the system.
\begin{theorem}\label{LARC} \cite{JS} Consider the Lie group
$e^{\cal L}$ and assume it is compact. Then, the set of reachable
values for $X$ in (\ref{d}) is equal to $e^{\cal L}$.
\end{theorem}
This result has been elaborated upon in several papers and applied
to quantum mechanical systems (cf. \cite{AlbeMiko}, \cite{Mikobook},
\cite{Tarn}, \cite{Rama1}). In particular, in the case of (closed)
quantum mechanical systems ${\cal L}$ is a subalgebra of the unitary
Lie algebra $u(n)$ and, as such, can be written as the direct sum of
an Abelian subalgebra and a semisimple subalgebra to which there
corresponds a compact Lie group. That is, modulo an Abelian
subgroup which commutes with all of $e^{\cal L}$, $e^{\cal L}$ is
compact (cf. \cite{MikoLiealg} and \cite{Tannor}). In particular,
$e^{\cal L}$ is compact if ${\cal L}=u(n)$ or ${\cal L}=su(n)$ in
which case, the system is called controllable and $e^{\cal L}$ is
the group of unitary matrices $U(n)$ or special unitary matrices
$SU(n)$, respectively.
The original proof given in \cite{JS} is not constructive, i.e., in
our setting, it does not show how to alternate elements in the
semigroups ${\cal S}_j$ in (\ref{semigruppi}) to obtain a given
target $X_f \in e^{\cal L}$. We show how to obtain this in two ways
in the following two sections. The main ideas are then combined in a
third method in section \ref{M3}. The first method, described in
section \ref{M1}, achieves exact control if the subgroups
corresponding to the semigroups in (\ref{semigruppi}), i.e.,
\be{subgruppi} \tilde {\cal S}_j:=\{ e^{A_jt}|t \in \RR\} , \qquad
j=1,\ldots,m, \end{equation} are closed. Otherwise it obtains control with
arbitrary accuracy as it follows from Proposition \ref{PJS} and
Remark \ref{nt} below. This proposition allows us to replace an
exponential of the form $e^{A t}$ with $t <0$ with an exponential of
the form $e^{A t}$ with $t
>0$ which approximates it with arbitrary accuracy. This result will
be utilized for the following two methods as well.
\section{Method 1: Exact constructive controllability}
\label{M1}
The method we are going to describe is a consequence of the proof
of the Lie algebra rank condition, Theorem \ref{LARC}, given in
\cite{Mikobook} and the result on uniform finite generation of
compact Lie groups given in \cite{UFG}. Let $X_f \in e^{\cal L}$ be
the target state. We want to show a way to obtain $X_f$ as a product
of elements in (\ref{semigruppi}), if not exactly, at least, with
arbitrary accuracy. We are first going to relax the problem by
allowing the use of elements in the subgroups $(\ref{subgruppi})$
rather than only elements of the semigroups (\ref{semigruppi}). We
shall show later how to overcome this problem (see Proposition
\ref{PJS} and Remark \ref{nt}).
\vspace{0.25cm}
Since $e^{\cal L}$ is compact the exponential map is surjective,
that is, there exists a matrix $A \in {\cal L}$ such that
$e^{A}=X_f$, for every $X_f$.\footnote{See, e.g., \cite{Knapp} and
\cite{Mosko} for a study on the generalization of this result. See
also \cite{HornJohnsonT} (Theorem 6.4.15) for the theorem on
existence of the logarithm of a matrix.} This also implies that,
given any neighborhood $K$ of the identity in $e^{\cal L}$, we can
choose an integer $M$ sufficiently large such that
$e^{\frac{A}{M}}=X_f^{\frac{1}{M}} \in K$. Now, assume first that
${\cal F}$ is a basis for ${\cal L}$, that is, no Lie bracket is
necessary to obtain a basis of ${\cal L}$. This implies that, by
varying $t_1,\ldots,t_m$ in a neighborhood of the origin in $\RR^m$,
$K:=\{ X=e^{A_m t_m} e^{A_{m-1} t_{m-1}} \cdots e^{A_1 t_1}|
t_1,\ldots,t_m \in \RR\}$, gives a neighborhood of the identity in
$e^{\cal L}$ and, in particular, it contains $e^{\frac{A}{M}}$ for
sufficiently large $M$. That is, we can find real values $\bar
t_1,\ldots,\bar t_m$ such that \be{poi} e^{\frac{A}{M}}=e^{A_m \bar
t_m} e^{A_{m-1} \bar t_{m-1}} \cdots e^{A_1 \bar t_1}.
\end{equation}
Therefore, by using elements from the subgroups (\ref{subgruppi})
we can obtain $e^{\frac{A}{M}}$. Now assume ${\cal F}$ is not a
basis for ${\cal L}$. Since ${\cal F}:=\{ A_1, \ldots, A_m \}$
generates all of ${\cal L}$, there exist two values $1 \leq k, l
\leq m$ such that the commutator $[A_l,A_k]$ is linearly independent
of $\{A_1, \ldots, A_m\}$. This implies that there exists a value $t
\in \RR$ such that $F:=e^{A_lt} A_k e^{-A_lt}$ is also linearly
independent. To see this, assume it is not true and write $e^{A_l
t} A_k e^{-A_lt}$ as \be{lineacomb} e^{A_l t} A_k
e^{-A_lt}=\sum_{j=1}^m a_j(t) A_j, \end{equation} for every $t$. Taking the
derivative with respect to $t$ at $t=0$, gives
$[A_l,A_k]=\sum_{j=1}^m \dot a_j(0)A_j$, which contradicts the fact
that $[A_l,A_k]$ is linearly independent of $\{A_1,\ldots,A_m\}$.
Let $\bar t$ be such that \be{effe} F:= e^{A_l \bar t} A_k e^{-A_l
\bar t}. \end{equation} We can add $F$ to $\{A_1, \ldots, A_m\}$ and still
have a linearly independent set. Moreover, we can express every
exponential $e^{Ft}$ in terms of exponentials of $A_l$ and $A_k$
since $e^{Ft}=e^{A_l \bar t} e^{A_k t} e^{-A_l \bar t}$. Define
$A_{m+1}:=F$. If $\{A_1,\ldots,A_m,A_{m+1}\}$ is a basis of ${\cal
L}$ then we can proceed as above and obtain a neighborhood of the
identity in $e^{\cal L}$ by varying $ \{ t_1,\ldots t_{m+1} \} \in
\RR^{m+1}$. Such a neighborhood is given by $K:=\{
\prod_{j=1}^{m+1} e^{A_j t_j}| t_1,\ldots,t_{m+1} \in \RR\}$. If
that is not the case, then we observe that $\{A_1,\ldots,A_{m+1} \}$
is still a set of generators for ${\cal L}$ and, as above, there
must exist two elements $A_k$ and $A_l$ in $\{A_1,\ldots,A_{m+1}
\}$, such that $[A_k,A_l]$ is linearly independent of
$\{A_1,\ldots,A_{m+1} \}$ and therefore for some $\bar t$,
$A_{m+2}:=e^{A_l \bar t} A_k e^{-A_l \bar t}$ is linearly
independent of $\{ A_1,\ldots,A_{m+1}\}$. The exponential
$e^{A_{m+2}t}$ again can be expressed in terms of exponentials of $
A_1,\ldots,A_{m+1}$ and therefore in terms of exponentials of $
A_1,\ldots,A_{m}$. Proceeding this way, one finds $\dim ({\cal
L})-m$ new matrices, $\{A_{m+1}, A_{m+2}, \ldots, A_{\dim({\cal
L})}\}$ which together with $\{A_1,\ldots,A_m\}$ form a basis for
${\cal L}$. By taking $\prod_{j=1}^{\dim({\cal L})} e^{A_j t_j}$
with $t_j \in \RR$, $j=1,\ldots,\dim({\cal L})$, we obtain all the
elements in a neighborhood of the identity and in particular
$e^{\frac{A}{M}}$. Repeating the sequence $M$ times we obtain
$e^{A}$.
\vspace{0.25cm}
In the expression of $e^{\frac{A}{M}}$ and therefore in the
expression of $e^{A}$, there will be some exponentials with negative
$t$, i.e., some elements in the subgroups (\ref{subgruppi}) which
are (possibly) not in the semigroups (\ref{semigruppi}). There are
ways to minimize the number of these elements in the full product,
for example by placing together matrices which come from similarity
transformations with the same matrix so as to have cancelations of
the type $e^{A_j t_1}e^{-A_j t_2}=e^{A_j(t_1-t_2)}$. Also, in many
cases, the orbits $\{ e^{A_jt}|t \in \RR\}$ are periodic (closed),
which allows us to assume all the $\bar t_j$'s positive, without
loss of generality. However, if this is not the case we can use the
following fact.
\bp{PJS} Let $e^{-B|t|}$ be an element of a compact Lie group $e^{\cal
L}$. For every $\epsilon >0$ there exists a $\bar t >0$ such
that\footnote{Whenever we do specific computations involving norms
of matrices we use the Frobenius norm
$\|A\|:=\sqrt{Trace(AA^\dagger)}$.} \be{lko} \| e^{-B|t|}-e^{B \bar
t} \| < \epsilon. \end{equation} \end{proposition} \begin{proof} Consider $e^{-B|t|}$ and the sequence
$e^{nB|t|}$, which by compactness of $e^{\cal L}$ has a converging
subsequence $e^{n(k)B|t|}$. We have $\lim_{k \rightarrow \infty}
e^{(n(k+1)-n(k)-1)B|t|}=e^{-B|t|}$. Therefore there is $\bar k$ such
that $ \|e^{(n(\bar k+1)-n(\bar k)-1)B|t|} - e^{-B|t|}\| <
\epsilon$, and the proposition holds with $\bar t= (n(\bar
k+1)-n(\bar k)-1)|t|$. \end{proof}
\vspace{0.25cm}
\br{nt} The proof given above follows the one given in \cite{JS}. A
different, more concrete, proof can be given for Lie subgroups of
$U(n)$, which is the case that interests us the most. In that case,
using the Frobenius norm of matrices, we have \be{Frob} \left\|
e^{B\bar t}- e^{-B |t|} \right\|=\sqrt{2} \sqrt{n-\sum_{j=1}^n
\cos(\omega_j(\bar t+|t|))}, \end{equation} where $i\omega_j$, $j=1,\ldots,n$
are the eigenvalues (possibly repeated) of $B$. If we can choose
$\bar t >0 $ so that \be{Dirich} \left[1-\cos(\omega_j(\bar
t+|t|))\right]< \frac{\epsilon^2}{2n}, \end{equation} for every
$j=1,\ldots,n$, then (\ref{lko}) is certainly satisfied. If
$g:=\arccos\left(1-\frac{\epsilon^2}{2n}\right)$, then, we satisfy
condition (\ref{Dirich}) if we are able to find $\bar t$ and
integers $m_j$, $j=1,\ldots,n$ such that \be{hui} \left|
\omega_j(\bar t +|t|)-2 \pi m_j \right| <g. \end{equation} However, according
to Dirichlet's approximation theorem (see, e.g., \cite{Cassels}),
given a natural number $N$ and $n$ reals $\alpha_1,\ldots,
\alpha_n$, we can find positive integers $a, b_1,\ldots,b_n$, with
$1 \leq a \leq N^n$ so that $|\alpha_j a-b_j| < \frac{1}{N}$. This
result can be applied to satisfy condition (\ref{hui}) identifying
$\alpha_j$ with $\frac{\omega_j |t|}{2 \pi}$ and choosing
$\frac{1}{N} < \frac{g}{2\pi}$ and choosing $m_j=b_j$ and $\bar t$
so that $\frac{\bar t +|t|}{|t|}=a$. Notice that since $a \geq 1$,
$\bar t \geq 0$ as desired. For the problem to find $a$ and $b_j$'s,
there are several algorithms in the literature (cf. \cite{Germans}
and \cite{Continuedfractions}). Notice, in any case, that we are
only interested in $a$, which determines $\bar t$, and since $a$ is
bounded from above by $N^n$, it can be always found, in principle,
by exhaustive search. \end{remark}
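The search described in the remark is easy to automate. The following sketch is ours: it assumes a skew-Hermitian $B$ whose eigenvalues $\pm i\omega_j$ are known, and simply scans the integer $a=(\bar t+|t|)/|t|$ up to the Dirichlet bound; the tolerance and the test frequencies (those appearing later in (\ref{erreeelle})) are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

def positive_time_approximation(omegas, t_abs, eps, n_dim, a_max=None):
    """
    Find tbar >= 0 with || exp(-B|t|) - exp(B tbar) || < eps (Frobenius norm),
    where B is skew-Hermitian of size n_dim with eigenvalues +/- i*omega_j,
    by exhaustive search over a = (tbar + |t|)/|t|.
    """
    g = np.arccos(1.0 - eps**2 / (2.0 * n_dim))   # per-frequency phase tolerance
    N = int(np.ceil(2.0 * np.pi / g)) + 1          # so that 1/N < g/(2*pi)
    if a_max is None:
        a_max = N ** len(omegas)                   # Dirichlet guarantees some a <= N^n
    for a in range(1, a_max + 1):
        phases = np.asarray(omegas) * t_abs * a / (2.0 * np.pi)
        if np.all(np.abs(phases - np.round(phases)) < g / (2.0 * np.pi)):
            return (a - 1) * t_abs                 # tbar = (a-1)|t| >= 0
    return None

# Example: the two frequencies of the matrix A_1 used in the later SO(4) example.
r = np.sqrt((15.0 + np.sqrt(125.0)) / 2.0)
l = np.sqrt((15.0 - np.sqrt(125.0)) / 2.0)
print("tbar =", positive_time_approximation([r, l], t_abs=0.5, eps=0.1, n_dim=4))
\end{verbatim}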
\vspace{0.25cm}
\vspace{0.25cm}
We can summarize the given method as follows:
\begin{enumerate}
\item Given ${\cal F}:=\{A_1,\ldots,A_m \}$ find, via similarity transformations,
$\dim{\cal L}-m$ more matrices $\{A_{m+1},\ldots,A_{\dim({\cal
L})}\}$ so that $\{A_1,\ldots,A_{\dim{({\cal L})}} \}$ is a basis
for ${\cal L}$.
\item Take the (principal) logarithm of $X_f$, $A$, so that
$e^{A}=X_f$.
\item Find $M$ (sufficiently large) and $t_1, \ldots, t_{\dim({\cal L})}$,
so that \be{basicprod} e^{\frac{A}{M}}=\prod_{j=1}^{\dim({\cal L})}
e^{A_j t_j}. \end{equation}
Then
$X_f=e^{A}=\left(\prod_{j=1}^{\dim({\cal L})} e^{A_j t_j}\right)^M$.
\item Replace the exponentials of the matrices
$A_{m+1},\ldots,A_{\dim({\cal L})}$ with expressions involving the
exponentials of $\{ A_1, \ldots, A_m\}$ as obtained from step 1.
\item Replace every exponential $e^{Bt}$, $(B \in {\cal F})$ involving negative $t$ with
its approximation involving positive $t$. This can be obtained with
arbitrary accuracy according to Proposition \ref{PJS} and Remark
\ref{nt}.
\end{enumerate}
\vspace{0.25cm}
\br{rem1} In the above procedure, step 3. is decidedly the most
difficult one since it requires the solution of nonlinear equations
involving the exponentials of matrices. The solution is guaranteed
to exist for $M$ sufficiently large. This task is obviously easier
for low dimensional systems. It must be remarked however that there
is some flexibility in the choice of the matrices $A_{m+1}, \ldots,
A_{\dim({\cal L})}$, because of the choice of the pair $A_k,A_l$ and
of the times $\bar t$ (cf. (\ref{effe})). We can use this
flexibility to make these matrices as simple as possible (e.g.,
block diagonal, sparse, etc.) so that calculating the exponential is
easier. Another type of flexibility, which may be used in
calculations, is the fact that the way exponentials are arranged in
(\ref{basicprod}) is arbitrary. Any different order will give a
neighborhood of the identity also. The methods described in the
following two sections do not present this problem.
\end{remark}
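A possible numerical shortcut for step 3.\ is to hand equation (\ref{basicprod}) directly to a generic nonlinear least-squares routine. The sketch below is only an illustration of this idea, not part of the construction above: it treats a two-level system with a full basis of $su(2)$ already available (so that no similarity transformations are needed); the target, the value of $M$ and the initial guess are arbitrary choices of ours, and the routine may require restarts or a larger $M$ to converge.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, logm
from scipy.optimize import least_squares

# Basis of the dynamical Lie algebra (here su(2), so no Lie brackets are needed).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [1j * sx, 1j * sy, 1j * sz]

Xf = expm(0.3j * sx + 0.7j * sy - 0.2j * sz)   # some target in SU(2)
M = 4                                          # repetitions of the basic product
A = logm(Xf)                                   # principal logarithm, exp(A) = Xf
target = expm(A / M)                           # = Xf^(1/M), close to the identity

def residual(ts):
    P = np.eye(2, dtype=complex)
    for Aj, tj in zip(basis, ts):
        P = P @ expm(Aj * tj)
    R = P - target
    return np.concatenate([R.real.ravel(), R.imag.ravel()])

sol = least_squares(residual, x0=np.full(len(basis), 0.1))
prod = np.eye(2, dtype=complex)
for Aj, tj in zip(basis, sol.x):
    prod = prod @ expm(Aj * tj)
print("residual for Xf^(1/M):", np.linalg.norm(prod - target))
print("residual for Xf      :", np.linalg.norm(np.linalg.matrix_power(prod, M) - Xf))
\end{verbatim}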
\br{rem2} The last step of the method can be achieved exactly (i.e.,
without involving an approximation) if the orbit associated with the
given matrices ${\cal F}:=\{A_1,\ldots,A_m\}$ are periodic. In this
respect, notice that, if this is the case, all the other matrices
obtained by the method also have associated periodic orbits (their
eigenvalues are the same as the ones of the original matrices).
Therefore, for a given matrix $B$, and negative $\bar t$, we can
choose a positive $t$, such that $e^{Bt}=e^{B\bar t}$. \end{remark}
\br{rem3} \cite{UFG} It is interesting to give an upper bound to
the number of exponentials involved in obtaining a neighborhood of
the identity according to the described method. Let us assume that,
at every step, we only produce one new linearly independent matrix.
For the given matrices $\{A_1, \ldots, A_m\}$, we need only one
exponential, but for the matrix obtained at step 1 we need three
exponentials. In general, at step $j$, $j \geq 2$, the worst case
scenario is when we combine a matrix obtained at step $j-1$ (giving
the similarity transformation ($A_l$ in (\ref{effe})), which
requires $d_{j-1}$ exponentials, with a matrix obtained at step
$j-2$, which requires $d_{j-2}$ exponentials. The total number of
exponentials at step $j$ is therefore $d_j=2d_{j-1}+ d_{j-2}$.
Therefore having defined recursively the numbers $d_j$ as \be{recor}
d_0=1,\qquad d_1=3, \qquad d_j=2d_{j-1}+d_{j-2}, \end{equation} the number of
exponentials required is \be{numberexpo} md_0+\sum_{j=1}^{\dim{\cal
L}-m} d_j. \end{equation} \end{remark}
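The recursion (\ref{recor}) and the bound (\ref{numberexpo}) are immediate to evaluate; the following few lines are a direct transcription (the two sample parameter choices are ours).
\begin{verbatim}
def exponential_count(m, dim_L):
    """Worst-case count (numberexpo): d_0=1, d_1=3, d_j = 2*d_{j-1} + d_{j-2}."""
    d = [1, 3]
    while len(d) <= dim_L - m:
        d.append(2 * d[-1] + d[-2])
    return m * d[0] + sum(d[1:dim_L - m + 1])

# e.g. two generators of su(2) (m=2, dim L=3) and of so(4) (m=2, dim L=6):
print(exponential_count(2, 3))   # 2*1 + 3 = 5
print(exponential_count(2, 6))   # 2 + 3 + 7 + 17 + 41 = 70
\end{verbatim}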
\subsection{Example}
We illustrate this method with a simple example of the quantum
control of a two level system, i.e., a control problem on $SU(2)$,
which is compact. Recall the definition of the Pauli matrices
\be{PauliMat} \sigma_x:=\pmatrix{0 & 1 \cr 1 & 0}, \qquad
\sigma_y:=\pmatrix{0 & i \cr -i & 0}, \qquad \sigma_z:=\pmatrix{1 &
0 \cr 0 & -1}. \end{equation}
Let ${\cal F}:=\{A_1, A_2\}$, with $A_1:=i\sigma_z$ and $A_2:=i
(\sigma_x+ \sigma_y)$. Calculate $e^{A_1 \bar t}A_2 e^{-A_1 \bar t}$
which for $\bar t=-\frac{3}{8}\pi$ gives $A_3=-i\sqrt{2} \sigma_y$,
which is linearly independent of $A_1$ and $A_2$, and along with
them it forms a basis of $su(2)$. A straightforward calculation
gives \be{esponenziali} e^{A_1t_1}=\pmatrix{e^{it_1} & 0 \cr 0 &
e^{-it_1}}, \quad e^{A_2t_2}=\pmatrix{\cos(\sqrt{2}t_2) & e^{i
\frac{3\pi}{4}} \sin(\sqrt{2}t_2) \cr - e^{-i \frac{3\pi}{4}}
\sin(\sqrt{2}t_2) & \cos(\sqrt{2}t_2)} \end{equation}
$$
e^{A_3t_3}=\pmatrix{\cos(\sqrt{2}t_3) & \sin(\sqrt{2}t_3) \cr -
\sin(\sqrt{2}t_3) & \cos(\sqrt{2}t_3)}.
$$
and the set \be{sdlfirst} S_{1,2,3}:=\{e^{A_1 t_1} e^{A_2 t_2}
e^{A_3 t_3}|t_1,t_2,t_3 \in \RR\}, \end{equation} covers a neighborhood of the
identity in $SU(2)$. Assume now our target state $X_f$ is
\be{targets} X_f:=\pmatrix{\frac{1}{\sqrt{2}} &
i\frac{1}{\sqrt{2}}\cr i \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}}.
\end{equation} We first try to see if $X_f$ is in the set $S_{1,2,3}$ in
(\ref{sdlfirst}). Therefore we must be able to choose $t_1$ and
$t_3$ so that $P:=e^{-A_1 t_1} X_f e^{-A_3 t_3}$ has the form
$e^{A_2 t_2}$. This means in particular that the difference between
the phases of the $P_{1,2}$ element and $P_{1,1}$ elements in $P$ is
$\frac{3 \pi}{4}$. As a straightforward calculation shows,
$P_{1,2}P_{1,1}^*=\frac{i}{2}$ independently of the choice of $t_1$
and $t_3$. Therefore $X_f \notin S_{1,2,3}$. We replace $X_f$ with
$X_f^{\frac{1}{2}}$. The same calculation shows that, for every
$t_1$, $P_{1,2}P_{1,1}^*=\frac{\sqrt{2}}{2} \sin(2 \sqrt{2}t_3)+i
\frac{\sqrt{2}}{2}$ and, therefore, the choice $t_3:=\frac{3 \pi}{4
\sqrt{2}}$ achieves the desired phase difference. Then, we can
choose $t_1$ to impose that the element $P_{1,1}$ has phase zero (it
is real). This leads to $t_1=\frac{9 \pi}{8}$. With these choices,
we have \be{afterchoices} e^{-A_1 t_1} X_f^{\frac{1}{2}} e^{-A_3
t_3}=\pmatrix{ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} e^{i \frac{3
\pi}{4}} \cr - \frac{1}{\sqrt{2}} e^{-i \frac{3 \pi}{4}} &
\frac{1}{\sqrt{2}}}. \end{equation} Comparing this with $e^{A_2t_2}$ in
(\ref{esponenziali}) leads to the choice
$t_2=\frac{\pi}{4\sqrt{2}}$. With these choices $X_f=\left(e^{A_1
t_1}e^{A_2 t_2} e^{A_3 t_3} \right)^2$.
In terms of the original available matrices, $A_1$ and $A_2$, we
have \be{hjklo} X_f=\left(e^{A_1 t_1}e^{A_2 t_2} e^{-A_1 \frac{3
\pi}{8}} e^{A_2t_3} e^{A_1 \frac{3 \pi}{8}} \right)^2, \end{equation} where
$t_1,t_2,t_3$ are the ones found above. The presence of the
negative `time' $-\frac{3 \pi}{8}$ in the third exponential, does
not pose any problems since the one dimensional subgroup associated
with $A_1$ (as well as any other matrix in $su(2)$) is periodic.
\vspace{0.25cm}
A similar treatment shows that, had we chosen to work with the set
\be{sdl}
S_{1,3,2}:=\{e^{A_1 t_1} e^{A_3 t_3} e^{A_2
t_2}|t_1,t_2,t_3 \in \RR\}, \end{equation} we would have achieved $X_f$ with
just three exponentials. This shows that the order in which the
exponentials are chosen may be important.
\vspace{0.25cm}
It must be said that for the special case of $SU(2)$ there are many
more techniques which may be preferable to the one advocated here.
For example, since one has available both $i\sigma_z$ and $i
\sigma_y$ one could have applied a simple Euler decomposition. In
general it is also possible, for general target matrices, to find
the factorization with the minimum number of factors \cite{ioopt}.
Our goal here was to illustrate the method on a simple, easily
computable, case. We remark that even for large dimensional Lie
groups, one can combine these ideas with Lie group decompositions
for which there exists a large set of tools \cite{Mikobook}.
\section{Method 2: Constructive controllability with arbitrarily
small error} \label{M2}
In this and the following section we illustrate methods which do not
require the solution of nonlinear algebraic equations, such as
(\ref{poi}), but can be implemented with simple linear algebraic
techniques. The algorithms achieve control to the target with
arbitrary small error.
Reconsider the available set of matrices
${\cal F}$ in (\ref{effematrices}). As before, we relax the
requirement to use only elements in the semigroups
(\ref{semigruppi}) and use elements in the subgroups
(\ref{subgruppi}). We can then replace elements in the subgroups
with elements in the semigroups as done in the previous section.
We
start with a definition.
\bd{approximable} A matrix $H$ is said to be {\it simulable} with
the set ${\cal F}$ if there exist $r$ continuous, strictly
increasing, functions $f_j$, $j=1,\ldots,r$, with $f_j(0)=0$,
defined in an interval $[0,\epsilon)$, such that \be{AIG}
e^{Hx}=\prod_{j=1}^r e^{L_j f_j(x)}+O(x^{1+\delta}), \end{equation} for some
matrices $L_j \in {\cal F} \bigcup -{\cal F} $ and\footnote{$-{\cal
F}$ denotes the set $\{-A_1,-A_2,\ldots,-A_m\}$.} a $\delta
>0$. \end{definition}
If a matrix $H$ is simulable, we can control from the identity to
$e^H$ with the desired accuracy using elements in the subgroups
(\ref{subgruppi}) (and therefore of the semigroups
(\ref{semigruppi})).
\bl{Limitefondamentale} Assume (\ref{AIG}) holds. Then \be{baslim}
\lim_{n \rightarrow \infty} \left( \prod_{j=1}^r e^{L_j
f_j(\frac{1}{n})} \right)^n=e^H \end{equation} \end{lemma}
\begin{proof} If (\ref{AIG}) holds then \be{polp} \lim_{n \rightarrow \infty}
\left( \prod_{j=1}^r e^{L_j f_j(\frac{1}{n})} \right)^n= \lim_{n
\rightarrow \infty} \left[e^{H \frac{1}{n}}-
O\left(\frac{1}{n^{1+\delta}} \right) \right]^n. \end{equation} However, we
have this standard limit in matrix analysis (see
\cite{HornJohnsonT}, Section 6.5) \be{standlim} \lim_{n \rightarrow
\infty} \left[e^{H \frac{1}{n}}- O\left( \frac{1}{n^{1+\delta}}
\right) \right]^n =e^H, \end{equation} which proves the lemma. \end{proof}
\vspace{0.25cm}
From the point of view of constructive controllability, this lemma
says that, for each simulable $H$, we can put together a product of
exponentials of elements in ${\cal F}$ which, repeated a large
enough number of times, approximates, with arbitrary accuracy,
$e^{H}$.
\vspace{0.25cm}
\bt{fondafonda} Every $H$ in the dynamical Lie algebra ${\cal L}$ is
simulable.
\end{theorem}
\br{rem1p} This theorem along with Lemma \ref{Limitefondamentale}
and Proposition \ref{PJS} give an alternative proof of a slightly
weaker form of the Lie algebra rank condition of Theorem \ref{LARC}.
Since $e^{\cal L}$ is compact, for every $X_f$ in $e^{\cal L}$,
there exists an $H \in {\cal L}$ such that $e^{H}=X_f$. Theorem
\ref{fondafonda} and Lemma \ref{Limitefondamentale} say that we can
find a sequence of reachable points converging to $X_f$ for every
$X_f$. Therefore the set of reachable states is dense in $e^{\cal
L}$. \end{remark}
\br{rem2p} Elaborating on the proof of the Theorem
\ref{fondafonda}, we will also show how to choose the elements $L_j
\in {\cal F} \bigcup -{\cal F}$ and the functions $f_j$ in
(\ref{AIG}) so as to make the controllability result constructive.
We shall discuss this after the proof. \end{remark}
\begin{proof} The proof is similar to the one given in \cite{IOQW} in the
context of quantum walks dynamics. In particular, we will show that
the set of simulable elements $H$ is a Lie algebra containing ${\cal
F}$ and this will be sufficient since ${\cal L}$ is the smallest Lie
algebra containing ${\cal F}$, by definition.
First of all, it is clear that every element in ${\cal F}$ is
simulable, since equation (\ref{AIG}) holds with $r=1$ and $O\equiv
0$. Therefore the set of simulable matrices contains ${\cal F}$.
Moreover if $H$ satisfies equation (\ref{AIG}), then we have
\be{AIGinverse} e^{-Hx}= \prod_{j=r}^1e^{-L_j f_j(x)}-
\prod_{j=r}^1e^{-L_j f_j(x)} O(x^{1+\delta}) e^{-Hx}, \end{equation} and by
expanding the exponentials it follows that the last term is also an
$O(x^{1+\delta})$. Therefore $-H$ is also simulable. Moreover, for
$a \geq 0$, (\ref{AIG}) holds for $aH$ with $f_j(x)$ replaced by
$f_j(ax)$ and $O(x^{1+\delta})$ replaced by
$O(a^{1+\delta}x^{1+\delta})=O(x^{1+\delta})$. If (\ref{AIG}) holds
for $H_1$ and $H_2$, i.e., we have \be{H12} e^{H_i
x}=\prod_{j=1}^{r_i} e^{L_j^i f_j^i(x)}+ O_i(x^{1+\delta_i}), \qquad
i=1,2, \end{equation} combining this with \be{somma}
e^{(H_1+H_2)x}+O(x^2)=e^{H_1x} e^{H_2x}, \end{equation} gives\footnote{Here
and elsewhere, we use the notation $O$ for a generic $O$-function
and we use indexes like in $O_1$ and $O_2$ when we want to highlight
a particular $O$-function.} \be{popg}
e^{(H_1+H_2)x}=\prod_{j=1}^{r_2} e^{L_j^2 f_j^2(x)}
\prod_{j=1}^{r_1} e^{L_j^1 f_j^1(x)}+ O(x^{1+ \delta}), \end{equation} with
$\delta=\min \{ \delta_1,\delta_2,1 \}.$ Therefore, if $H_1$ and
$H_2$ are simulable, so is $H_1 + H_2$. These arguments show that
the set of simulable matrices is a vector space.
To show that it is also a Lie algebra, we have to show that if $H_1$
and $H_2$ are both simulable so is $[H_1,H_2]$. In order to see
that, write (\ref{H12}) in the form \be{H12form}
e^{H_1t}=T_1(t)+O_1(t^{1+\delta_1}), \qquad
e^{H_2t}=T_2(t)+O_2(t^{1+\delta_2}), \end{equation} i.e., by replacing the
products with the functions $T_1$ and $T_2$. This also gives (cf.
(\ref{AIGinverse})) \be{H12forminverse} e^{-H_1 t}=T_1^{-1}(t)-
T_1^{-1}(t)O_1(t^{1+\delta_1}) e^{-H_1t}, \quad e^{-H_2
t}=T_2^{-1}(t)- T_2^{-1}(t)O_2(t^{1+\delta_2}) e^{-H_2t}. \end{equation} We use
the exponential formula (see, e.g., \cite{HornJohnsonT} Section 6.5)
\be{expofor} e^{[H_1,H_2]t^2}+O(t^3)=e^{-H_1 t}e^{-H_2t} e^{H_1
t}e^{H_2t}.\end{equation} Using (\ref{H12form}) and (\ref{H12forminverse}) in
(\ref{expofor}), we have \be{pplm} e^{[H_1, H_2]t^2}+O(t^3)= \left(
T_1^{-1} - T_1^{-1} O_1 e^{-H_1t} \right) \left( T_2^{-1}- T_2^{-1}
O_2 e^{-H_2 t} \right) \left( T_1+O_1 \right) \left( T_2 +O_2
\right). \end{equation} Expanding the right hand side, omitting terms that are
clearly $O(t^{\alpha})$, $\alpha > 2$, since they contain the
product of two $O$ functions, we have \be{dfd}
e^{[H_1,H_2]t^2}+O(t^3)=T^{-1}_1 T^{-1}_2 T_1 T_2 +
T_1^{-1}T_2^{-1}T_1 O_2+ T_1^{-1} T_2^{-1} O_1 T_2 \end{equation} $$- T_1^{-1}
T_2^{-1} O_2 e^{-H_2 t} T_1 T_2+ T_1^{-1} O_1 e^{-H_1 t} T_2^{-1}
T_1 T_2 +O(t^\alpha). $$ Expanding in Maclaurin series the functions
multiplying the $O_1$ and $O_2$, we see that the terms corresponding
to the first terms of the expansion cancel, leaving only terms of
the form $O(t^{\beta'})$ with $\beta' >2$. In conclusion, we have
\be{jhu} e^{[H_1, H_2]t^2}=T_1^{-1}(t)T_2^{-1}(t) T_1(t) T_2(t)+
O(t^\beta), \qquad \beta >2, \end{equation} and by setting $t=\sqrt{x}$, we
obtain \be{klko} e^{[H_1,
H_2]x}=T_1^{-1}(\sqrt{x})T_2^{-1}(\sqrt{x}) T_1(\sqrt{x})
T_2(\sqrt{x})+ O(x^{\frac{\beta}{2}}), \qquad \beta
>0,
\end{equation} which shows that $[H_1,H_2]$ is simulable as well, and completes
the proof. \end{proof}
\vspace{0.25cm}
In order to use Lemma \ref{Limitefondamentale} and Theorem
\ref{fondafonda} for control, we need to show, given $H$, how to
find the matrices $L_j$ in ${\cal F} \bigcup - {\cal F}$ so that
(\ref{AIG}) holds. We first find a basis of ${\cal L}$ by taking
repeated Lie brackets of elements in ${\cal F}$. More precisely, set
\be{D0} {\cal D}_0:={\cal F}, \end{equation} a linearly independent set of
elements of `{\it depth}' $0$ (no Lie bracket necessary), and let
\be{olop} \tilde {\cal D}_1:=[{\cal D}_0, {\cal F}], \end{equation} a set of
elements of depth 1, which are Lie brackets of elements of depth $0$
with elements of ${\cal F}$. From the set $\tilde {\cal D}_1$ we
extract a possibly smaller set ${\cal D}_1$ such that ${\cal D}_0
\bigcup {\cal D}_1$ is a maximal linearly independent set in ${\cal
D}_0 \bigcup \tilde {\cal D}_1$. Proceeding this way, we now
calculate a set of Lie brackets of depth $2$ \be{D2} \tilde {\cal
D}_2:=[{\cal D}_1, {\cal F}], \end{equation} and extract a subset ${\cal D}_2
\subseteq \tilde {\cal D}_2$ so that ${\cal D}_0 \bigcup {\cal D}_1
\bigcup {\cal D}_2$ is a maximal linearly independent set in ${\cal
D}_0 \bigcup {\cal D}_1 \bigcup \tilde {\cal D}_2$. Proceeding this
way, we obtain a set $\bigcup_{k=0}^r {\cal D}_k$, which spans all
of ${\cal L}$. As a consequence of ${\cal L}$ being finite
dimensional, the procedure will end at some finite depth $r$ after
which we cannot find any new linearly independent matrix. We write,
for $k=0, \ldots, r$, \be{jjkk} {\cal D}_k:=\{ D_{1k},D_{2k},
\ldots, D_{n_kk} \}. \end{equation} We can decompose $H$ as \be{klo}
H=\sum_{k=0}^rH_k, \end{equation} with $H_k$ a linear combination of elements
of depth $k$, that is, \be{AKK} H_k:=\sum_{j=1}^{n_k}
\alpha_{kj}D_{jk}. \end{equation} Now, following the proof of the theorem, we
can write \be{ops} e^{Hx}=\prod_{k=0}^r e^{H_k x} + O(x^{1+
\delta}). \end{equation} Then we can write each of the $e^{H_k x}$ as \be{dfdp}
e^{H_k x}=\prod_{j=1}^{n_k} e^{D_{jk}f_j(x)}+ O(x^{1+\delta_k}), \end{equation}
for some $\delta_k >0$. This is straightforward for $k=0$ and it
has to be done iteratively for Lie brackets of higher depth
following the procedure indicated in the proof of the theorem.
Summarizing, the method is as follows:
\begin{enumerate}
\item Find a basis for ${\cal L}$ by repeated Lie brackets of
elements of ${\cal F}$. Let $r$ denote the maximum depth.
\item Expand $H$ as a sum of linear combinations of matrices of
depth $0,1,\ldots$, as in (\ref{klo}), (\ref{AKK}).
\item For each of these linear combinations approximate the
exponential with a product of exponentials involving elements in
the basis according to the proof of Theorem \ref{fondafonda}. In
particular, the rules to obtain the approximating products are as
follows (a numerical sketch of these rules is given after this list).
\begin{enumerate}
\item If $A \in {\cal F} \cup -{\cal F}$, then the associated
product is $T(x)=e^{Ax}$ (only one factor).
\item If $T(x)$ is the product associated with $A$, then $T^{-1}(x)$ is
the product associated with $-A$.
\item If $T(x)$ is the product associated with $A$, then $T(a
x)$ is the product associated with $a A$ for any $a \geq 0$.
\item If $T_A(x)$ and $T_B(x)$ are the products associated with $A$
and $B$ respectively, then $T_A(x)T_B(x)$ is the product associated
with $A+B$.
\item If $T_A(x)$ and $T_B(x)$ are the products associated with $A$
and $B$ respectively, then
$T_A^{-1}(\sqrt{x})T_B^{-1}(\sqrt{x})T_A(\sqrt{x})T_B(\sqrt{x})$ is
the product associated with $[A,B]$.
\end{enumerate}
\item Combine all the products in a unique product approximating
$e^{Hx}$, which contains only exponentials of elements in ${\cal F}$
and $-{\cal F}$. By repeating this product for $x=\frac{1}{n}$ a
large number of times $n$ we obtain a matrix arbitrarily close to
$e^{H}$.
\item Replace every exponential $e^{At}$ with $A \in {\cal F}$ and $t <0$
in the approximating product with an approximating exponential of
the form $e^{A \bar t}$ with $\bar t >0$, according to proposition
\ref{PJS} and remark \ref{nt}.
\end{enumerate}
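The composition rules (a)--(e) above translate directly into code if a `product' is represented as a function $x\mapsto T(x)$ and each rule returns a new such function. The following sketch is ours: the test matrices (two generators of $su(2)$) and the sampled values of $n$ are arbitrary, and the script only checks numerically the limit of Lemma \ref{Limitefondamentale} for a single Lie bracket.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# A 'product' is a function x -> T(x); the rules (a)-(e) build new products.
def base(A):                 # (a): T(x) = exp(A x)
    return lambda x: expm(A * x)

def inverse(T):              # (b): product associated with -A
    return lambda x: np.linalg.inv(T(x))

def scale(T, a):             # (c): product associated with a*A, a >= 0
    return lambda x: T(a * x)

def add(TA, TB):             # (d): product associated with A + B
    return lambda x: TA(x) @ TB(x)

def bracket(TA, TB):         # (e): product associated with [A, B]
    return lambda x: (np.linalg.inv(TA(np.sqrt(x))) @ np.linalg.inv(TB(np.sqrt(x)))
                      @ TA(np.sqrt(x)) @ TB(np.sqrt(x)))

def simulation_error(T, H, n):   # Lemma: (T(1/n))^n -> exp(H) as n grows
    return np.linalg.norm(np.linalg.matrix_power(T(1.0 / n), n) - expm(H))

# Demo on su(2): H = [A1, A2] is not in the real span of {A1, A2} but is simulable.
A1 = 1j * np.array([[1, 0], [0, -1]], dtype=complex)   # i*sigma_z
A2 = 1j * np.array([[0, 1], [1, 0]], dtype=complex)    # i*sigma_x
H = A1 @ A2 - A2 @ A1                                  # = [A1, A2]
T = bracket(base(A1), base(A2))
for n in (10, 100, 1000, 10000):
    print(n, simulation_error(T, H, n))                # the error decreases with n
\end{verbatim}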
\subsection{Example}
\label{subsec}
We illustrate the previous procedure with an example taken from the
theory of electrical networks. In particular, we consider the LC
switching network in \cite{Wood} (see also \cite{Ramak}) whose
dynamical equation is given by \be{dyneq} \dot x=\pmatrix{0 & -\nu &
0 & 0 \cr \nu & 0 & 0 & 0 \cr 0 & 0 & 0 & -\beta \cr 0 & 0 & \beta &
0} x + \pmatrix{0 & 0 & 0 & \gamma \cr 0 & 0 & \delta & 0 \cr 0 &
-\delta & 0 & 0 \cr - \gamma & 0 & 0 & 0 }x u(t), \end{equation} where $\nu$,
$\beta$, $\gamma$ and $\delta$ are positive parameters depending on the
inductances and capacitances of the electrical network. The vector
$x$ represents voltages and currents in the network and $u$ is a
switching control variable which takes values in $\{ 0,1 \}$. To
make the discussion concrete, we choose the parameters $\nu=1$,
$\beta=3$, $\gamma=1$ and $\delta=2$, so that the set of available
matrices is \be{calfexample} {\cal F}:=\left\{A_1:=\pmatrix{0 & -1 &
0 & 1 \cr 1 & 0 & 2 & 0 \cr 0 & -2 & 0 & -3 \cr -1 & 0 & 3 &
0},\quad A_2:= \pmatrix{0 & -1 & 0 & 0 \cr 1 & 0 & 0 & 0 \cr 0 & 0
& 0 & -3 \cr 0 & 0 & 3 & 0} \right\}. \end{equation} The solution of
(\ref{dyneq}) is \be{hjd} x(t)=X(t) x(0), \end{equation} where $X=X(t)$ is the
solution of the matrix equation \be{newad} \dot X=A(u) X, \quad
X(0)={\bf 1}, \quad A(1)=A_1, \quad A(0)=A_2. \end{equation}
Let us use the notation $E_{jk}$ for the skew-symmetric $ 4 \times 4
$ matrix which has all the entries equal to zero except for the
$(jk)$-th and $(kj)$-th ($1\leq j<k \leq 4$) which are equal to $1$
and $-1$, respectively. Therefore, we can write \be{fdg}
A_1=-E_{12}+E_{14}+2E_{23}-3E_{34}, \qquad A_2=-E_{12}-3E_{34}. \end{equation}
By calculating Lie brackets, at depth 1, we obtain \be{A3}
A_3:=[A_2,A_1]=-5 E_{13}+7 E_{24}, \end{equation} at depth 2 \be{A4A5}
A_4:=[A_3, A_1]=17 E_{12}+22 E_{14}+26E_{23}+19 E_{34}, \text{ and }
A_5:=[A_3, A_2]=22 E_{14}+ 26 E_{23}. \end{equation} At depth 3, we obtain
\be{A6} A_6:=[A_4,A_1]= 145 E_{13}-155 E_{24}. \end{equation} As the matrices
$\{A_l \}$, $l=1,\ldots,6$, are linearly independent, they span all
of $so(4)$ and system (\ref{newad}) varies on the Lie group $SO(4)$,
a compact Lie group.
Let us denote by $T_j=T_j(x)$ the products approximating $e^{A_j
x}$, $j=1,\ldots,6$, and let us assume that the control problem is
to transfer the state $[0,0,0,1]^T$ to $[1,0,0,0]^T$. We choose to
drive the transition matrix $X$ in (\ref{newad}) to the value
\be{pog} e^{A_5 \frac{\pi}{44}}=\pmatrix{0 & 0 & 0 & 1 \cr 0 &
\cos(\frac{13 \pi}{22}) & \sin (\frac{13 \pi}{22}) & 0 \cr 0 & -
\sin (\frac{13 \pi}{22}) & \cos(\frac{13 \pi}{22}) & 0\cr -1 & 0 &
0 & 0}.\end{equation} We proceed using the composition rules illustrated in
(a)-(e) above. Since $A_5=[A_3,A_2]$, we have \be{lol}
T_5(x)=T_3^{-1}(\sqrt{x}) T_2^{-1}(\sqrt{x}) T_3(\sqrt{x})
T_2(\sqrt{x}). \end{equation} Moreover, since $A_3=[A_2,A_1]$ we have \be{3lol}
T_3(x)=T_2^{-1}(\sqrt{x}) T_1^{-1}(\sqrt{x}) T_2(\sqrt{x})
T_1(\sqrt{x}), \end{equation} and replacing into (\ref{lol}), we obtain
\be{lolll} T_5(x)= \end{equation} $$T_1^{-1}(\root {4} \of {x}) T_2^{-1}(\root
{4} \of {x}) T_1(\root {4} \of {x}) T_2(\root {4} \of {x})
T_2^{-1}(\sqrt{x}) T_2^{-1}(\root {4} \of {x}) T_1^{-1}(\root {4}
\of {x}) T_2(\root {4} \of {x}) T_1(\root {4} \of {x})
T_2(\sqrt{x}).$$ The product approximating $e^{A_5 \frac{\pi}{44}t}$
is $T_5(\frac{\pi}{44}t)$ which we can express in terms of
exponentials of $A_1$ and $A_2$ only by replacing $T_1$ and $T_2$
(and $T_1^{-1}$ and $T_2^{-1}$) according to the rules in (a) and
(b) above. In conclusion, we have \be{oood} T_5\left(\frac{\pi}{44}t
\right)= e^{-A_1(\frac{\pi}{44}t)^{\frac{1}{4}}}
e^{-A_2(\frac{\pi}{44}t)^{\frac{1}{4}}}
e^{A_1(\frac{\pi}{44}t)^{\frac{1}{4}}}
e^{A_2(\frac{\pi}{44}t)^{\frac{1}{4}}}
e^{-A_2(\frac{\pi}{44}t)^{\frac{1}{2}}}\times \end{equation}
$$
e^{-A_2(\frac{\pi}{44}t)^{\frac{1}{4}}}
e^{-A_1(\frac{\pi}{44}t)^{\frac{1}{4}}}
e^{A_2(\frac{\pi}{44}t)^{\frac{1}{4}}}
e^{A_1(\frac{\pi}{44}t)^{\frac{1}{4}}}
e^{A_2(\frac{\pi}{44}t)^{\frac{1}{2}}}. $$ We numerically calculated
the error \be{errore} Err^2(n)=\left\| e^{A_5
\frac{\pi}{44}}-\left[T_5\left(\frac{\pi}{44}\frac{1}{n}\right)\right]^n
\right\|^2= 8-2Tr \left[
\left(T_5\left(\frac{\pi}{44}\frac{1}{n}\right)\right)^n e^{A_5^T
\frac{\pi}{44}} \right], \end{equation} for various values of $n$ and the
behavior of the Error as a function of the number of iterations $n$
is reported in Table \ref{tavola}. The error goes to zero as
predicted by the above treatment. In a log-log scale the behavior is
essentially linear.
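The computation behind Table \ref{tavola} is easy to reproduce. The sketch below is our own transcription of (\ref{3lol}), (\ref{lol}) and (\ref{errore}): it rebuilds $A_3$ and $A_5$ as commutators, forms the product $T_5$, and prints the Frobenius error for a few (arbitrarily sampled) values of $n$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A1 = np.array([[0., -1, 0, 1], [1, 0, 2, 0], [0, -2, 0, -3], [-1, 0, 3, 0]])
A2 = np.array([[0., -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -3], [0, 0, 3, 0]])
A3 = A2 @ A1 - A1 @ A2          # [A2, A1]
A5 = A3 @ A2 - A2 @ A3          # [A3, A2] = 22*E14 + 26*E23

def T1(x): return expm(A1 * x)
def T2(x): return expm(A2 * x)
def T3(x):                      # product associated with A3 = [A2, A1], Eq. (3lol)
    s = np.sqrt(x)
    return np.linalg.inv(T2(s)) @ np.linalg.inv(T1(s)) @ T2(s) @ T1(s)
def T5(x):                      # product associated with A5 = [A3, A2], Eq. (lol)
    s = np.sqrt(x)
    return np.linalg.inv(T3(s)) @ np.linalg.inv(T2(s)) @ T3(s) @ T2(s)

target = expm(A5 * np.pi / 44.0)
for n in (2, 10, 100, 1000):
    approx = np.linalg.matrix_power(T5(np.pi / 44.0 / n), n)
    print(n, np.linalg.norm(target - approx))   # Frobenius error (errore)
\end{verbatim}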
\begin{table}[ht]
\caption{Results of numerical experiments for the method in section
\ref{M2}.} \vspace{0.25cm} \centering
\begin{tabular}{c c}
\hline \hline number of iterations $n$ & Error $Err$\\
\hline 2 & 3.1531 \\
\hline 10 & 2.3964 \\
\hline 20 & 2.0500 \\
\hline 30 & 1.8604 \\
\hline 100 & 1.3761 \\
\hline 500 & 0.9089 \\
\hline 1000 & 0.7599 \\
\hline 5000 & 0.5022 \\
\hline 50000 & 0.2791\\
\hline 100000 & 0.2341 \\
\hline 500000 & 0.1558 \\
\hline 1000000 & 0.1301 \\
\hline 5000000 & 0.0873 \\
\hline 10000000 & 0.0733 \\
\hline 50000000 & 0.0490 \\
\hline 100000000 & 0.0411 \\
\hline
\label{tavola}
\end{tabular}
\end{table}
\vspace{0.25cm}
To conclude the example we have to solve the problem that negative
times are not allowed and therefore we have to replace terms of the
form $e^{-A_1 x}$ and $e^{-A_2 x}$, with $x >0$ in the expression of
$T_5$ with approximations of the form $e^{A_1 x}$ and $e^{A_2 x}$,
respectively. In the case of $A_2$, since $\{e^{A_2 t}| t \in \RR\}$
is periodic we can always find $x_1 >0$ such that $e^{-A_2 x}=e^{A_2
x_1}$ for every $x$, and we can simply replace the exponential with
$x$ with the exponential with $x_1$ in $T_5$, without changing the
error. For the exponentials of $A_1$ however we need to find an
approximation and this is always possible with arbitrary accuracy
according to proposition \ref{PJS} and remark \ref{nt}.
\vspace{0.25cm}
To be concrete let us assume that the maximum error we can tolerate
is $0.4$. From Table 1, we choose $n=10^{5}$. Fix $x:=\frac{\pi}{44}
{10^{-5}}$. We have (cf. (\ref{errore}) and Table \ref{tavola})
\be{hhff} Err(10^5)=\left\| e^{A_5 \frac{\pi}{44}}-\left( T_5(x)
\right)^{10^5} \right\| < 0.235. \end{equation} Let $\tilde T_5$ be the
approximation of $T_5$ in (\ref{oood}) where we only use positive
values in the exponentials, appropriately replacing the
exponentials of $A_1$. In particular, by rewriting $T_5(x)$ in
(\ref{oood}) as \be{rewrit} T_5(x)=e^{-A_1 x^{\frac{1}{4}}} \Pi_1(x)
e^{-A_1 x^{\frac{1}{4}}} \Pi_2(x), \end{equation} with $\Pi_1(x)=e^{-A_2
x^{\frac{1}{4}}} e^{A_1 x^{\frac{1}{4}}} e^{A_2 x^{\frac{1}{4}}}
e^{-A_2 x^{\frac{1}{2}}} e^{-A_2 x^{\frac{1}{4}}}$ and
$\Pi_2(x)=e^{A_2 x^{\frac{1}{4}}} e^{A_1 x^{\frac{1}{4}}} e^{A_2
x^{\frac{1}{2}}}$, we have \be{tildeT5} \tilde T_5:=\tilde
T_5(x_1,x)=e^{A_1 x_1} \Pi_1(x) e^{A_1 x_1} \Pi_2(x). \end{equation} Therefore
the actual error $\tilde Err$ is given by \be{actualerr} \tilde
Err=\left\| e^{A_5 \frac{\pi}{44}} -\left[\tilde
T_5(x,x_1)\right]^{10^5} \right\| \leq \left\| e^{A_5
\frac{\pi}{44}} -\left[T_5(x) \right]^{10^5} \right\|+ \left\|
\left[T_5(x) \right]^{10^5}- \left[\tilde T_5(x,x_1) \right]^{10^5}
\right\| \end{equation}
$$<0.235 + \left\| \left[T_5(x)\right]^{10^5}-
\left[\tilde T_5(x,x_1)\right]^{10^5} \right\|,
$$ where we used (\ref{hhff}). Using the formula for $A$
and $B$ unitary matrices\footnote{This formula is proved by writing
$A^n-B^n=\sum_{k=1}^n A^{n-k}(A-B)B^{k-1}$, which gives
$$
\left\| A^n-B^n \right\| \leq \sum_{k=1}^n \left\|
A^{n-k}(A-B)B^{k-1} \right\|=n \| A- B \|,
$$
since multiplication (right or left) by a unitary matrix does not
modify the Frobenius norm.} \be{Poonformula} \left\| A^n -B^n
\right\| \leq n \left\| A - B \right\|, \end{equation} we write \be{forerr}
\tilde Err < 0.235 +10^5 \left\| T_5(x)- \tilde T_5(x,x_1)
\right\|. \end{equation} In view of our bound on the error of $0.4$, we need
to find $x_1>0$ so that $\left\| T_5(x)- \tilde T_5(x,x_1)
\right\| \leq 0.165 \times 10^{-5}$. Now, we have \be{normeinequa}
\left\| T_5 - \tilde T_5 \right\|= \left\|e^{-A_1 x^{\frac{1}{4}}}
\Pi_1 e^{-A_1 x^{\frac{1}{4}}} \Pi_2 - e^{A_1 x_1} \Pi_1 e^{A_1 x_1}
\Pi_2 \right\| = \left\| \Pi_1 - e^{A_1 (x^{\frac{1}{4}}+x_1)} \Pi_1
e^{A_1 (x^{\frac{1}{4}}+x_1)} \right\|.
\end{equation}
Therefore we have \be{contnormineq} \left\| T_5 - \tilde T_5
\right\| \leq \left\| \Pi_1 - e^{A_1 (x^{\frac{1}{4}}+x_1)} \Pi_1
\right\| \end{equation} $$+ \left\| e^{A_1 (x^{\frac{1}{4}}+x_1)} \Pi_1 -
e^{A_1 (x^{\frac{1}{4}}+x_1)} \Pi_1 e^{A_1 (x^{\frac{1}{4}}+x_1)}
\right\| \ =2 \left\| {\bf 1}- e^{A_1 (x^{\frac{1}{4}}+x_1)}
\right\|.
$$
Therefore, we need to find $x_1 \geq 0$ so that \be{pprr} \| {\bf 1}
- e^{A_1 (x^{\frac{1}{4}}+x_1)} \| \leq \frac{0.165 \times
10^{-5}}{2}. \end{equation} We calculate explicitly the eigenvalues of $A_1$
which are given by $\pm i r$ and $\pm il$, with $r$ and $l$ given by
\be{erreeelle} r:=\sqrt{\frac{15+ \sqrt{125}}{2}}, \qquad l:=
\sqrt{\frac{15 - \sqrt{125}}{2}}. \end{equation} We have \be{klop} \| {\bf
1} - e^{A_1 (x^{\frac{1}{4}}+x_1)} \|=2\sqrt{1-
\cos(r(x^{\frac{1}{4}}+x_1))+ 1- \cos(l(x^{\frac{1}{4}}+x_1))}. \end{equation}
Therefore, setting $t:=x^{\frac{1}{4}}+x_1$, formula (\ref{pprr})
is certainly satisfied if \be{condi1} 1-\cos(rt) < 8 \times
10^{-14}, \end{equation} and \be{condi2} 1-\cos(lt) < 8 \times 10^{-14}. \end{equation}
Setting $\epsilon:=\arccos(1-8 \times 10^{-14})$, we need to find $t
> x^{\frac{1}{4}}$, positive integers $p$ and $q$ such that
\be{fth} \left| rt -2\pi p \right| <\epsilon, \qquad \left| lt
-2\pi q \right| <\epsilon. \end{equation} One way to do this is as follows. Fix
an integer $k >0$ large enough so that \be{ght} \frac{1}{k} <
\frac{\epsilon}{2 \pi}. \end{equation} According to Dirichlet's approximation
theorem (see, e.g., \cite{RNG} Theorem 1.3) we can find $p$ and $q$,
with $1 \leq p \leq k$ positive integers so that \be{pkpkpk} \left|
\frac{l}{r} p -q \right| < \frac{1}{k}. \end{equation} Choose $p$ and $q$ this
way and \be{t} t=\frac{2 \pi p}{r}. \end{equation} Using this value of $t$,
the first one of (\ref{fth}) is verified because the left hand side
becomes zero. Replacing this value of $t$ in the second one of
(\ref{fth}) and using (\ref{ght}) and (\ref{pkpkpk}) we obtain that
the second inequality is satisfied as well. Moreover, since $q \geq
1$, we have that \be{kla} t \geq \frac{2 \pi}{r}\approx 1.7366
> x^{\frac{1}{4}}= \left( \frac{\pi}{44} 10^{-5}
\right)^{\frac{1}{4}} \approx 0.0291. \end{equation} This concludes the
example.
\section{Combination of the two methods}
\label{M3} The main ideas in the two methods of control described in
the previous sections can be combined in a third method. The main
idea of the method in Section \ref{M1} was to use similarity
transformation to generate a basis of the dynamical Lie algebra
${\cal L}$ starting from the given matrices in ${\cal F}$ in
(\ref{effematrices}) (cf. (\ref{effe})). The main idea of the method
in section \ref{M2} is the use of the limit in Lemma
\ref{Limitefondamentale}, once (\ref{AIG}) holds. This allows us
to control to the target, by repeating a given sequence of available
exponentials, with arbitrary accuracy. We can combine the two ideas.
We first use similarity transformations to obtain a basis of ${\cal
L}$, ${A}_1,\ldots,A_{\dim{\cal L}}$. Then, if $e^{H}$ is the target
and $H=\sum_{j=1}^{\dim{\cal L}} \alpha_j A_j$, we use the fact
that \be{xfg} e^{Hx}=e^{\sum_{j=1}^{\dim{\cal L}} \alpha_j
A_j x}=\prod_{j=1}^{\dim{\cal L}} e^{\alpha_j A_j x}+O(x^2), \end{equation}
along with Lemma \ref{Limitefondamentale} to approximate with
arbitrary accuracy the target state, i.e., \be{furthrd}
e^{H}=\lim_{n \rightarrow \infty} \left[ \prod_{j=1}^{\dim{\cal L}}
e^{\alpha_j A_j \frac{1}{n}} \right]^n.\end{equation} At the end of the
process, we replace all the exponentials of the form $e^{A_j t}$
with $t < 0$ with approximating exponentials of the form $e^{A_j
\bar t}$ with $\bar t >0$.
\vspace{0.25cm}
We test this method on the example in subsection \ref{subsec}.
Given $A_1$ and $A_2$ in (\ref{calfexample}) we calculate
\be{effeex} F:=e^{A_2 \frac{\pi}{2}} A_1
e^{-A_2\frac{\pi}{2}}=\pmatrix{0 & -1 & 0 & 2 \cr 1 & 0 & 1 & 0 \cr
0 & -1 & 0 & -3 \cr -2 & 0 & 3 & 0}. \end{equation} Our target is $e^{A_5
\frac{\pi}{44}}$ in (\ref{pog}). We have the decomposition
\be{decoA5} A_5=10 A_1+6F-16 A_2, \end{equation} so that \be{gfa} e^{A_5
\frac{\pi}{44}x}=R_5(x)+O(x^2), \end{equation} with \be{R5xxxx} R_5(x):= e^{10
A_1 \frac{\pi}{44}x} e^{6 F \frac{\pi}{44}x} e^{-16 A_2
\frac{\pi}{44}x}. \end{equation} We have, according to Lemma
\ref{Limitefondamentale}, \be{LMF} \lim_{n \rightarrow \infty}
\left[ R_5\left(\frac{1}{n}\right) \right]^n=e^{A_5 \frac{\pi}{44}}. \end{equation} Table
\ref{tavola2} shows the results of numerical experiments with this
scheme displaying the error $Err$ as a function of the number of
iterations. Compared with Table \ref{tavola}, it is clear that this
method converges much faster. Another advantage is that all the
exponentials $e^{At}$ with negative $t$ are for $A=A_2$ (cf.
(\ref{R5xxxx}) and
(\ref{effeex})) and the one dimensional
subgroup associated with $A_2$ is closed. Therefore no further
approximation is needed.
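A short numerical sketch of this combined scheme is given below. It is ours: the coefficients of the decomposition (\ref{decoA5}) are recovered by a least-squares fit on the vectorized matrices rather than copied, and the sampled values of $n$ are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A1 = np.array([[0., -1, 0, 1], [1, 0, 2, 0], [0, -2, 0, -3], [-1, 0, 3, 0]])
A2 = np.array([[0., -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -3], [0, 0, 3, 0]])
F = expm(A2 * np.pi / 2) @ A1 @ expm(-A2 * np.pi / 2)   # similarity transform (effeex)
A3 = A2 @ A1 - A1 @ A2
A5 = A3 @ A2 - A2 @ A3                                  # target generator, 22*E14 + 26*E23

# Decompose A5 = c1*A1 + c2*F + c3*A2 by least squares on the vectorized matrices.
B = np.column_stack([A1.ravel(), F.ravel(), A2.ravel()])
c, *_ = np.linalg.lstsq(B, A5.ravel(), rcond=None)
print("coefficients:", np.round(c, 6))                  # expected: 10, 6, -16

def R5(x):                                              # Eq. (R5xxxx)
    s = np.pi / 44.0
    return expm(c[0] * A1 * s * x) @ expm(c[1] * F * s * x) @ expm(c[2] * A2 * s * x)

target = expm(A5 * np.pi / 44.0)
for n in (2, 10, 100, 1000):
    print(n, np.linalg.norm(target - np.linalg.matrix_power(R5(1.0 / n), n)))
\end{verbatim}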
\begin{table}[ht]
\caption{Results of numerical experiments for the method in section
\ref{M3}.} \vspace{0.25cm} \centering
\begin{tabular}{c c}
\hline \hline number of iterations $n$ & Error $Err$\\
\hline 2 & 2.2819 \\
\hline 10 & 0.4544 \\
\hline 20 & 0.2267 \\
\hline 50 & 0.0906 \\
\hline 100 & 0.0453 \\
\hline 1000 & 0.0045 \\
\hline 10000 & 0.0005 \\
\label{tavola2}
\end{tabular}
\end{table}
\section{Conclusions}
The methods described in this paper can be seen as a constructive
proof of the Lie algebra rank condition of Theorem \ref{LARC}. It
is expected that the ideas described above can be extended and
improved by using more sophisticated exponential formulas
\cite{Y25}, in many ways. It is also expected that it will be
possible to obtain estimates of the convergence rate in various
cases. Our goal here was to propose ideas that, although at an early
stage, are very general and, in principle, allow us to control
every system on a compact Lie group. These systems in particular
include the important class of closed, finite dimensional, quantum
systems which are coherently controlled, namely controlled through a
change in the Hamiltonian.
\vspace{0.25cm}
In the future, it will be important to improve the algorithms by
minimizing the number of switches in the control laws that mainly
depends on the number of iterations, in the last two sections. In
this respect, the algorithm of section \ref{M3} is expected to be
faster than the algorithm in section \ref{M2}, as a consequence of
the exponent $2$, in the $O(x^2)$ in (\ref{xfg}) as opposed to the
exponent $1+\delta$ (with $\delta$ typically $ < 1$) in (\ref{AIG}).
If our main concern is however the time of implementation, the
effect of an increasing number of iterations $n$ in (\ref{polp}) is
balanced by the $\frac{1}{n}$ exponents inside the limit. The main
problem, in terms of time, is the approximation of matrices of the
form $e^{At}$ with $t < 0$ with matrices of the form $e^{At}$, with
$t >0$, in the case of non-closed subgroups. In fact, we might have
to `travel' for a long time inside the Lie group $e^{\cal L}$ before
we get close enough to the original $e^{At}$. In special situations,
however, it might be possible to transform $A$ into $-A$ via
available similarity transformations, or reduce ourselves to a
smaller dimensional Lie subgroup where the problem is more easily
tractable. Nevertheless, it is always possible to find such an
approximation and therefore the control. Remark \ref{nt} shows how
this problem can be reduced to a standard problem of Diophantine
approximation in number theory for which there exists a vast
literature and that can be always solved in principle.
In conclusion, we would like to comment on the assumption of
compactness which is used in the paper only in two instances. In
particular, compactness is used to have a surjective exponential map
and to be able to approximate an exponential of the form $e^{At}$
with $t$ negative with an exponential of the same type with $t$
positive. Whenever these two properties hold, the methods of this
paper can still be applied to more general Lie groups. In particular
this is the case for finite dimensional closed quantum mechanical
systems whose dynamical Lie algebra $\cal L$ is a subalgebra of
$u(n)$.
\vspace{0.25cm}
\vspace{0.25cm}
\vspace{0.25cm}
{\bf Acknowledgment} This research was supported by NSF under Grant
ECCS0824085. The author would like to thank Y.T. Poon for suggesting
formula (\ref{Poonformula}). He also would like to thank Richard Ng
for indicating the relevant literature on Diophantine approximation
and Dirichlet's approximation theorem, and for helpful discussions
on this topic.
\section{Introduction}
Some irreversible fractal aggregates display in nature peculiar structural
transitions during their growing processes. Electrochemical deposition
experiments \cite{Fle92} and bacterial colony growth \cite{Ben92} are two
examples of such phenomena. It has been reported that, at a certain threshold
distance far from the cluster center, these systems exhibit an intriguing
transition from a rather dense to a more multibranched growth of which little
is understood.
Besides the classical diffusion limited aggregation for 'regular' fractals
(having fractal dimension $d_{f}\sim 1.7$) \cite{Wit83,Ziq95}, the Laplacian
aggregation model has been proposed which is based on the discretized Laplace
equation\cite{Nie84}
\begin{eqnarray}\label{eq:l1}
\phi_{i,j} &=& \frac{1}{4}(\phi_{i+1,j}+\phi_{i-1,j}+\phi_{i,j+1}+
\phi_{i,j-1}) \;\;\; ,
\end{eqnarray}
with $\phi_{i,j}$ the potential at the growth site. However, neither of these simple
approaches can explain the complex structural transition observed during irreversible
fractal growth.
Two alternative attempts have recently been reported in the literature to
understand the physics behind a structural transition in fractal growth.
In the first approach, due to Louis {\em et al.}
\cite{Lou92,Cas93,Wan93a}, the transition is derived by solving
the Poisson equation (on a squared lattice)
\begin{equation}\label{eq:w0}
\nabla^{2}\phi = \lambda^{2}\phi \;\;\; ,
\end{equation}
which becomes dependent on the potential $\phi$ at two boundaries, the distance
$L$ between them, and a screening length $\lambda$.
The second theoretical approach is due to Wang and Canessa \cite{Wan93b,Can93}
and is based on the Biharmonic equation in two-dimensional (2D)
isotropic defect-free media, namely
\begin{equation}\label{eq:w1}
\nabla^{2} (\nabla^{2} u)=0 \;\;\; .
\end{equation}
By discretizing this equation, these authors have shown that a structural
transition can also be a consequence of the different
coupling of displacements $u$ within the pattern formation.
Both of these approaches describe a similar class of complex topology
from different perspectives and on different systems. However, the Poisson
and Biharmonic growth models identically relate the growth probability for
each site to the local field as
\begin{eqnarray}\label{eq:w2}
P \approx \phi & \;\;\; or \;\;\; &
P \approx \nabla ^{2}u \;\;\; .
\end{eqnarray}
These assumptions phenomenologically follow the Laplacian
(or dielectric breakdown) model.
It is the aim of this progress report to review the Poisson and Biharmonic
approaches. We shall focus on these two models for irreversible fractal
aggregates showing up a structural transition and shall consider the
effects of the surrounding medium
and geometrical constraints for the seed particles. Following recent ideas by
P\'erez-Rodr\'iguez {\em et al.} \cite{Per94}, we shall also show how the
optical diffraction method can be used to characterize the structural
transition by a decrease in the fractal dimension for this class of aggregates.
\section{The Poisson Growth Model}
The origin of screening in the Poisson model lies in the presence of free
charges and leads to a rich variety of patterns \cite{Lou92}. As compared to
the Laplacian model of Eq.(\ref{eq:l1}), this model introduces a new length
scale, {\em i.e.}, $\lambda$, and a nontrivial dependence on the boundary
conditions that are responsible for a structural transition on growing.
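For completeness, a minimal relaxation sketch for Eq.(\ref{eq:w0}) on a square lattice is given below. The five-point discretization and the Jacobi update are our own assumptions (for $\lambda=0$ the update reduces to the Laplacian rule of Eq.(\ref{eq:l1})); the boundary values follow the parameters quoted below for Fig.1(a).
\begin{verbatim}
import numpy as np

def relax_screened_laplace(phi, mask, lam=0.1, h=1.0, sweeps=500):
    """
    Jacobi relaxation of  nabla^2 phi = lam^2 phi  on a square lattice.
    'mask' is True on sites whose value is fixed (outer boundary and aggregate);
    with lam = 0 this reduces to the Laplacian rule phi = (sum of neighbours)/4.
    """
    denom = 4.0 + (lam * h) ** 2
    for _ in range(sweeps):
        nb = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
              np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(mask, phi, nb / denom)
    return phi

# Tiny demo: outer frame at phi_o = 1, a single seed at the centre at phi_i = 5e-7.
L = 41
phi = np.zeros((L, L)); mask = np.zeros((L, L), dtype=bool)
phi[0, :] = phi[-1, :] = phi[:, 0] = phi[:, -1] = 1.0
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True
phi[L // 2, L // 2] = 5e-7; mask[L // 2, L // 2] = True
phi = relax_screened_laplace(phi, mask, lam=0.1)       # lam = 1/10 as in Fig.1(a)
print(phi[L // 2, L // 2 + 1])   # field next to the seed, enters the growth probability
\end{verbatim}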
Fig.1(a) shows an example of the Poisson model in planar geometry by
adding 4051 particles to the cluster and using $\phi^{i}=5 \times 10^{-7}$,
$\phi^{o}=1$ and $\lambda^{-1}=10$.
{}From this figure it can be seen that the Poisson patterns obtained can have a
fractal character (at scales shorter than $\lambda$), to then display a
structural transition.
Louis {\em et al.} \cite{Lou92} have analytically demonstrated that this
structural transition can be characterized by a change in the sign of
the electrostatic field at the surface of the aggregate besides the
minimum of the growth velocity (to be shown later). However, Wan Wei
\cite{Wan93a} has shown that such a transition may be
altered by the existence of a critical field.
It is important to note that within the Poisson model several fractal
branches might also grow when attaching more than one particle at each
computer step. For details see Refs.\cite{Lou92,Cas93}.
\section{The Biharmonic Growth Model}
Fig.1(b) shows the final stage of a Biharmonic fractal displaying features
(in circular geometry) of a transition. Below the transition point
$r_{\ell}$, the fractal dimension approaches
the value for Laplacian growth within error bars. To generate this class of
patterns one solves numerically the Biharmonic Eq.(\ref{eq:w1}). Its discrete
form on the ($i,j$) lattice site, yields the following expression
\begin{eqnarray}\label{eq:w4}
& & u_{i-2,j}+2u_{i-1,j-1}-8u_{i-1,j}+2u_{i-1,j+1}+
u_{i,j-2}-8u_{i,j-1}+20u_{i,j} \\ \nonumber
& & -8u_{i,j+1}+u_{i,j+2}+
2u_{i+1,j-1}-8u_{i+1,j}+2u_{i+1,j+1}+u_{i+2,j}=0 \;\;\;
\end{eqnarray}
requiring the values of the normal derivative for the order parameter $u$
\cite{Gde86}. For the sake of simplicity one sets the derivative boundary
condition (necessary along the radial direction) equal to zero.
Throughout calculations one uses lattice sites enclosed within a circle of
(normalized) radius $r=\sqrt{i^{2}+j^{2}}$ such that $u^{o}$ and $u^{i}$ are
unity and zero at the outer circular boundary and the inner growing Biharmonic
aggregate, respectively. The seed particle can be placed either centered or
distributed under different geometrical constraints as discussed later.
The procedure for growing fractals then follows standard techniques
\cite{Nie84}, until the solutions of the discretized Eq.(\ref{eq:w1})
converge. Aggregates then grow stochastically according to
the relation between the growth probability $P$ (at the grid site $(i,j)$) and
$u$ given in Eq.(\ref{eq:w2}).
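As a minimal sketch of the discretization step only, the residual of the
thirteen-point stencil of Eq.(\ref{eq:w4}) can be evaluated on the interior of a
grid as follows; the relaxation loop, the radial derivative boundary condition and
the growth rules discussed above are deliberately omitted.
\begin{verbatim}
import numpy as np

def biharmonic_residual(u):
    # Residual of the discrete biharmonic operator (13-point stencil of Eq. w4),
    # evaluated on interior sites lying at least two cells away from every edge.
    r = np.zeros_like(u)
    n, m = u.shape
    def s(di, dj):                      # shifted view of the interior block
        return u[2 + di:n - 2 + di, 2 + dj:m - 2 + dj]
    r[2:-2, 2:-2] = (s(-2, 0) + s(2, 0) + s(0, -2) + s(0, 2)
                     + 2.0 * (s(-1, -1) + s(-1, 1) + s(1, -1) + s(1, 1))
                     - 8.0 * (s(-1, 0) + s(1, 0) + s(0, -1) + s(0, 1))
                     + 20.0 * s(0, 0))
    return r
\end{verbatim}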
\section{Growth Velocity}
Results for the growth velocity $v$ along the $y$-direction for the Biharmonic
model are plotted in Fig.2. For comparison, also included in this figure are
the results for Poisson growth assuming $v$ to be proportional to the field
$\mid \phi_{i,j}-\phi^{i}\mid$ by following Ref.\cite{Lou92}.
{}From Fig.2, it can be seen that the structural transition coincides
with the fact that $v$ on the growing surface presents a minimum.
The same is true for Biharmonic growth (even independently of how one relates
the probability $P_{ij}$ to $u(i,j)$ \cite{Wan93b}). For Laplacian
growth this phenomenon does not appear because the trend is to generate a
single tip at faster velocity than in the cases of Poisson or Biharmonic
growth.
Biharmonic patterns below $r_{\ell}$ ({\em i.e.}, within the dense region of
Fig.1(b)) do not become as dense as the Poisson patterns of Fig.1(a). This
effect can be understood from the velocity plot, since for Poisson growth
the transition occurs at smaller velocities than for Biharmonic
growth. Hence, an Eden-like pattern can be generated due to screening.
Both the Poisson and Biharmonic models lead to growth velocities along the
$y$-direction with parallel slopes.
\section{Effects of Surrounding Medium and Geometrical Constraints}
Within the Biharmonic model, the effects on fractal growth of the surrounding
medium and of different geometrical constraints (or boundary conditions) for
the seed particles have been analysed in Ref.\cite{Can93} by following
the $\eta$ (or dielectric breakdown) model \cite{Nie84,Mea91} and assuming
an extension of Eq.(\ref{eq:w2}) of the form
\begin{equation}\label{eq:w22}
P_{ij}= \frac{\mid \nabla ^{2}u_{i,j}\mid^{\eta} }{\sum \mid \nabla^{2}
u_{ij}\mid^{\eta} } \;\;\; ,
\end{equation}
where the sum runs over nearest neighbor sites and $\eta \ge 0$.
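The way Eq.(\ref{eq:w22}) is turned into a stochastic growth step can be sketched
as follows; the one-dimensional array of discrete $\nabla^{2}u$ values at the
candidate growth sites (the unoccupied nearest neighbours of the aggregate) is
assumed to have been computed beforehand, e.g. from the converged solution of
Eq.(\ref{eq:w4}).
\begin{verbatim}
import numpy as np

def choose_growth_site(lap_u, eta, rng=np.random.default_rng()):
    # Pick one candidate site with probability |lap(u)|^eta / sum |lap(u)|^eta;
    # eta -> 0 gives purely random (Eden-like) growth, large eta strongly
    # favours the site with the largest local field.
    w = np.abs(lap_u) ** eta
    if w.sum() == 0.0:                  # degenerate case: fall back to random choice
        p = np.full(len(w), 1.0 / len(w))
    else:
        p = w / w.sum()
    return rng.choice(len(w), p=p)
\end{verbatim}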
Figure 3(a) shows typical numerical results by using $\eta =2$ for a cluster
of about 1000 particles. In the limit $\eta \rightarrow \infty$, a
{\em 'transition from slow to faster growth'} is found
in such a way that one end of the
needle-like structure presents greater growth probability than the other.
Such structures become dendritic below and above a transition point.
On decreasing the value of $\eta$, one finds a transition from
{\em dendritic-to-compact} growth such that the inner region of
the aggregates becomes denser. If $\eta \rightarrow 0$, the growth
probability becomes purely random and independent of the Biharmonic field,
as in an Eden-like pattern.
The effects of geometrical constraints on the seed particles have also been
studied in Ref.\cite{Can93} by fixing the value of $\eta$ and using
two different geometries for the seed particles, namely circular and linear.
Figure 3(b) shows a Biharmonic fractal under the constraint of a circular
(empty) area of seed particles. As can be seen,
a structural transition still survives, independently of the
simple configuration adopted.
\section{Optical Diffraction Analysis}
The effect of the structural transition on the diffracted intensity and,
consequently, on $d_{f}$ follows by considering the
fractal to be composed of $N$ identical and similarly oriented particles on
the plane $x-y$ \cite{Ber91,Kor92}. The position of their centers of mass
is given by
${\bf R}_{n}=(x_{n},y_{n})$, where $n=1,...,N$. In this system, the Fraunhofer
diffraction pattern for the fractal is derived by assuming that each particle
corresponds to one aperture.
The form factor, corresponding to the intensity scattered by one 'aperture',
is determined by the integral of the diffraction amplitude. The structure
factor is
\begin{equation}\label{eq:a4}
S({\bf k})\equiv \mid \frac {1}{N} \sum _{n=1}^{N}
e^{-i{\bf k}{\bf R}_{n}}\mid ^{2} \;\;\; ,
\end{equation}
where {\bf k} is the component, parallel to the $xy$-plane, of the
scattered wave vector whose modulus is $k = \frac {2\pi }{\lambda }
{\rm sin}\theta \approx \frac {2\pi }{\lambda } \theta$, with $\theta $ the
(small) angle that the scattered wave vector makes with the $z$-axis.
$\lambda $ is the wavelength of the incident light. If $ka<1$ ($a$ being the
size of an elementary particle), the form factor is practically unity and the
light distribution in the diffraction pattern is given by the structure factor
such that the normalized diffraction intensity is
\begin{equation}
I({\bf k})\approx S({\bf k}) \;\;\; .
\end{equation}
For $\frac{a}{L}<ka<1$ ($L$ being the size of the whole aggregate) the expected
value of the intensity (or, alternatively, of the structure
factor $S({\bf k})$) is
\begin{equation}\label{eq:lq2}
<I({\bf k})>= \int d^{2}{\bf R}\, e^{-i{\bf k}{\bf R}}
\int d^{2}{\bf R}_{0}\, \rho ({\bf R}_{0})\,
\rho ({\bf R}+{\bf R}_{0}) \;\;\; .
\end{equation}
For fractal aggregates the density-density correlation in the integrand obeys a
power-law behaviour, $\int d^{2}{\bf R}_{0}\, \rho ({\bf R}_{0})\,
\rho ({\bf R}+{\bf R}_{0}) \sim R^{-\alpha}$ \cite{Wit83},
where the exponent $\alpha$ is related to the Hausdorff dimension
$d_{f}=d-\alpha$, with $d$ the Euclidean space dimension. This behaviour, in
turn, leads to the power-law dependence of the diffracted intensity on
the wave vector
\begin{equation}
I({\bf k})\sim k^{-d_{f}} \;\;\; .
\end{equation}
Information about $d_{f}$ can be obtained at low {\bf k} values (such that
$\frac{a}{L} \ll \mid k_{x}a \mid \ll 1$) from the averaged intensity defined
by the expression
\begin{equation}\label{eq:a16}
<I(k)>\equiv \frac {1}{2\pi} \oint d\phi I({\bf k}) \ \ ;
\ \ \ k_{x}=k{\rm cos}\phi, \ \ k_{y}=k{\rm sin}\phi \;\;\; .
\end{equation}
As seen in Fig.4, this quantity for Poisson fractals has a power law behavior
as a function of the modulus $k$. However, the structural transition during
fractal growth leads to a change in $d_{f}$. The slopes of this plot represent
a decrease of the fractal dimension for Poisson aggregates. Similar results
have been found in Biharmonic fractals \cite{Per94}.
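As a hedged numerical recipe behind this type of analysis, the structure factor of
Eq.(\ref{eq:a4}) and the azimuthal average of Eq.(\ref{eq:a16}) can be computed
directly from the particle coordinates; the arrays and binning below are
illustrative, and the fractal dimension follows from the slope of $\log <I(k)>$
versus $\log k$ in the power-law regime.
\begin{verbatim}
import numpy as np

def averaged_intensity(R, k_values, n_phi=360):
    # R: (N, 2) array of particle positions (in units of the particle size a);
    # returns <I(k)> ~ <S(k)>, i.e. the structure factor averaged over azimuth.
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    I_avg = np.empty(len(k_values))
    for m, k in enumerate(k_values):
        kx, ky = k * np.cos(phis), k * np.sin(phis)
        # S(k) = |(1/N) sum_n exp(-i k . R_n)|^2 for every azimuth, then average
        phase = np.exp(-1j * (np.outer(kx, R[:, 0]) + np.outer(ky, R[:, 1])))
        I_avg[m] = (np.abs(phase.mean(axis=1)) ** 2).mean()
    return I_avg

# d_f estimate from the power-law regime I(k) ~ k^(-d_f):
# d_f = -np.polyfit(np.log(k_values), np.log(I_avg), 1)[0]
\end{verbatim}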
\section{Discussion}
In both models studied, the global influence of the growing pattern on the growth
probability of each lattice site is set phenomenologically, in a power-law form,
as in Eq.(\ref{eq:w2}) or (\ref{eq:w22}).
However, an important difference to note
is that within the Biharmonic model the iterative procedures are carried
out over a {\em thirteen}-site stencil (extending up to two lattice spacings)
and not over the {\em four} nearest neighbors as
within the Poisson and Laplacian models. Thus the formation of connected
patterns within the Biharmonic equation becomes nontrivial and more involved.
Within the Poisson model, screening arises from the presence of free charges.
The physical relevance of the Biharmonic equation might follow from elasticity
theory (the deflection of a thin plate subjected to uniform loading over its
surface with fixed edges), from the steady slow 2D motion of a viscous
fluid, or from the vibration modes in the acoustics of drums. Besides this, a
higher-order differential equation containing the Biharmonic term also appears in the
study of kinetic growth with surface relaxation (see, {\em e.g.}, recent work
in \cite{Yan92} and references therein).
The influence of the ramified Biharmonic patterns on the growth probability
of each lattice site affects the type of pattern obtained, but not the location
of the transition. Several branches may also develop within the Poisson and
Biharmonic growth models by attaching, simultaneously and stochastically,
more than one particle at each time step. For Poisson growth the structural
transition is the result of including many-body contributions via screening
in a sort of mean field approach. In the Biharmonic model, long range
coupling appears naturally as a consequence of discretizing Eq.(\ref{eq:w1})
(as given in Eq.(\ref{eq:w4})).
For planar symmetry, the transition point within both models appears when the
growth velocities exhibit a minimum. This implies that Poisson and Biharmonic
growth describe a similar class of complex structural transition phenomena
from two different perspectives and on different systems.
The structural transition point can, to a good approximation, be estimated
from the continuous limit of Eq.(\ref{eq:w1}) in cylindrical coordinates.
Canessa and Wang \cite{Can93} have shown that the transition point depends only on
the system size $L$ and occurs approximately at a distance (in lattice
units) of about $60 \%$ of $L$ from the seed particle, as indicated by the arrows
on the circle in Fig.1(b). This prediction is in accord with their numerical
simulations for $\eta =1$.
By tuning $\eta \rightarrow 0$ in Eq.(\ref{eq:w22}), the structural transition
corresponds to a {\em 'dense-to-multibranched transition'} whereas
for $\eta \rightarrow \infty$ one finds a {\em 'transition from slow to
faster growth'}. On decreasing
$\eta$ from infinity to zero, Biharmonic fractals become denser, as for
Laplacian growth. For large $\eta$, the needle-like structures grow faster
at one end and, overall, remain dendritic. Furthermore, the transition
point is independent of the geometrical constraints adopted for the seed particles.
Results of optical diffraction have made it possible to identify and relate changes
in $d_{f}$ of the aggregates to variations of the diffracted intensity as a
function of the wave vector. Such findings may be confirmed experimentally.
In fact, the averaged intensity
$<I(k)>$ might be determined by using a set-up such as the one described in
\cite{All86}. Therein, a photomultiplier connected to a multichannel
analyzer records $I(k_{x},k_{y})$, and the displacement of the photomultiplier
is controlled by a high-precision motorized micrometer. Then, after
scanning the diffraction patterns displaying a structural transition, the
average of the intensity over concentric circles ($<I(k)>$) might be obtained.
\begin{center}
{\bf Acknowledgments}
\end{center}
The author would like to thank Dr Wang Wei (Nanjing University, China) and
Dr Felipe P\'erez-Rodr\'iguez (Universidad Autonoma de Puebla, M\'exico) for
a fruitful collaboration on the present topic. The Scientific Computer
Section and the Condensed Matter Group at ICTP-Trieste, Italy, are also
acknowledged for financial support.
\newpage
\section{Magnetic field from synchrotron self-absorption}
\label{a:mf_turnover}
Interpretation of a radio spectrum with the low-frequency turnover caused by
synchrotron self-absorption, and determination of physical parameters within
this assumption date back to the 1960s \citep[see, for example, one of the
pioneering works,][]{Slysh63}. In particular, the magnetic field associated with
a source of synchrotron emission can be inferred. However, only approximate
values of the numerical coefficient in the corresponding formula, which is a function of the
spectral index $\alpha$ ($S_\nu \propto \nu^\alpha$) of the optically thin
part of the synchrotron spectrum, are tabulated, and only for a limited number of $\alpha$
values ranging from $-0.25$ to $-$1.0 \citep{Marscher83}. The relation for
this coefficient was previously discussed by \cite{Gould79}. In this Appendix,
we derive a formula for this coefficient that can be computed precisely.
Following \cite{Pacholczyk70}, the intensity of emission in the case of synchrotron
self-absorption is
\begin{equation}
S_\nu=F \left( \nu_1 \right) J \left( \frac{\nu}{\nu_1},s \right)\,,
\label{eq: FSSA}
\end{equation}
where $s=1-2\alpha$ is the power-law index of energy distribution $N \left( E \right)=N_0 E^{-s}$
of emitting electrons, and
\begin{equation}
J \left( \frac{\nu}{\nu_1},s \right)=\left(\frac{\nu}{\nu_1} \right)^{5/2}
\left\{ 1- \exp \left[-\left(\frac{\nu}{\nu_1} \right)^{-\left(s+4 \right)/2} \right] \right\}\,,
\label{eq: J}
\end{equation}
where $\nu_1$ is the frequency at which the optical depth $\tau=1$. The source
function $F \left( \nu_1 \right)$ for a spherical, uniform emitting region with observed angular
size $\theta=2R(1+z)^2/D$ at the luminosity distance $D$ is
\begin{multline}
F \left(\nu_1 \right)=\frac{ \left( \pi \theta^2 R /3 \right) \varepsilon_{\nu_1}
\left[\delta/\left( 1+z\right) \right]^{2-\alpha}}{2 \kappa _{\nu_1} R
\left[\delta/\left(1+z \right) \right]^{\left( 3 - 2 \alpha\right)/2}} =\\
=\frac{\pi}{6} \theta^2 \frac{\varepsilon_{\nu_1}}{\kappa_{\nu_1}} \left(
\frac{\delta}{1+z}\right)^{1/2},
\label{eq:Sor}
\end{multline}
where
\begin{equation}
\varepsilon_{\nu_1}=c_5\left(s \right)N_0 B_{\bot}^{\left( s+1\right)/2} \left(\frac{\nu_1}{2 c_1} \right)^{\left( 1-s \right)/2}
\label{eq:emis}
\end{equation}
and
\begin{equation}
\kappa_{\nu_1}=c_6\left(s \right) N_0 B_{\bot}^{\left(s+2 \right)/2} \left(\frac{\nu_1}{2 c_1} \right)^{-\left( s+4 \right)/2}
\label{eq:abs}
\end{equation}
are the emission and absorption coefficients, respectively, and $B_\bot$ is
the component of the magnetic field perpendicular to the line of sight.
The constants and functions are:
\begin{equation}
c_1=\frac{3e}{4 \pi m^3 c^5}\,, \qquad
\label{eq:c1}
c_3=\frac{\sqrt{3} e^3}{4 \pi m c^2}\,,
\end{equation}
\begin{equation}
c_5=\frac{1}{4} c_3 \frac{s+7/3}{s+1}\,\, \Gamma \left(\frac{3 s-1}{12} \right) \Gamma \left(\frac{3 s+7}{12} \right)\,,
\label{eq:c5}
\end{equation}
\begin{equation}
c_6=\frac{1}{32} \left(\frac{c}{c_1} \right)^2 c_3 \left(s+\frac{10}{3} \right) \Gamma \left(\frac{3 s+2}{12} \right) \Gamma \left(\frac{3 s+10}{12} \right),
\label{eq:c6}
\end{equation}
where $e$ and $m$ are the charge and mass of electron, respectively, $c$ is
the speed of light in vacuum, and $\Gamma$ is the Euler gamma function.
Substituting Eqs.~(\ref{eq: J})-(\ref{eq:c6}) into (\ref{eq: FSSA}) we obtain
\begin{equation}
S_\nu=\frac{\pi}{6}\,\frac{c_5 \left( s \right)}{c_6 \left( s \right)} \left(2 c_1
\right)^{-5/2} \theta^2 B_\bot^{-1/2} \nu^{5/2} \left(\frac{\delta}{1+z} \right)^{1/2}
\left[ 1- \text{e}^{-\tau_\nu} \right],
\label{eq:Fnu}
\end{equation}
where $\tau_\nu=\left(\nu/\nu_1 \right)^{-\left(s+4 \right)/2}$.
Then the flux density at the turnover frequency $\nu_{\text{m}}$ extrapolated from
the straight-line slope of the optically thin part of the synchrotron spectrum is
\begin{multline}
S_\text{m}^\prime=\left.S_\nu \right|_{\nu=\nu_m} \text{e}^{\tau_\text{m}}= \\
=\left[\frac{\pi}{6} \frac{c_5 \left( s \right)}{c_6 \left( s \right)} \left( 2 c_1
\right)^{-5/2} (e^{\tau_\text{m}}-1) \right] \theta^2 B_\bot^{-1/2} \nu_\text{m}^{5/2} \left(\frac{\delta}{1+z} \right)^{1/2}\,.
\label{eq:S_extr}
\end{multline}
The optical depth $\tau_\text{m}$ at $\nu_\text{m}$ is found from the equation
\begin{equation}
\text{e}^{\tau_\text{m}}=1+\left(1-\frac{2\alpha}{5}\right)\tau_\text{m}\,.
\label{eq:tau}
\end{equation}
By expanding the exponential in Eq.~(\ref{eq:tau}) to third order,
\citet{Tuerler99} obtained the approximate solution
\begin{equation}
\tau_\text{m}=\frac{3}{2} \left(\sqrt{1-\frac{16 \alpha}{15}}-1 \right)\,,
\label{eq:Tuerler}
\end{equation}
which is sufficiently accurate, deviating by less than 5\% from the exact
numerical solution
for $\alpha \gtrsim -1.55$ (Fig.~\ref{f:b_alpha}).
Solving Eq.~(\ref{eq:S_extr}) for the perpendicular component of the magnetic field, we have
\begin{equation}
B_\bot=10^{-5}\,b \left(s \right)\, \theta^4\, S_\text{m}^{\prime\,-2}\, \nu_\text{m}^5 \left(\frac{\delta}{1+z} \right)\,,
\label{eq:B}
\end{equation}
where the source size $\theta$ is in milliarcseconds, the flux density in Jy,
the turnover frequency $\nu_\text{m}$ in GHz, and the magnetic field in G. The
coefficient $b$ is given by
\begin{equation}
b \left( s \right) = 5.5\cdot10^{62}\left[ \frac{\pi}{6} \frac{c_5 \left( s \right)}{c_6 \left( s \right)} (e^{\tau_\text{m}}-1) \right]^2 \left( 2 c_1\right)^{-5}\,.
\label{eq:b}
\end{equation}
In Fig.~\ref{f:b_alpha}, we plot the coefficient $b$ as a function of $\alpha=(1-s)/2$.
The departure of the approximate solution from the exact one exceeds 5\% for
$\alpha\lesssim-0.82$.
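For completeness, a minimal sketch of how $\tau_\text{m}$, and hence $b$, can be
evaluated exactly is given below; it solves Eq.~(\ref{eq:tau}) with a standard root
finder and compares the result with the approximation of Eq.~(\ref{eq:Tuerler}).
The $\alpha$ values in the loop are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def tau_m_exact(alpha):
    # non-trivial root (tau > 0) of  exp(tau) = 1 + (1 - 2*alpha/5) * tau
    f = lambda t: np.exp(t) - 1.0 - (1.0 - 0.4 * alpha) * t
    return brentq(f, 1e-6, 20.0)

def tau_m_approx(alpha):
    # third-order approximation of Tuerler et al. (1999)
    return 1.5 * (np.sqrt(1.0 - 16.0 * alpha / 15.0) - 1.0)

for alpha in (-0.25, -0.5, -0.75, -1.0):
    print(alpha, tau_m_exact(alpha), tau_m_approx(alpha))
\end{verbatim}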
\begin{figure}
\centering
\includegraphics[height=0.7\columnwidth,angle=-90]{figs/appendix/b_vs_alpha.ps}
\caption{Coefficient $b(\alpha)$ and optical depth $\tau_\text{m}(\alpha)$ as a function of
spectral index $\alpha$. Dashed lines represent approximate solutions derived
by using Eq.~\ref{eq:Tuerler}.}
\label{f:b_alpha}
\end{figure}
\section{Introduction}
The location of the $\gamma$-ray production zone in active galactic nuclei (AGN) is still an open
and actively debated question. Due to a limited angular resolution of the $\gamma$-ray telescopes
it is impossible to directly locate the region responsible for the high-energy emission in AGN.
A variety of approaches has been considered to address this problem, and our current understanding
is that the regions of $\gamma$-ray production may be at different locations in different sources
as evident from observations. One of the two main competing scenarios is based on the observed rapid
$\gamma$-ray variability on time scales of a few hours and suggests that the high-energy emission
from blazars is generated on sub-pc scales, near the central black hole \citep[e.g.,][]{Tavecchio10,Yan18}.
Similarly, the observed strength and variability of the absorption of the $\gamma$-ray emission in
the blazar 3C454.3 suggests the location of the $\gamma$-ray emitting zone within the broad-line
region \citep{Bai09,Poutanen10}. The second scenario, on the contrary, concludes that the dominant
population of $\gamma$-ray photons is produced at larger, parsec scales, at distances up to 10--20~pc
\citep{Marscher10,Agudo11,Schinzel12,Fuhrmann16,Karamanavis16}, and is based on a joint analysis
of data in the $\gamma$-ray and radio bands. \cite{FM2} and \cite{Pushkarev10} showed that variability
in $\gamma$-rays leads that of the 15~GHz radio core on timescales of up to a few months.
In this paper we are concerned with one particular AGN, the BL Lac object PKS~2233$-$148, which was
observed during and after a $\gamma$-ray flare registered in April 2010 by the {\it Fermi}-LAT.
The structure of the paper is as follows: in Section~\ref{s:obs} we describe our and archival
observational data and reduction schemes; in Section~\ref{s:results}, we discuss our results;
and our main conclusions are summarized in Section~\ref{s:summary}. We use the term ``core''
as the apparent origin of AGN jets that commonly appears as the brightest feature in VLBI
images of blazars \citep[e.g.,][]{Lobanov_98}. The spectral index $\alpha$ is defined as
$S_\nu\propto\nu^\alpha$, where $S_\nu$ is the observed flux density at frequency $\nu$.
All position angles are given in degrees east of north. We adopt a cosmology with
$\Omega_m=0.27$, $\Omega_\Lambda=0.73$ and $H_0=71$~km~s$^{-1}$~Mpc$^{-1}$ \citep{Komatsu09}.
\section{Observations and data processing}
\label{s:obs}
\subsection{Multi-epoch 4.6--43.2~GHz VLBA observations}
For the purposes of our study, we made use of data of the BL Lac object PKS 2233$-$148
observed (code S2087D) with the Very Long Baseline Array (VLBA) of the National Radio
Astronomy Observatory (NRAO) during four sessions at epochs 2010-05-15, 2010-06-25,
2010-08-01, and 2010-09-09. All ten VLBA antennas participated in each experiment.
The observations were performed in a full polarimetric mode simultaneously at C, X,
U, K and Q frequency bands, which correspond to 6, 4, 2, 1.3, and 0.7~cm wavelengths,
respectively (Table~\ref{t:freqs}). Each band was separated into four 8~MHz-wide
intermediate frequency channels (IFs) having 16 spectral channels per IF. The signal
was recorded with 2-bit sampling and total recording rate of 256~Mbps with analog base
band converter. The data were correlated at the NRAO VLBA Operations Center in Socorro
(New Mexico, USA) with an averaging time of 2~sec. We split the C and X bands into two sub-bands
(each of 16~MHz width) centered at 4608.5, 5003.5~MHz and 8108.5, 8429.5~MHz, respectively,
and in the subsequent analysis the data were processed independently. U, K and Q bands
were not split into sub-bands, resulting in 32~MHz band widths centered at 15365.5,
23804.5 and 43217.5~MHz. On-source time at each epoch was, in total, about 45~min at C
and X bands, 53~min at U and K bands, and 83~min at Q band split into 12 scans
distributed over 8 hours. The scans are scheduled over a number of different hour angles
to maximize the $(u,v)$ plane coverage. The increase of the on-source time with frequency
was scheduled with the aim of obtaining comparable image sensitivity at all bands.
\begin{table}
\centering
\caption{Frequency setup for the S2087D VLBA experiment.}
\label{t:freqs}
\begin{tabular}{c r r r r}
\hline\noalign{\smallskip}
Band & \multicolumn{4}{c}{Frequency channels} \\
& IF1 & IF2 & IF3 & IF4 \\
& (MHz) & (MHz) & (MHz) & (MHz) \\
\hline\noalign{\smallskip}
C & 4600.5 & 4608.5 & 4995.5 & 5003.5 \\
X & 8100.5 & 8108.5 & 8421.5 & 8429.5 \\
U & 15349.5 & 15357.5 & 15365.5 & 15373.5 \\
K & 23788.5 & 23796.5 & 23804.5 & 23812.5 \\
Q & 43201.5 & 43209.5 & 43217.5 & 43225.5 \\
\hline
\end{tabular}
\end{table}
The data reduction was performed with the NRAO Astronomical Image Processing System
\citep[{\sc aips},][]{AIPS} following the standard procedure. The individual IFs for
each frequency band were processed separately throughout the data reduction. The antenna
gain curves and system temperatures measured during the sessions were used for a
priori amplitude calibration. Global gain correction factors for each station for
each IF were derived from the results of self-calibration. We applied the significant
amplitude scale corrections listed in Table~\ref{t:gain_corr} by running the AIPS task
{\sc clcor}. The phase corrections for station-based residual delays and delay rates were
found and applied using the AIPS task {\sc fring} in two steps. First, the manual fringe
fitting was run on a short interval on a bright quasar 3C454.3 (2251+158) to determine
the relative instrumental phase and residual group delay for each individual IF.
Secondly, the global fringe fitting was run by specifying a point-like source model
and a signal-to-noise ratio cutoff of 5 to omit noisy solutions. The fringe-fit
solution interval was chosen to be 10, 4, 2, 1.5, and 1 minute for C, X, U, K, and
Q band, respectively. After fringe fitting, a complex bandpass calibration was made.
The estimated accuracy of the VLBA amplitude calibration in the 5--15~GHz frequency
range is of about 5\% and at 24--43~GHz of about 10\%
\citep[see also][]{2cmPaperIV,Sokolovsky11}.
\begin{table}
\centering
\caption{Amplitude scale corrections for the S2087D VLBA experiment.
The full table is available online.}
\label{t:gain_corr}
\begin{tabular}{c c c c c}
\hline\noalign{\smallskip}
Antenna & Band & Epoch & IF & Correction \\
(1) & (2) & (3) & (4) & (5) \\
\hline\noalign{\smallskip}
BR & K & 1 & 1--2 & 0.88 \\
BR & K & 1 & 3--4 & 0.85 \\
BR & K & 2,3,4 & 1--4 & 0.80 \\
FD & U & 1 & 1--4 & 1.09 \\
FD & Q & 1 & 1--4 & 1.15 \\
\hline
\end{tabular}
\end{table}
{\sc clean}ing \citep{CLEAN}, phase and amplitude self-calibration \citep{Jennison58,Twiss60},
and hybrid imaging \citep{Readhead80,Schwab80,Cornwell81} were performed in the Caltech
{\sc difmap} \citep{difmap} package. A point-source model was used as an initial model for the
iterative procedure. Final maps were produced by applying a natural weighting of the
visibility function. The spanned bandwidth of the IFs in each band is small ($<0.2$\% of
fractional bandwidth in all bands), thus no spectral correction technique was applied.
In this paper, we present results inferred from the total intensity images.
The polarization calibration and results will be published in a separate paper.
\subsection{Multi-epoch 15.4~GHz MOJAVE observations}
We also made use of the data at 15.4~GHz from the MOJAVE (Monitoring of Jets in Active
Galactic Nuclei With VLBA Experiments) program\footnote{\url{http://www.astro.purdue.edu/MOJAVE}}.
The data were obtained at eight more epochs at 15.4~GHz: 2009-12-26, 2010-06-19, 2010-12-24,
2011-09-12, 2012-05-24, 2012-07-12, 2012-12-10, and 2016-09-17. We used the fully calibrated
publicly available data. For a more detailed discussion of the data reduction and imaging
process schemes, see \cite{MOJAVE_XV}. The absolute flux density of the observations is
accurate within 5\% \citep{MOJAVE_I,MOJAVE_VIII}.
\begin{figure*}
\centering
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_C1_2010_05_15_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_C1_2010_06_25_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_C1_2010_08_01_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_C1_2010_09_09_pus_map.ps}\vspace{2mm}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_C2_2010_05_15_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_C2_2010_06_25_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_C2_2010_08_01_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_C2_2010_09_09_pus_map.ps}\vspace{2mm}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_X1_2010_05_15_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_X1_2010_06_25_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_X1_2010_08_01_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_X1_2010_09_09_pus_map.ps}\vspace{2mm}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_X2_2010_05_15_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_X2_2010_06_25_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_X2_2010_08_01_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_X2_2010_09_09_pus_map.ps}\vspace{2mm}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_U0_2010_05_15_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_U0_2010_06_25_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_U0_2010_08_01_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_U0_2010_09_09_pus_map.ps}\vspace{2mm}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_K0_2010_05_15_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_K0_2010_06_25_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_K0_2010_08_01_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_K0_2010_09_09_pus_map.ps}\vspace{2mm}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_Q0_2010_05_15_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_Q0_2010_06_25_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_Q0_2010_08_01_pus_map.ps}
\includegraphics[width=2.92cm,angle=-90]{figs/01/J2236-1433_Q0_2010_09_09_pus_map.ps}
\caption{Naturally weighted total intensity contour maps of PKS~2233$-$148 at four epochs during 2010
at 4.6, 5.0, 8.1, 8.4, 15.4, 23.8 and 43.2~GHz, with a cell size of 0.3, 0.3, 0.2, 0.2,
0.1, 0.06, 0.03 mas per pixel, respectively. The x and y axes are given in mas of
relative right ascension and relative declination, respectively. The contours are plotted at
increasing powers of $\sqrt{2}$ starting from 4~rms level. The full width at half maximum
(FWHM) of the restoring beam is shown as a shaded ellipse in the lower left corner.
Notice that the scales in the different images are different.
The image parameters are listed in Table~\ref{t:maps}.}
\label{f:maps}
\end{figure*}
\subsection{15~GHz OVRO observations}
We also used public data\footnote{\url{http://www.astro.caltech.edu/ovroblazars}} of
PKS~2233$-$148 observations performed within the Owens Valley Radio Observatory 40-m
Telescope monitoring program \citep{Richards11}. Observations are done at 15~GHz in
a 3~GHz bandwidth since 2008-10-23 to 2018-02-05 with a typical time sampling of about
four days. Details of the data reduction and calibration are given in \cite{Richards11}.
\subsection{Gamma-ray {\it Fermi}-LAT data}
The $\gamma$-ray light curve was generated from data obtained with the LAT \citep{LAT}
onboard the {\it Fermi} $\gamma$-ray space telescope between 2008-08-09 and 2016-10-17.
In the analysis, we used the {\it Fermi} ScienceTools software
package\footnote{\url{http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone}}
version v10r0p5 and Pass 8 data. In generation of the light curve, we first selected
all photons between 100 MeV and 300 GeV within a $10\degr$ region of interest (ROI)
around the source. In the event selection and analysis we followed the recommendations
for Pass 8 data, given by the LAT
team\footnote{\url{http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Pass8\_usage.html}}.
The photon flux over each 7-day bin was calculated using the tool {\it gtlike} with
instrument response function version P8R2\_SOURCE\_V6. The source model was generated
using the external tool make3FGLxml.py version 01 by selecting all sources within $20\degr$
of the target in the 3FGL catalogue \citep{3FGL}, and including also the Galactic diffuse
emission model version {\it gll\_iem\_v06} and isotropic diffuse emission model version
{\it iso\_source\_v06}. Based on the 3FGL catalogue, the target was modeled with a log-parabola
spectrum, defined as $dN/dE=N_0(E/E_b)^{-(\alpha+\beta\log(E/E_b))}$. To account for the low
number of photons in each weekly bin and to reduce the number of free parameters in the fit,
we froze the spectral parameters of the target and all other sources in the model to the
values reported in 3FGL. For the target the 3FGL values are $\alpha=2.04$, $\beta=0.09$,
and $E_b=581.68$. Additionally, if the source was beyond the $10\degr$ ROI or had a test
statistic (TS) value \citep[e.g.,][]{Mattox96} less than five in 3FGL, we also froze the
flux to the value reported in 3FGL. If the TS of the bin was less than four (corresponding
to about 2$\sigma$) or if the number of predicted photons in that bin was
less than 10, we calculated a 95\% upper limit of the photon flux \citep{Abdo11}.
\section{Results}
\label{s:results}
\subsection{Parsec-scale jet structure}
Final naturally weighted VLBA maps of the source brightness distribution at the seven
frequencies at each of the four observing epochs are presented in Fig.~\ref{f:maps}.
The source shows a typical parsec-scale AGN morphology of a bright compact core and a
one-sided jet, which propagates towards the east and is detected up to a distance of
about 2~mas at 43~GHz and progressively farther, up to 8~mas, at the lower frequencies
(4.6, 5.0~GHz) due to the steep spectrum of the jet emission (see a more detailed discussion
in Section~\ref{s:sp_ind}). At frequencies of 8~GHz and higher the outer jet regions
are transversely resolved. The lower-frequency images show faint emission beyond
the core, most probably caused by uncompensated side-lobes due to the low declination
of the source. The images at 8~GHz are the most sensitive, with a typical noise level of
about 0.16~mJy beam$^{-1}$ and a dynamic range of the order of 3000, determined as a
ratio of the peak flux density to the rms noise level. The noise level was calculated
as the average of rms estimates in three corner quadrants of the image, each of 1/16
of the map size. The fourth quadrant, with the maximum rms, was excluded as it is affected by
the source structure.
In Table~\ref{t:maps}, we summarize the VLBA map parameters.
\begin{table*}
\caption{Summary of image parameters. Columns are as follows:
(1) epoch of observations,
(2) central observing frequency,
(3) I peak of image,
(4) rms noise level of image,
(5) theoretical thermal noise estimate,
(6) bottom I contour level,
(7) dynamic range of image,
(8) total flux density from map,
(9) FWHM major axis of restoring beam,
(10) FWHM minor axis of restoring beam,
(11) position angle of major axis of restoring beam.
The full table is available online.
}
\label{t:maps}
\begin{tabular}{c c c c c c c c c c r}
\hline\noalign{\smallskip}
Epoch & Freq. & $I_\textrm{peak}$ & $I_\textrm{rms}$ & Thermal noise & $I_\textrm{base}$ & DR &$S_\textrm{VLBA}$ & $B_\textrm{maj}$ & $B_\textrm{min}$ & $B_\textrm{PA}$ \\
& (GHz) & (mJy bm$^{-1}$) & (mJy bm$^{-1}$) & (mJy bm$^{-1}$) & (mJy bm$^{-1}$) & & (mJy) & (mas) & (mas) & (deg) \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\
\hline\noalign{\smallskip}
2010--05--15 & 4.608 & 335 & 0.19 & 0.11 & 0.76 & 1756 & 505 & 4.40 & 1.74 & $-$2.0 \\
2010--06--25 & 4.608 & 408 & 0.15 & 0.11 & 0.61 & 2676 & 569 & 4.97 & 1.87 & $-$7.6 \\
2010--08--01 & 4.608 & 382 & 0.17 & 0.11 & 0.68 & 2235 & 538 & 4.54 & 1.80 & $-$3.5 \\
2010--09--09 & 4.608 & 357 & 0.21 & 0.11 & 0.83 & 1723 & 510 & 4.50 & 1.79 & $-$2.0 \\
2010--05--15 & 5.003 & 350 & 0.18 & 0.15 & 0.71 & 1979 & 519 & 4.14 & 1.65 & $-$3.3 \\
2010--06--25 & 5.003 & 413 & 0.15 & 0.15 & 0.60 & 2737 & 570 & 4.74 & 1.76 & $-$8.1 \\
2010--08--01 & 5.003 & 371 & 0.30 & 0.15 & 1.19 & 1243 & 542 & 3.51 & 1.39 & $-$2.2 \\
\hline
\end{tabular}
\end{table*}
Structure modeling of the source brightness distribution was performed with the procedure
{\it modelfit} in the {\sc difmap} package by fitting several circular Gaussian components to
the calibrated visibility data and minimizing $\chi^2$ in the spatial frequency
plane. We used a minimum number of components (three at lower and four at higher
frequencies) that after being convolved with the restoring beam, adequately reproduce
the constructed source morphology. The obtained source models are listed in
Table~\ref{t:models} and provide flux densities, positions, and sizes of the fitted
components. All the positions are given with respect to the core component.
\begin{table*}
\caption{Source models. Columns are as follows:
(1) observation date,
(2) name of the component,
(3) flux density of the fitted Gaussian component,
(4) position offset from the core component,
(5) position angle of the component with respect to the core component,
(6) FWHM of the fitted circular Gaussian,
(7) SNR of the fitted Gaussian.
The full table is available online.
}
\label{t:models}
\begin{tabular}{c c c c c c r}
\hline\noalign{\smallskip}
Date & Comp. & Flux density & Distance & P.A. & Size & SNR \\
& & (Jy) & (mas) & (deg) & (mas) & \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) \\
\hline\noalign{\smallskip}
\multicolumn{7}{c}{4.6 GHz}\\
\hline
2010--05--15 & Core & $0.304\pm0.019$ & $0.000\phantom{\,\pm\,0.000}$ & \ldots & $0.322\pm0.014$ & 535 \\
& J2 & $0.127\pm0.012$ & $1.454\pm0.036$ & $113.7\pm 1.4$ & $1.009\pm0.070$ & 207 \\
& J1 & $0.068\pm0.015$ & $5.049\pm0.515$ & $100.5\pm 5.8$ & $4.884\pm1.030$ & 23 \\
2010--06--25 & Core & $0.363\pm0.025$ & $0.000\phantom{\,\pm\,0.000}$ & \ldots & $0.333\pm0.016$ & 428 \\
& J2 & $0.127\pm0.015$ & $1.408\pm0.048$ & $114.2\pm 1.9$ & $1.092\pm0.094$ & 134 \\
& J1 & $0.068\pm0.016$ & $5.012\pm0.557$ & $103.0\pm 6.3$ & $4.926\pm1.114$ & 20 \\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\centering
\includegraphics[height=\columnwidth,angle=-90]{figs/02/gamma_radio_lc.ps}
\caption{OVRO 15~GHz flux density evolution (top panel). Grey rectangles together with red
stars indicate VLBA total flux density at 15.4 GHz, observed within our campaign S2087D
and the MOJAVE program. {\it Fermi} weekly-binned $\gamma$-ray light curve at 0.1--300~GeV
(bottom panel). Upper limits are given by blue arrows. Dotted vertical lines indicate the
epochs of the multi-frequency VLBA observations.
}
\label{f:lc}
\end{figure}
\subsection{Radio and $\gamma$-ray light curves}
In Fig.~\ref{f:lc}, we present light curves of PKS~2233$-$148 based on the {\it Fermi}-LAT and OVRO
monitoring data, complemented also by measurements of the MOJAVE program and our VLBA observations
at 15~GHz. Prominent variability at high energies detected during April and June 2010 triggered
the four-epoch VLBA multi-frequency campaign. The values of the correlated VLBA total flux density
are in good agreement with the single-dish OVRO flux density measurements, implying that there is
almost no extended emission on kpc scales, as it was also previously concluded by \cite{Drinkwater97}.
We performed a cross-correlation analysis of the light curves using the z-transformed discrete
correlation function \citep{zDCF}, specifically developed for sparse, unevenly sampled light curves.
The correlation between the radio and $\gamma$-ray light curves with and without upper limits is
insignificant, suggesting that the $\gamma$-ray production region in the source might have a complex
structure. We discuss it in more details in Section~\ref{s:gr_site}.
\begin{figure}
\centering
\includegraphics[height=\columnwidth,angle=-90]{figs/03/coreshifts.ps}
\caption{Core shift vectors measured in all frequency pairs.
Typical error is 0.045~mas.
Shaded grey area encompasses 68\% of the vectors deviating less than $10\degr$
from the median jet direction shown by a dotted line.}
\label{f:coreshifts}
\end{figure}
\subsection{Core shifts}
\label{s:coreshifts}
The VLBI core is believed to represent the apparent jet starting region, located at a distance
$r_\text{core}$ from the central engine at which the optical depth reaches $\tau_\nu\approx1$ at
a given frequency. Thus, due to nuclear opacity, the absolute position of the radio core is
frequency-dependent and varies as $r_\text{core}\propto\nu^{-1/k_\text{r}}$ \citep{BK79,Koenigl81}.
There is growing observational evidence from recent multi-frequency studies of the core shift
effect for $k_\text{r}\approx1$ \citep[e.g.,][]{Sullivan09,Fromm10,Sokolovsky11,Hada11_M87,Kravchenko16,Lisakov17}.
This is consistent with the \cite{BK79} model of a synchrotron self-absorbed conical jet in
equipartition between energy densities of the magnetic field and the radiating particles.
Departures in $k_\text{r}$ from unity are also possible \citep{Plavin18_cs,Kutkin14}
and can be caused by pressure and density gradients in the jet or by external absorption from the
surrounding medium \citep{Lobanov_98,Kadler04}.
\begin{figure*}
\centering
\includegraphics[width=4.56cm,angle=-90]{figs/04/coreshifts_da.ps}
\includegraphics[width=4.56cm,angle=-90]{figs/04/coreshifts_db.ps}
\includegraphics[width=4.56cm,angle=-90]{figs/04/coreshifts_dc.ps}
\includegraphics[width=4.56cm,angle=-90]{figs/04/coreshifts_dd.ps}
\caption{Frequency dependence of core shifts measured relative to the core position at
43~GHz (23~GHz for the epoch 2010-08-01) for all observational multi-frequency epochs.
Solid lines represent the best power-law fits.
Shaded areas show $1\sigma$ confidence regions of the fit.}
\label{f:cs_vs_nu}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=4.56cm,angle=-90]{figs/05/cs_vs_delta_lambda_da.ps}
\includegraphics[width=4.56cm,angle=-90]{figs/05/cs_vs_delta_lambda_db.ps}
\includegraphics[width=4.56cm,angle=-90]{figs/05/cs_vs_delta_lambda_dc.ps}
\includegraphics[width=4.56cm,angle=-90]{figs/05/cs_vs_delta_lambda_dd.ps}
\caption{Core shifts as a function of difference of observing wavelengths. Solid lines
represent the best linear fits. Shaded areas show $1\sigma$ confidence regions
of the fit. Stars denote the expected core shifts from the true jet origin
($\lambda_1=0$) at wavelengths of our observations, 0.7, 1.3, 2.0, 3.6, 3.7,
6.0, 6.5~cm.}
\label{f:cs_vs_dl}
\end{figure*}
We calculated the core shift vector as
$\bmath{\Delta r}_{\text{core},\nu_1\nu_2} = \bmath{\Delta r}_{12} - (\bmath{r}_1 - \bmath{r}_2)$,
where $\bmath{\Delta r}_{12}$ is the displacement of the phase centers of the images at different
frequencies, and $\bmath{r}_1$, $\bmath{r}_2$ are VLBI core position offsets from the phase center.
To derive the image shift vector $\bmath{\Delta r}_{12}$, we used the fast normalized
cross-correlation algorithm \citep{Lewis95} to align the images to the same position on the sky,
selecting the jet regions of optically thin emission and assuming that their positions are
achromatic. Every pair of images was restored with the average beam size using a pixel size of
0.03~mas.
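A simplified stand-in for this alignment step is sketched below; it uses a plain
FFT-based cross-correlation rather than the normalized algorithm of \cite{Lewis95},
and it does not implement the restriction to optically thin jet regions used in the
actual analysis.
\begin{verbatim}
import numpy as np

def image_shift(img1, img2):
    # Peak of the circular cross-correlation between two equally sized images;
    # the returned integer pixel shift must still be converted to mas with the
    # adopted pixel size (0.03 mas) and its sign fixed by the chosen reference.
    f1 = np.fft.fft2(img1 - img1.mean())
    f2 = np.fft.fft2(img2 - img2.mean())
    cc = np.fft.ifft2(f1 * np.conj(f2)).real
    iy, ix = np.unravel_index(np.argmax(cc), cc.shape)
    ny, nx = cc.shape
    dy = iy if iy <= ny // 2 else iy - ny     # wrap indices into signed shifts
    dx = ix if ix <= nx // 2 else ix - nx
    return dy, dx
\end{verbatim}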
In Fig.~\ref{f:coreshifts}, we present a plot of 65 derived core shift vectors, where the head
of each vector represents the core position at lower frequency, while all core positions at higher
frequency are placed at the origin. The dotted line corresponds to the median jet
direction of $\text{P.A.}=112\degr$. The core shift effect occurs predominantly along the jet
direction. In 68\% of cases, the core shift vectors deviate less than $10\degr$ from the median
jet position angle. This good alignment is achievable for a relatively straight jet, without
substantial curvature in the core region. Assuming that the core shift takes place along the jet
and that the errors are random in direction, the standard deviation of the projections of
the core shift vectors onto the direction transverse to the jet yields a typical error of 45~$\mu$as. Thus, in
90\% of cases the derived core shifts are significantly ($>2\sigma$) different from zero. In
Table~\ref{t:coreshifts} we list the core shift measurements: (1) epoch of observations, (2) a pair
of frequencies, (3) core shift magnitude, (4) core shift direction, (5) difference of observing
wavelengths.
\begin{table}
\centering
\caption{Core shift vectors measured for the frequency pairs $\nu_1$ and $\nu_2$.
The full table is available online.}
\label{t:coreshifts}
\begin{tabular}{c c c r c}
\hline\noalign{\smallskip}
Epoch & $\nu_1$ ~$\nu_2$ & $\Delta r_{\text{core},\nu_1\nu_2}$ & P.A.& $\lambda_2-\lambda_1$ \\
& (GHz) & (mas) &(deg)& (cm) \\
(1) & (2) & (3) & (4) & (5) \\
\hline\noalign{\smallskip}
2010--05--15 & 43.2~ 23.8 & 0.038 & 73 & 0.566 \\
2010--05--15 & 43.2~ 15.4 & 0.088 & 82 & 1.258 \\
2010--05--15 & 43.2~ \,~8.4 & 0.278 & 106 & 2.865 \\
2010--05--15 & 43.2~ \,~8.1 & 0.244 & 95 & 3.006 \\
2010--05--15 & 43.2~ \,~5.0 & 0.539 & 96 & 5.302 \\
2010--05--15 & 43.2~ \,~4.6 & 0.615 & 96 & 5.816 \\
2010--05--15 & 23.8~ 15.4 & 0.078 & 138 & 0.692 \\
2010--05--15 & 23.8~ \,~8.4 & 0.220 & 113 & 2.299 \\
\hline
\end{tabular}
\end{table}
We have studied the frequency dependence of the core shifts (Fig.~\ref{f:cs_vs_nu}) by
fitting a function $\Delta r_\text{core} = b(\nu^{-1/k_\text{r}}-\nu_\text{max}^{-1/k_\text{r}})$,
where $b$ and $k_\text{r}$ are fitted parameters, and $\nu_\text{max}$ is fixed to the maximum
frequency to which the core shifts were measured (43~GHz for the epochs 2010-05-15, 2010-06-25,
and 2010-09-09; 23~GHz for the epoch 2010-08-01, at which we could not reliably measure core
shift with respect to the core position at 43~GHz). The fitted $k_\text{r}$ values are smaller
than but close to one and do not differ from it significantly. This can hold even during an outburst:
as discussed in \cite{Plavin18_cs}, a flare propagating down the jet disturbs only a
limited portion of it, thus making $k_\text{r}$ deviate from unity only in a limited frequency range,
significantly narrower than that of our observations. Therefore, for further analysis
we use $k_\text{r}=1$. In this case, $r_\text{core}\propto\lambda$. Following the approach
proposed in \cite{Voitsik18}, in Fig.~\ref{f:cs_vs_dl}, we plot the measured core shifts
against the difference in observing wavelengths (Table~\ref{t:coreshifts}) and fit the dependence
by a straight line, from which one can also estimate an offset of the apparent jet base at a
given wavelength from the true jet origin by setting $\lambda_1=0$.
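A sketch of this fitting step is given below; the input arrays reproduce the
2010-05-15 core shifts of Table~\ref{t:coreshifts} measured relative to 43.2~GHz,
while the starting values and the restriction to a single epoch are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

nu = np.array([23.8, 15.4, 8.4, 8.1, 5.0, 4.6])              # GHz
dr = np.array([0.038, 0.088, 0.278, 0.244, 0.539, 0.615])    # mas, w.r.t. 43.2 GHz
nu_max = 43.2

def core_shift(nu, b, k_r):
    # dr_core = b * (nu^(-1/k_r) - nu_max^(-1/k_r))
    return b * (nu ** (-1.0 / k_r) - nu_max ** (-1.0 / k_r))

(b_fit, kr_fit), cov = curve_fit(core_shift, nu, dr, p0=[3.0, 1.0])

# with k_r fixed to 1, r_core is proportional to wavelength, so the shifts are
# linear in the wavelength difference: dr = a(t) * (lambda_2 - lambda_1)
lam = 29.9792458 / nu                         # cm
lam_max = 29.9792458 / nu_max
a_fit = np.polyfit(lam - lam_max, dr, 1)[0]   # slope a(t), mas per cm
\end{verbatim}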
\subsection{Jet shape}
\label{s:jet_shape}
The core shift measurements allow us to perform a jet geometry analysis for the whole set of
the fitted components including the cores. In Fig.~\ref{f:jet_shape}, we plot the transverse
jet widths $d$ as the FWHM of the fitted Gaussian components at all four multi-frequency
VLBA epochs (Table~\ref{t:models}) or the corresponding resolution limits \citep{2cmPaperIV}
whichever is larger, as a function of their distance $r$ from the true jet
base taking into account the core shift effect. The respective shifts
$\Delta r_\text{core} = a(t)\lambda$ were added to the fitted core separations, where $a(t)$
is the fitted slope at a corresponding observational epoch (Fig.~\ref{f:cs_vs_dl}). From
this analysis we excluded 16 weak components with $\text{SNR}<15$ to reduce the influence
of low-SNR data points, though the whole set of 96 components yields qualitatively similar
result. The BL~Lac object PKS~2233$-$148 shows a conical streamline, with
$d\propto r^{1.01\pm0.04}$ at scales probed by the multi-frequency VLBA observations down
to 0.1~mas. This is consistent with $k_\text{r}\approx1$ derived from the core shift
analysis.
The apparent jet opening angle of the source is $38\degr\pm3\degr$, as reported by
\cite{MOJAVE_XIV}, who measured it from a stacked total intensity image at 15.4~GHz as a
result of combining VLBA maps from 11 epochs distributed over a time range of about 3~yr,
2009 through 2012. The wide opening angle suggests that the jet viewing angle is
rather small, of the order of a few degrees.
\begin{figure}
\centering
\includegraphics[height=\columnwidth,angle=-90]{figs/06/d_vs_r_log.ps}
\caption{Jet width versus distance to the jet vertex for 78 components from structure
model fits at seven frequencies of four epochs. Cores are marked by larger
symbols. Dashed line is the best fit from the least square method. The
jet shape at scales probed by our multi-frequency VLBA observations is conical.}
\label{f:jet_shape}
\end{figure}
\subsection{Redshift constraint from the jet geometry}
The optical spectrum of the source shows no prominent emission lines \citep{Drinkwater97}
and its redshift is still unknown but the inferred conical jet shape indicates that this
BL Lac object is not too close, and likely located at a redshift exceeding $\sim0.1$.
Otherwise, our VLBA observations would be sensitive enough to reveal a jet geometry
transition from parabolic to conical shape, as we detect this transition in a number of
nearby ($z\lesssim0.1$) sources and explain it by a transition from magnetically to
particle dominated regime in the outflows (Kovalev et al. 2018, in prep.). More distant
AGNs ($z\gtrsim0.1$) typically show close to conical jet streamlines \citep{MOJAVE_XIV}
since the scales probed by VLBI observations are beyond the shape transition region.
\subsection{Spectral index distribution}
\label{s:sp_ind}
The procedure of image registration by means of 2D cross-correlation described in
Section~\ref{s:coreshifts} allows us to align the images at different frequencies and
to accurately reconstruct the distribution of the spectral index $\alpha$ over the source
structure. As an example, in Fig.~\ref{f:alpha_map} we present the spectral index map of
PKS~2233$-$148 calculated between 4.6 and 23.8~GHz at the epoch 2010-09-09 of our
multi-frequency VLBA observations. It shows that the core is partially opaque, with
a spectral index of about 0.3, while the outer jet regions are optically thin, with the
median value of $\alpha_\text{jet}=-0.95$, which is typical for many other parsec-scale
AGN jets \citep[e.g.,][]{RDV_paper,MOJAVE_XI}. The spectral index error map shows
higher $\alpha$ accuracy in the innermost jet area and progressively larger
uncertainties towards regions of lower brightness, where random errors arising
from the image noise dominate. Systematic errors (from image alignment) dominate
in the core area, especially behind it. The same result was obtained statistically for
a large sample of sources in our earlier paper \citep[see Appendix B in][]{MOJAVE_XI}.
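Once the images at two frequencies are aligned (Section~\ref{s:coreshifts}) and
restored with a common beam, the two-point spectral index map reduces to the
following sketch; the blanking threshold is an illustrative choice, and the error
and ridgeline analysis are not reproduced here.
\begin{verbatim}
import numpy as np

def spectral_index_map(I1, I2, nu1, nu2, rms1, rms2, cut=4.0):
    # alpha = ln(I1/I2) / ln(nu1/nu2) per pixel (S_nu ~ nu^alpha convention),
    # blanked where either image falls below `cut` times its rms noise level.
    alpha = np.full(I1.shape, np.nan)
    good = (I1 > cut * rms1) & (I2 > cut * rms2)
    alpha[good] = np.log(I1[good] / I2[good]) / np.log(nu1 / nu2)
    return alpha
\end{verbatim}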
\begin{figure}
\centering
\includegraphics[height=\columnwidth,angle=-90]{figs/07/J2236-1433_K0_2010_09_09_but_sp_combined_error.ps}
\caption{ The distribution of the spectral index in PKS~2233$-$148 is shown in the top left
panel at the epoch 2010-09-09 calculated between 4.6 and 23.8~GHz; it is shown in
colour, with the 23.8~GHz total-intensity contours overlaid. The contours are plotted
at increasing powers of 2, starting from 0.35\% of the peak brightness of
303~mJy~beam$^{-1}$. White curve denotes the total intensity ridgeline.
The restoring beam is depicted, as a shaded ellipse in the lower left corner.
Spectral index $1\sigma$ error map is shown in the top right panel. The bottom
left and right panels show total intensity and spectral index profiles along
the ridgeline, respectively. The vertical dashed line indicates the edge of the
convolved VLBI core along the inner jet direction, while the horizontal dashed
line represents the median jet spectral index. The grey area shows $1\sigma$
errors on the spectral index.}
\label{f:alpha_map}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=\columnwidth,angle=-90]{figs/08/sp_ind_downstream.ps}
\caption{
Spectral index evolution along the ridgeline for $\alpha$-maps restored
at all frequency pairs except the 43~GHz data.
The grey bars show $1\sigma$ errors.
}
\label{f:alpha_downstream}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=4.7cm,angle=-90]{figs/09/J2236-1433_2010-05-15_spectrum_fit_core.ps}
\includegraphics[width=4.7cm,angle=-90]{figs/09/J2236-1433_2010-06-25_spectrum_fit_core.ps}
\includegraphics[width=4.7cm,angle=-90]{figs/09/J2236-1433_2010-08-01_spectrum_fit_core.ps}
\includegraphics[width=4.7cm,angle=-90]{figs/09/J2236-1433_2010-09-09_spectrum_fit_core.ps}\vspace{0.1cm}
\includegraphics[width=4.7cm,angle=-90]{figs/09/J2236-1433_2010-05-15_spectrum_fit_jet.ps}
\includegraphics[width=4.7cm,angle=-90]{figs/09/J2236-1433_2010-06-25_spectrum_fit_jet.ps}
\includegraphics[width=4.7cm,angle=-90]{figs/09/J2236-1433_2010-08-01_spectrum_fit_jet.ps}
\includegraphics[width=4.7cm,angle=-90]{figs/09/J2236-1433_2010-09-09_spectrum_fit_jet.ps}
\caption{Spectral fits to the core (top) and jet feature J2 (bottom) data. Solid lines represent
the spectra derived from the homogeneous synchrotron source model. Dashed lines show a
simple power-law model. Best fit parameters of models are shown on each plot.}
\label{f:spectra}
\end{figure*}
To analyze how the spectral index changes
along the jet, we reconstructed the ridgeline of the outflow in total intensity using
the procedure described in \cite{MOJAVE_XIV}. As seen from Fig.~\ref{f:alpha_map} (bottom
right), the spectral index along the ridgeline slightly flattens in the jet knots,
indicating reacceleration of the emitting particles, while between them it is steeper.
Similar behaviour was found to be typical in AGN jets \citep{RDV_paper,MOJAVE_XI}.
The evolution of the spectral index along the ridgeline for all frequency pairs (except
the Q-band data) at all four epochs is presented in Fig.~\ref{f:alpha_downstream}.
Beyond the core region, the spectral index fluctuates around a value of about $-1$.
The spectral index of optically thin synchrotron radiation parametrizes the energy
spectrum of the relativistic radiating particles. Assuming a power-law energy distribution
$N(E)=N_0E^{-s}$, the power index $s=1-2\alpha$ has a mean value of $\sim3.0$.
The evolution of $\alpha_\text{jet}$ down the jet shows no effect of spectral aging
(steepening downstream) owing to radiative losses of relativistic electrons
\citep{Kardashev62}, which is often seen in AGN jets on parsec-scales \citep{RDV_paper}.
In the case of PKS~2233$-$148, the absence of this effect is likely caused by the dominance
of a few quasi-stationary jet features, which could be standing shocks that effectively
accelerate the emitting particles.
\subsection{Synchrotron spectrum fitting and magnetic field estimates}
\label{s:sp_fits}
For spectral fitting of the core component (Table~\ref{t:models}), we use the standard
spectrum of a homogeneous incoherent synchrotron source of relativistic plasma with a
power-law energy distribution of the form $N(E)\propto E^{-s}$ \citep{Pacholczyk70}
\begin{equation}
S_\nu\propto\nu^{5/2}\left(1-\exp\left[-\left(\frac{\nu_1}{\nu}\right)^{5/2-\alpha}\right]\right)\,,
\end{equation}
where $\nu_1$ is the frequency at which the optical depth is $\tau=1$. The fitted spectra of
the core are presented in Fig.~\ref{f:spectra} (top). Best fit parameters, namely the optically
thin spectral index $\alpha=(1-s)/2$, the peak flux density $S_\text{m}$ and the corresponding
self-absorption turnover frequency $\nu_\text{m}$ are given for every spectrum. The core
component shows a spectral turnover within the frequency range of our VLBA observations.
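A hedged sketch of this fitting step is given below; the flux densities in the
example are placeholders rather than the measured core values, the overall
normalization is treated as a free parameter, and the turnover
$(\nu_\text{m}, S_\text{m})$ is read off the fitted model on a fine frequency grid.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def ssa_spectrum(nu, S1, nu1, alpha):
    # homogeneous synchrotron self-absorbed spectrum:
    # S_nu = S1 * nu^(5/2) * (1 - exp(-(nu1/nu)^(5/2 - alpha)))
    return S1 * nu ** 2.5 * (1.0 - np.exp(-(nu1 / nu) ** (2.5 - alpha)))

nu = np.array([4.6, 5.0, 8.1, 8.4, 15.4, 23.8, 43.2])     # GHz
S = np.array([0.30, 0.32, 0.42, 0.43, 0.52, 0.50, 0.40])  # Jy, illustrative only

popt, pcov = curve_fit(ssa_spectrum, nu, S, p0=[1e-2, 10.0, -0.7],
                       bounds=([0.0, 0.1, -3.0], [np.inf, 100.0, 1.0]))
grid = np.logspace(np.log10(nu.min()), np.log10(nu.max()), 1000)
model = ssa_spectrum(grid, *popt)
nu_m, S_m = grid[np.argmax(model)], model.max()   # turnover frequency and peak flux
\end{verbatim}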
With the fitted parameters $\nu_\text{m}$, $S_\text{m}$, and $\alpha$ of the synchrotron
spectrum, we can estimate the magnetic field $B$ within the source adopting the
standard synchrotron theory and assuming that the emission region is uniform and spherical.
Then the commonly used expression of the component of the magnetic field perpendicular to
the line of sight is (e.g. \cite{Marscher83}; see Appendix~\ref{a:mf_turnover} for more details)
\begin{equation}
B=10^{-5}\,b(\alpha)\,\theta_\text{m}^4\,\nu_\text{m}^5 S_\text{m}^{\prime\,-2}\,\left(\frac{\delta}{1+z}\right)\quad[G],
\label{eq:b_s}
\end{equation}
where $\delta$ is the Doppler factor, $z$ is the redshift, $\theta_\text{m}$ is the diameter of
the spherical component at the turnover frequency, $S_\text{m}^\prime$ is the flux density at
$\nu_\text{m}$ extrapolated from the straight-line optically thin slope, and $b(\alpha)$
(Fig.~\ref{f:b_alpha}) is a function of spectral index $\alpha$, optical depth $\tau_\text{m}$
at $\nu_\text{m}$, physical constants and a conversion factor, which allows one to express
$\nu_\text{m}$ in GHz, angular size $\theta_\text{m}$ in mas, and $S^\prime_m$ in Jy. To derive
$\theta_\text{m}$ we applied logarithmic interpolation between the measured component sizes at
different frequencies (Table~\ref{t:models}) and multiplied the result by a correction factor of
1.8 \citep{Marscher87} to take into account that the emission feature is assumed to have a spherical
shape, while performing model fitting we measure the FWHM of the circular Gaussian components.
The flux density $S_\text{m}^\prime$ can be calculated using the following relation:
\begin{equation}
S_\text{m}^\prime=S_\text{m}e^{\tau_\text{m}}\,,
\end{equation}
where the optical depth at the turnover can be derived numerically from the equation
$\exp(\tau_\text{m})=1+\tau_\text{m}(1-2\alpha/5)$
or approximated as \citep{Tuerler99}
\begin{equation}
\tau=\frac{3}{2}\left(\sqrt{1-\frac{16\alpha}{15}}-1\right)\,.
\end{equation}
The mean value of the magnetic field inferred from Eq.~(\ref{eq:b_s}) for the apparent core
at the turnover frequency is $(0.03\pm0.01)\delta/(1+z)$~G.
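Putting Eq.~(\ref{eq:b_s}) into numbers amounts to the short sketch below; the
coefficient $b(\alpha)$ is taken as an external input (Appendix~\ref{a:mf_turnover},
Fig.~\ref{f:b_alpha}), and all numerical values in the example call are
placeholders, not the fitted parameters of Fig.~\ref{f:spectra}.
\begin{verbatim}
import numpy as np

def tau_m(alpha):
    # approximate optical depth at the turnover (Tuerler et al. 1999)
    return 1.5 * (np.sqrt(1.0 - 16.0 * alpha / 15.0) - 1.0)

def B_perp(b_alpha, theta_m, nu_m, S_m, alpha, delta_over_1pz=1.0):
    # Eq. (b_s): theta_m in mas, nu_m in GHz, S_m in Jy, result in G;
    # S'_m is the flux density extrapolated from the optically thin slope.
    S_m_prime = S_m * np.exp(tau_m(alpha))
    return 1e-5 * b_alpha * theta_m ** 4 * nu_m ** 5 * S_m_prime ** (-2) * delta_over_1pz

# placeholder values for b(alpha) and the fitted core parameters
print(B_perp(b_alpha=3.0, theta_m=0.1, nu_m=15.0, S_m=0.5, alpha=-0.7))
\end{verbatim}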
The only jet component detected at all the frequencies is J2 (see Fig.~\ref{f:jet_shape} and
Table~\ref{t:models}). It is located at a distance of about 2~mas from the core at 43~GHz.
The spectrum of this jet feature J2 was fitted, in addition to the synchrotron model, by a simple
power law (Fig.~\ref{f:spectra}, bottom). The spectrum of J2 is steep, with a spectral index
gradually decreasing from about $-1$ to $-1.6$, while the turnover frequency slightly
increases from $4.9\pm0.4$~GHz to $5.9\pm0.3$~GHz.
\subsection{Evolution of the turnover frequency and source kinematics}
The self-absorption turnover frequency derived from the core spectra (Fig.~\ref{f:spectra},
top) gradually decreases from $16.8\pm5.3$~GHz on May 15, 2010 to $6.4\pm1.1$~GHz on September
9, 2010 following inverse proportionality to time ($\nu_\text{m}\propto t^{-1}$), as predicted
by a model of a conical jet with constant plasma speed \citep{Blandford90}. We interpret these
changes as direct observational evidence of a flare propagating downstream. Due to synchrotron
opacity in the nuclear region, the flare developing along the jet becomes detectable at progressively
larger distances from the true jet origin corresponding to the VLBI core locations $r_\text{core}$
at longer wavelengths. After the disturbance has crossed the apparent core ($\tau\approx1$ zone) at a
given frequency, its flux density starts to decrease, resulting in a steepening of the core spectrum
(Fig.~\ref{f:spectra}, top) due to energy losses to synchrotron radiation or Compton scattering
\citep{Kardashev62,MG85}. The core offset from the jet apex can be calculated as
$r_\text{core}=r_\text{flare}=a(t)\lambda_\text{m}(t)$, where the parameter $a(t)$ was derived
from the core shift analysis for each epoch of the multi-frequency VLBA observations
(Fig.~\ref{f:cs_vs_dl}).
In Fig.~\ref{f:flare_prop}, we show $r_\text{flare}(\lambda_\text{m})$ as a function of time.
The slope of the weighted linear fit is $1.17\pm0.10$~mas~yr$^{-1}$. The derived proper motion
of the flare propagation is significantly higher than that inferred from kinematics analysis
based on tracing bright jet components. The two jet knots, J2 and J3, of the source studied within
the MOJAVE program at 15~GHz are slow pattern features, with angular speeds of $-71\pm35$~$\mu$as~yr$^{-1}$
(apparent inward motion) and $45\pm41$~$\mu$as~yr$^{-1}$ \citep{MOJAVE_XIII}, respectively. These components
are quasi-stationary and can be standing recollimation shocks observed in sources with
super-magnetosonic jets \citep[e.g.,][]{Asada12,Cohen14} and also obtained in numerical
two-dimensional relativistic (magneto)-hydrodynamic simulations \citep{Mizuno15,Fromm15,Fuentes18}.
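For reference, the weighted linear fit quoted above can be reproduced with a few lines of Python; the epochs, separations and uncertainties below are hypothetical values chosen only to illustrate the procedure, not the measured ones.
\begin{verbatim}
import numpy as np

# Hypothetical epochs [yr], core separations at nu_m [mas] and 1-sigma errors [mas]
t   = np.array([2010.37, 2010.50, 2010.58, 2010.69])
r   = np.array([0.12,    0.28,    0.37,    0.50])
err = np.array([0.03,    0.04,    0.04,    0.05])

# Weighted linear fit r(t) = mu*(t - t_0) + c; the slope mu is the proper motion
coeff, cov = np.polyfit(t - t[0], r, deg=1, w=1.0 / err, cov=True)
mu, mu_err = coeff[0], np.sqrt(cov[0, 0])
print("proper motion: %.2f +/- %.2f mas/yr" % (mu, mu_err))
\end{verbatim}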
\begin{figure}
\centering
\includegraphics[height=\columnwidth,angle=-90]{figs/10/beta_app_flare_log_log_68_95.ps}
\caption{Flare propagation down the jet. The measurements denote distances from the jet origin to
the VLBI core at the frequency of maximum emission (Fig.~\ref{f:spectra}, top) at the
four observing VLBA epochs. The solid line represents the best linear fit. Dark and light
shaded areas show the $1\sigma$ and $2\sigma$ confidence regions of the fit, respectively.
Dot-dashed lines indicate the epoch 2010.31 of the $\gamma$-ray flare and the
corresponding distance 0.12~mas of the $\gamma$-ray emission zone from the central
engine. Dashed lines indicate the expected epoch range (2012) when the flare reaches
the jet component J2 at 2~mas separation from the true jet base assuming constant
flare propagation speed.
}
\label{f:flare_prop}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth,angle=0,trim = 0.5cm 0cm 1.3cm 1.5cm,clip]{figs/11/lc_jet_GP.ps}
\caption{Flux density at 15~GHz of the jet component J2 located at 2~mas from the jet origin.
The significant increase of the flux density in 2012 is likely due to the flare
which originated in 2010 and reached this jet region in 2012, as expected from
the assumed constant flare propagation speed. Error bars represent $1\sigma$
uncertainties of individual measurements. Smooth solid curve shows the Gaussian
process fit. Dark and light filled areas correspond to 68\% and 95\% confidence
intervals of the fit, respectively.}
\label{f:comp_flare}
\end{figure}
Since the jet geometry is found to be conical at scales probed by the VLBA observations
(Sec.~\ref{s:jet_shape}), we assume that the regime of constant flow speed holds at least
up to 6~mas from the jet apex. Then the expected epoch for the flare to reach the
quasi-stationary jet feature J2 at a distance of about 2~mas from the true jet base is
$\sim$ 2012.0 (Fig.~\ref{f:flare_prop}), at which the turnover frequency is expected to
decrease down to about 1.5~GHz. This scenario is supported by a flux density evolution of
the component at 15~GHz (Fig.~\ref{f:comp_flare}). The component shows an increase of the
flux density by a factor of about 2 between $\sim2011.5$ and $\sim2012.5$. The epoch of
the peak around 2012.0 was established by fitting the data with
Gaussian process regression performed with the PyMC3 Python module for Bayesian modeling,
using an exponentiated quadratic covariance function. Note that moving jet components
behave in a completely different manner. Typically, their brightness rapidly fades due to
energy losses and adiabatic expansion \citep[e.g.,][]{RDV_paper,Kravchenko16,MOJAVE_XIII}.
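A minimal sketch of such a Gaussian process fit is given below, assuming PyMC3 with its exponentiated quadratic covariance and, for brevity, empirical-Bayes (MAP) hyperparameters rather than full sampling; the light-curve arrays are hypothetical placeholders for the 15~GHz measurements of J2.
\begin{verbatim}
import numpy as np
import pymc3 as pm

# Hypothetical 15 GHz light curve of J2: epochs t [yr] and flux densities s [Jy]
t = np.array([2010.5, 2011.0, 2011.5, 2011.9, 2012.1, 2012.4, 2012.9, 2013.4])
s = np.array([0.10,   0.11,   0.13,   0.19,   0.21,   0.18,   0.13,   0.11 ])

with pm.Model() as model:
    ls    = pm.Gamma("ls", alpha=2.0, beta=2.0)      # GP length scale [yr]
    eta   = pm.HalfNormal("eta", sigma=0.2)          # signal amplitude [Jy]
    sigma = pm.HalfNormal("sigma", sigma=0.02)       # white-noise level [Jy]
    cov   = eta**2 * pm.gp.cov.ExpQuad(1, ls=ls)     # exponentiated quadratic kernel
    gp    = pm.gp.Marginal(cov_func=cov)
    gp.marginal_likelihood("s_obs", X=t[:, None], y=s, noise=sigma)
    mp = pm.find_MAP()

t_new = np.linspace(t.min(), t.max(), 400)[:, None]
with model:
    mu, var = gp.predict(t_new, point=mp, diag=True)  # posterior mean and variance
print("peak epoch:", t_new[np.argmax(mu), 0])
\end{verbatim}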
It is therefore possible that the flare propagation rate represents the bulk flow speed. Taking
into account the lower limit on redshift ($z>0.49$) derived from spectroscopy of the absorption
lines formed by the intervening gas \citep{Sbarufatti06}, the proper motion of the disturbance
$\mu=1.17\pm 0.10$~mas~yr$^{-1}$ corresponds to an apparent speed $\beta_\text{app}>34\pm2\,c$. It
is much faster than a typical apparent speed $\approx4\,c$ derived from kinematics analysis for
a sample of 42 BL Lacs and also significantly higher than $\beta_\text{app}^\text{max}=21\,c$
detected in the high-redshift ($z=1.07$) BL Lac object 1514+197 \citep{MOJAVE_XIII}.
Similarly, analyses of multi-frequency time delays of flares combined with core shift measurements
in the blazars 3C~454.3 \citep{Kutkin14} and 0235+164 \citep{Kutkin18} found that this
approach yields jet speeds that are a factor of a few higher than the estimates
based on kinematic analysis.
Now, we assume that the jet of PKS~2233$-$148 in the core region is in equipartition between
the particle and magnetic field energy density ($k_\text{r}=1$), has a spectral index
$-0.5$, and is viewed at the critical angle $\theta\simeq\Gamma^{-1}$. Then the magnetic field
in Gauss at 1~pc of actual distance from the jet apex can be estimated using the following
relation \citep{MOJAVE_IX}
\begin{equation}
B_1\simeq0.04\,\Omega_{r\nu}^{3/4}\,(1+z)^{1/2}(1+\beta_\text{app}^2)^{1/8}\,,
\end{equation}
where $\Omega_{r\nu}$ is the core shift measure defined in \cite{Lobanov_98} as:
\begin{equation}
\Omega_{r\nu}=4.85\cdot10^{-9}\,\frac{\Delta r_\mathrm{core,\,\nu_1\nu_2}\,D_L}{(1+z)^2}\cdot\frac{\nu_1\nu_2}{\nu_2-\nu_1}\ \mathrm{pc \cdot GHz},
\end{equation}
where $\Delta r_\mathrm{core,\,\nu_1\nu_2}$ is the core shift in milliarcseconds, $D_L$ is
the luminosity distance in parsecs, and $\beta_\text{app}=1.58\times10^{-8}D_L\mu/(1+z)$. The
magnetic field strength at the apparent VLBI core at a given frequency can be calculated as
$B_\text{core}=B_1r_\text{core}^{-1}$, where the absolute distance in parsecs of the core from
the true jet base $r_\text{core}$ is given by
\cite{Lobanov_98}
\begin{equation}
r_\text{core}(\nu)=\frac{\Omega_{r\nu}}{\nu\sin\theta}\approx\frac{\Omega_{r\nu}(1+\beta^2_\text{app})^{1/2}}{\nu}\,,
\end{equation}
where $\nu$ is the observed frequency in GHz.
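The core-shift-based estimates can be chained together in a few lines; the following Python sketch uses hypothetical input values (redshift, luminosity distance, core shift and proper motion) purely to illustrate how $\Omega_{r\nu}$, $\beta_\text{app}$, $B_1$, $r_\text{core}$ and $B_\text{core}$ follow from the relations given in this section.
\begin{verbatim}
import numpy as np

# Hypothetical inputs
z        = 0.5          # assumed redshift
D_L      = 2.8e9        # luminosity distance [pc] for z ~ 0.5 (assumed cosmology)
dr_core  = 0.3          # core shift between nu1 and nu2 [mas] (illustrative)
nu1, nu2 = 5.0, 15.4    # observing frequencies [GHz]
mu_mas   = 1.17         # proper motion of the disturbance [mas/yr]

beta_app = 1.58e-8 * D_L * mu_mas / (1.0 + z)
Omega = 4.85e-9 * dr_core * D_L / (1.0 + z)**2 * nu1 * nu2 / (nu2 - nu1)  # [pc GHz]
B1    = 0.04 * Omega**0.75 * (1.0 + z)**0.5 * (1.0 + beta_app**2)**0.125  # [G] at 1 pc

nu     = 15.4                                        # core frequency [GHz]
r_core = Omega * np.sqrt(1.0 + beta_app**2) / nu     # [pc] core distance from the apex
B_core = B1 / r_core                                 # [G] field at the 15 GHz core
print(beta_app, Omega, B1, r_core, B_core)
\end{verbatim}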
In Fig.~\ref{f:B_cs}, we plot the derived $B_1$ and $B_\text{core}$ for the 43, 15, and 5~GHz cores
as functions of redshift. Assuming $z>0.5$ the magnetic field at a distance of 1~pc from the central
engine is of the order of 1~G. We note that the estimates of $B_\text{core}$ at 15~GHz derived from
the core shift analysis are comparable (lower by a factor of a few) to those inferred from the
synchrotron spectrum fits (Sec.~\ref{s:sp_fits}), if a source Doppler factor is moderate
($\delta\lesssim5$), as often observed in BL Lacertae objects \citep{Hovatta09,Liodakis17}.
\subsection{Location of the $\gamma$-ray emission region and the source of seed photons}
\label{s:gr_site}
\begin{figure}
\centering
\includegraphics[height=\columnwidth,angle=-90]{figs/12/B1_vs_z.ps}
\caption{Magnetic field constraints obtained from the core shift measurements and the flare
propagation speed. The dark grey area shows estimates of the magnetic field at 1~pc
distance from the jet origin, while the light grey stripes represent the magnetic
field at the core at 43.2, 15.4 and 5.0~GHz.}
\label{f:B_cs}
\end{figure}
To estimate the location of the $\gamma$-ray emission zone in PKS~2233$-$148 we extrapolated the
$r_\text{flare}$ dependence (Fig.~\ref{f:flare_prop}) back to the epoch of the $\gamma$-ray flare,
2010.31 (Fig.~\ref{f:lc}). This yields an angular separation of $0.12\pm0.03$~mas from the true jet
base, which corresponds to the VLBI core position at 24~GHz (Fig.~\ref{f:jet_shape}). On a linear
scale, this separation is $>0.7\pm0.2$~pc in projection, which corresponds to a de-projected separation of
$8\pm2$~pc if we assume a jet viewing angle of $5\degr$. Considering the second peak of the
$\gamma$-ray data at epoch 2010.46 (June 17, 2010), we inferred a distance of about 0.3~mas from the
jet apex. This distance corresponds to the innermost jet feature J4 detected at 24 and 43~GHz, setting
an absolute distance of about 20~pc for another possible location of the $\gamma$-ray emission site.
These assessments favour a scenario for the $\gamma$-ray production zone to be located at large
distances from a central energy generator (beyond the broad-line region or torus) and likely
associated with one or more standing shocks in a relativistic outflow of PKS~2233$-$148, as hinted
by the complex structure of the major high-energy flares in the source. A similar conclusion regarding
the remoteness of the $\gamma$-ray emission region in blazars on scales of parsecs from the central black
hole was also made from other arguments in a number of recent single-source studies of 1510$-$089
\citep{Marscher10}, OJ~287 \citep{Agudo11}, 3C~345 \citep{Schinzel12}, CTA~102 \citep{Casadio15},
1502+106 \citep{Karamanavis16}, BL Lacertae \citep{Wehrle16}, and also statistical results from
F-GAMMA project \citep{Fuhrmann16}. At the same time, the arguments based on short-scale variability
and breaks in GeV spectra discussed in Introduction indicate that the high-energy production site
is in the immediate vicinity of the black hole.
Another noticeable feature of the pc-scale morphology of the source is the presence of a sheath around
the jet, indications of which are seen in the 8 and 15~GHz maps (Fig.~\ref{f:maps}) that provide a
combination of high angular resolution and sensitivity. To better visualize the sheath emission
we convolved the 8.1~GHz image at the epoch 2010-05-15 with a circular beam setting its FWHM to
that of the minor axis of the original map (Fig.~\ref{f:map_sheath}). The fact that this sheath is
slower than a central spine might mean that it acts as a source of additional seed photons for the
$\gamma$-ray radiation \citep[e.g.,][]{Marscher10,Aleksic14}. Thus, the high-energy emission of the
source can be formed through (i) synchrotron self-Compton mechanism acting in its relativistic
outflow and upscattering of low-energy synchrotron seed photons and (ii) external Compton scattering
due to a photon field in the sheath.
While detailed modeling of the spectral energy distribution (SED) is beyond the scope of this paper,
based on the publicly available non-simultaneous data in the SSDC SED builder
tool\footnote{\url{https://tools.ssdc.asi.it}}, it seems that when the source is in a high state in
$\gamma$-rays, the luminosity of the inverse Compton peak is higher than that of the synchrotron peak,
indicating that an additional external photon field is indeed needed. However, this should be verified
with simultaneous data in all bands, taken at both low and high activity states of the source.
We can also speculate that if instead of the epoch of the $\gamma$-ray peak we consider the epoch
when the flare starts rising (a few weeks before the peak), then the $\gamma$-ray emission site
could be in the immediate vicinity of the central machine. But this scenario is vulnerable as a plasma
cloud moving fast down the jet leaves the seed photon area rapidly, while the flare is still reaching
its maximum.
\begin{figure}
\centering
\includegraphics[height=\columnwidth,angle=-90]{figs/13/J2236-1433_X1_2010_05_15_circ_beam.ps}
\caption{Total intensity map of PKS 2233$-$148 at 8.1~GHz at the epoch 2010-05-15 from Fig.~\ref{f:maps}
but convolved with a circular beam with a size equal to the minor axis of the original beam. Emission
at the jet edges indicates the presence of a distinct boundary layer.
}
\label{f:map_sheath}
\end{figure}
\section{Summary}
\label{s:summary}
We performed a radio and $\gamma$-ray joint study of the BL Lacertae object PKS~2233$-$148, using multiwavelength
data in the period of 2009--2012. The 4.6--43.2~GHz VLBA observations reveal a core-dominated, one-sided
and relatively straight jet morphology of the source, extending up to 8~mas at a position angle of $112\degr$.
Analyzing jet widths derived from the structure model fits we have established that the outflow has a conical
shape. This sets a lower limit of about 0.1 on the unknown redshift of the source.
We have measured the frequency-dependent shift vectors of the apparent core position using a method based on
results from (i) structure model fitting and (ii) image alignment achieved by implementing a two-dimensional
cross-correlation technique on the optically thin jet regions. The magnitude of the core shifts ranges from
0.04 to 0.7~mas, with a typical uncertainty of 45~$\mu$as. The directions of the shift vectors are predominantly
aligned with the median jet position angle, deviating from it by $\lesssim10\degr$ in 68\% of cases. The derived
core shifts show a frequency dependence $\propto\nu^{-1/k_\text{r}}$, with $k_\text{r}\approx1$, indicating that nuclear
opacity is dominated by synchrotron self-absorption, and physical conditions in the jet on scales probed by the
VLBA observations are close to equipartition. We did not find evidence for significant changes in $k_\text{r}$
between the observing epochs covering a time scale of four months, during which a flare was developing down the
jet. This suggests that the transverse size of the disturbed region is significantly smaller than the jet section
constrained by the magnitude of the core shift effect within the 5--43~GHz frequency range. The VLBI core position
$r_\text{core}$ as a function of wavelength follows the $r_\text{core}\,\mathrm{[mas]}\approx0.1\,\lambda\,\mathrm{[cm]}$
dependence. The magnetic field at a distance of 1~pc from the jet apex derived from the core shift measurements
is about 1~G.
We present a method of independent assessment of jet kinematics based on core shift measurements and evolution
of the synchrotron spectrum of the VLBI core. The turnover frequency of the core spectrum shifts towards
lower frequencies with time as $\nu_\text{m}\propto t^{-1}$, while the flare that originated in April 2010 in $\gamma$-rays propagates down the jet.
The speed of this propagation is about 1.2~mas~yr$^{-1}$ and likely represents the bulk flow speed. It is
much higher than the speed obtained from traditional kinematic analysis based on tracking bright jet features,
0.045~mas~yr$^{-1}$ \citep{MOJAVE_XIII}.
We have found indications that the $\gamma$-ray production zone in the source is located at large distances,
10--20~pc, from a central engine, and can be associated with the stationary radio-emitting jet features
observed with VLBI. This favours synchrotron self-Compton scattering as a dominant high-energy radiation
mechanism in the relativistic jet of the source. Direct observational evidence for a boundary layer around
the jet suggests that the sheath might be an additional source of seed photons for external Compton scattering
acting in the source.
\section*{Acknowledgements}
We would like to thank the anonymous referee as well as E.~Ros for useful comments
and suggestions. The VLBA data
processing and core shift analysis were supported by the Russian Science Foundation
grant 16-12-10481. The radio/$\gamma$-ray joint analysis was supported by the
Academy of Finland projects 296010 and 318431. T.H. acknowledges support from the
Turku Collegium of Science and Medicine. This research has made use of data from
the MOJAVE data base, which is maintained by the MOJAVE team \citep{MOJAVE_XV}.
The MOJAVE project was supported by NASA-\textit{Fermi} GI grants NNX08AV67G,
NNX12A087G, and NNX15AU76G. This work made use of the Swinburne University of
Technology software correlator \citep{DiFX}, developed as part of the Australian
Major National Research Facilities Programme and operated under licence. This
research has made use of data from the OVRO 40-m monitoring program
\citep{Richards11} which is supported in part by NASA grants NNX08AW31G,
NNX11A043G, and NNX14AQ89G and NSF grants AST-0808050 and AST-1109911. The
National Radio Astronomy Observatory is a facility of the National Science
Foundation operated under cooperative agreement by Associated Universities, Inc.
\bibliographystyle{mnras}
|
1,314,259,995,598 | arxiv | \section{Introduction}
Prediction is often considered one of the key and fundamental components of intelligence \cite{bubic}. Visual prediction conveys much useful information about the environment, but in an information-rich, high-dimensional format that presents both opportunities and challenges \cite{ebert}. A successful mechanism capable of predicting future video frames would have many applications in the automation industry. For robotics, the path planning problem in an unknown dynamic environment with moving obstacles still largely remains an unsolved challenge. A key component of the problem that remains unsolved is making predictions about the motion of the obstacles. To illustrate the complexity of the problem, consider an Unmanned Aerial Vehicle (UAV) flying at a speed of 5~m/s or higher (which translates to a speed of $\geq$ 18~km/hr) in a cluttered environment, such as a forest trail in high wind conditions. In this scenario, the perception of the camera depends not only upon the dynamics of the objects present in the scene but also on the control actions taken by the UAV. The interaction of the robot's state and action with the dynamics of the scene renders the motion prediction problem almost impossible to solve using conventional vision-based methods such as visual servoing \cite{visp}, \cite{vkumar}. While making pixel-level predictions of the motion of robotic agents is a challenging task, designing a motion planner based on raw predicted image frames is an even harder one. However, it is imperative to have a mechanism to predict the motion of the other objects present in the environment in order to solve the motion planning problem for autonomous agents operating in a rapidly changing environment.
Recent works (\cite{villegas}, \cite{xu}) on motion prediction delved into forecasting human motion, but these models use very deep architectures that ultimately render them computationally expensive. Given that human motion is much slower than that of automated vehicles, predicting higher speed motion entails a much higher level of difficulty and computational cost. Even with the recent advancements in mobile graphics units such as NVIDIA Jetson boards, implementation of deep architectures to solve path planning problems on small mobile agents still remains a challenge.
Designing a light-weight motion prediction framework is only the first step in addressing the challenge. Once the prediction network is designed we need to devise a mechanism to transfer raw predicted image frames into control commands for the robot. Recent advancements in deep reinforcement learning (DRL) \cite{mnih}, \cite{levine} have shown us ways to convert raw sensory inputs into meaningful control commands. While a few model-free learning algorithms have out-performed human operators \cite{mnih}, these frameworks were designed for very limited simulated environments of video games. Learning tasks in the real world present a wide range of challenges as the environment becomes dynamic with sparsely available reward feedback while the agent can only access a partial state of the world. Finally, we need to design the entire framework in such a manner that learning can be enabled without the supervision of human operators.
In this work, we present a novel light-weight framework that can forecast the trajectory of an object moving in the robot's work-space. The proposed Predicting Robot Motion Network (PROM-Net) can easily be trained on raw video data without supervision. Once trained, this network can be implemented on an autonomous mobile agent. The network generates the visual prediction of the surrounding environment from the first-person perspective of the robotic agent. In order to train and test the network we also created our own data set, using two LEGO Mindstorms under 4 different scenarios. To the best of our knowledge, this is the first data set of its kind where the motion of a robotic agent is captured from the first-person view of another robot. We also discuss how these predicted frames can be used to design a model-based reinforcement learning algorithm that would be able to translate the raw predicted image frames into a meaningful reward function to optimize the trajectories of the control policies.
The paper is organized as follows: We first discuss the existing literature on video prediction networks and model predictive controllers (MPC) using raw image frames as input and introduce the Predicting Robot Motion Network (PROM-Net) model. Then we discuss the virtual experimental setup we created in the OpenAI-Gym framework and give a detailed description of the real robotic dataset that was created for testing the performance of PROM-Net. A detailed analysis of the performance of PROM-Net is presented next, followed by a discussion on the future scope of the work.
\section{Related Work}
The problem of video frame prediction \cite{mathieu}, \cite{vondrick2}, \cite{villegas}, \cite{xu} has gained considerable popularity in the computer vision community in recent years. However, video data comes with the issue of large dimensionality and complex spatio-temporal dynamics in raw pixel values, which makes the pixel-level frame prediction task very challenging \cite{finn}. While Convolutional Neural Networks (CNN) have proven to be very successful at learning features from static images \cite{imagenet}, \cite{resnet}, the idea of Convolutional Long Short-Term Memory (LSTM) networks, designed specifically to capture the spatial and temporal dependencies in video data, was proposed by \cite{convlstm}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig11.png}
\caption{ Visual motion planning framework}
\label{ICAPS_fig1}
\end{figure}
The paper \cite{oh} designed an action-conditional encoder-decoder network predicting future frames for Atari games. The work in \cite{mathieu} employed a new adversarial loss function for additional regularization and sharper frame prediction. The paper \cite{vondrick2} designed a multi-scale feedforward architecture combined with an adversarial objective to generate a foreground-background mask and create realistic looking video sequences. The work in \cite{casas} presented a framework that predicts the intention of autonomous cars from 3D point clouds and HD maps. The paper \cite{walker} proposed a framework that generates a coarse hallucination of possible future events from a wide-angle view. In \cite{xu}, a framework that balances reconstruction and adversarial losses for the predicted frames is designed. However, most of the current state-of-the-art video prediction models require a high-end GPU-enabled system to train and test the networks, which is often not a feasible option for robotic applications.
\begin{figure*}[h]
\centering
\includegraphics[width=.98\textwidth, height=4cm]{convlstm1.png}
\caption{Schematic architecture of the PROM-Network}
\label{ICAPS_fig2}
\end{figure*}
While considerable progress has been made in DRL \cite{mnih},\cite{mnih2} to learn meaningful skills directly from high-dimensional raw sensory data (especially images), most of these works are restricted to simulated applications of computer games. Only a few works, such as \cite{finn2}, \cite{ebert}, address the application of a model-based RL algorithm to robotic manipulation tasks using visual foresight. To the best of our knowledge, there is no existing work that addresses the problem of end-to-end motion planning for autonomous mobile agents using visual prediction from a first-person (robot) perspective.
\section{Our Approach}
Figure \ref{ICAPS_fig1} shows the schematic representation of the visual prediction based motion planning framework. It shows that the MPC algorithm takes the frames generated by the prediction network as input. The prediction network also generates a reduced-dimensional state representation of the world from the raw image inputs for the model-based controller. This is possible as the architecture of the prediction network is based on the encoder-decoder network philosophy. We started by designing the prediction network with the goal of predicting the next $N$ image frames from the past $N$ frames. Furthermore, we aim to design a very light-weight network that can easily be deployed in a GPU-denied environment. We have successfully designed a motion prediction network that can approximate frames up to 10 time stamps ahead of time. In the following sub-section we present a detailed description of our proposed network.
\subsection{Predicting Robot Motion Network (PROM-Net)}
The architecture of the model is shown in figure \ref{ICAPS_fig2}. This model roughly follows the encoder-decoder philosophy of autoencoder networks. The encoder network is built using 8 2D convolutional filters of size $3\times 3$. The outputs are down-sampled using a maxpooling layer of stride 2. A second 16-channel convolution layer with filter size $5\times5$ and stride 2 further maps the input to a 3-dimensional tensor of size $(16\times16\times16)$. These spatial feature tensors are then passed through two consecutive Convolutional LSTM layers with a kernel size of $(5\times5)$ and mapped into a 32-channel feature space of size $(8\times 8)$. The mathematical model of the Convolutional LSTM is described in \cite{convlstm}. The two ConvLSTM layers capture the spatio-temporal correlations present in the sequence of image frames and pass them to the next decoder layer for inference.
The decoding network consists of 3 Convolutional LSTM layers. After each ConvLSTM layer we have a deconvolution or transpose convolution layer that upsamples the size of each feature channel and downsamples the total number of feature channels. For example, after the first deconvolution operation, the $(32\times8\times8)$ feature tensor is mapped into a $(16\times16\times16)$ feature tensor. We have used skip connections at intermediate layers to recover from the lossy convolution operations (shown with dotted lines in fig. \ref{ICAPS_fig2}).
We apply batch normalization operation after each Convolutional LSTM layer. We also upsample the number of feature channels each time we apply a downsampling operation on the 2 dimensional spatial feature space. This kind of convention has been followed in designing various previous networks such as \cite{unet},\cite{finn}. All the convolutional filters use the ReLU activation function. The entire network is trained using the RMSProp algorithm that minimizes the mean square loss.
PROM-Net has about 6 million trainable parameters and, once trained, the network weighs only about 5 megabytes.
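For illustration, a minimal Keras sketch in the spirit of this encoder-decoder layout is given below; the tensor shapes, filter counts, skip connections and input/target alignment of the actual PROM-Net are simplified here, so the snippet shows the layer pattern (per-frame convolutions, stacked ConvLSTM layers, transpose convolutions, RMSProp with a mean-square loss) rather than a faithful reimplementation. Training would then proceed on pairs of past and future frame sequences via \texttt{model.fit}.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical input: 10 grayscale frames of 64x64 pixels.
inp = layers.Input(shape=(10, 64, 64, 1))

# Encoder: per-frame convolutions followed by ConvLSTM layers.
x = layers.TimeDistributed(layers.Conv2D(8, 3, padding="same", activation="relu"))(inp)
x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
x = layers.TimeDistributed(layers.Conv2D(16, 5, strides=2, padding="same",
                                         activation="relu"))(x)
x = layers.ConvLSTM2D(16, 5, padding="same", return_sequences=True)(x)
x = layers.BatchNormalization()(x)
x = layers.ConvLSTM2D(32, 5, strides=2, padding="same", return_sequences=True)(x)
x = layers.BatchNormalization()(x)

# Decoder: ConvLSTM layers interleaved with per-frame transpose convolutions.
x = layers.ConvLSTM2D(32, 5, padding="same", return_sequences=True)(x)
x = layers.TimeDistributed(layers.Conv2DTranspose(16, 5, strides=2, padding="same",
                                                  activation="relu"))(x)
x = layers.ConvLSTM2D(16, 5, padding="same", return_sequences=True)(x)
x = layers.TimeDistributed(layers.Conv2DTranspose(8, 3, strides=2, padding="same",
                                                  activation="relu"))(x)
x = layers.ConvLSTM2D(8, 5, padding="same", return_sequences=True)(x)
out = layers.TimeDistributed(layers.Conv2DTranspose(1, 3, strides=2, padding="same",
                                                    activation="sigmoid"))(x)

model = models.Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3), loss="mse")
\end{verbatim}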
\begin{figure*}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.8\textwidth]{fig2.png}
\caption{A ROS-Gazebo based virtual experimental environment has been set up in the OpenAI-gym framework.}
\label{ICAPS_fig3a}
\end{subfigure}%
\hspace{1in}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=0.75\linewidth,keepaspectratio]{fig_wicv19.png}
\caption{LEGO Mindstorms with a \\ GoPro Hero 5 Black camera}
\label{ICAPS_fig3b}
\end{subfigure}
\caption{}
\end{figure*}%
\section{Virtual Experimental Setup}
\label{sec4}
For initial analysis of the networks, we set up a ROS-Gazebo based virtual experimental environment in the OpenAI-Gym framework for robotics \cite{openai} to obtain the training and test data for the network. Figure \ref{ICAPS_fig3a} shows a snapshot of the same. The virtual setup has two turtlebots, Tb1 and Tb2. During the data collection phase, Tb1 remains stationary while tracking and recording the movement of Tb2 using a monocular camera. Tb2 moves in front of Tb1, from point A to point B, using a Proportional Integral Derivative (PID) controller that corrects the positional and angular error of the robot. We introduce variation in the PID parameters so that no two trajectories are the same even when Tb2 is moving towards the same goal point. This introduces variance in the local neighbourhood of the trajectories even when the goal point is the same. We also recorded video of Tb2 moving towards 4 different target points. These 4 different target points are $(1, 0.8)$, $(1.5, -0.8)$, $(2, -0.8)$ and $(0.5, -0.5)$, where the position of Tb1 is taken as the origin of the inertial frame. The recorded image frames are converted to gray scale images before being used to train the networks. Altogether we collected about 80 different trajectories (20 trajectories in the local neighbourhood of each of the 4 goal points).
\begin{figure*}
\centering
\includegraphics[width=0.92\linewidth, keepaspectratio]{fig_arm.png}
\caption{The 4 environments from left- Atrium (daylight), Atrium (artificial light), Pavement and Airstrip, respectively}
\label{ICAPS_fig4}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=.75\linewidth, keepaspectratio]{PSNR_wicv19}
\caption{PSNR comparison plot between 2 videos of equal length from two different environment (Atrium daytime and Pavement).}
\label{ICAPS_fig5}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=.91\textwidth,keepaspectratio]{result1.png}
\caption{Qualitative comparison of the performance of the fully connected LSTM network and the PROM network on the simulated data set. The first row represents the ground truth; the second and third rows show the estimates by the PROM network and the fully connected LSTM network, respectively, for time stamps $10$, $15$, $20$, $25$, $30$, $35$ and $40$.}
\label{ICAPS_fig6}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=.7\textwidth,keepaspectratio]{fig5.png}
\caption{Qualitative analysis of the performance of PROM-Net trained on the ARM data set. The first and third rows from the top represent the ground truth; the second and fourth rows show the estimates generated by PROM-Network for time stamps $20$, $30$, $40$, $50$ and $60$. The first 2 rows represent data under artificial lighting conditions and the last 2 rows are from the outdoor environment.}
\label{ICAPS_fig7}
\end{figure*}
\section{Real Robot Motion Data-set}
To evaluate the performance of PROM-Net on real-life data, we created our own Actual Robot Motion (ARM) data set, using two LEGO Mindstorms under different lighting conditions in 4 different environmental settings: indoor (Atrium) daylight, indoor (Atrium) artificial light, outdoor (Pavement) daylight, and outdoor (Airstrip) sunlight. To the best of our knowledge, this is the first data set of its kind where the motion of a mobile robot is captured from the first-person view of another robot. In this section, we present details on the real robot motion data set that was collected from the first-person perspective of a LEGO Mindstorms robot (see figure \ref{ICAPS_fig3b}) observing another Mindstorms moving in its field of view.
We recorded the videos using a GoPro Hero 5 Black camera at 30 fps, with a resolution of 720$\times$1280. We later down-sampled it to a resolution of 320$\times$240. For the initial phase of data gathering, we mounted the camera on a LEGO Mindstorms robot to observe the environment and kept it in a stationary state.
In the future, we will add motion to the recording platform to add more versatility to the data, which would more closely resemble the practical cases seen in robot path planning problems. The average speed of the moving agent was kept at about 0.665 km/hr (approximately 0.18 m/s). The recorded videos do not contain any labelled data as they are meant for unsupervised learning algorithms.
We recorded about 1.5 hours of robot motion of the other LEGO-bot along various trajectories, consisting of approximately 120K frames without excluding any particular segments. The GoPro camera offers digital stabilization. We used the narrow-angle shot setting during the recordings. The wide-angle lens of this particular camera produces a significant amount of fish-eye effect for any object moving relatively close to the camera. A wide-angle lens will be used in the future when we incorporate recordings of unmanned aerial vehicles (UAVs) into the data set. Unlike autonomous ground vehicles, the high-speed operation of UAVs (average speed of 5~m/s) demands long-range visual data for effective path planning. Below, we describe the various scenarios of the recorded data. The videos are split in a 3:1 ratio between training and test data. The data set can be accessed at \url{https://sites.google.com/view/meenakshis/dataset}
\begin{table}
\centering
\begin{tabular}{ |p{1.7cm}||p{1.25cm}|p{1cm}|p{1.2cm}|p{1cm}| }
\hline
Types of trajectory & Atrium Daylight & Atrium Night & Pavement & Airstrip\\
\hline
St. Line & 4 &4& 4&4\\
Arc & 4 &4& 4&4\\
Incline L-R & 4 &4& 4&4\\
Incline R-L & 4 &4& 4&4\\
\hline
\end{tabular}\caption{Number of videos in the data set for each trajectory type and environment}
\label{table1}
\end{table}
\section{Scenarios}
Among the two LEGO Mindstorms bots, one was remote-controlled via a Bluetooth module to execute four different types of trajectories: straight path, inclined path (left to right and right to left) and arc; each with three different depths (distance from the mounted camera) in all of the four different environmental settings (Figure \ref{ICAPS_fig4}). The number of recorded videos in each of the environments for each of the 4 different types of trajectories is given in Table~\ref{table1}.
This was done to incorporate diversity in the data set (Figure \ref{ICAPS_fig5} shows the distribution of PSNR between 2 videos of equal length from 2 different environments) and to facilitate efficient training of deep networks. Each trajectory in a particular setting was repeated twice in a single video for redundancy.
\subsection{Environment 1, 2: Atrium (Daylight and Artificial Light at Night)}
This setting was used for collecting two different sets of recordings. One was during daytime using natural light (Figure \ref{ICAPS_fig4}, $1^{st}$ frame from left) and the other at night using multiple light sources of white halogens (Figure \ref{ICAPS_fig4}, $2^{nd}$ frame from left). The smooth floor of the atrium results in consistent motion without any jerks. However, the artificially lit night-scene introduces complexity due to multiple shadow formations (different intensities) of the same object.
\subsection{Environment 3: Pavement}
This was recorded in a sun-lit scene with a nearby tree canopy (shadows in the backdrop, Figure \ref{ICAPS_fig4}, $3^{rd}$ frame from left). The ground (lock-tiles) adds intrinsic inconsistency in motion and is bright-colored.
\subsection{Environment 4: Airstrip}
This was recorded in twilight (resulting in elongated shadows) and the motion was the most jittery here due to the coarseness of the asphalt (Figure \ref{ICAPS_fig4}, $4^{th}$ frame from left). Also, there are tiny insects moving in the background, which adds naturally dynamic clutter.
\section{Results and Analysis}
\label{results}
Initially, we trained the network in the simulated environment. In order to maintain uniformity during training, we used the RMSProp optimizer with a batch size of 64 and learning rate 0.001 for all the networks. Our initial investigation with the simulated data set revealed that even though fully connected LSTM networks (\cite{srivastava}) generate moderately accurate predictions for trajectories in the close neighborhood of the ones they have been trained on, they fail to generalize the robot motion when the test trajectories are unlike any training data seen before. The same can be inferred from figure \ref{ICAPS_fig6}. Figure \ref{ICAPS_fig6} also shows that PROM-Net can efficiently approximate the future robot motion for unforeseen test scenarios.
For each of the test cases, we have given the network 10 image frames as input and the network predicted the next 10 frames in future. The reconstructed frames by PROM-Net on the real robot data set for two different environments (indoor with artificial lights and outdoor with sunlight) are shown in figure \ref{ICAPS_fig7}. Even though for this paper we have only presented results with grey-scale images, our network can be very easily modified for RGB inputs.
We have given the variation in structural similarity index (SSIM) for all the 10 predicted frames on the real world data set in figure \ref{ICAPS_fig8}. To compare the performance of the proposed prediction network with a FC LSTM network we have given the Peak Signal to Noise Ratio (PSNR) plots for both simulated and real data set in figure \ref{ICAPS_fig9}. It can be easily inferred from the plots that PROM-Net performs well with both the simulated and real data sets.
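The per-frame PSNR and SSIM values reported here can be computed, for example, with scikit-image; the sketch below assumes grayscale sequences normalized to the range $[0,1]$ and uses random arrays only as placeholders for the ground-truth and predicted frames.
\begin{verbatim}
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frame_metrics(truth, pred):
    """Per-frame PSNR and SSIM for two (T, H, W) grayscale sequences in [0, 1]."""
    psnr = [peak_signal_noise_ratio(t, p, data_range=1.0)
            for t, p in zip(truth, pred)]
    ssim = [structural_similarity(t, p, data_range=1.0)
            for t, p in zip(truth, pred)]
    return np.array(psnr), np.array(ssim)

# Hypothetical usage with 10 predicted 64x64 frames
truth = np.random.rand(10, 64, 64)
pred  = np.clip(truth + 0.05 * np.random.randn(10, 64, 64), 0.0, 1.0)
psnr, ssim = frame_metrics(truth, pred)
print(psnr.mean(), ssim.mean())
\end{verbatim}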
From figure \ref{ICAPS_fig7}, we can infer that the blurriness in the predicted frames arises due to the regression losses in the convolution layers. As our application is focused on solving path planning problems for robotic agents, we can easily accommodate minor reconstruction loss in the predicted frames. Our intention is to infer the future direction of motion of the moving objects, and PROM-Net has proven to be very effective for that purpose.
\section{Conclusion}
\label{sec7}
We presented a novel light-weight unsupervised learning framework for robot motion prediction problems. A new robot motion data set has been introduced to train and test deep architectures for motion and path planning problems with small-scale mobile agents. While the present model is capable of predicting robot motion when the observing robot is stationary, a more robust framework is needed in order to estimate future frames where the motion of the robot influences the data collected by the camera sensor. We are already working towards building such models. In our future work, we plan on designing and testing a vision based MPC on a mobile agent for motion planning in a cluttered dynamic environment. We envisage that the reward function would penalize the controller for actions that would move the agent closer to any obstacle and reward it when the area of the obstacle reduces in the predicted frames. We also plan on extending our robot motion data set with multiple mobile agents (humans and robots) moving in the robot workspace.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{fig2a.png}
\caption{SSIM distribution between predicted frames and the ground truth for the 10 time stamps on the ARM data-set.}
\label{ICAPS_fig8}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth, keepaspectratio]{psnr1}
\caption{PSNR plots for PROM-Net with Real data (red line), Simulated Data (blue line) and Fully Connected LSTM Network with simulated data (green line).}
\label{ICAPS_fig9}
\end{figure}
\input{icaps2019.bbl}
\end{document}
|
1,314,259,995,599 | arxiv | \section{Introduction}
\label{sec:intro}
Model-based simulation of complex physical systems plays an essential
role in understanding real world phenomena. These models are often
characterized by partial differential equations (PDEs), and are
typically subject to uncertainties stemming from unknown coefficient
fields, constitutive laws, source terms, initial and/or boundary
conditions, geometries, etc. When observation data exist, these
parameters can be estimated by solving an inverse problem governed by
the underlying model (e.g., PDE). It is well known that uncertainty is
a fundamental feature of inverse problems, therefore in addition to
inferring the parameters of interest, we need to quantify the
uncertainty associated with this inference. This uncertainty quantification
can be done via
Bayesian inference. Solving Bayesian inverse problems governed by
complex PDEs can be extremely challenging due to high-dimensional
parameter spaces that stem from discretization of infinite-dimensional
parameter fields and the need to repeatedly solve the underlying PDEs.
To overcome these computational challenges, it is essential to
exploit problem structure, when possible. For example, the
underlying PDE solution operator is often diffusive, and the observation data may be
sparse or contain only limited
information about the parameter field. These
particularities give rise to a low-rank structure in the second
derivative of the data-misfit component of the inverse problem objective (or of the negative log
likelihood), hereafter referred to as the data-misfit Hessian. In previous work~\cite{isaacpetraetal2014,
petramartinetal2014} we exploited this low-rank structure in
the context of inverse ice sheet flow problems. However, for cases
when this rank is in fact large, as it is for many inverse problems of practical interest in which the observation data are highly informative, a global low-rank approximation is insufficient. In this article, we exploit the local sensitivity of model predictions
to parameters, which gives rise to an off-diagonal low-rank
structure. We do so by invoking hierarchical off-diagonal low-rank
(HODLR) matrix approximations and detail how they can be used to reduce the computational
cost to solve large-scale PDE-based inverse problems.
\paragraph{Related work}
Global low-rank approximations of Hessians in inverse problems have
been successfully utilized in~\cite{isaacpetraetal2014, SpantiniSolonenCuiEtAl15,
flath2011, buighattasetal2013, saibaba2015}, with deterministic and randomized
methods~\cite{martinsson2011fast, buighattasetal2013}
being available to generate said approximations. However, some problems,
specifically those with highly informative observation data, are not amenable to
global low-rank approximation, and thus other structure-exploiting
strategies are needed such as those based on local
translation invariance and localized
sensitivities~\cite{algerraoetal2019, algerhartland2022,
ZhuLiFomelEtAl16}. Here we
focus on hierarchical low-rank methods for which convenient randomized methods
are available~\cite{linlulexing2011, martinsson2016}.
Hierarchical matrices have been demonstrated in~\cite{geogaanitescustein2019, litvenkosunetal2019} to be an effective
means to approximate covariance matrices associated to large-scale
Gaussian processes. In~\cite{ambartsumyan2020}, hierarchical matrix
approximations with general hierarchical partitioning patterns are
utilized for the construction of explicit representations of
Hessian inverses. In one of the examples studied, the authors find that
the diffusivity of the parameter-to-PDE-solution map and the informativeness of the observation data impact whether the
data-misfit Hessian is more suited for compression with hierarchical
or global low-rank formats. Here, we build on this study and focus on
a specific inverse problem arising in land ice modeling.
\paragraph{Contributions}
The main contributions of this work are as follows. (1) We motivate
the use of HODLR compression for data-misfit Hessians in
inverse problems governed by PDEs, and
present a detailed study for large-scale
ice sheet inverse problems, such as the Greenland ice sheet. (2) We
describe a strategy that leverages the fast manipulation of
HODLR matrices to efficiently generate approximate samples from a Gaussian posterior
distribution for uncertainty
quantification.
(3) We numerically
study the influence of various problem setups on the off-diagonal
low-rank structure of the data-misfit Hessian. The results show the effectiveness of the HODLR
approximation for various problem scales including for a Greenland
ice sheet inverse problem, which has a discretized parameter
dimension of $3.2\times 10^{5}$.
\section{Preliminaries}
In this section, we summarize background material regarding the
solution of discretizations of infinite-dimensional inverse problems.
We also briefly review HODLR matrices. Specifically, we define HODLR
matrices, list some of their properties and summarize the
computational complexities of computing symmetric HODLR matrix
approximations of symmetric operators that are only available through
their application on vectors.
We refer to~\cite{hackbusch1999, hackbuschbohm2002} for a more thorough discussion
of hierarchical matrices and to~\cite{martinsson2016} for more detail on
HODLR matrices.
\subsection{Bayesian Inverse Problems}\label{sec:Bayes}
A means to account for uncertainty in parametric inference is to
employ the Bayesian approach to inverse problems~\cite{tarantola2005,
kaipio2006, stuart2010}, which takes as input observation data $\bm{d}$, i.e., the data, prior
knowledge of the parameter and a model for the likelihood of data
conditional to $\beta$. Prior knowledge of the discretized parameter
$\bm{\beta}$ is typically determined by the expertise of domain
scientists and mathematically encoded in a probability density
function $\pi_{\text{prior}}\left(\bm{\beta}\right)$. The likelihood
$\pi\left(\bm{d}|\bm{\beta}\right)$ involves the data uncertainty and
the mathematical model for the parameter-to-observable process.
The solution of a Bayesian inverse problem is a probability density function for the
discretized parameter $\bm{\beta}$ that is conditioned on the observation data
according to Bayes' formula
\begin{equation*}
\pi_{\text{post}}\left(\bm{\beta}\right) =
\pi\left(\bm{\beta}|\bm{d}\right) \propto
\pi_{\text{prior}}\left(\bm{\beta}\right)
\pi\left(\bm{d}|\bm{\beta}\right),
\end{equation*}
which provides a formal expression for the posterior
distribution. Here, ``$\propto$'' means equal up to a normalization
constant. For a problem with Gaussian prior
$\mathcal{N}\left(\bm{\overline{\beta}},
\bm{\Gamma}_{\text{prior}}\right)$ and data noise $\bm{\eta}$
described by the zero mean Gaussian
$\mathcal{N}\left(\bm{0},\bm{\Gamma}_{\text{noise}}\right)$,
$\pi_{\text{post}}(\cdot)$ has the following form
\begin{equation}
\label{posteriorexpression}
\pi_{\text{post}}\left(\bm{\beta}\right)
\propto
\exp\left(-
\frac{1}{2}\|\bm{\mathcal{F}}(\beta)-\bm{d}\|_{\bm{\Gamma}_{\text{noise}}^{-1}}^{2}
-\frac{1}{2}\|\bm{\beta}-\bm{\overline{\beta}}
\|_{\bm{\Gamma}_{\text{prior}}^{-1}}^{2}\right),
\end{equation}
where $\bm{\mathcal{F}}$ is the parameter-to-observable map.
The notation $\|\cdot\|_{\bm{A}}$ means that the norm is weighted with the
positive-definite matrix $\bm{A}$, i.e., $\|\bm v\|_{\bm{A}}=\sqrt{\bm v^\top\bm{A}\bm v}$.
The parameter-to-PDE-solution map is typically
nonlinear, and consequently the posterior
distribution is not a Gaussian. One characteristic of the posterior
distribution is the point at which it is maximized, or equivalently the point which minimizes the negative log-posterior, the so-called maximum a posteriori (MAP) point,
\begin{equation} \label{MAPexpression}
\bm{\beta}^{\star}:=\arg\text{min}_{\bm{\beta}}
\,J(\bm{\beta}):=
\frac{1}{2}\|
\bm{\mathcal{F}}(\beta)
-\bm{d}
\|_{\bm{\Gamma}_{\text{noise}}^{-1}}^{2}
+\frac{1}{2}\|\bm{\beta}-\bm{\overline{\beta}}
\|_{\bm{\Gamma}_{\text{prior}}^{-1}}^{2}.
\end{equation}
A means to compute the MAP point is to employ a (Gauss) Newton
method for optimization~\cite{nocedalwright2006}, which critically
relies on the availability of the (Gauss-Newton) Hessian. Since $J$ is defined implicitly in terms of the parameter-to-observable map, which involves a PDE solution operator, we utilize the adjoint method~\cite{borzi2011, gunzburger2002, petrasachs2021} to compute its gradient and Hessian-vector products.
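A generic, matrix-free sketch of such a (Gauss-)Newton iteration is shown below; the functions \texttt{grad} and \texttt{hess\_vec} stand for the adjoint-based gradient and Hessian-vector products and are assumptions of this illustration, as is the inexact conjugate gradient inner solve without a line search.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gauss_newton_map(beta0, grad, hess_vec, n_iter=20, tol=1e-8):
    """Inexact (Gauss-)Newton iteration for the MAP point.

    grad(beta)        -> gradient of J at beta
    hess_vec(beta, v) -> (Gauss-)Newton Hessian of J at beta applied to v,
                         typically evaluated matrix-free via adjoints.
    """
    beta = beta0.copy()
    for _ in range(n_iter):
        g = grad(beta)
        if np.linalg.norm(g) < tol:
            break
        H = LinearOperator((beta.size, beta.size),
                           matvec=lambda v: hess_vec(beta, v))
        step, _ = cg(H, -g, tol=1e-2)   # inexact Newton step
        beta = beta + step              # a line search would be added in practice
    return beta
\end{verbatim}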
To fully explore posterior distributions, Markov chain Monte-Carlo
(MCMC) techniques~\cite{hastings1970, robertcasella1999} can be used.
Such techniques require a proposal distribution that ideally
approximates the posterior and is easily sampled from. One method to
generate a Gaussian proposal distribution is through the Laplace
approximation of the posterior about $\bm{\beta}_{k}$ (or around the MAP point)
\begin{equation*}
\tilde{\pi}_{\text{post}}
\left(\bm{\beta},\bm{\beta}_{k}\right)
\propto
\exp\left(-\frac{1}{2}\langle
\bm{\beta}-\bm{\mu}_{k},
\bm{H}_{k}\left(\bm{\beta}-\bm{\mu}_{k}\right)
\rangle_{\ell^{2}}\right),\\
\bm{\mu}_{k}=\bm{\beta}_{k}-\bm{H}_{k}^{-1}\bm{g}_{k},
\end{equation*}
where $\bm{g}_{k}$, $\bm{H}_{k}$ are the gradient and Hessian of the
negative log-posterior $J(\bm{\beta})$ at $\bm{\beta}_{k}$. Another MCMC
sampling approach is the generalized preconditioned Crank-Nicholson
(gpCN) method~\cite{rudolf2018generalization,
pinski2015algorithms}. An attractive choice for the preconditioner
is the Hessian at the MAP point~\cite{kim2021hippylib}.
For these and other MCMC samplers, one typically needs to apply the
inverse Hessian
$\bm{H}_{k}^{-1}$ or its square root $\bm{H}_{k}^{-1/2}$ repeatedly
and efficiently, which also motivates the study presented in this
paper. In particular, in
Section~\ref{subsec:HODLRGaussianizedPosterior} we discuss how HODLR
approximations can be used for the fast application of the Hessian
square root.
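To make the role of the (inverse) Hessian square root concrete, the following sketch draws samples from the Gaussian $\mathcal{N}(\bm{\mu},\bm{H}^{-1})$ using a dense Cholesky factor; in the HODLR setting, this dense factorization is replaced by the fast hierarchical square-root solves discussed in Section~\ref{subsec:HODLRGaussianizedPosterior}.
\begin{verbatim}
import numpy as np

def sample_gaussian_from_hessian(mu, H, n_samples=1, rng=None):
    """Draw samples from N(mu, H^{-1}) for a dense SPD Hessian H.

    With H = L L^T (Cholesky), x = mu + L^{-T} z has covariance
    L^{-T} L^{-1} = H^{-1} for z ~ N(0, I).
    """
    rng = rng or np.random.default_rng()
    L = np.linalg.cholesky(H)
    Z = rng.standard_normal((H.shape[0], n_samples))
    return mu[:, None] + np.linalg.solve(L.T, Z)

# Small illustrative check: the sample covariance approaches H^{-1}
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = A @ A.T + 50.0 * np.eye(50)
samples = sample_gaussian_from_hessian(np.zeros(50), H, n_samples=20000, rng=rng)
print(np.abs(np.cov(samples) - np.linalg.inv(H)).max())
\end{verbatim}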
\subsection{Symmetric HODLR Matrices}
\label{subsec:HODLRdef}
A HODLR matrix $\bm{A}\in\mathbb{R}^{N \times N}$ is a matrix
equipped with a depth $L\in\mathbb{N}$, hierarchical partitionings of
the index set $\mathcal{I}=\lbrace 1,2,\dots,N\rbrace$ into contiguous subsets and low-rank
off-diagonal blocks defined by the partition; this construction is described in greater detail in, e.g.,~\cite{martinsson2016}. The block rank-structure of a HODLR matrix for various hierarchical depths is illustrated in Figure~\ref{fig:hmatrixpartitioningstructure}. A HODLR matrix must satisfy two additional properties.
\begin{enumerate}
\item The depth of the hierarchical
partitioning scales with the logarithm
of the size of the matrix, i.e.,
\begin{equation*}
L=\mathcal{O}\left(\log\,N\right).
\end{equation*}
\item The maximum rank $r_{\ell}$ of the off-diagonal blocks at each hierarchical level $\ell$ is bounded above by a number $r$ that is independent of the problem size $N$:
\begin{equation*}
\max_{1 \leq \ell \leq L}r_{\ell}\leq r=\mathcal{O}\left(1\right).
\end{equation*}
\end{enumerate}
\begin{figure}[tb]
\begin{center}
\begin{tikzpicture}
\begin{scope}[yscale=0.2, xscale=0.2]
\pgfmathtruncatemacro{\L}{1}
\pgfmathtruncatemacro{\scalefac}{16/(2^\L)}
\pgfmathtruncatemacro{\shiftval}{2^(\L-1)}
\begin{scope}[yscale=-1, xscale=1]
\begin{scope}[yshift=-\shiftval, xshift=-\shiftval]
\begin{scope}[yscale=\scalefac, xscale=\scalefac]
\pgfmathtruncatemacro{\msize}{2^\L}
\pgfmathtruncatemacro{\colstep}{80}
\foreach \l in {1,...,\L}
{
\pgfmathtruncatemacro{\delI}{\msize*2^(-\l)}
\pgfmathtruncatemacro{\DelI}{2*\delI}
\pgfmathtruncatemacro{\colorl}{75}
\pgfmathtruncatemacro{\maxi}{2^(\l-1)}
\foreach \i in {1,...,\maxi}
{
\pgfmathtruncatemacro{\a}{(\i-1)*\DelI}
\pgfmathtruncatemacro{\b}{(\i-1)*\DelI+\delI}
\pgfmathtruncatemacro{\c}{\i*\DelI}
\filldraw[fill=teal] (\a, \b) rectangle (\b, \c);
\filldraw[fill=teal] (\b, \a) rectangle (\c, \b);
}
}
\pgfmathtruncatemacro{\delI}{\msize*2^(-\L)}
\pgfmathtruncatemacro{\DelI}{2*\delI}
\pgfmathtruncatemacro{\colorl}{20+(\L-1)*\colstep}
\pgfmathtruncatemacro{\maxi}{2^(\L-1)}
\foreach \i in {1,...,\maxi}
{
\pgfmathtruncatemacro{\a}{(\i-1)*\DelI}
\pgfmathtruncatemacro{\b}{(\i-1)*\DelI+\delI}
\pgfmathtruncatemacro{\c}{\i*\DelI}
\filldraw[fill=violet] (\a, \a) rectangle (\b, \b);
\filldraw[fill=violet] (\b, \b) rectangle (\c, \c);
}
\end{scope}
\end{scope}
\end{scope}
\end{scope}
\end{tikzpicture}
\qquad
\begin{tikzpicture}
\begin{scope}[yscale=0.2, xscale=0.2]
\pgfmathtruncatemacro{\L}{2}
\pgfmathtruncatemacro{\scalefac}{16/(2^\L)}
\pgfmathtruncatemacro{\shiftval}{2^(\L-1)}
\begin{scope}[yscale=-1, xscale=1]
\begin{scope}[yshift=-\shiftval, xshift=-\shiftval]
\begin{scope}[yscale=\scalefac, xscale=\scalefac]
\pgfmathtruncatemacro{\msize}{2^\L}
\pgfmathtruncatemacro{\colstep}{40/(\L-1)}
\foreach \l in {1,...,\L}
{
\pgfmathtruncatemacro{\delI}{\msize*2^(-\l)}
\pgfmathtruncatemacro{\DelI}{2*\delI}
\pgfmathtruncatemacro{\colorl}{75}
\pgfmathtruncatemacro{\maxi}{2^(\l-1)}
\foreach \i in {1,...,\maxi}
{
\pgfmathtruncatemacro{\a}{(\i-1)*\DelI}
\pgfmathtruncatemacro{\b}{(\i-1)*\DelI+\delI}
\pgfmathtruncatemacro{\c}{\i*\DelI}
\ifnum\l=1
\filldraw[fill=teal] (\a, \b) rectangle (\b, \c);
\filldraw[fill=teal] (\b, \a) rectangle (\c, \b);
\else
\ifnum\l=2
\filldraw[fill=olive] (\a, \b) rectangle (\b, \c);
\filldraw[fill=olive] (\b, \a) rectangle (\c, \b);
\else
\filldraw[fill=magenta] (\a, \b) rectangle (\b, \c);
\filldraw[fill=magenta] (\b, \a) rectangle (\c, \b);
\fi
\fi
}
}
\pgfmathtruncatemacro{\delI}{\msize*2^(-\L)}
\pgfmathtruncatemacro{\DelI}{2*\delI}
\pgfmathtruncatemacro{\colorl}{20+(\L-1)*\colstep}
\pgfmathtruncatemacro{\maxi}{2^(\L-1)}
\foreach \i in {1,...,\maxi}
{
\pgfmathtruncatemacro{\a}{(\i-1)*\DelI}
\pgfmathtruncatemacro{\b}{(\i-1)*\DelI+\delI}
\pgfmathtruncatemacro{\c}{\i*\DelI}
\filldraw[fill=violet] (\a, \a) rectangle (\b, \b);
\filldraw[fill=violet] (\b, \b) rectangle (\c, \c);
}
\end{scope}
\end{scope}
\end{scope}
\end{scope}
\end{tikzpicture}
\qquad
\begin{tikzpicture}
\begin{scope}[yscale=0.2, xscale=0.2]
\pgfmathtruncatemacro{\L}{3}
\pgfmathtruncatemacro{\scalefac}{16/(2^\L)}
\pgfmathtruncatemacro{\shiftval}{2^(\L-1)}
\begin{scope}[yscale=-1, xscale=1]
\begin{scope}[yshift=-\shiftval, xshift=-\shiftval]
\begin{scope}[yscale=\scalefac, xscale=\scalefac]
\pgfmathtruncatemacro{\msize}{2^\L}
\pgfmathtruncatemacro{\colstep}{40/(\L-1)}
\foreach \l in {1,...,\L}
{
\pgfmathtruncatemacro{\delI}{\msize*2^(-\l)}
\pgfmathtruncatemacro{\DelI}{2*\delI}
\pgfmathtruncatemacro{\colorl}{75}
\pgfmathtruncatemacro{\maxi}{2^(\l-1)}
\foreach \i in {1,...,\maxi}
{
\pgfmathtruncatemacro{\a}{(\i-1)*\DelI}
\pgfmathtruncatemacro{\b}{(\i-1)*\DelI+\delI}
\pgfmathtruncatemacro{\c}{\i*\DelI}
\ifnum\l=1
\filldraw[fill=teal] (\a, \b) rectangle (\b, \c);
\filldraw[fill=teal] (\b, \a) rectangle (\c, \b);
\else
\ifnum\l=2
\filldraw[fill=olive] (\a, \b) rectangle (\b, \c);
\filldraw[fill=olive] (\b, \a) rectangle (\c, \b);
\else
\filldraw[fill=gray] (\a, \b) rectangle (\b, \c);
\filldraw[fill=gray] (\b, \a) rectangle (\c, \b);
\fi
\fi
}
}
\pgfmathtruncatemacro{\delI}{\msize*2^(-\L)}
\pgfmathtruncatemacro{\DelI}{2*\delI}
\pgfmathtruncatemacro{\colorl}{20+(\L-1)*\colstep}
\pgfmathtruncatemacro{\maxi}{2^(\L-1)}
\foreach \i in {1,...,\maxi}
{
\pgfmathtruncatemacro{\a}{(\i-1)*\DelI}
\pgfmathtruncatemacro{\b}{(\i-1)*\DelI+\delI}
\pgfmathtruncatemacro{\c}{\i*\DelI}
\filldraw[fill=violet] (\a, \a) rectangle (\b, \b);
\filldraw[fill=violet] (\b, \b) rectangle (\c, \c);
}
\end{scope}
\end{scope}
\end{scope}
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Rank-structure of a
matrix $\bm{A}$ with hierarchical depths $L=1$ (left), $L=2$ (middle)
and $L=3$ (right). Off-diagonal blocks are assumed to be low-rank.}
\label{fig:hmatrixpartitioningstructure}
\end{figure}
Such matrices are referred to as data-sparse since
the low-rank blocks allow them to be represented computationally with fewer than $\mathcal{O}\left(N^{2}\right)$ floating point
numbers. In particular, the storage of a HODLR matrix is
$\mathcal{O}\left(N\,\log\,N\right)$,
$\mathcal{O}(N\,\log\,N)$ flops are needed
to compute a HODLR matrix-vector product~\cite{martinsson2011fast},
and $\mathcal{O}(N\,\log^{2}\,N)$ flops are required for
direct methods to compute an inverse HODLR matrix-vector product~\cite{ambikasarandarve2013}, as well as square root and inverse square root matrix-vector products~\cite{ambikasaranoneil2014}.
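The following toy Python sketch illustrates the HODLR representation and the recursive $\mathcal{O}(N\,\log\,N)$ matrix-vector product; for simplicity, the off-diagonal factors are obtained here from dense truncated SVDs of a small kernel matrix with decaying interactions, whereas in the setting of this paper they are constructed matrix-free with the randomized procedure described next.
\begin{verbatim}
import numpy as np

class HODLR:
    """Toy symmetric HODLR representation built from a dense matrix.

    Each node stores either a dense diagonal leaf block, or two children
    plus a rank-r factorization U V^T of the upper off-diagonal block
    (the lower block is its transpose by symmetry).
    """
    def __init__(self, A, leaf_size=64, rank=8):
        n = A.shape[0]
        self.n1 = n // 2
        if n <= leaf_size:
            self.dense, self.children = A.copy(), None
            return
        self.dense = None
        n1 = self.n1
        U, s, Vt = np.linalg.svd(A[:n1, n1:], full_matrices=False)
        self.U = U[:, :rank] * s[:rank]          # A12 ~= U V^T
        self.V = Vt[:rank, :].T
        self.children = (HODLR(A[:n1, :n1], leaf_size, rank),
                         HODLR(A[n1:, n1:], leaf_size, rank))

    def matvec(self, x):
        if self.dense is not None:
            return self.dense @ x
        x1, x2 = x[:self.n1], x[self.n1:]
        y1 = self.children[0].matvec(x1) + self.U @ (self.V.T @ x2)
        y2 = self.children[1].matvec(x2) + self.V @ (self.U.T @ x1)
        return np.concatenate([y1, y2])

# Illustrative check on a kernel matrix with decaying off-diagonal interactions
pts = np.sort(np.random.default_rng(0).uniform(size=512))
A = np.exp(-30.0 * np.abs(pts[:, None] - pts[None, :]))
H = HODLR(A, leaf_size=64, rank=8)
x = np.random.default_rng(1).standard_normal(512)
print(np.linalg.norm(A @ x - H.matvec(x)) / np.linalg.norm(A @ x))
\end{verbatim}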
\paragraph{Compression}
We aim to generate HODLR approximations
of data-misfit Hessians in inverse problems. For large-scale problems, the data-misfit Hessian is available only as a matrix-free operator. In order to construct HODLR approximations of symmetric matrix-free operators,
we employ previously developed randomized linear algebraic routines
which only require the matrix-free action on a limited number of random vectors
with specified null entries, referred to
as \textit{structured} random vectors.
The Hessian action on these structured random vectors
is used to sample row and column spaces of off-diagonal Hessian submatrices
and allows for randomized approximate truncated singular value decompositions of the aforementioned off-diagonal submatrices. More details can be found in the appendix (see Algorithm~\ref{alg:HODLRcompression}).
For the results that we present in Section~\ref{sec:2DResults},
a rank-adaptive, symmetric, matrix-free hierarchical compression algorithm~\cite{halkomartinsson2011, xixiachan2014}
is utilized, which is based on~\cite{martinsson2016}.
A similar algorithm is presented in~\cite{keyesturkiyyah2019},
wherein the hierarchical partitioning
is more general and the low-rank blocks have nested bases.
The rank-adaptivity provides a high-probability means of resolving
the off-diagonal blocks to a desired level
of accuracy. By utilizing available matrix-vector product information and the Rayleigh quotient, a rank-adaptive relative-tolerance algorithm is made possible.
\paragraph{Computational Cost of Generating HODLR Approximations}
The number of matrix-vector products $\zeta$ needed to compress a symmetric
matrix, using $d$ oversampling vectors,
into a level-$L$ HODLR matrix with off-diagonal ranks
$\lbrace r_{\ell}\rbrace_{\ell=1}^{L}$ is given by
\begin{equation}
\zeta=2\left(\langle r\rangle +d\right)L+N/2^{L},
\text{ where }\langle r\rangle:=\frac{1}{L}\sum_{\ell=1}^{L}r_{\ell}.
\label{eq:HODLRcost}
\end{equation}
Equation~\ref{eq:HODLRcost} can be understood from Algorithm~\ref{alg:HODLRcompression} in Appendix~\ref{subsec:randomizedcompressionalgorithms},
as for each level $\ell$ one needs $r_{\ell}+d$ Hessian vector products in order
to compute $\bm{Y}$ (line~$7$ of Algorithm~\ref{alg:HODLRcompression}) and $r_{\ell}+d$ Hessian vector products to compute
$\bm{Z}$ (line~$14$ of Algorithm~\ref{alg:HODLRcompression}). The remaining $N/2^{L}$ Hessian vector products arise from the need to determine the diagonal subblocks, which is detailed in~\cite{martinsson2011fast}. We note that with an adaptive procedure to determine an
approximate basis $\bm{Q}$ for a
block matrix column space, such as that in~\cite{xixiachan2014}, the cost is reduced to
$\zeta_{\text{adaptive}}=2\left(\langle r\rangle+d/2\right)L+N/2^{L}$,
but at the additional computational burden of extra orthogonalization routine calls. We note that $\zeta=\mathcal{O}(\log\,N)$
matrix-vector products are needed to generate an HODLR approximation of a matrix with HODLR structure. Since the number of matrix-vector products needed to generate a rank-$r$ compression by the single-pass algorithm~\cite{martinsson2016} with $d$ oversampling vectors, $\zeta^{\text{LR}}=r+d$, is independent of the problem size, HODLR compression is not expected to remain more computationally efficient than global low-rank (LR) compression for sufficiently large problems. However, for the problems of substantial size considered here, we observe that the HODLR format does offer computational savings (see Section~\ref{sec:HumboldtGreenland}).
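As an illustration of Equation~\ref{eq:HODLRcost}, the following short Python sketch (with hypothetical rank and oversampling values chosen purely for illustration) evaluates $\zeta$ and $\zeta^{\text{LR}}$ so that the two compression strategies can be compared for a given set of ranks.
\begin{verbatim}
# Minimal sketch (illustration only): evaluate the matrix-vector product
# counts of Equation (eq:HODLRcost) and of global low-rank compression,
# zeta_LR = r + d.
def zeta_hodlr(N, ranks, d):
    L = len(ranks)
    r_mean = sum(ranks) / L
    return 2 * (r_mean + d) * L + N / 2 ** L

def zeta_lr(r, d):
    return r + d

# Hypothetical numbers, chosen to illustrate a regime in which the
# global numerical rank is large while the off-diagonal ranks are modest.
print(zeta_hodlr(N=512, ranks=[20, 15, 10], d=10), zeta_lr(r=400, d=10))
\end{verbatim}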
\section{HODLR matrices in inverse problems governed by PDEs}
\label{sec:HODLRapplication}
Here, we illustrate why data-misfit Hessians in inverse problems
governed by PDEs may contain numerically low-rank off-diagonal
blocks, describe how one can permute parameters to expose
this HODLR structure, and show how HODLR approximations can be
leveraged to draw samples from Gaussian approximations of Bayesian
posterior distributions.
\subsection{Motivation}
\label{subsec:motivation}
Consider the following data-misfit cost functional
\begin{equation*}
J_{\text{misfit}}\left(\beta\right):=\frac{1}{2}\|\bm{\mathcal{F}}(\beta)-\bm{d}\|
_{\bm{\Gamma}_{\text{noise}}^{-1}}^{2},\quad \text{with}\quad \bm{\mathcal{F}}(\beta)=\bm{\mathcal{B}}u,
\end{equation*}
where $\bm{\mathcal{B}}$ linearly maps the PDE solution $u=u(\beta)$,
for the spatially distributed parameter field $\beta$, to the model
predictions associated with the data $\bm{d}$. Moreover,
$\bm{\Gamma}_{\text{noise}}$ is the covariance matrix describing the Gaussian
noise of the observational data. For illustration purposes, we assume
that the parameter function $\beta$ is defined on a region $\Gamma_{1}$ and the data $\bm{d}$ are
observed on a region $\Gamma_{2}$, which may or may not be distinct.
These quantities are related through the solution of the governing PDE
and the measurement operator $\bm{\mathcal B}$. The
characteristics of this relation depend on properties of the
governing PDE. In
the following, we assume that a spatially (or temporally) localized
perturbation in the $\beta$ field leads to a predominantly localized
effect in the PDE solution $u$, and thus in the model predictions
$\bm{\mathcal{B}}u$. This property is illustrated in Figure~\ref{fig:sensitivitycone}, where we use a sensitivity cone to
illustrate the influence of a local perturbation in
$\beta$, defined over $\Gamma_1$, on the PDE solution $u$ in
$\Gamma_2$. It is well known that for an elliptic PDE, local
perturbations influence the solution globally, but depending on the
geometry of the domain and the equation, this global effect may
rapidly decay outside a subset of $\Gamma_2$ that captures the main
effects of the perturbation. For instance, in a problem as in Figure~\ref{fig:sensitivitycone}, the influence of perturbations in $\beta$
on $u$ is likely to become more localized when the distance between
$\Gamma_1$ and $\Gamma_2$ decreases.
\begin{figure}[tb]
\begin{center}
\begin{tikzpicture}
\draw[red,dashed] (7,.5) -- (3,.5) -- (1,0);
\draw[dashed] (3,.5) -- (3,2);
\draw[blue,dashed] (3,2) -- (3,2.5);
\fill[red!90,nearly transparent] (5,0) -- (7,.5) -- (3,.5) -- (1,0) -- cycle;
\draw [fill=red!40!white,opacity=1] (3,2.25) -- (3.5,2.25) -- (3.28,.25) -- (3.22,0.25) -- cycle;
\draw (3.25,0.25) -- (2.25, -0.5);
\draw [thick] (2.25, -0.5) node[below]{perturbation $\psi_{i}$};
\draw [fill=red] (3.25,2.25) circle (.25cm and 0.07cm);
\draw [fill=red] (3.25,0.25) circle (.025cm and 0.007cm);
\draw (3.25,2.25) -- (2,3);
\draw [thick] (2,3) node[above]{sensitivity cone, $\frac{\delta u}{\delta \beta}(\beta)(\psi_{i})$};
\draw (1,0) -- (5,0) -- (5,2) -- (1,2) -- (1,0);
\draw (5,2) -- (7,2.5) -- (3,2.5) -- (1,2);
\draw (7,2.5) -- (7,.5) -- (5,0);
\draw (5,2.25) -- (6,3) ;
\draw [thick] (6,3) node[above]{$\Gamma_{2}$};
\draw (4.75,.125) -- (5.75,-.5) ;
\draw [thick] (5.75,-.5) node[below]{$\Gamma_{1}$};
\fill[blue!90,nearly transparent] (5,2) -- (7,2.5) -- (3,2.5) -- (1,2) -- cycle;
\end{tikzpicture}
\end{center}
\caption{Sketch illustrating a case where the influence of
changes in
the parameter $\beta$ on the PDE solution $u$ in
$\Gamma_{2}$ is focused in a small area. To
illustrate this, we show a sensitivity cone,
i.e., the PDE
solution $u$ is predominantly impacted in a
cone about the support of the localized
parameter perturbation.}
\label{fig:sensitivitycone}
\end{figure}
We next discuss the relationship between properties of the PDE as
discussed above and off-diagonal blocks in the Hessian matrix (or its
Gauss-Newton variant). The data-misfit Hessian, i.e., the Hessian of
the data-misfit part of the cost functional, can be derived using the
adjoint method~\cite{borzi2011, gunzburger2002, petrasachs2021}. However, we find
that the HODLR structure of the data-misfit Hessian is most easily
seen by studying a formal expression of it in terms of the first and
second order sensitivities $\delta u/\delta\beta$,
$\delta^{2}u/\delta\beta^{2}$
\begin{eqnarray*}
\frac{\delta^{2}}{\delta\beta^{2}}J_{\text{misfit}}\left(\beta\right)\left(\beta_{1},\beta_{2}\right)
&=&
\left(\bm{\mathcal{B}}u-\bm{d}\right)^{\top}
\bm{\Gamma}_{\text{noise}}^{-1}
\left(
\bm{\mathcal{B}}
\frac{\delta^{2}u}{\delta \beta^{2}}\left(\beta\right)\left(\beta_{1},\beta_{2}\right)\right)\,
+ \\
&\phantom{e}&
\left(
\bm{\mathcal{B}}
\frac{\delta u}{\delta \beta}\left(\beta\right)\left(\beta_{1}\right)
\right)^{\top}\,
\bm{\Gamma}_{\text{noise}}^{-1}
\left(
\bm{\mathcal{B}}
\frac{\delta u}{\delta \beta}\left(\beta\right)\left(\beta_{2}\right)
\right),
\end{eqnarray*}
where $\delta u/\delta \beta\left(\beta\right)\left(\beta_{1}\right)$
is the first variation~\cite{gelfand1963} of $u$ with respect to
$\beta$ in direction $\beta_{1}$, and $\delta^{2}u/\delta
\beta^{2}\left(\beta\right)\left(\beta_{1},\beta_{2}\right)$ is the
second variation of $u$ with respect to $\beta$ in directions
$\beta_{1},\beta_{2}$, that is,
\begin{eqnarray*}
\frac{\delta u}{\delta\beta}\left(\beta\right)\left(\beta_{1}\right):=
\left[\frac{\mathrm{d}}{\mathrm{d}\epsilon}
u\left(\beta+\epsilon\beta_{1}\right)\right]_{\epsilon=0}, \\
\frac{\delta^{2}u}{\delta\beta^{2}}\left(\beta\right)\left(\beta_{1},\beta_{2}\right):=
\left[\frac{\mathrm{d}}{\mathrm{d}\epsilon}
\frac{\delta u}{\delta\beta}\left(\beta+\epsilon\beta_{2}\right)\left(\beta_{1}\right)\right]_{\epsilon=0}.
\end{eqnarray*}
Upon discretizing $\beta$ with finite elements we obtain the following
formal expression for the $(i,j)$-entry of the data-misfit Hessian
$\bm{H}_{\text{misfit}}$ and of the Gauss-Newton
data-misfit Hessian $\bm{H}_{\text{misfit}}^{\text{GN}}$
\begin{eqnarray}
\label{eq:Hmisfitelements}
\left(\bm{H}_{\text{misfit}}\right)_{i,j}&=&
\frac{\delta^{2}}{\delta \beta^{2}}\Big(J_{\text{misfit}}\left(\beta\right)\Big)
\left(\psi_{i},\psi_{j}\right),\\
\left(\bm{H}_{\text{misfit}}^{\text{GN}}\right)_{i,j}&=&
\left(
\bm{\mathcal{B}}
\frac{\delta u}{\delta \beta}\left(\beta\right)\left(\psi_{i}\right)
\right)^{\top} \,
\bm{\Gamma}_{\text{noise}}^{-1}
\left(
\bm{\mathcal{B}}
\frac{\delta u}{\delta \beta}\left(\beta\right)\left(\psi_{j}\right)
\right),
\end{eqnarray}
where $\lbrace \psi_{j} \rbrace_{j=1}^{N}$ is a basis for the nodal
finite-element space, which is used to approximate~$\beta$.
When sensitivities are predominantly local as discussed above and when the supports of two finite
element basis functions $\psi_{i}$ and $\psi_{j}$ are well separated, the terms
\begin{equation*}
\left(
\bm{\mathcal{B}}
\frac{\delta u}{\delta \beta}\left(\beta\right)\left(\psi_{i}\right)
\right)^{\top}\,
\bm{\Gamma}_{\text{noise}}^{-1}
\Big(\bm{\mathcal{B}}\frac{\delta u}{\delta \beta}\left(\beta\right)\left(\psi_{j}\right) \Big)
\quad\text{and}\quad
\bm{\mathcal{B}}\Big(\frac{\delta^{2}u}{\delta \beta^{2}}\left(\beta\right)\left(\psi_{i},\psi_{j}\right)\Big),
\end{equation*}
are rather small (assuming diagonally dominant noise covariance
matrices). This is, e.g., due to $\bm{\mathcal{B}}\delta
u/\delta\beta(\beta)(\psi_{i})$ having small values when
$\bm{\mathcal{B}}\delta u/\delta \beta(\beta)(\psi_{j})$ is
large. Now, let $\mathcal{I},\mathcal{J}$ be disjoint index
subsets of $\lbrace 1,2,\dots,N\rbrace$, then the entries in the matrix
block $\lbrace \left(\bm{H}_{\text{misfit}}\right)_{i\in \mathcal
I,j\in \mathcal J}\rbrace$ of the data-misfit Hessian are relatively
small whenever
$\cup_{i\in\mathcal{I}}\text{supp}\left(\psi_{i}\right)$ and
$\cup_{j\in\mathcal{J}}\text{supp}\left(\psi_{j}\right)$ are well
separated. Such Hessian blocks are well suited for
approximation by low-rank matrices. When the degrees of freedom
corresponding to the finite element basis functions $\psi_i$
are ordered such that
$\mathcal{I},\mathcal{J}$ are contiguous,
$\left(\bm{H}_{\text{misfit}}\right)_{\mathcal{I},\mathcal{J}}$ is an
off-diagonal subblock of $\bm{H}_{\text{misfit}}$ and
$\bm{H}_{\text{misfit}}$ tends to have HODLR structure as defined in
Section~\ref{subsec:HODLRdef}. The Gauss-Newton data-misfit Hessian
may have HODLR structure for the same reasons. In both cases, the
ordering of the basis functions, and thus of the degrees of freedom, influences
this structure. Ideally, one wants an ordering that maintains locality,
i.e., consecutive indices correspond to basis functions that are close
to each other and, as a consequence, basis functions with significantly
different indices are far from each other, such that the corresponding
off-diagonal blocks have small entries and can be well approximated
using a low-rank matrix approximation. We defer a discussion of methods and numerical
experiments regarding the ordering of the degrees of freedom to Section~\ref{subsec:dofordering}.
\subsection{Application of HODLR structure for fast sampling of
Gaussian posterior approximations}
\label{subsec:HODLRGaussianizedPosterior}
In \cite{petramartinetal2014},
the following expressions for the
Gaussianized posterior covariance are provided,
\begin{eqnarray*}
\bm{\Gamma}_{\text{post}}&=&
\left(\bm{H}_{\text{misfit}}+
\bm{\Gamma}_{\text{prior}}^{-1}\right)^{-1} =
\bm{\Gamma}_{\text{prior}}^{1/2}
\left(
\bm{H}_{\text{misfit}}^{\prime}
+\bm{I}
\right)^{-1}
\bm{\Gamma}_{\text{prior}}^{\top/2}, \\
\bm{H}_{\text{misfit}}^{\prime}&:=&
\bm{\Gamma}_{\text{prior}}^{\top/2}
\bm{H}_{\text{misfit}}
\bm{\Gamma}_{\text{prior}}^{1/2}, \\
\bm{\Gamma}_{\text{post}}^{1/2}
&=&\bm{\Gamma}_{\text{prior}}^{1/2}
\left(
\bm{H}_{\text{misfit}}^{\prime}
+\bm{I}
\right)^{-1/2},
\end{eqnarray*}
where the matrix square-root $\bm{A}^{1/2}$ is such that
$\bm{A}=\bm{A}^{1/2}\left(\bm{A}^{1/2}\right)^{\top}$. For Bayesian
inverse problems with a parameter field that is distributed spatially
over a bounded subset of $\mathbb{R}^{m}$, $m=2,3$, a reasonable
choice is to use the square of an inverse elliptic PDE operator for
the prior covariance~\cite{stuart2010}, which permits a means of
obtaining a symmetric square root of
$\bm{\Gamma}_{\text{prior}}$. In previous works such as~\cite{isaacpetraetal2014, SpantiniSolonenCuiEtAl15, flath2011, buighattasetal2013, saibaba2015},
the prior-preconditioned data-misfit Hessian
$\bm{H}_{\text{misfit}}^{\prime}$ was approximated by global low-rank
compression. This strategy provides an efficient means of approximating the posterior covariance matrix in inverse problems with data sets that contain sufficiently small amounts of information.
Here, we propose to exploit HODLR problem structure and to generate approximate posterior covariance matrices using an HODLR approximation $\bm{\tilde{H}}_{\text{misfit}}^{\prime}$ of the prior-preconditioned data-misfit Hessian;
see Appendix~\ref{subsec:posteriorerroranalysis} for an analysis of
how such an approximation impacts the accuracy of the approximate
posterior covariance
\begin{equation*}
\bm{\tilde{\Gamma}}_{\text{post}}=\bm{\Gamma}_{\text{prior}}^{1/2}
\left(
\bm{\tilde{H}}_{\text{misfit}}^{\prime}
+\bm{I}
\right)^{-1}
\bm{\Gamma}_{\text{prior}}^{\top/2}.
\end{equation*}
A symmetric square-root factorization
of $\bm{\tilde{H}}_{\text{misfit}}^{\prime}+\bm{I}$ is then generated
with $\mathcal{O}\left(N\,\log^{2}\,N\right)$
flops~\cite{ambikasaranoneil2014}.
The symmetric factorization allows for
an $\mathcal{O}\left(N\,\log\,N\right)$
means of applying both the square root
and the inverse square root.
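The following is a minimal dense Python sketch of this sampling procedure (the names are hypothetical and the dense eigendecomposition is a stand-in; in a large-scale implementation the inverse square root of $\bm{\tilde{H}}_{\text{misfit}}^{\prime}+\bm{I}$ is applied through its HODLR factorization at $\mathcal{O}\left(N\,\log\,N\right)$ cost).
\begin{verbatim}
# Minimal sketch (dense stand-in for the matrix-free/HODLR operators):
# draw samples beta_MAP + Gamma_post^{1/2} z, z ~ N(0, I), with
# Gamma_post^{1/2} = Gamma_prior^{1/2} (H'_misfit + I)^{-1/2}.
import numpy as np

def sample_gaussianized_posterior(beta_map, prior_sqrt, H_prime, n_samples,
                                  rng=None):
    rng = np.random.default_rng() if rng is None else rng
    N = H_prime.shape[0]
    # Dense symmetric inverse square root; an HODLR factorization would
    # replace this step in a large-scale implementation.
    w, V = np.linalg.eigh(H_prime + np.eye(N))
    inv_sqrt = (V * w ** -0.5) @ V.T
    Z = rng.standard_normal((N, n_samples))
    return beta_map[:, None] + prior_sqrt @ (inv_sqrt @ Z)
\end{verbatim}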
\section{Bayesian inverse ice sheet problems}
The simulation of the dynamics of ice sheets (e.g., the Greenland or
Antarctic ice sheets) is an important component of coupled climate
simulations. Such simulations require estimation of a
present state of the ice that is consistent with available
observations, a process sometimes referred to as model
initialization. This estimation problem can be formulated either as a
deterministic inverse problem (i.e., as nonlinear least squares
optimization governed by PDEs) or as a Bayesian inverse problem (i.e., as a statistical
problem which aims to characterize a distribution of states). The
latter approach, while more expensive, provides uncertainty estimates
in addition to determining a best parameter fit.
Ice sheet dynamics~\cite{cuffeypatterson2010} is typically governed by nonlinear Stokes equations
or simplifications thereof, such as the first-order equations
(see e.g.,~\cite{dukowicz2010}). Generally, the most uncertain component in
ice sheet simulations is the basal boundary condition, i.e., how the
ice sheet interacts with the rock, sand, water or a mix thereof at its
base. Estimating an ice sheet's effective boundary condition from
velocity observations on the top surface, the ice sheet's geometry and
a model for its dynamics is thus an important problem that can be
mathematically formulated as an inverse problem~\cite{isaacpetraetal2014,
larouretal2012,
morlighem2010,
peregopricestadler2014,
petrazhustadleretal2012}.
We summarize the formulation of this inverse problem next. As is common
in the literature, we use a \emph{snapshot} optimization approach, where all the data are assumed to be collected over a short period of time during which changes in the ice geometry are negligible. We denote
the bounded domain covered by ice by $\Omega\subset\mathbb{R}^{m}$, $m\in\lbrace 2,3\rbrace$,
and the basal, lateral and top parts of the domain boundary
$\partial\Omega$ by $\Gamma_{b}$, $\Gamma_{l}$, and $\Gamma_{t}$, as illustrated in Figure~\ref{fig:schematic}.
The governing equations are the nonlinear incompressible Stokes
equations, whose solution consists of the ice flow velocity $\bm{u}:\Omega\to\mathbb{R}^m$ and the pressure
$p:\Omega\to\mathbb{R}$, given as follows:
\begin{eqnarray}
-\nabla\cdot\bm{\sigma}_{\bm{u}}
=\rho\bm{g}\,\,\,\text{ in }\Omega, \label{Stokeseqn:1}\\
\phantom{-}\nabla\cdot\bm{u}
=0\,\,\,\,\,\,\,\,\,\,\text{ in }\Omega, \label{Stokeseqn:2}\\
\phantom{-}\bm{\sigma}_{\bm{u}}\bm{n}
=\bm{0}\,\,\,\,\,\,\,\,\,\,\text{ on }\Gamma_{t}, \label{Stokeseqn:3}\\
\phantom{-}\bm{u}\cdot\bm{n} =0 \text{ and } \bm{T}\left(\bm{\sigma}_{\bm{u}}\bm{n}+\exp\left(\beta\right)\bm{u}\right)
=\bm{0}\,\,\,\,\,\,\text{ on }\Gamma_{b},\label{Stokeseqn:4}
\end{eqnarray}
along with additional lateral boundary conditions. Here, $\beta$ is a
basal sliding parameter field, $\rho\bm{g}$ the body force density,
where $\rho$ is the mass density of the ice and $\bm{g}$ the
acceleration due to gravity. Equation~\ref{Stokeseqn:1}
describes the conservation of momentum, Equation~\ref{Stokeseqn:2} the
conservation of mass, and Equation~\ref{Stokeseqn:3} states stress-free boundary
conditions for the top surface (the ice-air interface). In the normal
direction, Equation~\ref{Stokeseqn:4} states a non-penetration condition,
i.e., the ice cannot flow into the rock/sand layer which supports it (here
$\bm{n}$ denotes the outward unit normal to the boundary $\partial\Omega$ and $\bm{T}$ the tangential projection operator, $\bm{Tv} = \bm{v}-\bm{n}(\bm{n}^{\top}\bm{v})$). In the tangential
direction, Equation \ref{Stokeseqn:4} specifies a sliding
condition that relates tangential sliding and
tangential stress through the (logarithmic) basal sliding field
$\beta=\beta(x)$, $x\in \Gamma_b$. We employ Glen's flow law~\cite{glen1955}, a constitutive law for ice that relates the stress
tensor $\bm{\sigma}_{\bm{u}}$ and the strain rate tensor
$\bm{\dot{\varepsilon}}_{\bm{u}}= \frac{1}{2}\left(
\bm{\nabla}\bm{u}+\bm{\nabla}\bm{u}^{\top} \right)$,
\begin{equation}
\bm{\sigma}_{\bm{u}}= 2\eta\left(\bm{u}\right)
\bm{\dot{\varepsilon}}_{\bm{u}} -\bm{I}p, \text{ with } \eta\left(\bm{u}\right) =
\frac{1}{2}A^{-1/n}\bm{\dot{\varepsilon}}_{\text{II}}^{\frac{1-n}{2n}},
\end{equation}
where $\eta$ is the effective viscosity, $\bm{I}$ is the unit matrix,
$\bm{\dot{\varepsilon}}_{\text{II}} =
\text{tr}\left(\bm{\dot{\varepsilon}}_{\bm{u}}^{2}\right)$ is the
second invariant of the strain rate tensor, $A$ is a flow rate factor,
and $n$ is Glen's exponent. Ice is typically modeled using $n\approx
3$, which corresponds to a shear-thinning constitutive relation; here we use $n=3$.
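To make the constitutive relation concrete, the following minimal Python sketch (not taken from MALI or Albany; the flow rate factor, the velocity gradient, and the small regularization parameter are hypothetical) evaluates the effective viscosity for a given velocity gradient.
\begin{verbatim}
# Minimal sketch (not from MALI/Albany): effective viscosity of Glen's
# flow law, eta = 0.5 * A^{-1/n} * epsII^{(1-n)/(2n)}, where epsII is the
# second invariant of the strain rate tensor.
import numpy as np

def effective_viscosity(grad_u, A=1.0e-16, n=3.0, eps_reg=1.0e-10):
    strain_rate = 0.5 * (grad_u + grad_u.T)
    epsII = np.trace(strain_rate @ strain_rate)
    # eps_reg regularizes the viscosity where the strain rate vanishes
    return (0.5 * A ** (-1.0 / n)
            * (epsII + eps_reg) ** ((1.0 - n) / (2.0 * n)))

grad_u = np.array([[1.0e-3, 2.0e-4],
                   [5.0e-5, -1.0e-3]])   # hypothetical velocity gradient
print(effective_viscosity(grad_u))
\end{verbatim}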
As discussed above, the parameter containing the largest uncertainty
is the (logarithmic) basal sliding field $\beta=\beta(x)$. Thus, it is
usually the parameter inferred from (typically, satellite)
observation data $\bm{d}$, here in the form of surface velocity measurements. Using an
appropriate point observation operator $\bm{ \mathcal B}$ that extracts point
data from the solution $\bm{u}$ of the governing equations~\ref{Stokeseqn:1}-\ref{Stokeseqn:4}, and assuming additive observation errors $\bm{\eta}$,
the relationship between model and data is now of the typical
form
\begin{equation}
\bm d = \bm{\mathcal B}\bm u + \bm \eta.
\end{equation}
Assuming that the observation errors $\bm \eta$ and the prior
for the parameter field $\beta$ follow Gaussian distributions, we are in the
framework of Bayesian inverse problems summarized in
Section~\ref{sec:Bayes}.
\section{Example I: Two-dimensional ISMIP-HOM benchmark}
\label{sec:2DResults}
We first study the prospects of compressing the Gauss-Newton data-misfit Hessian in a
problem inspired by the ISMIP-HOM collection of ice sheet simulation
benchmark problems~\cite{pattynetal2008}. This problem set
was used to explore inverse ice sheet problems
in~\cite{peregopricestadler2014, petrazhustadleretal2012}.
After a short description of the problem setup, we present results such as
the MAP point estimate $\bm{\beta}^{\star}$ and approximate Gaussianized posterior samples using an HODLR compression of the posterior covariance. Then, we study the impact that various
problem features have on the suitability of the Gauss-Newton data-misfit Hessian for compression
to the HODLR and global low-rank formats.
\subsection{Problem setup}
This problem setup consists of a rectangular piece of ice on a
slope, as sketched in Figure~\ref{fig:schematic}. This simple example allows
us to study the influence of the domain aspect ratio, the number of observations and the level of mesh refinement on the properties of the Gauss-Newton data-misfit Hessian matrix.
The domain has a width of
$W=10^{4} \left[\text{m}\right]$ and a height of
$H=10^{2}\left[\text{m}\right]$. Periodic boundary conditions are
employed along the lateral boundaries such that the setup models an
infinite slab of ice on a slope. The governing equations and
other boundary conditions are as discussed in
Equations~\ref{Stokeseqn:1}-\ref{Stokeseqn:4}.
The Stokes equations are discretized using Taylor-Hood finite elements
on a mesh of~$256\times 10$ rectangles, each subdivided into two
triangles, for the domain length $\left[0,W\right)$ and height
$\left[0,H\right]$. To compute a MAP estimate, we generate
synthetic surface velocity data using the
``true'' logarithmic basal sliding field,
$\beta_{\text{true}}\left(x\right):=
\log\left(1\,200+1\,100\sin\left(\frac{2\pi
x}{W}\right)\right)$. Given this basal sliding field, we solve
Equations~\ref{Stokeseqn:1}-\ref{Stokeseqn:4},
extract the tangential velocity component at $100$
uniformly distributed points on the top boundary $\Gamma_t$, and add
$1\%$ relative Gaussian noise to each data point, resulting in the
synthetic data $\bm{d}$.
It remains to define the prior distribution for the parameter field
$\beta$. The average value of $\beta_{\text{true}}$ is used as
constant prior mean $\overline{\beta}\left(x\right)= 6.73315 \approx
\frac{1}{W}\int_{0}^{W}\beta_{\text{true}}\left(s\right)\mathrm{d}s$. The
prior covariance matrix $\bm{\Gamma}_{\text{prior}}$ is a
discretization of the covariance PDE operator $\mathcal{C}:=\left(\delta
I-\gamma\Delta\right)^{-1}$, with $\gamma=6\times 10^{2}$ and $\delta
=2.4\times 10^{-3}$, with Robin boundary conditions~\cite{daonstadler2018}. These values are chosen in order to provide a
relatively large prior correlation length of
$10^{3}\left[\text{m}\right]$~\cite{lindgrenruelindstrom2011}.
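As a qualitative illustration of such a prior, the following minimal one-dimensional Python sketch (illustrative only: it uses a finite-difference Laplacian with periodic boundaries and ignores the finite-element mass weighting and the Robin boundary treatment) discretizes $\mathcal{C}=\left(\delta I-\gamma\Delta\right)^{-1}$ with the parameter values given above.
\begin{verbatim}
# Minimal 1D sketch (illustrative only; ignores mass weighting and the
# Robin boundary treatment used in the actual prior): discretize
# C = (delta*I - gamma*Laplacian)^{-1} on [0, W) with periodic boundaries.
import numpy as np

W, N = 1.0e4, 256
h = W / N
gamma, delta = 6.0e2, 2.4e-3

# periodic finite-difference Laplacian
lap = (-2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)) / h ** 2
lap[0, -1] = lap[-1, 0] = 1.0 / h ** 2

A = delta * np.eye(N) - gamma * lap
Gamma_prior = np.linalg.inv(A)

# correlation of the prior between x = 0 and the other grid points
corr = Gamma_prior[0, :] / Gamma_prior[0, 0]
\end{verbatim}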
Next, we summarize the computation of the MAP
point and the compression of the Gauss-Newton data-misfit Hessian matrix at the MAP point.
\pgfmathsetseed{2}
\begin{figure}[tb]
\centering
\begin{tikzpicture}
\draw[->,thick] (-1,-1/6*-1) -- (7,-1/6*7) node[pos=0.97, above] {${ x}$} node [pos=0.33, below] {$\Gamma_{b}$} ;
\draw[->,thick] (0,0) -- (1/6*3,3) node[pos=0.93, left] {${z}$};
\shade[bottom color=lightblue,nearly transparent] (0,0) -- (1/6*2,2) -- (1/6*2+6,2-1/6*6) -- (6,-1/6*6) -- (0,0);
\shade[bottom color=gray,shading angle=60,nearly transparent] (-1,-1/6*-1) -- (7,-1/6*7) -- (-1,-1/6*7)-- (-1,-1/6*-1);
\draw[thick] (0,0) -- (1/6*2,2) -- (1/6*2+6,2-1/6*6) -- (6,-1/6*6) -- (0,0);
\draw[dashed,thick] (6,-1/6*6) -- (3,-1/6*6);
\path[->,thick] (4,-4/6) edge [bend right] (4,-1);
\node at (3.7,-5/6) {$\theta$};
\draw [<->,thick] (-.5-.1,.5*1/6) -- (1/6*2-.5-.1,2+.5*1/6) node [midway, left] {$H$} node [midway, right] {$\Gamma_{l}$};
\node[right] at (6+1/6,0) {$\Gamma_{l}$};
\draw [<->,thick] (1/6*2+6+.5/6,2-1/6*6+6*.5/6+.1) -- (1/6*2+.5/6,2+6*.5/6+.1) node [midway, above] {$W$} node [midway, below] {$\Gamma_{t}$};
\foreach \x in {1,2,...,20}{
\pgfmathsetmacro{\y}{1/2+1/2*rand}
\draw [thick,blue]({1/6*2*\y+(1-\y)*(1/6*2+6)},{2*\y+(2-1/6*6)*(1-\y)}) circle [radius=0.05];
}
\end{tikzpicture}
\caption{Schematic of two-dimensional slab of ice used for
Example I in
Section~\ref{sec:2DResults}. The blue circles show
representative (random) measurement locations. The angle $\theta$ is the slope of the ice
slab.}\label{fig:schematic}
\end{figure}
\begin{figure}[tb]
\begin{center}
\begin{tabular}{rl}
\begin{tikzpicture}[baseline, trim axis left]
\begin{axis}[
grid=major,
width=7cm,
height=5cm,
xlabel = $x$,
xmin=0.0, xmax = 1.e4,
ymin = 4.0, ymax = 8.0,
ytick={4.0, 5.0, 6.0, 7.0, 8.0},
xtick={0.0, 2.5e3, 5.0e3, 7.5e3, 1.0e4},
legend style={font=\small, nodes=right},
legend pos=south west,
title=$\beta$
]
\addlegendentry{reconst.}
\addplot [color=red, line width = 1.5pt]
table {2DbetaReconstruction.dat};
\addlegendentry{truth}
\addplot [color=black, line width = 0.75pt]
table {./2DbetaTruth.dat};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}[baseline, trim axis right]
\begin{axis}[
grid=major,
width=7cm,
height=5cm,
xlabel = $x$,
xmin=0.0, xmax = 1.e4,
ymin = 0.5, ymax = 2.5,
ytick={0.5, 1.0, 1.5, 2.0, 2.5},
xtick={0.0, 2.5e3, 5.0e3, 7.5e3, 1.0e4},
legend style={font=\small, nodes=right},
legend pos=south east,
title = $\bm{T}\bm{u}|_{z=H}$
]
\addlegendentry{reconst.}
\addplot [color=red, line width = 1.5pt]
table {2DuxReconstruction.dat};
\addlegendentry{obs. data}
\addplot [color=black, line width = 0.5pt, mark=o, mark size=1.25pt]
table {2DuxObserved.dat};
\end{axis}
\end{tikzpicture}\\
\end{tabular}
\end{center}
\caption{Shown for Example I are, on the left, the MAP point $\beta^{\star}$ (red) and the
true basal sliding parameter $\beta_{\text{true}}$ (black) used to
generate synthetic observations of the tangential velocity
component on the upper surface $\Gamma_{t}$. Shown on the right are noisy synthetic
observations (black dots) used for computing the MAP point and the
associated tangential surface velocity reconstruction (red).}
\label{fig:ISMIP:MAP}
\end{figure}
\subsection{MAP point and HODLR Gaussianized posterior}
The nonlinear optimization problem for finding the MAP estimate is
solved using an inexact Gauss-Newton minimization method with backtracking line search~\cite{nocedalwright2006}, where
the linear systems are solved iteratively by the conjugate
gradient method. The resulting MAP point is
shown in Figure~\ref{fig:ISMIP:MAP}. The MAP parameter field $\beta^{\star}$ closely resembles the true parameter
$\beta_{\text{true}}$, which is a consequence of the large amount of
available data and relatively small noise level.
Next, we use the Gaussianized posterior distribution with a compressed
prior-preconditioned data-misfit
Hessian $\bm{H}_{\text{misfit}}^{\prime}$ to generate approximate samples from the posterior
distribution. Upon construction of the HODLR compression of the prior-preconditioned data-misfit
Hessian (details and comparisons can be found below in Section~\ref{subsec:ISMIP:props}), we draw samples from the HODLR Gaussianized
posterior as outlined in
Section~\ref{subsec:HODLRGaussianizedPosterior}.
In Figure~\ref{fig:ISMIP:samples}, we compare the mean, pointwise standard
deviation and samples from the prior and the posterior distributions.
As expected, we find that the data updates our
belief about the spatially distributed parameter field and reduces the
uncertainty. In particular, the $2\sigma$ bounds on the
one-dimensional point marginals $\sigma\left(x\right)$,
$\bm{\sigma}_{i}=\left[\bm{\Gamma}_{i,i}\right]^{1/2}$, of the
Gaussianized posterior and the prior distributions are shown, in order
to verify that the samples are largely contained within two standard
deviations of their respective means. The prior-preconditioned data-misfit
Hessian $\bm{H}_{\text{misfit}}^{\prime}$ is compressed using a relative tolerance of $10^{-6}$, that is~$\|\bm{H}_{\text{misfit}}^{\prime}-\bm{\tilde{H}}_{\text{misfit}}^{\prime}\|_{2}/\|\bm{H}_{\text{misfit}}^{\prime}\|_{2}\leq 10^{-6}$,
with high probability.
\begin{figure}[tb]
\begin{center}
\includegraphics[scale=.35]{2DpriorSamples.png}
\hfil
\includegraphics[scale=.35]{2DHODLRposteriorSamples.png}\hfil
\end{center}
\caption{Results for Example I: Two random samples (red), mean $\overline{\beta}$ (blue)
and boundaries of the region $R=\lbrace \left(x,y\right) \text{ such
that } 0\leq x\leq W \text{ and
}\overline{\beta}(x)-2\sigma(x)\leq y\leq
\overline{\beta}(x)+2\sigma(x)\rbrace$ (dashed black) are shown for
the prior (left) and an HODLR Gaussianized posterior using the scheme
described in Section~\ref{subsec:HODLRGaussianizedPosterior}
(right).}\label{fig:ISMIP:samples}
\end{figure}
\subsection{Dependence of Hessian block spectra on problem setting}\label{subsec:ISMIP:props}
Next, we study how problem features impact the
numerical suitability of using global low-rank and HODLR
compressions to approximate the Gauss-Newton data-misfit Hessian.
In this and
subsequent sections we measure the cost to generate the matrix compression
in terms of Hessian vector products, which we also refer to as Hessian applies, as each such product
requires two linearized PDE solves and thus dominates the
computational cost. We use the result of Appendix~\ref{subsec:erroranalysis}
to conclude that a level $L$ HODLR approximation has absolute error at most $\varepsilon$ when each off-diagonal block is approximated with absolute error no more than $\varepsilon/L$.
What is particular to this section is that \textit{adaptive} single-pass and HODLR algorithms are used to generate the global low-rank and HODLR approximations, based on absolute tolerance criteria. The absolute tolerance input of the algorithms is scaled by the largest global low-rank singular value in order to report relative approximation errors. We note that the reported approximation error neglects additional error sources, such as those incurred in the peeling process~\cite{linlulexing2011, martinsson2016} and the additional approximation assumptions in the single-pass algorithm; neither is expected to be significant.
\paragraph{Influence of aspect ratio}
Here, we vary the aspect ratio of the domain $\phi=H/W$, where $H$ and $W$ are the domain height and width respectively,
in order to study how it
influences the block spectra of the Gauss-Newton data-misfit Hessian and
ultimately the computational cost. Figure~\ref{fig:aspectratiostudy}
shows that the global spectrum is more sensitive to changes in the relative
length scale $\phi$ than the spectra of the off-diagonal blocks. Low-rank
approximations of the off-diagonal blocks become computationally
cheaper as $\phi$ decreases as a result of the sensitivity cones
becoming increasingly localized as the ice sheet thickness decreases. Global low-rank approximations become more expensive
as $\phi$ decreases, a result of the data being more informative. We
note that realistic problems, such as the Humboldt glacier and the
Greenland ice sheet studied later in Section \ref{sec:HumboldtGreenland},
have small aspect ratios and are thus expected to have data-misfit Hessians that
are less amenable to global low-rank approximation.
\begin{figure}[tb]
\begin{center}
\begin{tikzpicture}[baseline, trim axis right]
\begin{axis}[
grid=major,
width=8cm,
height=6cm,
xlabel = $\|\bm{H}_{\text{misfit}}^{\text{GN}}-\bm{\tilde{H}}_{\text{misfit}}^{\text{GN}}\|_{2}/\|\bm{H}_{\text{misfit}}^{\text{GN}}\|_{2}\text{, approximation error}$,
ylabel = {Computational cost (Hessian applies)},
xmin=1.e-8, xmax = 1.e-2,
xmode=log,
ymin = 0.0, ymax = 125.0,
xtick={1.e-8, 1.e-6, 1.e-4, 1.e-2, 1.0},
ytick={0.0, 25.0, 50.0, 75.0, 100.0, 125.0},
legend style={font=\small, nodes=left,
nodes={scale=0.7, transform shape}},
legend pos=outer north east
]
\addlegendentry{HODLR, $\phi=1/200$}
\addplot [color=blue, line width = 1.25pt]
table {2DAR200HODLRcompression.dat};
\addlegendentry{HODLR, $\phi=1/100$}
\addplot [color=blue, dashdotted, line width = 1.25pt]
table {2DAR100HODLRcompression.dat};
\addlegendentry{HODLR, $\phi=1/50$\phantom{e}}
\addplot [color=blue, dashed,line width = 1.25pt]
table {2DAR50HODLRcompression.dat};
\addlegendentry{HODLR, $\phi=1/25$\phantom{e}}
\addplot [color=blue, loosely dotted,line width = 1.25pt]
table {2DAR25HODLRcompression.dat};
\addlegendentry{LR, $\phi=1/200$}
\addplot [color=black, line width = 1.25pt]
table {2DAR200LRcompression.dat};
\addlegendentry{LR, $\phi=1/100$}
\addplot [color=black, dashdotted, line width = 1.25pt]
table {2DAR100LRcompression.dat};
\addlegendentry{LR, $\phi=1/50$\phantom{e}}
\addplot [color=black, dashed, line width = 1.25pt]
table {2DAR50LRcompression.dat};
\addlegendentry{LR, $\phi=1/25$\phantom{e}}
\addplot [color=black, loosely dotted, line width = 1.25pt]
table {2DAR25LRcompression.dat};
\end{axis}
\end{tikzpicture}
\caption{Comparison of HODLR and global low-rank (LR) compression costs of the
Gauss-Newton data-misfit Hessian $\bm{H}_{\text{misfit}}^{\text{GN}}$,
for Example I with ice sheet aspect ratio $\phi$. This figure shows that for
low aspect ratios, HODLR becomes more efficient than global low-rank
for medium levels of target accuracy.}
\label{fig:aspectratiostudy}
\end{center}
\end{figure}
\paragraph{Influence of the parameter dimension}
We now vary the level of mesh refinement in order to study the influence of the discretized parameter dimension $N=\text{dim}(\bm{\beta})$, and thus of the relative informativeness of the data, on the computational cost to generate
HODLR and global low-rank approximations of the Gauss-Newton data-misfit Hessian. The hierarchical depth $L$ is
incremented for every doubling of the discretized parameter dimension,
so that the hierarchical depth scales with the logarithm of the size of the Hessian matrix,
a condition described in Section~\ref{subsec:HODLRdef}.
Figure~\ref{fig:paramdimstudy} provides computational evidence of the
claim made in Section~\ref{subsec:HODLRdef}, that the number of
applies needed to hierarchically compress an operator with HODLR
structure is $\mathcal{O}\left(\log\,N\right)$. In contrast, the number
of applies needed to generate the global low-rank approximation is rather insensitive
to the level of mesh refinement.
\begin{figure}[tb]
\begin{center}
\begin{tikzpicture}[baseline, trim axis right]
\begin{axis}[
grid=major,
width=8cm,
height=6cm,
xlabel = $\|\bm{H}_{\text{misfit}}^{\text{GN}}-\bm{\tilde{H}}_{\text{misfit}}^{\text{GN}}\|_{2}/\|\bm{H}_{\text{misfit}}^{\text{GN}}\|_{2}\text{, approximation error}$,
ylabel = Computational cost (Hessian applies),
xmin=1.e-8, xmax = 1.e-2,
xmode=log,
ymin = 0.0, ymax = 150.0,
xtick={1.e-8, 1.e-6, 1.e-4, 1.e-2, 1.0},
ytick={0.0, 25.0, 50.0, 75.0, 100.0, 125.0, 150.0,
175.0},
legend style={font=\small, nodes=left, nodes={scale=0.7, transform shape}},
legend pos=outer north east,
]
\addlegendentry{HODLR, $\text{dim}\left(\bm{\beta}\right)=128$}
\addplot [color=blue, dashdotted, line width = 1.25pt]
table {2Dn128HODLRcompression.dat};
\addlegendentry{HODLR, $\text{dim}\left(\bm{\beta}\right)=256$}
\addplot [color=blue, dashed, line width = 1.25pt]
table {2Dn256HODLRcompression.dat};
\addlegendentry{HODLR, $\text{dim}\left(\bm{\beta}\right)=512$}
\addplot [color=blue, densely dotted, line width = 1.25pt]
table {2Dn512HODLRcompression.dat};
\addlegendentry{LR, $\text{dim}\left(\bm{\beta}\right)=128$}
\addplot [color=black, dashdotted, line width = 1.25pt]
table {2Dn128LRcompression.dat};
\addlegendentry{LR, $\text{dim}\left(\bm{\beta}\right)=256$}
\addplot [color=black, dashed, line width = 1.25pt]
table {2Dn256LRcompression.dat};
\addlegendentry{LR, $\text{dim}\left(\bm{\beta}\right)=512$}
\addplot [color=black, densely dotted, line width = 1.25pt]
table {2Dn512LRcompression.dat};
\end{axis}
\end{tikzpicture}
\end{center}
\caption{Dependence of HODLR and global low-rank (LR) compression costs of the
Gauss-Newton data-misfit Hessian on $\text{dim}\left(\bm{\beta}\right)$, the
dimension of the discretized logarithmic basal sliding field
for Example I. The cost of global low-rank compression is almost constant, while the cost of HODLR compression increases as the mesh is refined.}
\label{fig:paramdimstudy}
\end{figure}
\paragraph{Influence of the data dimension}
Figure~\ref{fig:nobsstudy} shows that the global rank grows with the
number of observation points and thus global low-rank compression tends to be
less efficient for problems with strongly informative observation data. The rate of
spectral decay of the (Gauss-Newton) data-misfit Hessian is related to the degree of ill-posedness
of the unregularized inverse problem.
As the number of observations increases, the associated model predictions become increasingly sensitive to small-scale variations in the basal sliding
field. Thus, more data generally makes the data set more informative about the
parameter, and the (Gauss-Newton) data-misfit Hessian has a weaker rate of spectral decay.
\begin{figure}[tb]
\begin{center}
\begin{tikzpicture}[baseline, trim axis right]
\begin{axis}[
grid=major,
width=8cm,
height=6cm,
xlabel = $\|\bm{H}_{\text{misfit}}^{\text{GN}}-\bm{\tilde{H}}_{\text{misfit}}^{\text{GN}}\|_{2}/\|\bm{H}_{\text{misfit}}^{\text{GN}}\|_{2}\text{, approximation error}$,
ylabel = Computational cost (Hessian applies),
xmin=1.e-8, xmax = 1.e-2,
ymin = 0.0, ymax = 200.0,
xmode=log,
ytick={0.0, 25.0, 50.0, 75.0, 100.0, 125.0, 150.,
175.0, 200.0},
legend style={font=\small, nodes=left, nodes={scale=0.7, transform shape}},
legend pos=outer north east,
]
\addlegendentry{HODLR, $\text{dim}(\bm{d})=1.0\times 10^{2}$}
\addplot [color=blue, dashed, line width = 1.25pt]
table {2Dnobs100HODLRcompression.dat};
\addlegendentry{HODLR, $\text{dim}(\bm{d})=1.5\times 10^{2}$}
\addplot [color=blue, dashdotted, line width = 1.25pt]
table {2Dnobs150HODLRcompression.dat};
\addlegendentry{HODLR, $\text{dim}(\bm{d})=2.0\times 10^{2}$}
\addplot [color=blue, densely dotted, line width = 1.25pt]
table {2Dnobs200HODLRcompression.dat};
\addlegendentry{LR, $\text{dim}(\bm{d})=1.0\times 10^{2}$}
\addplot [color=black, dashed, line width = 1.25pt]
table {2Dnobs100LRcompression.dat};
\addlegendentry{LR, $\text{dim}(\bm{d})=1.5\times 10^{2}$}
\addplot [color=black, dashdotted, line width = 1.25pt]
table {2Dnobs150LRcompression.dat};
\addlegendentry{LR, $\text{dim}(\bm{d})=2.0\times 10^{2}$}
\addplot [color=black, densely dotted, line width = 1.25pt]
table {2Dnobs200LRcompression.dat};
\end{axis}
\end{tikzpicture}
\caption{Dependence of HODLR and global low-rank (LR) compression costs of the
Gauss-Newton data-misfit Hessian on $\text{dim}(\bm{d})$, the data dimension, for Example I. The computational cost
for global low-rank approximation increases with
the number of observations, while the cost for HODLR compression is
rather insensitive.}
\label{fig:nobsstudy}
\end{center}
\end{figure}
\section{Example II: Humboldt glacier and Greenland ice sheet}
\label{sec:HumboldtGreenland}
Here, we study the scalability of the proposed methods using
large-scale ice sheet problems which are typically used in climate
simulations. Namely, we focus on the Humboldt glacier in North-West
Greenland, and the entire Greenland ice sheet. For these simulations,
we use the ice sheet model MALI~\cite{hoffman2018}, which relies on
Albany~\cite{tezaurperegoetal2015}, a C++ multi-physics library for
the implementation of the so-called first-order approximation of the
Stokes equations. This first-order approximation is based on scaling arguments
motivated by the shallow nature of ice sheets and uses the
incompressibility condition to reduce the unknowns to the horizontal
velocities. We use PyAlbany~\cite{liegeois2022}, a convenient Python
interface to the Albany package, which in turn builds upon
Trilinos~\cite{trilinos-website}. Albany is designed to support parallel and
scalable finite-element discretized PDE solvers and various analysis
capabilities. Details about the parameter, state, and data dimensions as
well as the number of cores and hierarchical levels used in the
computations are provided in Table~\ref{table:exampleII}.
\begin{table}[htp]
\centering
\begin{tabular}{|c|c|c|}
\hline
& Humboldt & Greenland \\
\hline
dim$(\bm{\beta})$ & $11\,608$ & $320\,116$ \\
\hline
dim$(\bm{u})$ & $255\,376$ & $7\,042\,552$ \\
\hline
dim$(\bm{d})$ & $23\,216$ & $640\,232$ \\
\hline
\# of cores & $120$ & $2\,048$ \\
\hline
$L$ & $8$ & $10$ \\
\hline
\end{tabular}
\caption{Problem specifications for the Humboldt glacier and Greenland
ice-sheet problems (Example II): dimension of the discretized parameter field
dim$(\bm{\beta})$, dimension of the discretized velocity field
dim$(\bm{u})$, dimension of the observations
dim$(\bm{d})$, number of processors employed for the computations, and
depth $L$ of the HODLR hierarchical partitioning.}
\label{table:exampleII}
\end{table}
The following study is partially motivated by findings made in
Section~\ref{sec:2DResults}, namely that the aspect ratio between the
vertical and horizontal directions (see Section~\ref{subsec:ISMIP:props}) influences the ability to use global low-rank
compression and favors HODLR compression. We generate HODLR and global low-rank approximations
and then, based on the computed spectra, Equation~\ref{eq:HODLRcost}, and $\zeta^{\text{LR}}=r+d$, we estimate
the computational cost. Additionally, we study
how the ordering of the degrees of freedom impacts the spectral decay
of the off-diagonal blocks of the data-misfit Hessian.
We present
results for both the Humboldt glacier, which extends about $4\times
10^{2}$ [km] laterally, and the Greenland ice sheet, which
extends about $1.8\times 10^{3}$ [km]. The ice is at most~$3.4$~[km] thick, resulting in approximate
aspect ratios of $8.5\times 10^{-3}$ for Humboldt and $1.9\times 10^{-3}$ for
Greenland. We use a nonuniform triangulation of the Greenland ice
sheet, with mesh size ranging from 1 to 10 [km], and we then extrude
it in the vertical direction, obtaining a 3D mesh having 10 layers of
prismatic elements. The velocity observations at the top surface of the Greenland ice sheet are obtained from satellite observations~\cite{joughin2015}. The MAP basal sliding field and the temperature fields are obtained as part of the initialization process, using a numerical optimization approach to match the ice velocity observations and constrained by the first-order flow model coupled
with a temperature model~\cite{perego2022}.
Additional details about the mesh geometries and data, in particular regarding the Humboldt glacier, can be found in~\cite{hillebrand2022}.
In Figure~\ref{fig:Humboldtfieldplots}, we show the observed surface
velocity $\bm{d}$ in [m/yr], the MAP estimates of the logarithmic
basal sliding field $\beta^{\star}$ ($\exp(\beta^{\star})$ is in
[kPa yr/m]) and surface velocity in [m/yr] generated by the model.
\begin{figure}[tb]
\begin{center}
\includegraphics[scale=.1125]{Humboldtuobs.png} \hfil
\includegraphics[scale=.1125]{Humboldtstate.png} \hfil
\includegraphics[scale=.1125]{Humboldtbeta.png} \hfil \\
\includegraphics[scale=.1625]{GISuobs.png} \hfil
\includegraphics[scale=.1625]{GISstate.png} \hfil
\includegraphics[scale=.1625]{GISbeta.png} \hfil
\end{center}
\caption{Data and MAP estimates for Example II. Shown are the surface velocity observation
data (left), and the reconstructed
surface velocity field (middle) that is based on the MAP
estimate of the logarithmic basal sliding field
(right). Top row is for the Humboldt glacier and bottom row for the
Greenland ice sheet.}
\label{fig:Humboldtfieldplots}
\end{figure}
\begin{figure}[tb]
\begin{center}
\begin{tabular}{rl}
\begin{tikzpicture}[baseline, trim axis right]
\begin{axis}[
grid=major, width=0.425\textwidth, height=0.425\textwidth,
ymode=log,
xlabel = {$j$, singular value number},
ylabel = {$\sigma_{j}$, singular value},
xmin=1, xmax = 4.e3,
ymin=1.e-5, ymax = 1.e1,
xtick={1.e3, 2.e3, 3.e3, 4.e3},
ytick={1.e-5, 1.e-3, 1.e-1, 1.e1},
]
\addplot [color=black, line width = 1.25pt]
table{HumboldtSigGlb.dat};
\addplot [color=black, dashdotted, line width = 1.25pt]
table{GISSigGlb.dat};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}[baseline, trim axis right]
\begin{axis}[
grid=major, width=0.425\textwidth, height=0.425\textwidth,
ymode=log,
xlabel = {$j$, singular value number},
scaled ticks=false,
xmin=1,xmax=200,
ymin=1.e-10,ymax=1.e0,
ytick={1.e-10, 1.e-8, 1.e-6, 1.e-4, 1.e-2, 1.e0},
legend style={font=\small, nodes=left, nodes={scale=0.7, transform shape}},
legend pos=outer north east,
]
\addlegendentry{$\ell=1$ Humboldt}
\addplot [color=teal, line width = 1.25pt]
table{HumboldtSigL0J0.dat};
\addlegendentry{$\ell=1$ GIS\phantom{boldt }}
\addplot [color=teal, dashdotted, line width = 1.25pt]
table{GISSigL0J0.dat};
\addlegendentry{$\ell=2$ Humboldt}
\addplot [color=olive, line width = 1.25pt]
table{HumboldtSigL1J0.dat};
\addlegendentry{$\ell=2$ GIS\phantom{boldt }}
\addplot [color=olive, dashed, line width = 1.25pt]
table{GISSigL1J0.dat};
\addlegendentry{$\ell=3$ Humboldt}
\addplot [color=gray, line width = 1.25pt]
table{HumboldtSigL2J0.dat};
\addlegendentry{$\ell=3$ GIS\phantom{boldt }}
\addplot [color=gray, dashdotted, line width = 1.25pt]
table{GISSigL2J0.dat};
\end{axis}
\end{tikzpicture}
\end{tabular}
\end{center}
\caption{Singular values of the data-misfit Hessian (left
figure) and various off-diagonal blocks of the data-misfit
Hessian (right figure) for Example II. The color scheme in
the rightmost figure is consistent with Figure~\ref{fig:hmatrixpartitioningstructure}. On the left, the
singular values of the Humboldt and Greenland data-misfit
Hessians are shown using a solid and a dash-dotted line,
respectively. On the right, we show the singular values of
the uppermost
blocks, that is, $\bm{A}^{\left(\ell\right)}_{1,2}$ as defined
in Appendix~\ref{subsec:erroranalysis}.
\label{fig:HumboldtGreenlandSpectra}
\end{figure}
\begin{figure}[tb]
\begin{center}
\begin{tabular}{rl}
\begin{tikzpicture}[baseline, trim axis right]
\begin{axis}[
grid=major, width=0.425\textwidth, height=0.425\textwidth,
xmode=log, ymode=log,
xlabel = {Approximation error},
ylabel = {Computational cost (Hessian applies)},
xmin=2.e-6, xmax = 1.e-1,
ymin=5.e1, ymax = 1.e4,
legend style={font=\small, nodes=left, nodes={scale=0.7, transform shape}},
legend pos=outer north east,
]
\addplot [color=black, line width = 1.25pt]
table{HumboldtLRcompression.dat};
\addplot [color=blue, line width = 1.25pt]
table{HumboldtHODLRcompression.dat};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}[baseline, trim axis right]
\begin{axis}[
grid=major, width=0.425\textwidth, height=0.425\textwidth,
xmode=log, ymode=log,
xlabel = {Approximation error},
xmin=3.1e-4, xmax = 1.e-1,
ymin=5.e1, ymax = 1.e4,
legend style={font=\small, nodes=left, nodes={scale=0.7, transform shape}},
legend pos=outer north east,
]
\addlegendentry{LR}
\addplot [color=black, line width = 1.25pt]
table{GISLRcompression.dat};
\addlegendentry{HODLR}
\addplot [color=blue, line width = 1.25pt]
table{GISHODLRcompression.dat};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{Estimated computational costs (measured by the number of Hessian
applies) to compress the Humboldt glacier (left) and
Greenland ice-sheet (right) data-misfit Hessians into the
global low-rank (LR) and hierarchical off-diagonal low-rank
(HODLR) formats as a function of the approximation
error $\|\bm{H}_{\text{misfit}}-\bm{\tilde{H}}_{\text{misfit}}\|_{2}/\|\bm{H}_{\text{misfit}}\|_{2}$.}
\label{fig:GreenlandHumboldtError}
\end{center}
\end{figure}
\subsection{HODLR compressibility}
We next generate global low-rank approximations of the Greenland and Humboldt
data-misfit Hessians as well as low-rank approximations of various
off-diagonal blocks. Plots of the estimated singular values are
provided in Figure~\ref{fig:HumboldtGreenlandSpectra}. We observe that
the spectrum of the Greenland data-misfit Hessian decays substantially more slowly
than that of the Humboldt glacier. Besides the different
sizes of these two discretized problems, this is also due to the
different aspect ratios. Having
estimated singular values of the data-misfit Hessians and the
appropriate off-diagonal blocks, one is able to estimate computational
costs to compress them into the global low-rank and HODLR matrix
formats. The computational cost as a function of Hessian approximation
target accuracy is given in Figure~\ref{fig:GreenlandHumboldtError},
wherein it is demonstrated that the HODLR compression format can offer
a favorable means to approximate data-misfit Hessians for large-scale
inverse problems governed by complex ice-sheet models.
\subsection{Impact of parameter degree of freedom ordering}
\label{subsec:dofordering}
We seek to ensure that the off-diagonal blocks of the data-misfit Hessian,
determined by the hierarchical partitioning
described in Section~\ref{subsec:HODLRdef},
are low-rank.
For this reason, the nodes $\lbrace \bm{x}_{i}\rbrace_{i}$
associated with the degrees of freedom (dofs) are ordered
according to a kd-tree, i.e., a recursive hyperplane splitting.
The ordering provided by the kd-tree is such that
the $(i,j)$-entry of the distance matrix
$\bm{D}_{i,j}=\|\bm{x}_{i}-\bm{x}_{j}\|_{2}$
is typically small whenever $|i-j|$ is small,
that is, the dof ordering preserves some notion of locality
(see Section~\ref{subsec:motivation}).
In particular, a sparse permutation matrix
$\bm{B}$ is determined
whose action reorders the dofs from the default
ordering provided by the finite element discretization to that specified
by the kd-tree.
The data-misfit Hessian with respect to the kd-tree ordering,
$\bm{H}^{\text{kd}}_{\text{misfit}}:=\bm{B}\bm{H}_{\text{misfit}}\bm{B}^{\top}$,
is then amenable to HODLR compression. Subsequently,
$\bm{B}^{\top}\bm{\tilde{H}}^{\text{kd}}_{\text{misfit}}\bm{B}$ is an approximation of the data-misfit Hessian with respect to the default ordering.
The dof ordering has no impact on a matrix's global numerical rank but
does indeed impact the numerical rank of its numerous submatrices that
are defined by a fixed partitioning scheme, such as the off-diagonal
blocks of an HODLR matrix (see Section~\ref{subsec:HODLRdef}). Here,
we study the HODLR compressibility of the Humboldt glacier data-misfit
Hessian by comparing the rate of decay of an off-diagonal block's
singular values using the default ordering provided by Albany and the
ordering obtained by a kd-tree recursive hyperplane splitting. As
observed in Figure~\ref{fig:humboldt-order}, the rate at which the
singular values of the level-1 off-diagonal block decay strongly
depends on the dof ordering. This is because the ordering given by the
kd-tree better preserves locality and, as a consequence, by the
argument provided in Section~\ref{subsec:motivation}, the singular
values decay much faster when using the kd-tree ordering. The kd-tree
ordering therefore provides a substantially cheaper computational
means to generate an HODLR approximation of the data-misfit
Hessian. Figure~\ref{fig:humboldt-order} also shows distance matrices
for the default and kd-tree orderings. These show the improved locality
of the kd-tree ordering. Note that data-misfit Hessian matrices are
expected to exhibit a structure similar to these distance matrices,
which explains why their off-diagonal blocks can be compressed
more effectively in the kd-tree ordering than in the default ordering of the dofs.
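A minimal Python sketch of constructing such a kd-tree ordering by recursive hyperplane splitting (a simplified, hypothetical implementation that splits along the coordinate direction of largest extent at the median) is the following.
\begin{verbatim}
# Minimal sketch (simplified, hypothetical implementation): recursively
# split the dof coordinates along the widest coordinate direction at the
# median, producing a locality-preserving permutation of the dof indices.
import numpy as np

def kdtree_ordering(points, leaf_size=32):
    def recurse(idx):
        if len(idx) <= leaf_size:
            return list(idx)
        pts = points[idx]
        axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))
        order = idx[np.argsort(pts[:, axis], kind="stable")]
        half = len(order) // 2
        return recurse(order[:half]) + recurse(order[half:])
    return np.array(recurse(np.arange(len(points))))

# Example: reorder a distance matrix D into kd-tree order, which equals
# B @ D @ B.T for the corresponding permutation matrix B.
pts = np.random.default_rng(0).random((200, 2))
perm = kdtree_ordering(pts)
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
D_kd = D[np.ix_(perm, perm)]
\end{verbatim}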
\begin{figure}[tb]
\begin{tikzpicture}
\node at (0,7) {};
\node at (0,-1) {};
\node at (0,0) {
\begin{axis}[compat=1.3,
grid=major,
width=10.5cm,
height=7cm,
ymode=log,
xlabel = $j\text{, singular value number}$,
ylabel = $\sigma_{j}\text{, singular value}$,
xmin=1,xmax=500,
ymin=1.e-11,ymax=1.e0,
ytick={1.e-11, 1.e-9, 1.e-7, 1.e-5, 1.e-3, 1.e-1},
legend style={nodes=left, nodes={scale=0.7, transform shape}},
legend pos=north east,
]
\addlegendentry{default basis}
\addplot [color=violet, line width = 1.pt]
table{HumboldtkdtreeSigL0J0.dat};
\addlegendentry{kd-tree basis}
\addplot [color=violet, dash dot, line width = 1.5pt]
table{HumboldtSigL0J0.dat};
\end{axis}
};
\node at (10.1,2.0)
{\includegraphics[scale=.2]{HumboldtDistanceMatrix.png}};
\node at (2.3,1.4) {\includegraphics[scale=.2]{HumboldtDistanceMatrixkd.png}};
\draw [-, opacity=0.6, very thick, white!50!black]
(3.6,1.4)--(4.5,2.5);
\draw [-, opacity=0.6, very thick, white!50!black] (8.8,2)--(7.5,3.9);
\end{tikzpicture}
\caption{Singular values of the hierarchical level $1$
off-diagonal block, $\bm{A}_{1,2}^{(1)}$, of the Humboldt
glacier data-misfit Hessian, when expressed in the kd-tree basis
and in the default basis. Also shown are heat maps of the distance
matrices $\bm{D}_{i,j}=\|\bm{x}_{i}-\bm{x}_{j}\|_{2}$, wherein
the nodes $\lbrace \bm{x}_{i}\rbrace_{i}$ associated with the
finite element degrees of freedom have been ordered according to
the default ordering and the kd-tree ordering.
\label{fig:humboldt-order}}
\end{figure}
\section{Conclusion}
In this work, we motivated why data-misfit Hessians which arise from a class
of inverse problems governed by PDEs have HODLR matrix
structure. HODLR matrices can efficiently be inverted and factorized,
operations needed for solving inverse problems governed by PDEs by
Newton's method, for constructing Gaussian approximations and for
Markov chain Monte Carlo sampling methods. We studied inverse ice sheet
problems, for which, in certain regimes, HODLR matrices provide a more computationally
efficient approximation format than the global low-rank matrix format; these
regimes are characterized by highly informative data and small aspect ratio ice
sheets. While global low-rank matrices are favorable for large
discretized parameter dimension and small data dimension, we find that
HODLR matrices can offer computational savings for large-scale inverse problems
such as a Greenland ice sheet inverse problem with satellite
observational data and a discretized parameter dimension that exceeds
$10^{5}$.
For future work, we believe that the computational cost can be reduced further by utilizing hierarchical matrix partitionings
that satisfy a strong admissibility condition~\cite{hackbuschbohm2002},
as they are better suited to exploit data-misfit
Hessian structure. However, generating a hierarchical matrix
approximation with such a partitioning, e.g., by the
peeling method~\cite{linlulexing2011, martinsson2016},
requires substantially more Hessian vector products.
Ultimately, to further reduce the
computational cost of Hessian approximations in inverse problems
governed by PDEs, exploiting further problem
structure will be essential.
\section{Appendix}
\subsection{Randomized Compression Algorithms}
\label{subsec:randomizedcompressionalgorithms}
Here, for completeness, we outline the randomized matrix-free double-pass global low-rank and HODLR compression algorithms. The essential ideas of the randomized double-pass low-rank algorithm~\cite{halkomartinsson2011} are
\begin{enumerate}
\item applying a matrix $\bm{A}$ to a vector $\bm{\omega}$ with random entries
yields a vector $\bm{y}=\bm{A}\bm{\omega}$ that is likely well aligned with
the dominant left singular vectors of $\bm{A}$;
\item a matrix $\bm{Q}$, whose columns are nearly aligned with the dominant left singular vectors of $\bm{A}$, can be used to construct
an accurate low-rank approximation $\bm{\tilde{A}}=\bm{Q}\bm{Q}^{\top}\bm{A}$ of $\bm{A}$.
\end{enumerate}
The double-pass randomized SVD algorithm is presented in Algorithm~\ref{alg:doublepass} and does not significantly differ from that
in~\cite{halkomartinsson2011}; specifically, it is lines $7$, $8$ and $9$ that are distinct. This minor modification frees us from the need to compute a (parallel) singular
value decomposition (SVD) of a (distributed) $N\times k$ matrix, such
as $\bm{Z}$. Here, we only need to compute an SVD of the smaller
$k\times k$ matrix $\bm{R_{Z}}$. In the distributed memory parallelism
setting of Section~\ref{sec:HumboldtGreenland}, this algorithmic
modification allows us to invoke only serial SVD routines, as
$\bm{R_{Z}}$, which is typically small, is available on each processor.
\begin{algorithm}[H]
\caption{Double-pass randomized SVD. \\
\textbf{Input:} $\bm{A}\in\mathbb{R}^{N\times N}$, $r\in\mathbb{N}$ desired rank and oversampling parameter $d\in\mathbb{N}$.\\
\textbf{Output:} low-rank approximation $\bm{\tilde{A}}$ of $\bm{A}$}
\begin{algorithmic}[1]
\STATE{$k=r+d$}
\STATE{$\bm{\Omega}=$ \verb~randn~$(N,k)$ \hfill $\lbrace$Initiate random matrix$\rbrace$}
\STATE{$\bm{Y}=\bm{A}\bm{\Omega}$ \hfill $\lbrace$Sample column space$\rbrace$}
\STATE{$\bm{Q_{Y}}=$ \verb~orthog~($\bm{Y}$) \hfill $\lbrace$Orthogonalize column samples$\rbrace$}
\STATE{$\bm{Z}=\bm{A}^{\top}\bm{Q_{Y}}$ \hfill $\lbrace$Sample row space$\rbrace$}
\STATE{$\bm{Q_{Z}}=$ \verb~orthog~($\bm{Z}$) \hfill $\lbrace$Orthogonalize row samples$\rbrace$}
\STATE{$\bm{R_{Z}}=\bm{Q_{Z}}^{\top}\bm{Z}$ \hfill $\lbrace$Compress row samples$\rbrace$}
\STATE{$\bm{R_{Z}}=\bm{\hat{V}}\bm{\Sigma}\bm{\hat{U}}^{\top}$ \hfill $\lbrace$SVD of $k\times k$ compressed row sample matrix$\rbrace$}
\STATE{$\bm{V}=\bm{Q_{Z}}\bm{\hat{V}}$ \hfill $\lbrace$Project row space information$\rbrace$}
\STATE{$\bm{U}=\bm{Q_{Y}}\bm{\hat{U}}$ \hfill $\lbrace$Project column space information$\rbrace$}
\STATE{$\bm{\tilde{A}}=\bm{U}\bm{\Sigma}\bm{V}^{\top}$ \hfill $\lbrace$Form low-rank approximation$\rbrace$\,}
\end{algorithmic}
\label{alg:doublepass}
\end{algorithm}
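A minimal dense NumPy sketch of Algorithm~\ref{alg:doublepass} is given below for illustration; the actual implementation operates matrix-free and in a distributed-memory setting, and the test matrix here is hypothetical.
\begin{verbatim}
# Minimal dense sketch of the double-pass randomized SVD (illustrative;
# the actual implementation is matrix-free and distributed).
import numpy as np

def double_pass_svd(A, r, d=10, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    N = A.shape[0]
    k = r + d
    Omega = rng.standard_normal((N, k))       # random test matrix
    Y = A @ Omega                             # sample column space
    QY, _ = np.linalg.qr(Y)                   # orthogonalize column samples
    Z = A.T @ QY                              # sample row space
    QZ, _ = np.linalg.qr(Z)                   # orthogonalize row samples
    RZ = QZ.T @ Z                             # small k x k matrix
    Vhat, S, Uhat_T = np.linalg.svd(RZ)       # RZ = Vhat diag(S) Uhat^T
    V = QZ @ Vhat                             # project row space info
    U = QY @ Uhat_T.T                         # project column space info
    return U[:, :r], S[:r], V[:, :r]          # A ~ U diag(S) V^T

# usage on a small symmetric, exactly low-rank test matrix
B = np.random.default_rng(1).standard_normal((50, 8))
A = B @ B.T
U, S, V = double_pass_svd(A, r=10)
err = np.linalg.norm(A - U @ np.diag(S) @ V.T, 2)
\end{verbatim}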
The randomized hierarchical off-diagonal low-rank algorithm proceeds by compressing the off-diagonal blocks with the double-pass algorithm. The larger off-diagonal blocks are compressed prior to the compression of smaller off-diagonal blocks, via a peeling procedure~\cite{linlulexing2011}. Here, both
$\bm{A}$ and $\bm{\tilde{A}}$ are assumed to be symmetric, as we seek compression of symmetric operators and computation of symmetric approximants.
\begin{algorithm}[H]
\caption{Symmetric matrix-free randomized HODLR. \\
\textbf{Input:} symmetric $\bm{A}\in\mathbb{R}^{N\times N}$, hierarchical depth $L\in\mathbb{N}$, $r_{1},\dots,r_{L}$ desired ranks of the off-diagonal blocks at each hierarchical depth and oversampling parameter $d$.\\
\textbf{Output:} symmetric HODLR approximation $\bm{\tilde{A}}$ of $\bm{A}$}
\label{alg:HODLRcompression}
\begin{algorithmic}[1]
\FOR{$\ell=1,2,\dots,L$}
\STATE{$k_{\ell}=r_{\ell}+d$}
\STATE{$\bm{\Omega}=$ \verb~zeros~$(N,k_{\ell})$}
\FOR{$j=1,\dots,2^{\ell-1}$}
\STATE{$\bm{\Omega}(\mathcal{I}_{2j}^{(\ell)},:)=$ \verb~randn~$(|\mathcal{I}_{2j}^{(\ell)}|, k_{\ell})$ \hfill $\lbrace$Initiate structured random matrix$\rbrace$}
\ENDFOR
\STATE{$\bm{Y}=\left(\bm{A}-\sum_{j=1}^{\ell-1}\bm{A}^{(j)}\right)\bm{\Omega}$ \hfill $\lbrace$Sample off-diagonal block column spaces$\rbrace$}
\FOR{$j=1,\dots,2^{\ell-1}$}
\STATE{$\bm{Y}^{(j)}=$ zeros$(N,k_{\ell})$}
\STATE{$\bm{Y}^{(j)}(\mathcal{I}_{2j-1}^{(\ell)},:)=\bm{Y}(\mathcal{I}_{2j-1}^{(\ell)},:)$}
\STATE{$\bm{Q_{Y}}^{(j)}=$ \verb~orthog~$(\bm{Y}^{(j)})$ \hfill $\lbrace$Orthogonalize column samples of the level $\ell$ off-diagonal blocks$\rbrace$}
\ENDFOR
\STATE{$\bm{Q_{Y}}=\sum_{j=1}^{2^{\ell-1}}\bm{Q_{Y}}^{(j)}$ \hfill $\lbrace$Row space sampling matrix$\rbrace$}
\STATE{$\bm{Z}=\left(\bm{A}-\sum_{j=1}^{\ell-1}\bm{A}^{(j)}\right)\bm{Q_{Y}}$ \hfill $\lbrace$Sample off-diagonal block row spaces$\rbrace$}
\FOR{$j=1,\dots,2^{\ell-1}$}
\STATE{$\bm{Z}^{(j)}=\bm{Z}(\mathcal{I}_{2j}^{(\ell)},:)$}
\STATE{$\bm{Q_{Z}}^{(j)}=$ \verb~orthog~$(\bm{Z}^{(j)})$ \hfill $\lbrace$Orthogonalize row samples of the level $\ell$ off-diagonal blocks$\rbrace$}
\STATE{$\bm{R_{Z}}^{(j)}=\left(\bm{Q_{Z}}^{(j)}\right)^{\top}\bm{Z}^{(j)}$ \hfill $\lbrace$Compress level $\ell$ off-diagonal block row samples$\rbrace$}
\STATE{$\bm{R_{Z}}^{(j)}=\bm{\hat{V}}_{2j-1}^{(\ell)}
\bm{\Sigma}_{2j-1}^{(\ell)}
\bm{\hat{U}}_{2j-1}^{(\ell)}$ \hfill $\lbrace$SVD of $k_{\ell}\times k_{\ell}$ compressed row sample matrix$\rbrace$ }
\STATE{$\bm{V}_{2j-1}^{(\ell)}=\bm{Q_{Z}}^{(j)}\bm{\hat{V}}_{2j-1}^{(\ell)}$ \hfill $\lbrace$Project row space information$\rbrace$}
\STATE{$\bm{U}_{2j-1}^{(\ell)}=\bm{Q_{Y}}^{(j)}\bm{\hat{U}}_{2j-1}^{(\ell)}$ \hfill $\lbrace$Project column space information$\rbrace$}
\STATE{$\bm{V}_{2j}^{(\ell)}=\bm{U}_{2j-1}^{(\ell)}$}
\STATE{$\bm{U}_{2j}^{(\ell)}=\bm{V}_{2j-1}^{(\ell)}$}
\STATE{$\bm{\Sigma}_{2j}^{(\ell)}=\bm{\Sigma}_{2j-1}^{(\ell)}$}
\ENDFOR
\STATE{$\bm{A}^{(\ell)}=\sum_{j=1}^{2^{\ell}}\bm{U}_{j}^{(\ell)}\bm{\Sigma}_{j}^{(\ell)}
\left(\bm{V}_{j}^{(\ell)}\right)^{\top}$}
\ENDFOR
\STATE{obtain block diagonal $\bm{D}$ of $\bm{A}$ by sampling $\bm{A}-\sum_{j=1}^{L}\bm{A}^{(j)}$}
\STATE{$\bm{\tilde{A}}=\bm{D}+\sum_{\ell=1}^{L}\bm{A}^{(\ell)}$}
\end{algorithmic}
\end{algorithm}
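The peeling step, in which the action of the levels already compressed is subtracted before sampling the next level (the products $\left(\bm{A}-\sum_{j}\bm{A}^{(j)}\right)\bm{\Omega}$ and $\left(\bm{A}-\sum_{j}\bm{A}^{(j)}\right)\bm{Q_Y}$ in Algorithm~\ref{alg:HODLRcompression}), can be sketched as follows; the storage of each compressed level as a list of off-diagonal block factors is our own illustration.
\begin{verbatim}
import numpy as np

def peeled_matvec(matvec, levels, X):
    # Apply (A - sum_j A^(j)) to a block X. Each entry of `levels` is a list
    # of off-diagonal block factors (rows, cols, U, s, V), meaning
    # A[rows, cols] ~= U @ diag(s) @ V.T and, by symmetry,
    # A[cols, rows] ~= V @ diag(s) @ U.T.
    Y = matvec(X)
    for level in levels:
        for rows, cols, U, s, V in level:
            Y[rows] -= U @ (s[:, None] * (V.T @ X[cols]))
            Y[cols] -= V @ (s[:, None] * (U.T @ X[rows]))
    return Y
\end{verbatim}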
\subsection{Global HODLR approximation error from the accumulation of block low-rank off-diagonal approximation errors}
\label{subsec:erroranalysis}
Let $\bm{A}$ be a $N\times N$ matrix and consider the following partitioning
\begin{eqnarray*}
\bm{A}^{\left(1\right)}
&=&
\left(\matrix{
\bm{0} & \bm{A}_{1,2}^{\left(1\right)} \cr
\bm{A}_{2,1}^{\left(1\right)} & \bm{0} \cr}\right), \\
\bm{A}^{\left(2\right)}
&=&
\left(\matrix{
\bm{0} & \bm{A}_{1,2}^{\left(2\right)} &
\bm{0} & \bm{0} \cr
\bm{A}_{2,1}^{\left(2\right)} & \bm{0} &
\bm{0} & \bm{0} \cr
\bm{0} & \bm{0} &
\bm{0} & \bm{A}_{3,4}^{\left(2\right)} \cr
\bm{0} & \bm{0} &
\bm{A}_{4,3}^{\left(2\right)} & \bm{0} \cr
}\right), \\
\bm{D}&=&
\left(\matrix{
\bm{A}_{1,1}^{\left(2\right)} & \bm{0} &
\bm{0} & \bm{0} \cr
\bm{0} & \bm{A}_{2,2}^{\left(2\right)} &
\bm{0} & \bm{0} \cr
\bm{0} & \bm{0} &
\bm{A}_{3,3}^{\left(2\right)} & \bm{0} \cr
\bm{0} & \bm{0} &
\bm{0} & \bm{A}_{4,4}^{\left(2\right)} \cr
}\right),
\end{eqnarray*}
where $\bm{A}_{i,j}^{\left(\ell\right)}$ is the $(i,j)$ block of a $2^{\ell}\times 2^{\ell}$
block partitioning of $\bm{A}$, where $1\leq \ell\leq L$. $\bm{A}^{(\ell)}$ contains the sibling off-diagonal blocks $\bm{A}_{2k-1,2k}^{(\ell)}$ and $\bm{A}_{2k,2k-1}^{(\ell)}$ for $k=1,\dots,2^{\ell-1}$, and $\bm{D}$ contains the
diagonal blocks $\bm{A}^{(L)}_{i,i}$. Above, we show the decomposition $\bm{A}=\sum_{\ell=1}^{L}\bm{A}^{(\ell)}+\bm{D}$ for hierarchical depth $L=2$, but in the following analysis
$L$ is arbitrary. Let $\bm{x}\in\mathbb{R}^{N}$, then
\begin{eqnarray*}
\bm{A}\bm{x}&=&
\sum_{j=1}^{L}\bm{A}^{\left(j\right)}\bm{x}+\bm{D}\bm{x}, \\
\bm{A}^{\left(1\right)}\bm{x}
&=&
\left(\matrix{
\bm{A}_{1,2}^{\left(1\right)}\bm{x}_{2}^{\left(1\right)} \cr
\bm{A}_{2,1}^{\left(1\right)}\bm{x}_{1}^{\left(1\right)} \cr
}\right),
\,\,\,\bm{x}=
\left(\matrix{
\bm{x}_{1}^{\left(1\right)} \cr
\bm{x}_{2}^{\left(1\right)} \cr
}\right), \\
\bm{A}^{\left(j\right)}\bm{x}&=&
\left(\matrix{
\bm{A}_{1,2}^{\left(j\right)}\bm{x}_{2}^{\left(j\right)} \cr
\bm{A}_{2,1}^{\left(j\right)}\bm{x}_{1}^{\left(j\right)} \cr
\vdots \cr
\bm{A}_{2^{j}-1,2^{j}}^{\left(j\right)}\bm{x}_{2^{j}}^{\left(j\right)} \cr
\bm{A}_{2^{j},2^{j}-1}^{\left(j\right)}\bm{x}_{2^{j}-1}^{\left(j\right)} \cr
}\right),
\,\,\,\bm{x}
=
\left(\matrix{
\bm{x}_{1}^{\left(j\right)} \cr
\bm{x}_{2}^{\left(j\right)} \cr
\vdots \cr
\bm{x}_{2^{j}-1}^{\left(j\right)} \cr
\bm{x}_{2^{j}}^{\left(j\right)} \cr
}\right),
\end{eqnarray*}
from which we obtain the following expression
\begin{equation*}
\|\bm{A}^{\left(j\right)}\bm{x}\|_{2}^{2}=
\sum_{k=1}^{2^{j-1}}\left(
\|\bm{A}_{2\,k-1,2\,k}^{\left(j\right)}
\bm{x}_{2\,k}^{\left(j\right)}\|_{2}^{2}
+
\|\bm{A}_{2\,k,2\,k-1}^{\left(j\right)}
\bm{x}_{2\,k-1}^{\left(j\right)}\|_{2}^{2}
\right).
\end{equation*}
Now assume that $\bm{\tilde{A}}$ is an HODLR approximation of $\bm{A}$, whose block diagonal $\bm{D}$ is equal to the block diagonal of $\bm{A}$, so that
\begin{eqnarray*}
\left(\bm{A}-\bm{\tilde{A}}\right)
&=&\sum_{j=1}^{L}
\Delta\bm{A}^{\left(j\right)},\\
\Delta\bm{A}^{\left(j\right)}&:=&\left(\bm{A}^{\left(j\right)}-\bm{\tilde{A}}^{\left(j\right)}\right).
\end{eqnarray*}
Here we assume each off-diagonal block has been approximated to some absolute tolerance $\varepsilon>0$,
so that $\|\Delta \bm{A}_{2\,k-1,2\,k}^{\left(j\right)}\|_{2},
\|\Delta \bm{A}_{2\,k,2\,k-1}^{\left(j\right)}\|_{2}\leq\varepsilon$ for each $j= 1,2,\dots, L$ and $k = 1,2,\dots,2^{j-1}$. For $\bm{x}\in\mathbb{R}^{N}$ we have
\begin{equation*}
\|\left(\bm{A}-\bm{\tilde{A}}\right)\bm{x}\|_{2}
\leq
\sum_{j=1}^{L}\|\Delta\bm{A}^{\left(j\right)}\,\bm{x}\|_{2},
\end{equation*}
\begin{eqnarray*}
\|\Delta\bm{A}^{\left(j\right)}\,\bm{x}\|_{2}&=&
\sqrt{
\sum_{k=1}^{2^{j-1}}\left(
\|\Delta \bm{A}_{2\,k-1,2\,k}^{\left(j\right)}
\,\bm{x}_{2\,k}^{\left(j\right)}\|_{2}^{2}
+
\|
\Delta
\bm{A}_{2\,k,2\,k-1}^{\left(j\right)}\,
\bm{x}_{2\,k-1}^{\left(j\right)}\|_{2}^{2}
\right)
} \\
&\leq&
\sqrt{
\sum_{k=1}^{2^{j-1}}
\left(\varepsilon^{2}
\|\bm{x}_{2\,k}^{\left(j\right)}\|_{2}^{2}
+\varepsilon^{2}
\|\bm{x}_{2\,k-1}^{\left(j\right)}\|_{2}^{2}\right)
}, \\
\|\Delta \bm{A}^{\left(j\right)}\,\bm{x}\|_{2}
&\leq&
\varepsilon
\sqrt{
\sum_{k=1}^{2^{j-1}}
\left(
\|\bm{x}_{2\,k}^{\left(j\right)}\|_{2}^{2}
+\|\bm{x}_{2\,k-1}^{\left(j\right)}\|_{2}^{2}\right)
}
=\varepsilon\|\bm{x}\|_{2}, \\
\|\left(\bm{A}
-\bm{\tilde{A}}\right)
\bm{x}\|_{2}
&\leq& \varepsilon \,L\, \|\bm{x}\|_{2}, \\
\|\bm{A}-\bm{\tilde{A}}\|_{2}&:=&
\sup_{\bm{x}\neq\bm{0}}
\left(
\frac{\|\left(\bm{A}-\bm{\tilde{A}}\right)\bm{x}\|_{2}}
{\|\bm{x}\|_{2}}\right)\leq \varepsilon\,L\,.
\end{eqnarray*}
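For example, if each off-diagonal block of an HODLR approximation of hierarchical depth $L=7$ is compressed to an absolute tolerance $\varepsilon=10^{-3}$, the bound above guarantees $\|\bm{A}-\bm{\tilde{A}}\|_{2}\leq 7\times 10^{-3}$: the accumulated error grows only linearly with the hierarchical depth.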
\subsection{Error analysis for posterior-covariance}
\label{subsec:posteriorerroranalysis}
Consider a symmetric matrix
$\bm{A}\in\mathbb{R}^{N\times N}$, whose eigenvalues are bounded below by a number greater than $-1$ and a symmetric
approximant $\bm{\tilde{A}}$, with discrepancy $\Delta\bm{A}=\bm{A}-\bm{\tilde{A}}$.
We signify a generic eigenvalue of $\bm{S}$ by $\lambda\left(\bm{S}\right)$ so that $s_{1}\leq \lambda\left(\bm{S}\right)\leq s_{2}$ indicates that all eigenvalues of $\bm{S}$ are bounded below by $s_{1}$ and above by $s_{2}$.
Now we provide a bound for the error of $\left(\bm{I}+\bm{A}\right)^{-1}-\left(\bm{I}+\bm{\tilde{A}}\right)^{-1}$,
given that $\|\Delta\bm{A}\|_{2}= \varepsilon$, so that one
may assess the accuracy of an HODLR Gaussianized posterior covariance.
When, as in Section~\ref{subsec:HODLRGaussianizedPosterior}, $\bm{A}$ is the prior-preconditioned Hessian misfit, $\|\left(\bm{I}+\bm{A}\right)^{-1}-\left(\bm{I}+\bm{\tilde{A}}\right)^{-1}\|_{2}$
quantifies the discrepancy between an HODLR approximate Gaussianized posterior covariance and the true Gaussianized posterior covariance.
\begin{eqnarray*}
\left(\bm{I}+\bm{A}\right)^{-1}-\left(\bm{I}+\bm{\tilde{A}}\right)^{-1}=\left(\bm{I}+\bm{A}\right)^{-1}-\left(\bm{I}+\bm{A}-\Delta\bm{A}\right)^{-1} =\\
\left(\bm{I}+\bm{A}\right)^{-1}-\left(\left(\bm{I}+\bm{A}\right)\left(\bm{I}-
\left(\bm{I}+\bm{A}\right)^{-1}
\Delta\bm{A}\right)\right)^{-1}=\\
\left(\bm{I}+\bm{A}\right)^{-1}-\left(\bm{I}-
\left(\bm{I}+\bm{A}\right)^{-1}
\Delta\bm{A}\right)^{-1}\left(\bm{I}+\bm{A}\right)^{-1}=\\
\left(\bm{I} - \left(\bm{I}-
\left(\bm{I}+\bm{A}\right)^{-1}
\Delta\bm{A}\right)^{-1}\right)\left(\bm{I}+\bm{A}\right)^{-1}.
\end{eqnarray*}
Given that $\|\Delta\bm{A}\|_{2}=\varepsilon$, we have
\begin{eqnarray*}
-\varepsilon \leq \lambda\left(\Delta\bm{A}\right)\leq \varepsilon, \\
-\varepsilon^{*}
\leq \lambda\left(\left(\bm{I}+\bm{A}\right)^{-1}\Delta\bm{A}\right)\leq \varepsilon^{*}, \\
\varepsilon^{*}:=\varepsilon(1+\lambda_{\text{min}}(\bm{A}))^{-1},\\
1+\varepsilon^{*} \geq \lambda\left(\bm{I}-\left(\bm{I}+\bm{A}\right)^{-1}\Delta\bm{A}\right)\geq 1- \varepsilon^{*}.
\end{eqnarray*}
We next assume $\varepsilon^{*}<1$, so that the eigenvalues of $\bm{I}-\left(\bm{I}+\bm{A}\right)^{-1}\Delta\bm{A}$ are necessarily positive and
\begin{equation*}
\left(1+\varepsilon^{*}\right)^{-1}
\leq \lambda\left(\left(\bm{I}-\left(\bm{I}+\bm{A}\right)^{-1}\Delta\bm{A}\right)^{-1}\right)\leq \left(1-\varepsilon^{*}\right)^{-1}.
\end{equation*}
With this it follows that
\begin{eqnarray*}
\|\left(\bm{I}+\bm{A}\right)^{-1}-
\left(\bm{I}+\bm{\tilde{A}}\right)^{-1}\|_{2}/\|\left(\bm{I}+\bm{A}\right)^{-1}\|_{2}\leq \left(1-\left(1+\varepsilon^{*}\right)^{-1}\right) \\
\|\left(\bm{I}+\bm{A}\right)^{-1}-
\left(\bm{I}+\bm{\tilde{A}}\right)^{-1}\|_{2}/\|\left(\bm{I}+\bm{A}\right)^{-1}\|_{2}\leq \frac{\varepsilon^{*}}{1+\varepsilon^{*}},
\end{eqnarray*}
where, as before, $\varepsilon^{*}=\|\Delta\bm{A}\|_{2}/\left(1+\lambda_{\text{min}}\left(\bm{A}\right)\right)$.
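For example, if $\bm{A}$ is positive semidefinite, so that $\lambda_{\text{min}}\left(\bm{A}\right)\geq 0$ and hence $\varepsilon^{*}\leq \|\Delta\bm{A}\|_{2}$, then a compression accuracy of $\|\Delta\bm{A}\|_{2}=10^{-2}$ guarantees a relative error of the approximate Gaussianized posterior covariance of at most $10^{-2}/\left(1+10^{-2}\right)\approx 10^{-2}$.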
\section*{Acknowledgments}
The authors thank Trevor Hillebrand from Los Alamos National Laboratory for help with setting up the Humboldt and Greenland ice-sheet grids and datasets. Support for this work was provided by the National Science Foundation under Grant No.\ DMS-1840265 and CAREER-1654311 and through the SciDAC project ProSPect, funded by the U.S.\ Department of Energy (DOE) Office of Science, Advanced Scientific Computing Research and Biological and Environmental Research programs. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No.\ DE-AC02-05CH11231, under NERSC award ERCAP0020130.
\section*{Disclaimer}
This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA-0003525.
\section*{References}
\bibliographystyle{iopart-num}
\section{\Large Introduction.}
\setcounter{equation}{0}
\def\Alph{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\noindent
This paper explains and develops the approach recently described by one of the authors in refs.~\cite{EdeR14,EdeR17a,EdeR17b} to evaluate the hadronic vacuum polarization (HVP) contribution to the anomalous magnetic moment of the muon $a_{\mu}^{\rm HVP}$.
Our motivation is threefold:
\begin{enumerate}
\item The persistent discrepancy at the $\sim 4\sigma$ level between the experimental determination of the anomalous magnetic moment of the muon~\cite{BNL}
\begin{equation}\lbl{eq:exp}
a_{\mu}(\rm E821-BNL)= 116~592~089 (54)_{\rm\tiny stat} (33)_{\rm\tiny syst}\times 10^{-11} [0.54 ppm]\,,
\end{equation}
and the standard model prediction~\cite{TH}
\begin{equation}\lbl{eq:sm}
a_{\mu}(\rm SM)= 116~591~805~(42) \times 10^{-11}\,.
\end{equation}
\item
The fact that the standard model contribution which at present has the largest error, is the one coming from the lowest order hadronic vacuum polarization (HVP)
contribution to $a_{\mu}(\text{SM})$, evaluated from a combination of experimental results on
$e^+ e^-$ data~\cite{Davier11,Hagiwara11,Davier16,KNT17}:
\begin{equation}\lbl{eq:HVPexps}
a_{\mu}^{\rm HVP}=(6.931\pm 0.034)\times 10^{-8}~{\cite{Davier16}} \quad \mbox{\rm and} \quad
a_{\mu}^{\rm HVP}=(6.933\pm 0.025)\times 10^{-8}~{\cite{KNT17}} \,.
\end{equation}
\item
The possibility of an alternative evaluation of $a_{\mu}^{\rm HVP}$, either based on QCD first principles with the help of lattice QCD (LQCD) simulations (see e.g. refs.~\cite{Della12}-\cite{lehner17}), or on new dedicated experiments as proposed in ref.~\cite{Venanetal}.
\end{enumerate}
\noindent
The standard representation of $a_{\mu}^{\rm HVP}$ used in the experimental determinations is the one in terms of a
weighted integral of the hadronic spectral function $\frac{1}{\pi}\mbox{\rm Im}\Pi(t) $:
\begin{equation}\lbl{eq:str}
a_{\mu}^{\rm HVP} = \frac{\alpha}{\pi}\int_{4 m_{\pi}^2}^{\infty}
\frac{dt}{t}\int_{0}^{1}dx\frac{x^2(1-x)}{x^2+\frac{t}{m_{\mu}^2}(1-x)}
\frac{1}{\pi}\mbox{\rm Im}\Pi(t)\,.
\end{equation}
Thanks to the optical theorem, the hadronic spectral function is obtained from the total $e^+ e^-$ cross section into hadrons via one photon annihilation ($m_e \rightarrow 0$)
\begin{equation}
\sigma(t)_{[e^+ e^- \rightarrow (\gamma)\rightarrow {\rm Hadrons}]}=\frac{4\pi^2 \alpha}{t}\frac{1}{\pi}\mbox{\rm Im}\Pi(t)\,.
\end{equation}
We observe that the integrand in Eq.~\rf{eq:str} can be rearranged as follows:
\begin{equation}\lbl{eq:eucr}
a_{\mu}^{\rm HVP} = \frac{\alpha}{\pi}\int_{0}^{1}dx\ (1-x) \int_{4 m_{\pi}^2}^{\infty}
\frac{dt}{t}\ \frac{\frac{x^2}{1-x} m_{\mu}^2}{t+\frac{x^2}{1-x} m_{\mu}^2}\
\frac{1}{\pi}\mbox{\rm Im}\Pi(t)\,,
\end{equation}
which explicitly displays the dispersion relation between the hadronic spectral function and the { renormalized} hadronic photon self-energy in the euclidean:
\begin{equation}\lbl{eq:disprel}
-\Pi(Q^2)=\int_{4 m_{\pi}^2}^{\infty}
\frac{dt}{t}\frac{Q^2}{t+Q^2}\frac{1}{\pi}\mbox{\rm Im}\Pi(t)\,,\quad{\rm with}\quad
Q^2 \equiv \frac{x^2}{1-x}m_{\mu}^2 \ge 0 \,,
\end{equation}
and therefore~\cite{LPdeR72,EdeR94}
\begin{equation}\lbl{eq:eu}
a_{\mu}^{\rm HVP} = -\frac{\alpha}{\pi}\int_{0}^{1}dx\ (1-x) \ \Pi\left(\frac{x^2}{1-x}m_{\mu}^2\right)\,.
\end{equation}
Trading the Feynman parameter $x$-integration for a $Q^2$-integration results in a slightly more complicated expression
\begin{equation}\lbl{eq:lqcd}
a_{\mu}^{\rm HVP} =
\frac{\alpha}{\pi}\int_0^\infty \frac{dQ^2}{Q^2} \sqrt{\frac{Q^2}{4 m_{\mu}^2+Q^2}}\left(\frac{\sqrt{4 m_{\mu}^2 +Q^2}-\sqrt{Q^2}}{{\sqrt{4 m_{\mu}^2 +Q^2}+\sqrt{Q^2}}} \right)^2 [-\Pi(Q^2)]\,,
\end{equation}
which is the one proposed for LQCD evaluations~\cite{Blum03}.
Because of the parametric $x$-dependence in Eq.~\rf{eq:eu}, or the $Q^2$-weight function in the integrand of Eq.~\rf{eq:lqcd}, the $a_{\mu}^{\rm HVP}$ integral is dominated by the low-$Q^2$ behaviour of the hadronic self-energy function $\Pi(Q^2)$. The natural question which then arises is:
{\it What is the best way to help LQCD (see e.g. refs.~\cite{Della12}-\cite{lehner17}), or dedicated experiments~\cite{Venanetal}, to evaluate this integral when only limited information about $\Pi(Q^2)$ at low $Q^2$ values is available?}
The answer that we propose follows the way initiated in ref.~\cite{EdeR17a}. It is based on Mellin-Barnes techniques which we shall describe below and which we shall illustrate with several examples. As we shall see, this is a very powerful method compared to other approaches discussed in the literature (see e.g. refs.~\cite{ABCGPT16, BDDJ16, DOMetal17} and references therein).
The paper has been organized as follows. The next section is an introduction to the QCD properties of the Mellin transform of the HVP spectral function. Section III is dedicated to a few ingredients, which are required to understand and justify the method that we propose. Subsection III.3 is particularly technical since it justifies mathematically the underlying approach and the restriction to the subclass of Marichev-like Mellin approximants given in Eq.~\rf{eq:marichevend}. For those who are just interested in the applications, it can be skipped on a first reading. Section IV illustrates the application of Mellin-Barnes approximants (MBa) to vacuum polarization in QED at the two loop level. Section V tests the advocated technique of MBa with the experimental values of the HVP moments provided to us by the authors of ref.~\cite{KNT17}. These moments, with their errors, are obtained from the same spectral function which results in the second number quoted in Eq.~\rf{eq:HVPexps}.
We show how the successive MBa approach the experimental determination of $a_{\mu}^{\rm HVP}$. The conclusions with an outlook on future work are given in Section VI. A few technical details have been included in an Appendix.
\section{\Large The Mellin Transform of the Hadronic Spectral Function.}
\setcounter{equation}{0}
\def\Alph{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\noindent
In QCD the hadronic spectral function is positive and goes asymptotically to a constant ($q_i$ denotes the charge, in electric charge units, of an active quark with flavour $i$):
\begin{equation}\lbl{eq:pqed}
\frac{1}{\pi}\mbox{\rm Im}\Pi(t) \underset{{t\rightarrow\infty}}{\thicksim}
\left(\frac{\alpha}{\pi} \right)\left(\sum_i q_i^2\right)\frac{1}{3}N_c \left[1+{\cal O}(\alpha_{\mbox{\rm {\scriptsize s}}})\right]\,,
\end{equation}
with perturbative QCD (pQCD) $\alpha_{\mbox{\rm {\scriptsize s}}}$-corrections known up to four loops.
The moment integrals
\begin{equation}\lbl{eq:moments}
\int_{t_0}^\infty\frac{dt}{t}\left(\frac{t_0}{t} \right)^{1+n}\frac{1}{\pi}\mbox{\rm Im}\Pi(t)\,,\quad n=0,1,2\cdots \,,
\end{equation}
where throughout the paper $t_0$ denotes the threshold value of the hadronic spectral function:
\begin{equation}
t_0=4 m_{\pi^{\pm}}^2
\,,
\end{equation}
can be experimentally determined; and the dispersion relation in Eq.~\rf{eq:disprel} relates them to successive derivatives of the hadronic self-energy function $\Pi(Q^2)$ at the origin:
\begin{equation}\lbl{eq:momeucl}
\int\limits_{t_0}^{\infty}\frac{dt}{t}\left(\frac{t_0}{t} \right)^{1+n}
\frac{1}{\pi}\mbox{\rm Im}\Pi(t)=
\frac{(-1)^{n+1} }{(n+1)!}( t_0)^{n+1} \left(\frac{\partial^{n+1}}{(\partial Q^2 )^{n+1}}\Pi(Q^2)\right)_{Q^2 =0}\,,\quad n=0,1,2,\cdots\,,
\end{equation}
which are accessible to LQCD evaluations.
In fact, as pointed out a long time ago~\cite{BdeR69}, the first moment for $n=0$ provides a rigorous upper bound to the muon anomaly:
\begin{equation}
a_{\mu}^{\rm HVP}
\le \frac{\alpha}{\pi}\frac{1}{3}\frac{m_{\mu}^2}{t_0}\int_{t_0}^\infty\frac{dt}{t}\ \frac{t_0}{t}\ \frac{1}{\pi}\mbox{\rm Im}\Pi(t)
=\left(\frac{\alpha}{\pi}\right)\frac{1}{3}\frac{m_{\mu}^2}{t_0}
\left(-{t_0}\frac{\partial}{\partial Q^2}\Pi(Q^2)\right)_{Q^2 =0}\,.
\end{equation}
Quite generally, the moments in Eq.~\rf{eq:moments} obey constraints which follow from the positivity of the spectral function
and may provide useful tests to LQCD determinations. We discuss these constraints in the Appendix.
The moment integrals in Eq.~\rf{eq:moments} can be generalized to a function, which is precisely the Mellin transform of the hadronic spectral function $\frac{1}{\pi}\mbox{\rm Im}\Pi(t)$ defined as follows~\cite{EdeR14}:
\begin{equation}\lbl{eq:melspec}
{\cal M}\left[\frac{1}{\pi}\mbox{\rm Im}\Pi(t)\right](s)\equiv {\cal M}(s) =\int_{t_0}^\infty\frac{dt}{t}\left(\frac{t}{t_0} \right)^{s-1}\frac{1}{\pi}\mbox{\rm Im}\Pi(t)\,,\quad -\infty\le \mbox{\rm Re}(s) <1 \,,
\end{equation}
with the domain of definition extended to the full complex $s$-plane by analytic continuation.
An important property of ${\cal M}$ is that ${\cal M}(-s)$ is a completely monotonic function of $s$, for the real variable $s$ in the interval $]-\infty,1[$. It follows simply from Eq.~\rf{eq:melspec}
which implies that all the successive derivatives of ${\cal M}(s)$ satisfy the positivity conditions
\begin{equation}\lbl{eq:monot}
{\cal M}^{(n)}(s)\ge 0\,\quad{\rm for~all}\quad n\ge 0.
\end{equation}
As a result, ${\cal M}(s)$ can have neither poles nor zeros in the negative $\mbox{\rm Re}(s)$ axis and has a perfectly smooth (increasing) shape in this region. This {\it smoothness property} of ${\cal M}(s)$, which is at the basis of the approximation method that we shall propose, is to be contrasted with the shape of the spectral function $\frac{1}{\pi}\mbox{\rm Im}\Pi(t)$ itself which, as we know from experiments, has a rather complicated structure.
In QCD, the Mellin transform ${\cal M}(s)$ is singular at $s=1$ with a residue which is fixed by the pQCD asymptotic behaviour of the spectral function in Eq.~\rf{eq:pqed}. The contribution from the $u$, $d$, $s$, $c$, $b$ and $t$ quarks gives
\begin{equation}\lbl{eq:QCDMs1}
{\cal M}(s)\underset{{s\rightarrow\ 1}}{\thicksim} \left(\frac{\alpha}{\pi}\right)\left(\frac{4}{9}+\frac{1}{9}+\frac{1}{9}+\frac{4}{9}+\frac{1}{9}+\frac{4}{9}\right)N_c\ \frac{1}{3}\ \frac{1}{1-s}+{\cal O}(\alpha_{\mbox{\rm {\scriptsize s}}})\,.
\end{equation}
The spectral function moments are, therefore, the particular values of the ${\cal M}(s)$ function at $s=0\,, -1\,, -2\,,\cdots, -N$ with integer $N$.
As discussed in refs.~\cite{EdeR14,EdeR17a} there exists a representation of $\Pi(Q^2)$, and hence of the anomaly $ a_{\mu}^{\rm HVP}$, in terms of the Mellin transform ${\cal M}(s)$. This follows from inserting the Mellin-Barnes identity~\footnote{For the benefit of the reader who may be unfamiliar with Mellin-Barnes integrals we give a proof of this identity in the Appendix.}
\begin{equation}\lbl{eq:MBaid}
\frac{1}{1+\frac{Q^2}{t}}=\frac{1}{2\pi i}\int\limits_{c_s-i\infty}^{c_s+i\infty}ds\ \left(\frac{Q^2}{t}\right)^{-s}\ \Gamma(s)\Gamma(1-s)
\end{equation}
in the dispersion relation in Eq.~\rf{eq:disprel}, which results in the representation
\begin{equation}\lbl{eq:MBaPI}
\Pi(Q^2) = -\frac{Q^2}{t_0}\ \frac{1}{2\pi i}\int\limits_{c_s-i\infty}^{c_s+i\infty}ds\ \left(\frac{Q^2}{t_0} \right)^{-s} \Gamma(s)\Gamma(1-s)\ {\cal M}(s)\,,\quad c_s \equiv \mbox{\rm Re}(s) \in ]0,1[ \,;
\end{equation}
and the corresponding integral representation for the Adler function
\begin{equation}\lbl{eq:adler}
{\cal A}(Q^2)\equiv -Q^2 \frac{\partial \Pi(Q^2)}{\partial Q^2}
=\frac{1}{2\pi i}\int\limits_{c_s-i\infty}^{c_s+i\infty}ds\ \left(\frac{Q^2}{t_0} \right)^{1-s} \Gamma(s)\Gamma(2-s)\ {\cal M}(s)\,,\quad c_s \equiv \mbox{\rm Re}(s) \in ]0,1[ \,.
\end{equation}
\noindent
Setting $Q^2 =\frac{x^2}{1-x}m_{\mu}^2$ in the representation of $\Pi(Q^2)$ in Eq.~\rf{eq:MBaPI} and inserting it in the r.h.s. of Eq.~\rf{eq:eu} we have
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
a_{\mu}^{\rm HVP} & = & -\frac{\alpha}{\pi}\int_{0}^{1}dx\ (1-x) \ \Pi\left(\frac{x^2}{1-x}m_{\mu}^2\right)\\
& = & \frac{\alpha}{\pi}\int_{0}^{1}dx\ (1-x)
\ \frac{1}{2\pi i}\int\limits_{c_s-i\infty}^{c_s+i\infty}ds\ \left(\frac{\frac{x^2}{1-x}m_{\mu}^2 }{t_0} \right)^{1-s} \Gamma(s)\Gamma(1-s)\ {\cal M}(s)\,.
\end{eqnarray}}
\noindent
The integral over the $x$-parameter can now be made analytically, leading to the expression~\cite{EdeR14}
\begin{equation}\lbl{eq:MBamu}
a_{\mu}^{\rm HVP} = \left(\frac{\alpha}{\pi}\right) \frac{m_{\mu}^2}{t_0}\frac{1}{2\pi i}\int\limits_{c_s -i\infty}^{c_s +i\infty}ds\left(\frac{m_{\mu}^2}{t_0} \right)^{-s} {\cal F}(s)\ {\cal M}(s)\,,\quad c_s \equiv \mbox{\rm Re}(s) \in ]0,1[\,,
\end{equation}
where ${\cal F}(s)$ is a product of three Gamma functions:
\begin{equation}
{\cal F}(s)= -\Gamma(3-2s)\ \Gamma(-3+s)\ \Gamma(1+s)\,,
\end{equation}
and the hadronic dynamics is thus entirely factorized in the Mellin transform ${\cal M}(s)$.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_01.pdf}
\bf\caption{\lbl{fig:Ftau}}
\vspace*{0.25cm}
{\it Shape of the function ${\cal F}\left(\frac{1}{2}-i\tau\right)$ in Eq.~\rf{eq:MBamu} versus $\tau$.\\ The red curve is the real part of the function, the blue dashed curve its imaginary part.}
\end{center}
\end{figure}
\noindent
The weight function ${\cal F}(s)$ in Eq.~\rf{eq:MBamu} is universal and has a shape which, for $s$ within the {\it fundamental strip}~\cite{FGD95}: $c_s \equiv \mbox{\rm Re}(s) \in ]0,1[$ and the choice $s=\frac{1}{2}-i\tau$, is shown in Fig.~\rf{fig:Ftau} as a function of $\tau$. Notice that the real part of this function (the red curve) is symmetric under $\tau\rightarrow -\tau$ while its imaginary part is antisymmetric. Both the real and imaginary parts fall very fast as $\tau$ increases.
With the change of variable
\begin{equation}
s\rightarrow \frac{1}{2}-i\tau\,,
\end{equation}
the integral in Eq.~\rf{eq:MBamu} becomes then a Fourier transform:
\begin{equation}\lbl{eq:MBamuF}
a_{\mu}^{\rm HVP} = \left(\frac{\alpha}{\pi}\right)\sqrt{\frac{m_{\mu}^2}{t_0}}\frac{1}{2\pi }\int\limits_{-\infty}^{+\infty}d\tau \ e^{-i\tau \log\frac{t_0}{m_{\mu}^2}}\ {\cal F}\left(\frac{1}{2}-i\tau\right)\ {\cal M}\left(\frac{1}{2}-i\tau \right)\,.
\end{equation}
Because of the shape of the ${\cal F}\left(\frac{1}{2}-i\tau\right)$ function and the growth restrictions on ${\cal M}\left(\frac{1}{2}-i\tau \right)$ for large $\tau$, which are fixed by the fact that $\Pi(Q^2)$ obeys a dispersion relation in QCD, this Fourier integral is fully dominated by the behaviour of the integrand in a very restricted $\tau$-interval, $-T\le \tau \le +T$ with $T$ of order one.
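This can be checked directly: a few lines of Python (a minimal sketch, assuming the \texttt{mpmath} library; the names below are ours) suffice to evaluate the weight function along the line $s=\frac{1}{2}-i\tau$ and observe its rapid fall-off.
\begin{verbatim}
import mpmath as mp

def Fweight(s):
    # F(s) = -Gamma(3-2s) Gamma(s-3) Gamma(1+s)
    return -mp.gamma(3 - 2*s) * mp.gamma(s - 3) * mp.gamma(1 + s)

for tau in range(5):
    print(tau, abs(Fweight(mp.mpf("0.5") - 1j*tau)))
\end{verbatim}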
\section{\Large Some Technical Ingredients.}
\setcounter{equation}{0}
\def\Alph{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\noindent
We shall next recall a few technical ingredients which in the literature go under the name of: Ramanujan Master Theorem, Marichev class of Mellin transforms, Generalized Hypergeometric Functions and Meijer's G-Functions. They are necessary to implement and justify the MBa framework that we propose.
\subsection{\large The so called Ramanujan's Master Theorem.}
\noindent
Consider a function $F(x)$ which admits a power series expansion
\begin{equation}
F(x)\underset{{x\rightarrow 0}}{\thicksim} \lambda(0)-\lambda(-1)x+\lambda(-2)x^2-\lambda(-3)x^3 +\cdots\,.
\end{equation}
Ramanujan's theorem refers then to the formal identity~\cite{Berndt85}
\begin{equation}
\int_0^\infty\ dx\ x^{s-1} \left\{ \lambda(0)-\lambda(-1)x+\lambda(-2)x^2-\lambda(-3)x^3 +\cdots\right\}=\Gamma(s)\Gamma(1-s)\lambda(s)\,,
\end{equation}
and implies that the Mellin transform of $F(x)$ is given by
\begin{equation}
\int_0^\infty dx x^{s-1} F(x) = \Gamma(s)\Gamma(1-s)\lambda(s)\,.
\end{equation}
The function $\lambda(s)$, extended over the full complex $s$-plane, can thus be simply obtained from the discrete $n$-functional dependence of the $\lambda(-n)$ coefficients of the Taylor expansion of $F(x)$ by the formal replacement $n\rightarrow -s$. The proof of this beautiful theorem was provided by Hardy~\cite{Hardy78} and it is based on Cauchy's residue theorem as well as on the Mellin-Barnes representation. The basic assumption in Hardy's proof is a growth restriction on $|\lambda(s)|$ which assures that the series $\lambda(0)-\lambda(-1)x+\lambda(-2)x^2-\lambda(-3)x^3 +\cdots$ has some radius of convergence. In our case $F(x)$ will be the hadronic photon self-energy function $\Pi(Q^2)$, with $x\equiv\frac{Q^2}{t_0}$, and Hardy's growth restriction is equivalent to the one required to write a dispersion relation for $\Pi(Q^2)$.
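A simple textbook illustration is provided by $F(x)=\frac{1}{1+x}=\sum_{n\geq 0}(-1)^n x^n$, for which $\lambda(-n)=1$ and hence $\lambda(s)=1$; the theorem then reproduces the classical result
\begin{equation}
\int_0^\infty dx\ x^{s-1}\ \frac{1}{1+x}=\Gamma(s)\Gamma(1-s)=\frac{\pi}{\sin(\pi s)}\,,\quad 0<\mbox{\rm Re}(s)<1\,.
\end{equation}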
At small $Q^2$ values, the hadronic photon self-energy function $\Pi(Q^2)$ in QCD has indeed a power series expansion:
\begin{equation}
-\frac{t_0}{Q^2}\Pi(Q^2)\underset{{Q^2\rightarrow 0}}{\thicksim}\
{\cal M}(0)-\frac{Q^2}{t_0}{\cal M}(-1)+\left(\frac{Q^2}{t_0}\right)^2 {\cal M}(-2)-\left(\frac{Q^2}{t_0}\right)^3 {\cal M}(-3)+\cdots \,,
\end{equation}
and the coefficients ${\cal M}(0)$, ${\cal M}(-n)$, $n=1,2,3,\dots$ are precisely the moments of the spectral function defined in Eq.~\rf{eq:momeucl}. Ramanujan's theorem implies then that
\begin{equation}
\int_0^\infty d\left(\frac{Q^2}{t_0}\right)\left(\frac{Q^2}{t_0}\right)^{s-1}\left\{{\cal M}(0)-\frac{Q^2}{t_0}{\cal M}(-1)+\left(\frac{Q^2}{t_0}\right)^2 {\cal M}(-2)+\cdots\right\}
= \Gamma(s)\Gamma(1-s)\ {\cal M}(s)\,,
\end{equation}
which allows, in principle, to reconstruct the Mellin transform ${\cal M}(s)$ in the full complex $s$-plane from just the knowledge of the discrete moments ${\cal M}(-n)$, $n=0,1,2,3,\cdots$. Given $N$ moments ${\cal M}(-n)$, $n=0,1,2,3,\cdots N-1$, the method of Mellin-Barnes approximants (MBa) that we propose constructs successive ${\cal M}_{N}(s)$ functions which exactly reproduce the values of the first $N$-moments and approximate better and better the full ${\cal M}(s)$. When inserted in the integrand of the r.h.s. of Eq.~\rf{eq:MBamuF} they result in a set of successive $a_{\mu}^{\rm HVP}(N)$ approximations to the full $a_{\mu}^{\rm HVP}$. A simple example of this procedure was discussed in ref.~\cite{EdeR17a} in the case of vacuum polarization in QED at the one loop level where, in that case, the corresponding Mellin transform is exactly reproduced from its knowledge at just three $s$ values: e.g. $s=1,0,$ and $-1$.
\subsection{\large Marichev's Class of Mellin Transforms.}
\noindent
The class in question is the one defined by {\it standard products} of gamma functions of the type
\begin{equation}\lbl{eq:marichev}
{\cal M}(s)=C\ \displaystyle\prod_{i,j,k,l}\frac{\Gamma(a_{i}-s)\Gamma(c_{j}+s)}{\Gamma(b_{k}-s)\Gamma(d_{l}+s)}\,,
\end{equation}
with constants $C$, $a_i$, $b_k$, $c_j$ and $d_l$ and where the Mellin variable $s$ only appears with a $\pm$ coefficient.
The interesting thing about this class of functions is that all the Generalized Hypergeometric Functions have Mellin transforms of this type~\cite{Marichev83}. As a result, many functions have a representation in terms of Mellin-Barnes integrals involving linear combinations of standard products of the Marichev type in Eq.~\rf{eq:marichev}.~\footnote{For a helpful tutorial see e.g. ref.~\cite{Fikioris06} and references therein.}
In our case, the monotonicity property in Eq.~\rf{eq:monot} of the QCD Mellin transform implies precise restrictions on the subclass of Marichev-like functions that one must consider when trying to implement successive approximations.
In that respect we have been particularly helped by some relatively recent mathematical literature~\cite{PK01,PTZ94,PASS96}. The authors of these references have studied the general conditions for the convergence of a very general class of Mellin-Barnes integrals, which include those of the Marichev class, and their results can be summarized as follows.
Consider the rather general type of Mellin-Barnes integral
\begin{equation}\lbl{eq:generalint}
I(z) =\frac{1}{2\pi i} \int\limits_{c-i\infty}^{c+i\infty} ds \,
z^{-s}\frac{\prod_{j=1}^m\Gamma(A_j s + B_j )}{\prod_{k=1}^n\Gamma(C_k s + D_k )}\,.
\end{equation}
In our case this will apply to the Mellin-Barnes integral in Eq.~\rf{eq:MBaPI} where
\begin{equation}
z\equiv\frac{Q^2}{t_0}\quad\mbox{\rm and}\quad I(z)\equiv -\frac{t_0}{Q^2}\Pi(Q^2)\,,
\end{equation}
as well as to the Mellin-Barnes integral in Eq.~\rf{eq:MBamu} where
\begin{equation}
z\equiv\frac{m_{\mu}^2}{t_0}\quad\mbox{\rm and}\quad I(z)\equiv a_{\mu}^{\rm HVP}(z)\,.
\end{equation}
Quite generally, the authors of refs.~\cite{PK01,PTZ94} have studied the properties of the mapping which integrals like those in Eq.~\rf{eq:generalint} establish between the Mellin $s$-plane and the $z$-plane. This is illustrated in Fig.~\rf{fig:mapping} where the crosses denote the positions of the poles in the integrand of Eq.~\rf{eq:generalint}: in blue the poles at the left of the fundamental strip (represented by the green strip in the figure) and in red at the r.h.s. of the fundamental strip. In the $z$-plane we show the disc $\vert z\vert \le R$ in blue, with $R$ the radius of convergence, and the cut starting at $\mbox{\rm Re} (z)\ge R$~\footnote{For the sake of simplicity in drawing the figure, we assume that the disc of convergence is centered at $z=0$ and that the cut starts at $\mbox{\rm Re} (z)\ge R$.}. The converse mapping theorem of ref.~\cite{FGD95} relates in a precise way the singularities in the complex $s$-plane
of the integrand in Eq.~\rf{eq:generalint} to the asymptotic expansions of $I(z)$ for $z$ large (the red mapping in Fig.~\rf{fig:mapping}) and for $z$ small (the blue mapping in Fig.~\rf{fig:mapping}).
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.70\textwidth]{figure_03.pdf}
\bf\caption{\lbl{fig:mapping}}
\vspace*{0.25cm}
{\it Mapping of the Mellin $s$-Plane to the $z$-plane.}
\end{center}
\end{figure}
\noindent
Following refs.~\cite{PK01,PTZ94,PASS96} we are instructed to consider the two quantities:
\begin{equation}
\Delta \doteq \sum_{j=1}^m A_j-\sum_{k=1}^n C_k\quad\mbox{\rm and}\quad \alpha \doteq \sum_{j=1}^m|A_j|-\sum_{k=1}^n|C_k|\,.
\end{equation}
\noindent
Then, the region where the integral $I(z)$ converges is $|\arg z|<\frac{\pi}{2}\alpha$~(see e.g. \cite{PK01}), and there are three cases to be considered~\cite{PTZ94,PASS96}:
\begin{itemize}
\item If $\Delta>0$, closing the integration contour to the left leads to a series representation of the integral $I(z)$ which converges for any value of $z$, but closing the contour to the right gives a divergent asymptotic expansion.
\item If $\Delta<0$, closing the contour to the right leads to a series representation of $I(z)$ which converges for any value of $z$, but closing the contour to the left gives a divergent asymptotic expansion.
\item If $\Delta=0$, closing the contour to the left and to the right gives two convergent series, the first series obtained by closing to the left converges within a disk $|z|<R$ whereas the other one converges outside this disk. Moreover, if $\alpha>0$, the two series are the analytic continuation of each other.
\end{itemize}
\noindent
These three cases are illustrated in Fig.~\rf{fig:threecases}.
\begin{figure}[!ht]
\begin{center}
{\includegraphics[width=0.90\textwidth]{figure_04.pdf}}
\bf\caption{\lbl{fig:threecases}}
\vspace*{0.25cm}
\end{center}
{\it Behaviour of the series expansions of $I(z)$ depending on the sign of $\Delta$ for $|z|<R$ (the blue region) and $|z|>R$. The label \textit{div.} denotes the regions where the asymptotic expansion is divergent or does not exist. The cut is represented by the green zigzag line.}
\end{figure}
We are now in the position of fixing the class of successive Mellin approximants ${\cal M}_{N}(s)$ that we { should} use to ensure that they converge in the same way as the full QCD Mellin transform ${\cal M}(s)$ does. Associated to each ${\cal M}_{N}(s)$ approximant there will be a corresponding $\Pi_{N}(Q^2)$ approximant to $\Pi(Q^2)$ { (via Eq.~\rf{eq:MBaPI})} and, therefore, a corresponding $a_{\mu}^{\rm HVP}(N)$ approximant to $a_{\mu}^{\rm HVP}$ { (via Eq.~\rf{eq:MBamuF})}.
The input will be that we know the values of the first few moments
\begin{equation}\lbl{eq:input}
{\cal M}(0)\,,\quad{\cal M}(-1)\,,\quad{\cal M}(-2)\,,\cdots\,, \quad{\cal M}(-N+1)\,,
\end{equation}
including their errors and their correlation matrix,
either from a LQCD determination or from a dedicated experiment.
Given this input, we shall then restrict the successive Marichev-like Mellin approximants in Eq.~\rf{eq:marichev} to those satisfying the following criteria:
\begin{enumerate}
\item
The fundamental strip of each Mellin approximant ${\cal M}_{N}(s)$ must be the same as the one of the full Mellin transform ${\cal M}(s)$, so that the insertion of ${\cal M}_{N}(s)$ in the r.h.s. of Eq.~\rf{eq:MBamu} does not change the convergence region $c_s \equiv \mbox{\rm Re}(s) \in ]0,1[$ of the exact Mellin transform.
In practice, the fact that the sequence of poles from $\Gamma(a_i -s)$ is at $s=a_i +n$ and the one from $\Gamma(c_j +s)$ at $s=-c_j-n$ with $n\in{\bf N}$ implies the restrictions:
\begin{equation}
\mbox{\rm Re}~a_i \ge 1\quad \mbox{\rm and}\quad \mbox{\rm Re}~c_j \ge 0\,.
\end{equation}
\item
The Mellin approximant ${\cal M}_{N}(s)$ should not generate poles nor zeros in the region $-\infty<\mbox{\rm Re}(s)<1$, where ${\cal M}(s)$ is known to be monotonously increasing. Since $\mbox{\rm Re}~c_j \ge 0$, no poles for $\mbox{\rm Re}(s)<1$
implies the absence of factors $\Gamma(c_j +s)$ or $j_{\rm max}=0$. No zeros for ${\cal M}_{N}(s)$ in the region $-\infty<\mbox{\rm Re}(s)<1$ implies
\begin{equation}
\mbox{\rm Re}~b_k \ge 1.
\end{equation}
\item
We also want the $\Pi_{N}(Q^2)$-function corresponding to the Mellin approximant ${\cal M}_{N}(s)$ (see Eq.~\rf{eq:MBaPIN} below) to converge for $z\equiv \frac{Q^2}{t_0}$ both for $|z|<1$ and $|z|>1$ which, according to the convergence conditions discussed above, requires that
\begin{equation}\lbl{eq:delta}
\Delta =(1-1-i_\mathrm{max})-(-k_\mathrm{max}+l_\mathrm{max})=k_\mathrm{max}-i_\mathrm{max}-l_\mathrm{max}=0\,.
\end{equation}
\item
Finally, we want the two series generated by the $\Pi_{N}(Q^2)$ approximant for $|z|<1$ and $|z|>1$ to be the analytic continuation of each other which implies
\begin{equation}
\alpha=(2+i_\mathrm{max}) -(k_\mathrm{max}+l_\mathrm{max})>0 \,.
\end{equation}
This, combined with Eq.~\rf{eq:delta}, implies $l_\mathrm{max}<1$ and hence the absence of $\Gamma(d_l +s)$ factors in the denominator of Eq.~\rf{eq:marichev}.
\end{enumerate}
\noindent
From the above considerations we conclude that, in the case of HVP in QCD, the only Mellin approximants of the Marichev class that one must consider are those restricted to the subclass:
\begin{equation}\lbl{eq:marichevend}
{\cal M}_{N}(s)=C_{N}\ \displaystyle\prod_{k=1}^{N}\frac{\Gamma(a_{k}-s)}{\Gamma(b_{k}-s)}\,,
\end{equation}
with $C_N >0$ and both
\begin{equation}\lbl{eq:akbk}
\mbox{\rm Re}~a_k \ge 1\quad\mbox{\rm and}\quad\mbox{\rm Re}~b_k \ge 1\,.
\end{equation}
Furthermore, the monotonicity property of the QCD Mellin transform requires that (see e.g. ref.~\cite{Alzer97})
\begin{equation}\lbl{eq:bkmak}
\lambda_{N}\doteq\sum_{k=1}^{N}\left(b_{k}-a_{k}\right)\ge 0\,,
\end{equation}
which implies the asymptotic behaviour
\begin{equation}
{\cal M}_{N}(s)\underset{{s\rightarrow -\infty}}{\thicksim}C_{N}(-s)^{-\lambda_{N}}\,,
\end{equation}
and assures the positivity of ${\cal M}_{N}(s)$ for $\mbox{\rm Re}(s)\in ]-\infty,1[$.
When considering a linear superposition of functions of the subclass in Eq.~\rf{eq:marichevend}:
\begin{equation}
{\cal M}_{N_{1}}+{\cal M}_{N_{2}}+\cdots\,,
\end{equation}
each term must satisfy the restrictions in Eqs.~\rf{eq:akbk} and \rf{eq:bkmak} with real constants $C_{N_{1}}\,,C_{N_{2}}\,,\cdots$ such that
\begin{equation}
C_{N_{1}}+C_{N_{2}}+\cdots\ge 0\,.
\end{equation}
Besides the matching to the input moments in Eq.~\rf{eq:input}, all the MBa that we shall use will be constrained to satisfy the leading pQCD short-distance behaviour~\footnote{It is possible to incorporate $\alpha_s$ corrections as well. They don't change, however, the residue of the pole at $s=1$.}
\begin{equation}\lbl{eq:pQCD1}
{\cal M}^{\rm QCD}(s)\underset{{s\rightarrow 1}}{\thicksim} \frac{\alpha}{\pi}\left(\sum_i q_i^2\right)\frac{1}{3}N_c~\frac{1}{1-s}\,.
\end{equation}
Given a MBa ${\cal M}_{N}(s)$, the corresponding $\Pi_{N}(Q^2)$ approximant to $\Pi(Q^2)$ is then
\begin{equation}\lbl{eq:MBaPIN}
\Pi_{N}(Q^2) = -\frac{Q^2}{t_0}\ \frac{1}{2\pi i}\int\limits_{c_s-i\infty}^{c_s+i\infty}ds\ \left(\frac{Q^2}{t_0} \right)^{-s} \Gamma(s)\Gamma(1-s)\ {\cal M}_{N}(s)\,,\quad c_s \equiv \mbox{\rm Re}(s) \in ]0,1[ \,,
\end{equation}
and the $a_{\mu}^{\rm HVP}(N)$ approximant to $a_{\mu}^{\rm HVP}$ is given by the integral in Eq.~\rf{eq:MBamuF} with the corresponding
${\cal M}_{N}\left(\frac{1}{2}-i\tau \right)$ inserted in the r.h.s. of the integrand.
Notice that the factor ${\cal F}(s)$ does not modify the convergence criteria discussed above for $a_{\mu}^{\rm HVP}(N)$ because ${\cal F}(s)$ has $\Delta=0$ and $\alpha=4$.
\subsection{\large The $\Pi_{N}(Q^2)$ are Generalized Hypergeometric Functions.\\
The $\mbox{\rm Im}\Pi_{N}(t)$ are Meijer's G-Functions~\protect\footnote{These special functions are built-in in several computer languages. Our definition is consistent with \textit{Mathematica} software that we have used to perform the numerical analyses.}.}
\noindent
The Generalized Hypergeometric Function~\cite{Erdely53}
\begin{equation}
{_P}{F}_{Q}[a_1 ,a_2 ,\dots a_P ; b_1 ,b_2 ,\dots b_Q ; z] ~\equiv~ _{P}{F}_{Q}\left(\left. \begin{array}{cccc} a_1 & a_2 & \dots & a_P \\ b_1 & b_2 & \dots & b_Q \end{array}\right\vert {z}\right)\,,
\end{equation}
is defined, for $\vert z\vert <1$, by the series
{\setlength\arraycolsep{2pt}
\begin{eqnarray}\lbl{eq:hgseries}
\lefteqn{ \hspace*{-2cm} 1+\frac{a_1 a_2 \dots a_P}{b_1 b_2 \dots b_Q}\frac{z}{1!}
+\frac{a_1 (a_1 +1) a_2 (a_2 +1) \dots a_P (a_P +1)}{b_1 (b_1 +1) b_2 (b_2 +1) \dots b_Q (b_Q +1)}\frac{z^2}{2!}+\cdots}\nonumber \\
& &
\equiv \sum_{n=0}^\infty \frac{(a_1 )_{n}(a_2 )_{n}\dots (a_P )_{n}}{(b_1 )_{n}(b_2 )_{n}\dots (b_Q )_{n}}\frac{z^n}{n!}\,,
\end{eqnarray}}
\noindent
where in the second line we use the Pochhammer symbol
\begin{equation}
(a)_n \equiv \frac{\Gamma(a+n)}{\Gamma(a)}=a(a+1)(a+2)\cdots (a+n-1)\,,
\end{equation}
with in particular,
\begin{equation}
(a)_0 =1\,,\quad\mbox{\rm and}\quad (1)_n =n!\,.
\end{equation}
This series has $P$ numerator parameters, $Q$ denominator parameters and one variable $z$. These parameters may be real or complex, but the $b$ parameters must not be zero or negative integers. The case where $P=2$ and $Q=1$ corresponds to the so called Gauss Hypergeometric Function. The sum of this type of series, when it exists, defines a Generalized Hypergeometric Function (GH-Function).
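A familiar example is the choice $P=2$, $Q=1$ with $a_1=a_2=1$ and $b_1=2$, for which the series in Eq.~\rf{eq:hgseries} sums to
\begin{equation}
{_2}{F}_{1}[1,1;2;z]=\sum_{n=0}^{\infty}\frac{z^n}{n+1}=-\frac{\log(1-z)}{z}\,,\quad \vert z\vert <1\,.
\end{equation}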
The reason why we are interested in GH-Functions is that,
inserting the general expression in Eq.~\rf{eq:marichevend} for the ${\cal M}_{N}(s)$ approximant in the integrand of the r.h.s. in Eq.~\rf{eq:MBaPIN}, and then doing the Mellin-Barnes integral over the $s$-variable, results in a specific GH-Function of the type:
\begin{equation}\lbl{eq:GHF}
\Pi_{N}(Q^2)=-\frac{Q^2}{t_0}\ C_N\ \displaystyle\prod_{k=1}^{N}\frac{\Gamma(a_{k})}{\Gamma(b_{k})}\ _{1+N}{F}_{N}\left(\left. \begin{array}{ccccc} 1 & a_1 \dots & a_N \\ ~ & b_1 \dots & b_N \end{array}\right\vert {-\frac{Q^2}{t_0}}\right)\,,
\end{equation}
which is given by the series in Eq.~\rf{eq:hgseries} for $\vert \frac{Q^2}{t_0}\vert <1$, with its analytic continuation defined by the underlying Mellin-Barnes integral, Eq.~\rf{eq:MBaPIN} in this case. The corresponding Adler function is also a GH-Function:
\begin{equation}\lbl{eq:AdlerGHF}
{\cal A}_{N} (Q^2)\equiv -Q^2 \frac{\partial \Pi_{N}(Q^2)}{\partial (Q^2)}=\frac{Q^2}{t_0}\ C_N\ \displaystyle\prod_{k=1}^{N}\frac{\Gamma(a_{k})}{\Gamma(b_{k})}\ _{1+N}{F}_{N}\left(\left. \begin{array}{ccccc} 2 & a_1 \dots & a_N \\ ~ & b_1 \dots & b_N \end{array}\right\vert {-\frac{Q^2}{t_0}}\right)\,.
\end{equation}
The reason why we are interested in Meijer's G-Functions is that the inverse Mellin transform of ${\cal M}_{N}(s)$ corresponding to Eq.~\rf{eq:melspec}, i.e. the Mellin Barnes integrals
\begin{equation}
\frac{t_0}{t}\frac{1}{\pi}\mbox{\rm Im}\Pi_{N}(t)=\frac{1}{2\pi i}\int\limits_{c-i\infty}^{c+i\infty}ds \left(\frac{t}{t_0} \right)^{-s} {\cal M}_{N}(s)\,,\quad c_s \equiv \mbox{\rm Re}(s) \in ]-\infty,1[ \,,
\end{equation}
for arbitrary $N$ and $t\ge t_0$
are a particular class of Meijer's G-Functions.
Indeed, in full generality, Meijer's G-Functions are defined by a complex $L$-path integral~(see e.g. {\it The Meijer G-Function $\Meijer{m,n}{p,q}{z}{ \,\boldsymbol{a} }{ \,\boldsymbol{b}}$}, in sect.~8.2 of ref.~\cite{PMB90}, pp.~617-626):
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\lefteqn{\Meijer{m,n}{p,q}{z}{1-a_1,\ldots, 1-a_n\,;a_{n+1},\ldots,a_p}{b_1,\ldots,b_m\,;\,1-b_{m+1},\ldots,1-b_q}=} \nonumber \\ & &
\frac{1}{2\pi i} \int_{L} ds\ z^{-s} \ \frac{\Gamma(b_1+s)\cdots\Gamma(b_m+s)\cdot\Gamma(a_1-s)\cdots\Gamma(a_n-s)}{\Gamma(a_{n+1}+s)\cdots\Gamma(a_p+s)\cdot\Gamma(b_{m+1}-s)\cdots\Gamma(b_q-s)}\,,
\end{eqnarray}}
\noindent
and have the property that
\begin{equation}
\Meijer{0,n}{p,q}{z}{ \,\boldsymbol{a} }{ \,\boldsymbol{b}} = 0 \; \; \text{for} \;\; |z|<1\;.
\end{equation}
For the class of Marichev-like ${\cal M}_{N}(s)$ functions in Eq.~\rf{eq:marichevend} this results in a set of {\it equivalent} spectral functions:
\begin{equation}
\frac{1}{\pi} \mbox{\rm Im}\Pi_{N}(t) = \frac{t}{t_0}\; C_N \;\Meijer{0,N}{0,N}{\frac{t}{t_0}}{1-a_1,\ldots,1-a_N \,; \,\relbar\!\relbar}{\relbar\!\relbar\, ;\, 1-b_1,\cdots,1-b_N}\;.
\end{equation}
These successive {\it equivalent} spectral functions, alike the physical spectral function, are only defined for $t\ge t_0$ but they are not expected to reproduce, {\it locally}, the detailed physical shape unless the level of approximation reaches the exact solution (as it is the case in the QED example at the one loop level discussed in ref.~\cite{EdeR17a}). However, when inserted in a dispersion relation integral, they reproduce the predicted smooth behaviour of the successive self-energy functions $\Pi_{N}(Q^2)$ and Adler ${\cal A}_{N}(Q^2)$ functions. It is in this sense that we call them {\it equivalent}.
The explicit form of these general expressions for the first $N=1$ and $N=2$ cases is as follows (a simple consistency check on the $N=1$ case is given just after this list):
\begin{itemize}
\item N=1
This corresponds to the case where we only know the first moment ${\cal M}(0)$. Then
\begin{equation}
{\cal M}_1 (s)=C_1 \frac{\Gamma(a_1-s)}{\Gamma(b_1 -s)}\,,\quad{\rm with}\quad C_1= \frac{\alpha}{\pi}\frac{5}{3}\frac{N_c}{3}\Gamma(b_1 -1)\quad\mbox{\rm and}\quad a_1=1
\end{equation}
to ensure the pQCD pole behaviour at $s=1$. The only free parameter $b_1$ is then fixed by the matching condition ${\cal M}_1 (0)={\cal M}(0)$ and one finds
\begin{equation}
\Pi_1 (Q^2)=-\frac{Q^2}{t_0}\ C_1 \frac{1}{\Gamma(b_1)}
\ _{2}{F}_{1}\left(\left. \begin{array}{cc} 1 & a_1 \\ ~ & b_1\end{array}\right\vert {-\frac{Q^2}{t_0}}\right)\,,
\end{equation}
and the corresponding Adler function [see Eq.~\rf{eq:adler}] is
\begin{equation}
{\cal A}_1 (Q^2)=-Q^2 \frac{\partial \Pi_1(Q^2)}{\partial (Q^2)}=
\frac{Q^2}{t_0}\ C_1 \frac{1}{\Gamma(b_1)}
\ _{2}{F}_{1}\left(\left. \begin{array}{cc} 2 & a_1 \\ ~ & b_1\end{array}\right\vert {-\frac{Q^2}{t_0}}\right)\,.
\end{equation}
In this simple case the {\it equivalent} spectral function is
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\frac{1}{\pi} \mbox{\rm Im}\Pi_{1}(t) & = & \frac{t}{t_0}\; C_1 \;\Meijer{0,1}{0,1}{\frac{t}{t_0}}{1-a_1 \,; \,\relbar\!\relbar}{\relbar\!\relbar\, ;\, 1-b_1}\\
& = & \frac{\alpha}{\pi}\frac{5}{3}\left(\frac{t_0}{t} \right)^{a_1 -1}\left(1-\frac{t_0}{t} \right)^{b_1-2}\,.
\end{eqnarray}}
\item N=2
This corresponds to the case where we know the first two moments ${\cal M}(0)$ and ${\cal M}(-1)$. Then
\begin{equation}
{\cal M}_2 (s)=C_2\frac{\Gamma(1-s)}{\Gamma(2 -s)}\frac{\Gamma(a_2 -s)}{\Gamma(b_2-s)}\quad{\rm with}\quad C_2= \frac{\alpha}{\pi}\frac{5}{3}\frac{N_c}{3}\frac{\Gamma(b_2 -1)}{\Gamma(a_2 -1)}\,,
\end{equation}
and the parameters $a_2$ and $b_2$ fixed by the two matching conditions
\begin{equation}
{\cal M}_2 (0)={\cal M}(0)\quad\mbox{\rm and}\quad{\cal M}_2 (-1)={\cal M}(-1)\,.
\end{equation}
Then
\begin{equation}
\Pi_2 (Q^2)=-\frac{Q^2}{t_0}\ C_2 \frac{\Gamma(a_2)}{\Gamma(b_2)}
\ _{3}{F}_{2}\left(\left. \begin{array}{ccc} 1 & 1 & a_2 \\ ~ & 2 & b_2\end{array}\right\vert {-\frac{Q^2}{t_0}}\right)\,;
\end{equation}
the corresponding Adler function is
\begin{equation}
{\cal A}_2 (Q^2)=-Q^2 \frac{\partial \Pi_2(Q^2)}{\partial (Q^2)}=
\frac{Q^2}{t_0}\ C_2 \frac{\Gamma(a_2)}{\Gamma(b_2)}
\ _{3}{F}_{2}\left(\left. \begin{array}{ccc} 2 & 1 & a_2 \\ ~ & 2 & b_2\end{array}\right\vert {-\frac{Q^2}{t_0}}\right)\,,
\end{equation}
and the {\it equivalent} $N=2$ spectral function is~\footnote{Notice the contrast with the predicted {\it equivalent} spectral function of the Pad\'e approximant constructed with ${\cal M}(0)$ and ${\cal M}(-1)$ which is just a delta function.}:
\begin{equation}
\frac{1}{\pi} \mbox{\rm Im}\Pi_{2}(t) = \frac{t}{t_0}\; C_2 \;\Meijer{0,2}{0,2}{\frac{t}{t_0}}{0,1-a_2 \,; \,\relbar\!\relbar}{\relbar\!\relbar\, ;\, -1,1-b_2}\,.
\end{equation}
\end{itemize}
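As a simple consistency check on the $N=1$ case above, the Mellin transform of the corresponding {\it equivalent} spectral function is an Euler Beta integral: with $a_1=1$ and $N_c=3$,
\begin{equation}
\int_{t_0}^{\infty}\frac{dt}{t}\left(\frac{t}{t_0}\right)^{s-1}\frac{\alpha}{\pi}\frac{5}{3}\left(1-\frac{t_0}{t}\right)^{b_1-2}=\frac{\alpha}{\pi}\frac{5}{3}\ B(1-s,b_1-1)=C_1\frac{\Gamma(1-s)}{\Gamma(b_1-s)}\,,
\end{equation}
which indeed reproduces ${\cal M}_1(s)$.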
We next propose to show the application of the Mellin-Barnes approximants discussed above to a non trivial example.
\section{\Large Mellin-Barnes-approximants (MBa) in QED at two loops.}
\setcounter{equation}{0}
\def\Alph{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\noindent
We wish to test the techniques developed in the previous section with a more complicated example than the lowest order QED vacuum polarization discussed in ref.~\cite{EdeR17a}. We propose to examine the case of the QED vacuum polarization at two loops.
The proper fourth order QED spectral function was first calculated by K\"{a}llen and Sabry in 1955~\cite{KS55} and later on in ref.~\cite{LdeR68}. It is given by the following expression, where $m$ denotes the lepton mass in the QED VP-loop and
\begin{equation}
\delta= \sqrt{1-\frac{4m^2}{t}}\,,
\end{equation}
{\setlength\arraycolsep{2pt}
\begin{eqnarray}\lbl{eq:KS}
\frac{1}{\pi}\mbox{\rm Im}\Pi^{\rm QED}_{\rm 4th}(t) & = & \left(\frac{\alpha}{\pi} \right)^2\left\{
\delta\left(\frac{5}{8}-\frac{3}{8}\delta^2 -\left(\frac{1}{2}-\frac{1}{6} \delta^2\right)\log\left[64\frac{\delta^4}{(1-\delta^2)^3} \right]\right)\right.\nonumber \\
& + & \left(\frac{11}{16}+\frac{11}{24}\delta^2 -\frac{7}{48}\delta^4
+\left(\frac{1}{2}+\frac{1}{3}\delta^2 -\frac{1}{6}\delta^4 \right)\log\left[\frac{(1+\delta)^3}{8\delta^2}\right]\right)\log\left[\frac{1+\delta}{1-\delta} \right] \nonumber \\
& + & \left. 2\left(\frac{1}{2}+\frac{1}{3}\delta^2 -\frac{1}{6}\delta^4 \right)
\left(2\ \mbox{\rm Li}_2\left[\frac{1-\delta}{1+\delta} \right]+\mbox{\rm Li}_2 \left[-\frac{1-\delta}{1+\delta} \right] \right)\right\}\theta(t-4m^2)\,.
\end{eqnarray}}
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_05.pdf}
\bf\caption{\lbl{fig:spect4}}
\vspace*{0.25cm}
{\it Shape of the Spectral Function in Eq.~\rf{eq:KS} in $\left(\frac{\alpha}{\pi} \right)^2$ units.}
\end{center}
\end{figure}
\noindent
The asymptotic behaviours of this spectral function are
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\frac{1}{\pi}\mbox{\rm Im}\Pi^{\rm QED}_{\rm 4th}(t) & \underset{{t\rightarrow 4m^2}}{\thicksim} & \left(\frac{\alpha}{\pi} \right)^2\left\{\frac{\pi^2}{4}-2\sqrt{\frac{t}{4m^2}-1}+\frac{\pi^2}{6}\left(\frac{t}{4m^2}-1\right)+{\cal O}\left[\left(\frac{t}{4m^2}-1\right)^{3/2}\right]\right\}\,, \lbl{eq:spect4thth}\\
\frac{1}{\pi}\mbox{\rm Im}\Pi^{\rm QED}_{\rm 4th}(t) & \underset{{t\rightarrow\infty}}{\thicksim} & \left(\frac{\alpha}{\pi} \right)^2\left\{\frac{1}{4}+\frac{3}{4}\frac{4m^2}{t}+{\cal O}\left[\left(\frac{4m^2}{t} \right)^2 \log\left(\frac{t}{4m^2}\right)\right]\right\} \lbl{eq:spect4thinf}\,.
\end{eqnarray}}
\noindent
Notice that the behaviour at threshold $t\sim 4m^2$ is rather different to the one at the one loop level~\cite{EdeR17a} and the shape of the spectral function, which is shown in Fig.~\rf{fig:spect4}, is also very different.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_06.pdf}
\bf\caption{\lbl{fig:melspec4}}
\vspace*{0.25cm}
{\it Shape of the Mellin Transform of the Spectral Function in Eq.~\rf{eq:KS} in $\left(\frac{\alpha}{\pi} \right)^2$ units.}
\end{center}
\end{figure}
The shape of the Mellin transform of the 4th order spectral function in Eq.~\rf{eq:KS} is shown in Fig.~\rf{fig:melspec4}. Like the Mellin transform in QCD it is also singular at $s=1$ but with a different residue
\begin{equation}\lbl{eq:s1}
{\cal M}^{\rm QED}_{\rm 4th}(s)\underset{{s\rightarrow 1}}{\thicksim} \left(\frac{\alpha}{\pi}\right)^2 \frac{1}{4}\frac{1}{1-s}\,,
\end{equation}
and shares with QCD the property of being a monotonously increasing function from $s=-\infty$ to $s<1$.
The real part of the fourth order vacuum polarization in QED is also known analytically~\cite{KS55}. It is a rather complicated expression and, therefore, it is a good test to see how well it is approximated by the successive GH-Functions in Eq.~\rf{eq:GHF}. The shape of the $\Pi^{\rm QED}_{\rm 4th}(Q^2)$ function in the Euclidean is shown in Fig.~\rf{fig:pi4E}.
We shall discuss this 4th order QED example in a way as close as possible to the QCD case which we shall later be confronted with. Therefore, the input will be the successive values of the moments of the spectral function, i.e. of the derivatives of $\Pi_{\rm 4th}^{\rm QED}(Q^2)$ at $Q^2 =0$.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_07.pdf}
\bf\caption{\lbl{fig:pi4E}}
\vspace*{0.25cm}
{\it Shape of the 4th order QED vacuum polarization function in the Euclidean \\ $\left(\frac{\alpha}{\pi} \right)^2$ units.}
\end{center}
\end{figure}
\noindent
The first few Mellin moments
\begin{equation}
{\cal M}^{\rm QED}_{\rm 4th}(s)\equiv\int_{4m^2}^\infty \frac{dt}{t}\left(\frac{t}{4m^2}\right)^{s-1}\frac{1}{\pi}\mbox{\rm Im}\Pi^{\rm QED}_{\rm 4th}(t)\,,
\end{equation}
for $s=0,-1,-2,-3,-4,-5$, in units of $\left(\frac{\alpha}{\pi} \right)^2$ are tabulated below in Table~\rf{table:t1}.
\begin{table*}[h]
\caption[Results]{ ${\cal M}(s)$ Moments in units of $\left(\frac{\alpha}{\pi} \right)^2$. }
\lbl{table:t1}
\begin{center}
\begin{tabular}{|c|c|c|} \hline \hline {\bf Moment} & {\bf Exact result} & {\bf Numerical value}
\\
\hline \hline
${\cal M}(0)$ & $82/81$ & $1.012356796$
\\
${\cal M}(-1)$ & $449/675$ & $0.665185185$ \\
${\cal M}(-2)$ & $249916/496125$ & $0.503735936$ \\
${\cal M}(-3)$ & $51986/127575$ & $0.407493631$ \\
${\cal M}(-4)$ & $432385216/1260653625$ & $ 0.342984946$ \\
${\cal M}(-5)$ & $5415247216/18261468225$ & $0.296539531$ \\
\hline\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{\large Successive Approximations to ${\cal M}^{\rm QED}_{\rm 4th}(s)$, $\Pi^{\rm QED}_{\rm 4th}(Q^2)$ and $a_{\mu}^{\rm VP}$.}
\noindent
We can now proceed to the construction of a successive set of MBa's to ${\cal M}^{\rm QED}_{\rm 4th}(s)$ of the type shown in Eq.~\rf{eq:marichevend} and to the evaluation of the corresponding GH-function approximation to $\Pi^{\rm QED}_{\rm 4th}(Q^2)$ of the type shown in Eq.~\rf{eq:GHF}. At each approximation step we shall then evaluate the corresponding contribution to the anomalous magnetic moment of a fermion of mass $m$ induced by the 4th order vacuum polarization generated by the same fermion (see the corresponding Feynman diagrams in Fig.~\rf{fig:QED4}),
and compare it with the exact result which is known analytically~\cite{MR69}:
{\setlength\arraycolsep{2pt}
\begin{eqnarray}\lbl{eq:MiRe}
a^{\rm VP}_{\mu} & = & \left(\frac{\alpha}{\pi} \right)^3 \left\{\frac{673}{108}-\frac{41}{81}\pi^2
-\frac{4}{9}\pi^2 \log(2)-\frac{4}{9}\pi^2 \log^{2}(2) +\frac{4}{9}\log^{4}(2) -\frac{7}{270}\pi^4 \right.\nonumber \\
& & \left. +\frac{13}{18}\zeta(3)+\frac{32}{3}{\rm PolyLog}\left[4\,, \frac{1}{2} \right]\right\} =\left(\frac{\alpha}{\pi} \right)^3 0.0528707\,.
\end{eqnarray}}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.70\textwidth]
{figqed4new-eps-converted-to.pdf}
\bf\caption{\lbl{fig:QED4}}
{\it Feynman diagrams contributing to the muon anomaly in Eq.~\rf{eq:MiRe}.}
\vspace*{0.25cm}
\end{center}
\end{figure}
\noindent
The result in Eq.~\rf{eq:MiRe} is a rather complicated expression involving higher transcendental numbers with important numerical cancellations among the different terms and, therefore, it should provide a good test. We want to investigate how well we reproduce this exact result using the Mellin-Barnes integral representation in Eq.~\rf{eq:MBamuF} which, when adapted to this case, reads as follows:
\begin{equation}\lbl{eq:MBamuNQED}
a^{\rm VP}(N) = \left(\frac{\alpha}{\pi}\right) \frac{1}{2}\frac{1}{2\pi}\int\limits_{-\infty}^{+\infty}d\tau\ e^{-i\tau\log 4}\ {\cal F}\left(\frac{1}{2}-i\tau \right)\ {\cal M}_{N}\left(\frac{1}{2}-i\tau \right)\,,
\end{equation}
with ${\cal M}_{N}(s)$ the successive Mellin approximants.
\subsubsection{\normalsize The $N=1$ MBa.}
\noindent
This corresponds to the case where we only know ${\cal M}_{\rm 4th}^{\rm QED}(0)$. Following Eq.~\rf{eq:marichevend} we are instructed to consider as a first Mellin approximant:
\begin{equation}\lbl{eq:1MQED}
{\cal M}^{\rm QED}_{\rm 4th}(s)\Rightarrow {\cal M}_{1}(s)= C_{1}\frac{\Gamma(a-s)}{\Gamma(b-s)}\,,
\end{equation}
which must be singular at $s=1$. This fixes the $a$ parameter to $a=1$ and the overall normalization to
\begin{equation}
C_{1}= \left(\frac{\alpha}{\pi}\right)^2 \frac{1}{4}\Gamma(b-1)\,,
\end{equation}
so as to reproduce the leading singularity when $s\rightarrow 1$.
Matching ${\cal M}_{1}(s)$ at $s=0$ with the numerical value of ${\cal M}_{\rm 4th}^{\rm QED}(0)$ in Table~\rf{table:t1} fixes the $b$ parameter to
\begin{equation}
b=1.24695122\,.
\end{equation}
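\noindent
This value follows directly from the matching condition: since ${\cal M}_{1}(0)=\frac{1}{4(b-1)}$ in $\left(\frac{\alpha}{\pi}\right)^2$ units, equating it to $82/81$ gives $b=1+\frac{81}{328}=\frac{409}{328}$.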
We can then perform the corresponding integral in
Eq.~\rf{eq:MBamuNQED} which gives as a result for the first $N=1$ approximant:
\begin{equation}\lbl{eq:QEDN1}
a^{\rm VP} (N=1)=\left(\frac{\alpha}{\pi} \right)^3 \times 0.0500007\,.
\end{equation}
It reproduces the Mignaco-Remiddi exact result in Eq.~\rf{eq:MiRe} to an accuracy of 5\%.
\subsubsection{\normalsize The $N=2$ MBa.}
\noindent
This corresponds to the case where we know the slope and curvature of $\Pi_{\rm 4th}^{\rm QED}(Q^2)$ at $Q^2 =0$, i.e. ${\cal M}_{\rm 4th}^{\rm QED}(0)$ and ${\cal M}_{\rm 4th}^{\rm QED}(-1)$. This information is similar to that already available from LQCD~\footnote{See refs.~\cite{Burger14,Ch16,Lellouch16} and references therein.}. We shall therefore discuss it in detail.
The Mellin approximant in this case has two parameters $a$ and $b$:
\begin{equation}\lbl{eq:2MQED}
{\cal M}^{\rm QED}_{\rm 4th}(s)\Rightarrow {\cal M}_{2}(s)= C_{2}\frac{\Gamma(1-s)}{\Gamma(2-s)}\frac{\Gamma(a-s)}{\Gamma(b-s)}\,,
\end{equation}
and the leading short-distance constraint fixes the overall normalization to
\begin{equation}
C_{2}=\left(\frac{\alpha}{\pi}\right)^2 \frac{1}{4} \frac{\Gamma(b-1)}{\Gamma(a-1)}\,,
\end{equation}
with the parameters $a$ and $b$ fixed by the two matching equations:
\begin{equation}
\frac{1}{4}\frac{a-1}{b-1}={\cal M}_{\rm 4th}^{\rm QED}(0)\quad\mbox{\rm and}\quad
\frac{1}{8}\frac{a}{b}\frac{a-1}{b-1}={\cal M}_{\rm 4th}^{\rm QED}(-1)\,,
\end{equation}
or equivalently
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\frac{1}{4}\frac{a-1}{b-1} & = & {\cal M}_{\rm 4th}^{\rm QED}(0)\\
\frac{1}{2}\frac{a}{b} & = & \frac{{\cal M}_{\rm 4th}^{\rm QED}(-1)}{{\cal M}_{\rm 4th}^{\rm QED}(0)}\,.
\end{eqnarray}}
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_09.pdf}
\bf\caption{\lbl{fig:intgrand}}
\vspace*{0.25cm}
{\it Plot of the real part of the integrand ${\cal R}_{2}(\tau)$ in Eq.~\rf{eq:fourier2}:\\ the red curve corresponds to inserting the exact ${\cal M}^{\rm QED}_{\rm 4th}\left(\frac{1}{2}-i\tau \right)$ in the integrand,\\ the dashed blue curve to inserting the approximation ${\cal M}_{2}\left(\frac{1}{2}-i\tau \right)$.}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_10.pdf}
\bf\caption{\lbl{fig:melexM2}}
\vspace*{0.25cm}
{\it The red curve is the Mellin Transform of the Spectral Function in Eq.~\rf{eq:KS}.\\
The dotted blue curve is the $N=2$ Mellin approximant in Eq.~\rf{eq:2MQED}.\\ Both curves are shown in $\left(\frac{\alpha}{\pi} \right)^2$ units.}
\end{center}
\end{figure}
\noindent
Inserting the numerical values in Table~\rf{table:t1} for ${\cal M}_{\rm 4th}^{\rm QED}(0)$ and ${\cal M}_{\rm 4th}^{\rm QED}(-1)$ results in the values
\begin{equation}\lbl{eq:pars2}
a=1.46508\quad\mbox{\rm and}\quad b=1.11485\,.
\end{equation}
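\noindent
For illustration, these values can be reproduced with a few lines of numerical code; the following minimal sketch uses SciPy's \texttt{fsolve}, with a starting guess simply chosen near the quoted solution:
\begin{verbatim}
from scipy.optimize import fsolve

# Table 1 moments in (alpha/pi)^2 units
M0, M1 = 82/81, 449/675

def matching(p):
    a, b = p
    return [0.25*(a - 1)/(b - 1) - M0,   # matches M(0)
            0.5*a/b - M1/M0]             # matches M(-1)/M(0)

a, b = fsolve(matching, x0=[1.5, 1.1])
print(a, b)    # ~ 1.46508, 1.11485
\end{verbatim}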
With these parameter values inserted in ${\cal M}_2 (s)$ in Eq.~\rf{eq:2MQED}, and performing the corresponding integral
\begin{equation}\lbl{eq:fourier2}
a_{\mu}^{\rm VP}(N=2) = \left(\frac{\alpha}{\pi}\right)\frac{1}{2} \frac{1}{2\pi}\int\limits_{-\infty}^{+\infty}d\tau\ \underbrace{e^{-i\tau \log 4}\ {\cal F}\left(\frac{1}{2}-i\tau\right)\ {\cal M}_{2}\left(\frac{1}{2}-i \tau \right)}_{{\cal R}_{2}(\tau)}\,,
\end{equation}
gives the result
\begin{equation}
a^{\rm VP} (N=2)=\left(\frac{\alpha}{\pi} \right)^3 \times 0.0531447\,,
\end{equation}
which reproduces the Mignaco-Remiddi result in Eq.~\rf{eq:MiRe} to an accuracy of 0.5\%, a significant improvement with respect to the $N=1$ approximant. Figure~\rf{fig:intgrand} shows the behaviour of the real part of the integrand ${\cal R}_{2}(\tau)$ in Eq.~\rf{eq:fourier2} as a function of $\tau$, where the red curve is the one when one inserts the exact Mellin transform ${\cal M}^{\rm QED}_{\rm 4th}\left(\frac{1}{2}-i\tau \right)$ in the integrand and the dashed blue curve the one associated to the $N=2$ approximation. Already at this level of approximation the agreement between both integrands is quite impressive.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_11.pdf}
\bf\caption{\lbl{fig:R2}}
\vspace*{0.25cm}
{\it Plots of the ratio $\frac{{\cal M}_{2}(s)}{{\cal M}(s)}$ versus $s$. Notice the scale of the plots. }
\end{center}
\end{figure}
At this stage it is also interesting to compare the exact Mellin transform shown in Fig.~\rf{fig:melspec4} with the one corresponding to the $N=2$ approximation. This is shown in Fig.~\rf{fig:melexM2} where the blue dotted curve is the $N=2$ approximation. The agreement of the two curves down to $s\simeq -3$ is quite remarkable. In order to see the difference between these two curves we show in Fig.~\rf{fig:R2} the plot of their ratio. The ${\cal M}_{2}(s)/{\cal M}(s)$ ratio turns out to be greater than one everywhere, except in the interval $-1\le s \le 0$. This is why the $N=2$ result approaches the exact value of the anomaly from above. The quality of the interpolation between $s=0$ and $s=-1$ provided by the $N=2$ approximation is shown at the right in Fig.~\rf{fig:R2}. Notice the scale in the figure: e.g. the value at the minimum of the ratio shown in this figure is $0.9937$, to be compared to one.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_12.pdf}
\bf\caption{\lbl{fig:RPi22}}
\vspace*{0.25cm}
{\it The red curve is the exact 4th order QED VP-function.\\
The dotted blue curve is the $N=2$ approximant.\\ Both curves are shown in $\left(\frac{\alpha}{\pi} \right)^2$ units.}
\end{center}
\end{figure}
According to Eq.~\rf{eq:GHF}, the $N=2$ GH-function approximant to $\Pi_{\rm 4th}^{\rm QED}(Q^2)$ is given by the expression ($z\equiv \frac{Q^2}{4m^2}$):
\begin{equation}\lbl{eq:GHFN2}
\Pi_{\rm 4th}^{\rm QED}(Q^2)\Rightarrow \Pi_{(N=2)}^{\rm QED}(Q^2) =
\left(\frac{\alpha}{\pi} \right)^2 \ (-z)\frac{1}{4}\frac{a-1}{b-1}\ _{3}{F}_{2}\left(\left. \begin{array}{ccc} 1 & 1 & a \\ ~ & 2 & b\end{array}\right\vert {-z}\right)\,,
\end{equation}
\noindent
where $\ _{3}{F}_{2}\left(\left. \begin{array}{ccc} 1 & 1 & a \\ ~ & 2 & b\end{array}\right\vert {-z}\right)$ is the GH-Function defined by the series:
\begin{equation}
_{3}{F}_{2}\left(\left. \begin{array}{ccc} 1 & 1 & a \\ ~ & 2 & b\end{array}\right\vert {-z}\right)=\sum_{n=0}^{\infty}\frac{(1)_n (1)_n (a)_n }{(2)_n (b)_n}\frac{(-z)^n}{n!}\,,
\end{equation}
and $a$ and $b$ have the values given in Eq.~\rf{eq:pars2}.
Figure \rf{fig:RPi22} shows how well the MBa $\Pi_{(N=2)}^{\rm QED}(Q^2)$ (blue curve) does when compared to the exact function (red curve). From this comparison, one can qualitatively understand why the $N=2$ approximation already reproduces the exact value of $a^{\rm VP}$ in Eq.~\rf{eq:MiRe} at the $0.5\%$ level.
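For practical purposes, the GH-Function in Eq.~\rf{eq:GHFN2} is directly available in multiple-precision libraries such as \texttt{mpmath}. A minimal sketch which evaluates $\Pi_{(N=2)}^{\rm QED}(Q^2)$ with the parameter values of Eq.~\rf{eq:pars2}, and checks its leading small-$z$ behaviour $\Pi_{(N=2)}^{\rm QED}\simeq -z\,{\cal M}(0)$ in $\left(\frac{\alpha}{\pi}\right)^2$ units, is the following:
\begin{verbatim}
import mpmath as mp

a, b = mp.mpf('1.46508'), mp.mpf('1.11485')   # parameter values of Eq. (pars2)
M0   = mp.mpf(82)/81                          # M(0) in (alpha/pi)^2 units

def Pi_N2(z):                                 # z = Q^2/(4 m^2)
    return -z*mp.mpf('0.25')*(a - 1)/(b - 1)*mp.hyp3f2(1, 1, a, 2, b, -z)

z = mp.mpf('1e-4')
print(Pi_N2(z), -z*M0)    # the two numbers agree at leading order in z
\end{verbatim}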
The {\it equivalent} spectral function corresponding to the $N=2$ approximation is given by the Meijer's G-Function:
\begin{equation}
\frac{1}{\pi} \mbox{\rm Im}\Pi_{2}(t) = \frac{t}{t_0}\; \left(\frac{\alpha}{\pi} \right)^2 \frac{1}{4} \;\Meijer{0,2}{0,2}{\frac{t}{t_0}}{0,1-a \,; \,\relbar\!\relbar}{\relbar\!\relbar\, ;\, -1,1-b}\,,
\end{equation}
and its shape, compared to the exact spectral function, is shown in Fig.~\rf{fig:Pi22}. Notice that the {\it equivalent} spectral function corresponding to the unique Pad\'e approximant constructed with ${\cal M}_{\rm 4th}^{\rm QED}(0)$ and ${\cal M}_{\rm 4th}^{\rm QED}(-1)$ would be just a delta function.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_13.pdf}
\bf\caption{\lbl{fig:Pi22}}
\vspace*{0.25cm}
{\it The red curve is the exact 4th order QED spectral function.\\
The dotted blue curve is the $N=2$ approximant.\\ Both curves are shown in $\left(\frac{\alpha}{\pi} \right)^2$ units.}
\end{center}
\end{figure}
\subsubsection{\normalsize The $N=3$ MBa.}
\noindent
This corresponds to the Mellin approximant
\begin{equation}\lbl{eq:3MQED}
{\cal M}^{\rm QED}_{\rm 4th}(s)\Rightarrow {\cal M}_{3}(s)= C_{3}\frac{\Gamma(1-s)\Gamma(a_1 -s)}{\Gamma(b_1 -s)\Gamma(b_2 -s)}\,,
\end{equation}
with
\begin{equation}
C_{3}=\left(\frac{\alpha}{\pi}\right)^2 \frac{1}{4} \frac{\Gamma(b_1 -1)\Gamma(b_2 -1)}{\Gamma(a_1 -1)}\,,
\end{equation}
and the three parameters $a_1$, $b_1$ and $b_2$ fixed by matching ${\cal M}_{3}(s)$ to the values of the three moments ${\cal M}^{\rm QED}_{\rm 4th}(0)$, ${\cal M}^{\rm QED}_{\rm 4th}(-1)$, and ${\cal M}^{\rm QED}_{\rm 4th}(-2)$. The matching equations in this case are:
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\left(\frac{\alpha}{\pi}\right)^2 \frac{1}{4}\frac{1}{b_{1}-1}(a_{1}-1)\frac{1}{b_{2}-1} & = & {\cal M}^{\rm QED}_{\rm 4th}(0)\,, \\
\frac{1}{b_{1}}a_{1}\frac{1}{b_{2}} & = & \frac{{\cal M}^{\rm QED}_{\rm 4th}(-1)}{{\cal M}^{\rm QED}_{\rm 4th}(0)}\,,\\
2\frac{1}{b_{1}+1}(a_{1}+1)\frac{1}{b_{2}+1} & = & \frac{{\cal M}^{\rm QED}_{\rm 4th}(-2)}{{\cal M}^{\rm QED}_{\rm 4th}(-1)}\,,
\end{eqnarray}}
\noindent
which results in the values:
\begin{equation}
a_{1} =
2.528554853\,,\quad
b_{1} = 1.163614902\,,\quad
b_{2} = 3.307115556\,,
\end{equation}
or the equivalent solution with $b_{1}\rightleftharpoons b_{2}$. With these values inserted in ${\cal M}_3 (s)$ in Eq.~\rf{eq:3MQED}, and performing the corresponding integral in
Eq.~\rf{eq:MBamuNQED} gives the result
\begin{equation}
a^{\rm VP} (N=3)=\left(\frac{\alpha}{\pi} \right)^3 \times 0.0528678\,,
\end{equation}
which now reproduces the Mignaco-Remiddi result in Eq.~\rf{eq:MiRe} to the remarkable accuracy of 0.004\%.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_14.pdf}
\bf\caption{\lbl{fig:Mel11012}}
\vspace*{0.25cm}
{\it The red curve is the Mellin Transform of the exact Spectral Function.\\
The dashed blue curve is the $N=3$ Mellin approximant. Both curves are shown in $\left(\frac{\alpha}{\pi} \right)^2$ units.}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm} \includegraphics[width=0.50\textwidth]{figure_15.pdf}
\bf\caption{\lbl{fig:R33S}}
\vspace*{0.25cm}
{\it Plots of the ratio $\frac{{\cal M}_{3}(s)}{{\cal M}(s)}$ versus $s$. Notice the vertical scales of these plots. }
\end{center}
\end{figure}
As an illustration of the quality of the approximation, we show in Fig.~\rf{fig:Mel11012} the Mellin transform of the $N=3$ approximation (the blue dashed curve) compared to the exact Mellin transform (the red curve). At the scale of the figure it is practically impossible to see the difference. In order to make the difference visible, we show plots of the ratio ${\cal M}_{3}(s)/{\cal M}(s)$ in Fig.~\rf{fig:R33S}. Notice the scale in the left plot of Fig.~\rf{fig:R33S} as compared to the one in Fig.~\rf{fig:R2}, and the improvement in the figure at the right, which is plotted at the same scale as Fig.~\rf{fig:R2}.
An accuracy of 0.004\% is already well beyond what is required for the HVP contribution to the muon anomaly in QCD but, for the sake of testing the approximation procedure that we are advocating, let us try further possible improvements.
\subsubsection{\normalsize The $N=4$ MBa.}
\noindent
The $N=4$ approximant is
\begin{equation}\lbl{eq:4MQED}
{\cal M}^{\rm QED}_{\rm 4th}(s)\Rightarrow {\cal M}_{4}(s)= C_{4}
\frac{\Gamma(1-s)\Gamma(a_1 -s)\Gamma(a_2 -s)}{\Gamma(2-s)\Gamma(b_1 -s)\Gamma(b_2 -s)}\,,
\end{equation}
with
\begin{equation}
C_{4}=\left(\frac{\alpha}{\pi}\right)^2 \frac{1}{4} \frac{\Gamma(b_1 -1)\Gamma(b_2 -1)}{\Gamma(a_1 -1)\Gamma(a_2 -1)}\,,
\end{equation}
and the four parameters $a_1$, $a_2$, $b_1$ and $b_2$ solutions of the matching equations:
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\frac{1}{4}\frac{a_1-1}{b_1-1}\frac{a_2-1}{b_2-1} & = & {\cal M}^{\rm QED}_{\rm 4th}(0)\,, \\
\frac{1}{2}\frac{a_1}{b_1}\frac{a_2}{b_2} & = & \frac{{\cal M}^{\rm QED}_{\rm 4th}(-1)}{{\cal M}^{\rm QED}_{\rm 4th}(0)}\,,\\
\frac{2}{3} \frac{(a_1 +1)}{(b_1 +1)} \frac{(a_2 +1)}{(b_2 +1)} & = & \frac{{\cal M}^{\rm QED}_{\rm 4th}(-2)}{{\cal M}^{\rm QED}_{\rm 4th}(-1)}\,, \\
\frac{3}{4} \frac{(a_1 +2)}{(b_1 +2)} \frac{(a_2 +2)}{(b_2 +2)} & = & \frac{{\cal M}^{\rm QED}_{\rm 4th}(-3)}{{\cal M}^{\rm QED}_{\rm 4th}(-2)}\,,
\end{eqnarray}}
\noindent
which give, as an acceptable solution, the values:
\begin{equation}
a_1 =
2.829673582\,,\quad b_1 = 3.528046148\,,\quad a_2 = 1.902891314\,,\quad b_2 = 1.161374634\,,
\end{equation}
or the equivalent solution with $a_{1}\rightleftharpoons a_{2}$ and $b_{1}\rightleftharpoons b_{2}$.
The corresponding prediction for the muon anomaly is
\begin{equation}
a_{\mu}^{\rm VP}(N=4)= \left(\frac{\alpha}{\pi} \right)^3 0.0528711\,,
\end{equation}
which reproduces the exact value at the level of 0.00075\%, practically the exact result.
It seems fair to conclude from these examples that the successive use of MBa's of the Marichev class in Eq.~\rf{eq:marichevend} is an excellent method to approach, rather quickly in this case, the exact result with an excellent accuracy. The question which then arises is: {\it how far can one go?} The exact Mellin transform of the QED fourth order spectral function, contrary to the second order one discussed in ref.~\cite{EdeR17a}, is expected to be a much more complicated expression than just a simple {\it standard product} of the Marichev class in Eq.~\rf{eq:marichevend}. Therefore, {\it a priori}, one expects these approximations to break down at some $N$-level where no acceptable solutions exist any longer. Let us then proceed to examine what happens when one tries higher $N$-approximants of a single {\it standard product}.
\subsubsection{\normalsize The $N=5$ MBa.}
\noindent
The $N=5$ Mellin approximant is
\begin{equation}\lbl{eq:5MQED}
{\cal M}^{\rm QED}_{\rm 4th}(s)\Rightarrow {\cal M}_{5}(s)= C_{5}
\frac{\Gamma(1-s)\Gamma(a_1 -s)\Gamma(a_2 -s)}{\Gamma(b_1 -s)\Gamma(b_2 -s)\Gamma(b_3 -s)}\,,
\end{equation}
with
\begin{equation}
C_{5}=\left(\frac{\alpha}{\pi}\right)^2 \frac{1}{4} \frac{\Gamma(b_1 -1)\Gamma(b_2 -1)\Gamma(b_3 -1)}{\Gamma(a_1 -1)\Gamma(a_2 -1)}\,,
\end{equation}
and the parameters $a_1$, $a_2$, $b_1$, $b_2$, $b_3$ solutions of the matching equations:
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\frac{1}{4}\frac{a_1 -1}{b_1 -1}\frac{a_2 -1}{b_2 -1}\frac{1}{b_3-1} & = & {\cal M}^{\rm QED}_{\rm 4th}(0)\,, \\
\frac{a_1}{b_1}\frac{a_2}{b_2}\frac{1}{b_3} & = & \frac{{\cal M}^{\rm QED}_{\rm 4th}(-1)}{{\cal M}^{\rm QED}_{\rm 4th}(0)}\,,\\
2 \frac{a_1 +1}{b_1 +1} \frac{a_2 +1}{b_2 +1}\frac{1}{b_3 +1} & = & \frac{{\cal M}^{\rm QED}_{\rm 4th}(-2)}{{\cal M}^{\rm QED}_{\rm 4th}(-1)}\,, \\
3 \frac{a_1 +2}{b_1 +2} \frac{a_2 +2}{b_2 +2}\frac{1}{b_3 +2} & = & \frac{{\cal M}^{\rm QED}_{\rm 4th}(-3)}{{\cal M}^{\rm QED}_{\rm 4th}(-2)}\,, \\
4 \frac{a_1 +3}{b_1 +3} \frac{a_2 +3}{b_2 +3}\frac{1}{b_3 +3} & = & \frac{{\cal M}^{\rm QED}_{\rm 4th}(-4)}{{\cal M}^{\rm QED}_{\rm 4th}(-3)}\,.
\end{eqnarray}}
\noindent
There are still acceptable solutions to this system of polynomial equations with the values:
\begin{equation}
b_1 = 1.16249580\,, a_1 = 4.111523616\,, b_2 = 4.354959443\,, a_2 = 2.360299888\,, b_3 =
2.917297589\,,
\end{equation}
and the permutations of $a_1$, $a_2$ and $b_1$, $b_2$, $b_3$ which give equivalent solutions.
The corresponding prediction for the muon anomaly is now
\begin{equation}
a_{\mu}^{\rm VP}(N=5)= \left(\frac{\alpha}{\pi} \right)^3 0.0528706\,,
\end{equation}
which reproduces the exact value at the level of 0.00018\%, still an improvement with respect to the $N=4$ Approximation!
This is, however, the best one can do in the two loop QED case with single Mellin approximants of the type shown in Eq.~\rf{eq:marichevend}. Indeed, if one tries to improve with an $N=6$ approximant of this type, one finds that all the solutions for the parameters $a_1$, $a_2$, $a_3$, $b_1$, $b_2$, $b_3$ from the matching equations bring in complex numbers with real parts which lie inside the {\it fundamental strip}, in contradiction with the initial requirements for an acceptable solution that we imposed. This is the signal that, in our example, single Marichev-like approximants break down at a critical $N$-level where the function $\Pi_{\rm 4th}^{\rm QED}(Q^2)$ cannot be approximated any longer with just one GH-Function. It is possible, however, to extend the class of approximants to {\it superpositions of standard products} as indicated in Eq.~\rf{eq:marichevend} and, in fact, this is what we shall do in the case of QCD.
From the previous analysis we conclude that, in the case of the QED fourth order vacuum polarization, the best prediction we can make with single Marichev-like MBa's is an average of the $N=4$ and $N=5$ approximants, with an error estimated from the deviation of this average from the $N=4$ and $N=5$ results, i.e.,
\begin{equation}
a_{\mu}^{\rm VP}(\rm QED~4th~order)=\left(\frac{\alpha}{\pi} \right)^3 (0.0528709\pm 0.0000003)\,.
\end{equation}
This is already an excellent prediction when compared to the exact result in Eq.~\rf{eq:MiRe}.
\section{\Large Test of MBa with experimental HVP Moments.}
\setcounter{equation}{0}
\def\Alph{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\noindent
The KNT collaboration~\cite{KNT17} has kindly provided us with the values of the first few moments of the hadronic spectral function with their errors, as well as their covariance matrix. These moments were obtained using the same hadronic spectral function which results in the second number quoted in Eq.~\rf{eq:HVPexps}. This provides us with a good test of how well the approximants that we propose work when applied to a set of hadronic moments with realistic errors. The first six moments with their errors are given in Table~\rf{table:teubner} and their correlation matrix is given below in Table~\rf{table:alex}. We observe that the relative errors of the first two moments ${\cal M}(0)$ and ${\cal M}(-1)$ in Table~\rf{table:teubner} are smaller than the relative error in the determination of the lowest order HVP contribution $a_{\mu}^{\rm HVP}$ in Eq.~\rf{eq:HVPexps}~\cite{KNT17}. The higher moments ${\cal M}(-n)$ for $n=2,3,...$ have larger relative errors but they of course
contribute less and less to the total $a_{\mu}^{\rm HVP}$ determination.
\begin{table*}[h]
\caption[Results]{ ${\cal M}(s)$ Moments and Errors in $10^{-3}$ units . }
\lbl{table:teubner}
\begin{center}
\begin{tabular}{|c|c|c|} \hline \hline {\bf Moment} & {\bf Experimental Value} & {\bf Relative Error}
\\
\hline \hline
${\cal M}(0)$~~ & $0.7176\pm 0.0026$ & $0.36\%$
\\
${\cal M}(-1)$ & $0.11644\pm 0.00063$ & $0.54\%$ \\
${\cal M}(-2)$ & $0.03041\pm 0.00029$ & $0.95\%$ \\
${\cal M}(-3)$ & $0.01195\pm 0.00017$ & $1.4\%$\\
${\cal M}(-4)$ & $0.00625\pm 0.00011$ & $1.8\%$ \\
${\cal M}(-5)$ & $0.003859\pm 0.000078$ & $2.0\%$ \\
\hline\hline
\end{tabular}
\end{center}
\end{table*}
We shall next proceed, as in the previous section, to the construction of successive MBa's of the type shown in Eq.~\rf{eq:marichevend} and to the evaluation of the corresponding GH-Functions $\Pi^{\rm QCD}_{N}(Q^2)$ and $\frac{1}{\pi}\mbox{\rm Im}\Pi_{N}(t)$. At each approximation we shall then evaluate the corresponding $a_{\mu}^{\rm HVP}(N)$ contribution to the muon anomaly. In the next subsection we shall only consider as input the central values of the moments in Table~\rf{table:teubner}, postponing the error analysis to a later subsection.
\subsection{\large Successive MBa's to ${\cal M}^{\rm QCD}(s)$, $\Pi^{\rm QCD}(Q^2)$, $\frac{1}{\pi}\mbox{\rm Im}\Pi^{\rm QCD}(t)$ and $a_{\mu}^{\rm HVP}$.}
\noindent
\subsubsection{\large\bf The $N=1$ MBa.}
\vspace*{0.25cm}
\noindent
This corresponds to the MBa which one can construct when only the first moment ${\cal M}(0)$ is known. In this case
\begin{equation}\lbl{eq:mel1QCD}
{\cal M}_{1}(s)=\frac{\alpha}{\pi}\frac{5}{3}\Gamma(1-s)\frac{\Gamma(b_{1}-1)}{\Gamma(b_{1}-s)}\,,
\end{equation}
where the singularity at $s=1$ is the one associated to the asymptotic leading behaviour of the QCD spectral function with $u$, $d$, $s$, $c$, $b$ and $t$ quarks in Eq.~\rf{eq:pqed}.
Matching the value of ${\cal M}_{1}(s)$ at $s=0$ with the one from the experimental determination in Table~\rf{table:teubner} fixes the $b_1$-parameter to the value:
\begin{equation}
b_{1}=6.395\,.
\end{equation}
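\noindent
This value follows directly from the matching condition: Eq.~\rf{eq:mel1QCD} gives ${\cal M}_{1}(0)=\frac{\alpha}{\pi}\frac{5}{3}\frac{1}{b_1 -1}$, so that $b_1 =1+\frac{5}{3}\frac{\alpha}{\pi}\frac{1}{{\cal M}(0)}\simeq 6.395$ for $\alpha^{-1}\simeq 137.036$ and ${\cal M}(0)=0.7176\times 10^{-3}$.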
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.40\textwidth]{figure_16.pdf}
\bf\caption{\lbl{fig:mel1QCD}}
\vspace*{0.25cm}
{\it The red curve shows the shape of the $N=1$ MBa in Eq.~\rf{eq:mel1QCD}.\\
The blue circles are the experimental values in Table~\rf{table:teubner}.}
\end{center}
\end{figure}
\noindent
Figure~\rf{fig:mel1QCD} shows the shape of the predicted Mellin transform. The blue points in the figure correspond to the experimental values of the moments in Table~\rf{table:teubner} with their errors, which are too small to be seen at the scale in the figure. The agreement, at the precision of the scale of the figure, is excellent.
Inserting the expression of the first Mellin approximant ${\cal M}_{1}(s)$ in the integrand at the r.h.s. of Eq.~\rf{eq:MBamu} gives the result of the first MBa to the muon anomaly:
{\setlength\arraycolsep{2pt}
\begin{eqnarray}\lbl{eq:MBaRamuFQCD}
a_{\mu}^{\rm HVP}(N=1) & = & \left(\frac{\alpha}{\pi}\right)\sqrt{\frac{m_{\mu}^2}{t_0}}\frac{1}{2\pi }\int\limits_{-\infty}^{+\infty}d\tau \ e^{-i\tau \log\frac{t_0}{m_{\mu}^2}}\ {\cal F}\left(\frac{1}{2}-i\tau\right)\ {\cal M}_{N=1}\left(\frac{1}{2}-i\tau \right)\\
& = & 6.991\times 10^{-8}\,,
\end{eqnarray}}
\noindent
which reproduces the central value result in Eq.~\rf{eq:HVPexps}~\cite{KNT17} surprisingly well: to $0.8\%$.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.40\textwidth]{figure_17.pdf}
\bf\caption{\lbl{fig:R1E}}
\vspace*{0.25cm}
{\it Plot of the ratio of the experimental moments in Table~\rf{table:teubner} with their errors\\ to those predicted by the $N=1$ Mellin-Barnes-Approximation.}
\end{center}
\end{figure}
In order to understand why the $N=1$ MBa is already so good, let us explore in more detail the plot of ${\cal M}_{1}(s)$ in Fig.~\rf{fig:mel1QCD}. To better observe the deviations between the experimental moments and the predicted moments we plot in Fig.~\rf{fig:R1E} their ratio as a function of $s=-n$, $n=0,1,2,\dots$. The deviation of this ratio from one shows the discrepancy. Notice that, here, only the value of the ${\cal M}(0)$ moment has been used as an input. The predicted values of ${\cal M}(-1)$, ${\cal M}(-2)$ and even ${\cal M}(-3)$ turn out to be rather close to the experimental values, although already the predicted ${\cal M}(-3)$, and certainly the predicted higher moments, are not compatible with the experimental statistical errors. Higher moments, however, contribute less and less to the total value of the anomaly and this is why $a_{\mu}^{\rm HVP}(N=1)$ turns out to be already such a good approximation.
Why does the $N=1$ MBa do a better job in the case of QCD than in the two loop QED case we discussed before? The reason for this is that in the QCD case, contrary to the QED case, there are resonances in the low energy region of the spectral function with mass scales which, relative to the muon mass, enhance the contribution of the low moments, in particular ${\cal M}(0)$. If instead of the muon anomaly we were considering the electron anomaly, the $N=1$ MBa would already be giving a result with an accuracy comparable to the full determination.
Although, given the result in Eq.~\rf{eq:MBaRamuFQCD} and the present accuracy from experiment, there seems to be little room for improvement, let us examine what happens when one tries the $N=2$ MBa.
\noindent
\subsubsection{\large\bf The $N=2$ MBa.}
\vspace*{0.25cm}
\noindent
Here the Mellin approximant has the analytic form
\begin{equation}\lbl{eq:mel2QCD}
{\cal M}_{2}(s)=\frac{\alpha}{\pi}\frac{5}{3}\frac{\Gamma(1-s)}{\Gamma(2-s)}
\frac{\Gamma(a_1 -s)}{\Gamma(a_1 -1)}\frac{\Gamma(b_1 -1)}{\Gamma(b_1 -s)}\,,
\end{equation}
and the parameters $a_1$ and $b_1$ are fixed by the matching equations:
\begin{equation}\lbl{eq:a1b1}
{\cal M}_{2}(0)={\cal M}(0)\quad\mbox{\rm and}\quad {\cal M}_{2}(-1)={\cal M}(-1) \,,
\end{equation}
with ${\cal M}(0)$ and ${\cal M}(-1)$ given in Table~\rf{table:teubner}. This results in the values:
\begin{equation}\lbl{eq:HVPab}
a_1 =1.900\quad\mbox{\rm and}\quad b_1 = 5.855\,.
\end{equation}
The shape of the ${\cal M}_{2}(s)$ Mellin transform turns out to be rather similar to the ${\cal M}_{1}(s)$ one in Fig.~\rf{fig:mel1QCD}. In order to appreciate the differences between the $N=1$ and $N=2$ MBa's, we compare in Fig.~\rf{fig:R12E} the ratios of the experimental moments to those of the ${\cal M}_{2}(s)$ prediction (the red dots) and to those of the ${\cal M}_{1}(s)$ prediction (the blue dots). The overall shape of the red dots is clearly better because they are nearer to one.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_18.pdf}
\bf\caption{\lbl{fig:R12E}}
\vspace*{0.25cm}
{\it Plot of the ratio of the experimental moments in Table~\rf{table:teubner} with their errors\\ to those predicted by the $N=2$ MBa in red and the $N=1$ MBa in blue.}
\end{center}
\end{figure}
With the expression of the second Mellin approximant ${\cal M}_{2}(s)$ inserted in the integrand at the r.h.s. of Eq.~\rf{eq:MBamu} we get as a result of the $N=2$ MBa to the muon anomaly:
{\setlength\arraycolsep{2pt}
\begin{eqnarray}\lbl{eq:MBaRamuFQCD2}
a_{\mu}^{\rm HVP}(N=2) & = & \left(\frac{\alpha}{\pi}\right)\sqrt{\frac{m_{\mu}^2}{t_0}}\frac{1}{2\pi }\int\limits_{-\infty}^{+\infty}d\tau \ \underbrace{e^{-i\tau \log\frac{t_0}{m_{\mu}^2}}\ {\cal F}\left(\frac{1}{2}-i\tau\right)\ {\cal M}_{N=2}\left(\frac{1}{2}-i\tau \right)}_{{\cal R}(\tau)}\\
& = & 6.970\times 10^{-8}\,,
\end{eqnarray}}
\noindent
which reproduces the central value result in Eq.~\rf{eq:HVPexps}~\cite{KNT17} at the $0.5\%$ level, i.e. an improvement by a factor of 1.6 with respect to the $N=1$ case. Figure~\rf{fig:integrand2} shows the shape of the integrand ${\cal R}(\tau)$ in Eq.~\rf{eq:MBaRamuFQCD2} which, as expected, decreases rapidly for $\vert\tau\vert\gtrsim 1$.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_19.pdf}
\bf\caption{\lbl{fig:integrand2}}
\vspace*{0.25cm}
{\it Plot of the integrand in Eq.~\rf{eq:MBaRamuFQCD2} as a function of $\tau$.}
\end{center}
\end{figure}
As discussed in the previous section, the MBa technique also allows one to reconstruct $\Pi_{N}(Q^2)$ approximants of the HVP self energy in terms of GH-functions. The corresponding $N=2$ approximant is ($z=\frac{Q^2}{t_0}$):
\begin{equation}\lbl{eq:meijerN2QCD}
\Pi_{N=2}^{\rm QCD}(Q^2) = \left(\frac{\alpha}{\pi} \right) \ (-z)\frac{5}{3}\frac{a_1 -1}{b_1 -1}\ _{3}{F}_{2}\left(\left. \begin{array}{ccc} 1 & 1 & a_1 \\ ~ & 2 & b_1 \end{array}\right\vert {-z}\right)\,,
\end{equation}
with $a_1$ and $b_1$ given in Eq.~\rf{eq:HVPab}.
The shape of the function $\Pi_{N=2}^{\rm QCD}(Q^2)$ is shown in Fig.~\rf{fig:PI2QCD}.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_20.pdf}
\bf\caption{\lbl{fig:PI2QCD}}
\vspace*{0.25cm}
{\it Shape of the function $\Pi_{N=2}^{\rm QCD}(Q^2)$ in Eq.~\rf{eq:meijerN2QCD} as a function of $z=\frac{Q^2}{t_0}$.}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_21.pdf}
\bf\caption{\lbl{fig:spec2QCD}}
\vspace*{0.25cm}
{\it Plots of the $N=2$ MBa Spectral Function. }
\end{center}
\end{figure}
Plots of the spectral function associated with the $N=2$ MBa are also shown in Fig.~\rf{fig:spec2QCD}. Although, asymptotically, the $N=2$ MBa spectral function approaches the pQCD value, it can only be considered a smooth interpolation of the physical spectral function which, as we know, has a lot of local structure. This interpolation, however, when inserted in the r.h.s. of Eq.~\rf{eq:str}, reproduces the determination of the anomaly using the experimental spectral function at the { $0.5\%$} level already mentioned. It is in this sense that it is a good interpolation.
We shall next explore what happens when one tries to improve the $N=2$ MBa with higher approximants and further input from the experimental values of higher moments.
\noindent
\subsubsection{\large\bf The $N=3$ MBa.}
\vspace*{0.25cm}
\noindent
The corresponding Mellin approximant which generalizes the one in Eq.~\rf{eq:mel1QCD} has the analytic form
\begin{equation}\lbl{eq:mel3QCD}
{\cal M}_{3}(s)=\frac{\alpha}{\pi}\frac{5}{3}\Gamma(1-s)\frac{\Gamma(b_{1}-1)}{\Gamma(b_{1}-s)}\frac{\Gamma(a_{1}-s)}{\Gamma(a_{1}-1)}\frac{\Gamma(b_{2}-1)}{\Gamma(b_{2}-s)}\,,
\end{equation}
with the parameters $a_1$, $b_1$ and $b_2$ solutions of the matching equations
\begin{equation}\lbl{eq:match3}
{\cal M}_{3}(0)={\cal M}(0)\,,\quad{\cal M}_{3}(-1)={\cal M}(-1)\quad\mbox{\rm and}\quad {\cal M}_{3}(-2)={\cal M}(-2)\,.
\end{equation}
In this case one finds a ``possible solution'' where
\begin{equation}\lbl{eq:nogood}
a_1 =-0.362\,,\quad b_1 =6.462\,,\quad\ b_2 =-0.346\,,
\end{equation}
and the equivalent one with $b_{1}\rightleftharpoons b_{2}$. These ``solutions'', however, are not acceptable because they generate a pole at $s=a_1$ which is inside of the fundamental strip in contradiction with first principles, as discussed in Section III.3. Nevertheless, the negative numerical values of $a_1$ and $b_2$ are in fact rather close to each other. Had they been exactly the same, there would have been a cancellation between $\Gamma(a_1 -s)$ and $\Gamma(b_2 -s)$ in Eq.~\rf{eq:mel3QCD} indicating that it is not possible to improve beyond $N=2$ with a single Marichev-like function. The situation here is rather similar to the one encountered earlier when considering the $N=6$ MBa in the QED example.
The fact that in QCD the simple Marichev-like approximants fail to find physical solutions already at the $N=3$ level is perhaps not so surprising. One does not expect, beyond a certain level of accuracy, to be able to approximate $\Pi^{\rm QCD}(Q^2)$ at all $Q^2$ values with just one GH-function. One may, however, ask: is it possible to find generalizations of the simple Marichev-like MBa's which, when using more than the first two moments in Table~\rf{table:teubner} as an input, provide acceptable solutions to compare with $a_{\mu}^{\rm HVP}$ in Eq.~\rf{eq:HVPexps}~\cite{KNT17}? As already mentioned at the end of Section IV there is a positive answer to that. It consists in using standard superpositions of Mellin approximants of the type indicated in Eq.~\rf{eq:marichevend}. This, in turn, implies specific superpositions of GH-Functions which approximate the self-energy $\Pi^{\rm QCD}(Q^2)$ in the Euclidean, and hence $a_{\mu}^{\rm HVP}$.
\noindent
\subsubsection{\large\bf The $N=(2)+(1)$ MBa.}
\vspace*{0.25cm}
\noindent
The simplest superposition which gives acceptable solutions to the matching equations, when one knows three moments in the HVP case, consists of the sum of one $N=2$ MBa and one $N=1$ MBa:
\begin{equation}\lbl{eq:mel21QCD}
{\cal M}_{2+1}(s)=\frac{\alpha}{\pi}\frac{5}{3}\frac{1}{2}\left\{\frac{1}{1-s}\frac{\Gamma(a_1 -s)}{\Gamma(a_1 -1)}\frac{\Gamma(b_1-1)}{\Gamma(b_1-s)}+\Gamma(1-s)\frac{\Gamma(b_{2}-1)}{\Gamma(b_2-s)}\right\}\,,
\end{equation}
where the overall factor $1/2$ fixes the correct pQCD residue at $s=1$,
and the parameters $a_1$, $b_1$ and $b_2$ are solutions of the matching equations:
\begin{equation}\lbl{eq:2+1}
{\cal M}_{2+1}(0)={\cal M}(0)\,,\quad{\cal M}_{2+1}(-1)={\cal M}(-1)\quad\mbox{\rm and}\quad{\cal M}_{2+1}(-2)={\cal M}(-2)\,.
\end{equation}
There is only one acceptable solution to these equations with the values:
\begin{equation}
a_1=5.2668,\quad b_1=14.514\,,\quad\mbox{\rm and}\quad b_2=19.177\,.
\end{equation}
With ${\cal M}_{2+1}(s)$ inserted in the integrand at the r.h.s. of Eq.~\rf{eq:MBamu} we get as a result for the muon anomaly:
\begin{equation}
a_{\mu}^{\rm HVP}(N=2+1)=6.957\times 10^{-8}
\end{equation}
which reproduces the central value result in Eq.~\rf{eq:HVPexps}~\cite{KNT17} at the $0.4\%$ level, and is an improvement with respect to the previous $N=2$ case.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_22.pdf}
\bf\caption{\lbl{fig:adler2+1}}
\vspace*{0.25cm}
{\it Plots of the $N=2+1$ Adler Function versus $z=\frac{Q^2}{t_0}$. }
\end{center}
\end{figure}
The sum of GH-Functions corresponding to the ${\cal M}_{2+1}(s)$ MBa in Eq.~\rf{eq:mel21QCD}, which approximates the HVP self-energy, is now
{\setlength\arraycolsep{2pt}
\begin{eqnarray}\lbl{eq:2+1-Pi}
\Pi_{N=2+1}^{\rm QCD}(Q^2) & = & \left(\frac{\alpha}{\pi} \right)\ (-z)\frac{5}{3}\frac{1}{2} \left\{\frac{a_1 -1}{b_1 -1}\ _{3}{F}_{2}\left(\left. \begin{array}{ccc} 1 & 1 & a_1 \\ ~ & 2 & b_1 \end{array}\right\vert {-z}\right)\right.\nonumber \\
& & \hspace*{2.5cm} +\left.\frac{1}{b_2 -1}\ _{2}{F}_{1}\left(\left. \begin{array}{cc} 1 & 1 \\ ~ & b_2\end{array}\right\vert {-z}\right) \right\}\,,
\end{eqnarray}}
\noindent
and the corresponding approximation to the Adler function is
{\setlength\arraycolsep{2pt}
\begin{eqnarray}\lbl{eq:2+1Ad}
{\cal A}_{N=2+1}^{\rm QCD}(Q^2) & = & \left(\frac{\alpha}{\pi} \right)\ z\frac{5}{3}\frac{1}{2} \left\{\frac{a_1 -1}{b_1 -1}\ _{3}{F}_{2}\left(\left. \begin{array}{ccc} 2 & 1 & a_1 \\ ~ & 2 & b_1 \end{array}\right\vert {-z}\right)\right.\nonumber \\
& & \hspace*{2cm} +\left.\frac{1}{b_2 -1}\ _{2}{F}_{1}\left(\left. \begin{array}{cc} 2 & 1 \\ ~ & b_2\end{array}\right\vert {-z}\right) \right\}\,.
\end{eqnarray}}
\noindent
The shape of this Adler function is shown in Fig.~\rf{fig:adler2+1}.
\noindent
\subsubsection{\large\bf The $N=(2)+(1)+(1)$ MBa.}
\vspace*{0.25cm}
\noindent
With the first four moments of HVP as an input, there is
a new superposition of MBa's which gives an acceptable solution to the matching equations. It is the following linear combination of a $N=2$ MBa and two $N=1$ MBa's:
\begin{equation}\lbl{eq:2+1+1}
{\cal M}_{2+1+1}(s)=\frac{\alpha}{\pi}\frac{5}{3}\left\{\frac{1}{1-s}\frac{\Gamma(a_1 -s)}{\Gamma(a_1 -1)}\frac{\Gamma(b_1 -1)}{\Gamma(b_1 -s)}+ \Gamma(2-s)\frac{\Gamma(b_2 -1)}{\Gamma(b_2 -s)}+
\Gamma(2-s)\frac{\Gamma(b_3 -1)}{\Gamma(b_3 -s)}\right\}\,.
\end{equation}
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_23.pdf}
\bf\caption{\lbl{fig:2+1+1}}
\vspace*{0.25cm}
{\it The red curve is the shape of ${\cal M}_{2+1+1}$ in Eq.~\rf{eq:2+1+1} for $-5\le s\le 0$.\\ The dots are the experimental values of the moments.}
\end{center}
\end{figure}
\noindent
The matching equations:
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\lefteqn{{\cal M}_{2+1+1}(0)={\cal M}(0)\,,\quad{\cal M}_{2+1+1}(-1)={\cal M}(-1)\,,} \nonumber\\
& & {\cal M}_{2+1+1}(-2)={\cal M}(-2)\,,\quad\mbox{\rm and}\quad{\cal M}_{2+1+1}(-3)={\cal M}(-3)\,,
\end{eqnarray}}
\noindent
give an acceptable solution with values:
\begin{equation}
a_1 = 1.0180\,,\quad b_1=1.7495\,,
\end{equation}
and two complex conjugate values for $b_2$ and $b_3$, or equivalently $b_{2}\rightleftharpoons b_{3}$:
\begin{equation}
b_2 =12.822 + i~2.6069\,,\quad b_3 =12.822 - i~2.6069\,,
\end{equation}
so that the sum of the two $N=1$ terms in Eq.~\rf{eq:2+1+1} gives a real total contribution.
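That the two $N=1$ terms combine into a real quantity for real $s$ follows from the reflection property $\Gamma(\bar{z})=\overline{\Gamma(z)}$. It can also be checked numerically with a minimal sketch like the following (the value of $s$ is arbitrary):
\begin{verbatim}
import mpmath as mp

b2 = mp.mpc('12.822',  '2.6069')
b3 = mp.mpc('12.822', '-2.6069')
term = lambda b, s: mp.gamma(2 - s)*mp.gamma(b - 1)/mp.gamma(b - s)

s = mp.mpf('-1.5')
print(term(b2, s) + term(b3, s))   # imaginary part compatible with zero
\end{verbatim}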
The expression of the $N=2+1+1$ Mellin approximant ${\cal M}_{2+1+1}(s)$ inserted in the integrand at the r.h.s. of Eq.~\rf{eq:MBamu} results in a value for the muon anomaly:
\begin{equation}
a_{\mu}^{\rm HVP}(N=2+1+1)=6.932\times 10^{-8}\,,
\end{equation}
which almost exactly reproduces the central value result in Eq.~\rf{eq:HVPexps}~\cite{KNT17}, and represents a net improvement with respect to the previous $N=2+1$ approximation.
The shape of the Mellin transform ${\cal M}_{2+1+1}(s)$ is shown in Fig.~\rf{fig:2+1+1} together with the experimental values of the first five moments. Figure~\rf{fig:ratio2+1+1} shows the ratio of the experimental values of the first five moments to the values predicted by ${\cal M}_{2+1+1}$ in Eq.~\rf{eq:2+1+1}.
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_24.pdf}
\bf\caption{\lbl{fig:ratio2+1+1}}
{\it Plot of the ratio of the experimental moments in Table~\rf{table:teubner} to those of the $N=2+1+1$ MBa.\\
Notice the difference of scale in the vertical axis, as compared to the one in Fig.~\rf{fig:R1E}.}
\end{center}
\end{figure}
\vspace*{-0.25cm}
The Adler function associated to ${\cal M}_{2+1+1}(s)$ in Eq.~\rf{eq:2+1+1} is the sum of three GH-Functions:
{\setlength\arraycolsep{2pt}
\begin{eqnarray}\lbl{eq:2+1+1Ad}
{\cal A}_{N=2+1+1}^{\rm QCD}(Q^2) & = & \left(\frac{\alpha}{\pi} \right)\ z\frac{5}{3} \left\{\frac{a_1 -1}{b_1 -1}\ _{3}{F}_{2}\left(\left. \begin{array}{ccc} 2 & 1 & a_1 \\ ~ & 2 & b_1 \end{array}\right\vert {-z}\right)\right.\nonumber \\
& & +\left.\frac{1}{b_2 -1}\ _{2}{F}_{1}\left(\left. \begin{array}{cc} 2 & 2 \\ ~ & b_2\end{array}\right\vert {-z}\right) + \frac{1}{b_3 -1} \ _{2}{F}_{1}\left(\left. \begin{array}{cc} 2 & 2 \\ ~ & b_3\end{array}\right\vert {-z}\right) \right\}\,,
\end{eqnarray}}
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.50\textwidth]{figure_25.pdf}
\bf\caption{\lbl{fig:ad2+1+1}}
\vspace*{0.25cm}
{\it Plot of the Adler function in Eq.~\rf{eq:2+1+1Ad}.}
\end{center}
\end{figure}
\noindent
and its shape is shown in Fig.~\rf{fig:ad2+1+1}.
\noindent
Plots of the spectral function corresponding to the $N=2+1+1$
MBa are also shown in Fig.~\rf{fig:spect211}. { The plots already exhibit underlying features of the hadronic structure.}
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.40\textwidth]{figure_26a.pdf} \includegraphics[width=0.40\textwidth]{figure_26b.pdf}
\bf\caption{\lbl{fig:spect211}}
\vspace*{0.25cm}
{\it Plots of the $N=2+1+1$ Spectral Function. }
\end{center}
\end{figure}
\noindent
\subsection{\large Uncertainties of the Successive MBa's to $a_{\mu}^{\rm HVP}$.}
\vspace*{0.25cm}
\noindent
We shall finally examine the sensitivity of the results obtained for the $a_{\mu}^{\rm HVP}(N)$ to small variations in the input parameters $a_k$ and $b_k$ of the successive ${\cal M}_{N}(s)$, as well as to the choice of the $N$-approximant itself. The errors in the experimental determination of the moments ${\cal M}(-n)$ have been tabulated in Table~\rf{table:teubner} and their correlation matrix is given in Table~\rf{table:alex}. One can see that the values of these moments are highly correlated, reflecting the fact that they all have been extracted from different integrals of the same input data on the spectral function.
The statistical part of the analysis is standard. We first construct the covariance matrix $C_{ij}$ of the first $N$ moments obtained from experiment ${\cal M}(1-i)\,,i=1,\dots,N$:
\begin{equation}
C_{ij}=\rho_{ij}\sigma_i\sigma_j\,,\quad\text{with}\quad \rho_{ii}=1\,,\ \ -1<\rho_{i,j}<+1\quad\mbox{\rm and}\quad i,j=1,\dots,N\,,
\end{equation}
where $\rho_{ij}$ is the correlation coefficient between the moment $\#i$ and the moment $\#j$, each with Gaussian uncertainties $\sigma_{i}$ and $\sigma_{j}$. Then we define a $\chi^2$ function associated to a given Mellin-Barnes approximant ${\cal M}_{N}(s)$, which depends on a set of parameters $(a_k\,,b_k)$:
\begin{equation}\label{eq:chi2}
\chi^2 = \sum_{i,j=1}^{N} \left[{\cal M}_{N}(1-i)-{\cal M}(1-i)\right] C^{-1}_{ij} \left[{\cal M}_{N}(1-j)-{\cal M}(1-j)\right]\,.
\end{equation}
\begin{table*}[h]
\caption[Results]{\it Correlation Matrix of the Moments ${\cal M}(0),\ldots,{\cal M}(-5)$ in Table~\rf{table:teubner} }
\lbl{table:alex}
\begin{displaymath}
\left(\begin{array}{cccccc}
1 & 0.83 & 0.62 & 0.50 & 0.42 & 0.37 \\
& 1 & 0.93 & 0.84 & 0.77 & 0.70 \\
&& 1 & 0.98 & 0.93 & 0.88 \\
&&& 1 & 0.987 & 0.96 \\
&&&& 1 & 0.991 \\
&&&&& 1
\end{array}\right)\,.
\end{displaymath}
\end{table*}
\noindent
and minimize this $\chi^2$ with respect to the set of parameters $(a_k\,,b_k)$. The errors are sufficiently small to ensure that a point-like estimate is an excellent approximation, and we obtain the covariance matrix in the $(a_k,b_k)$ parameter space from the Hessian matrix of the $\chi^2$ function computed at its minimum. Using linear error propagation we can then calculate the statistical uncertainty on $a_{\mu}^{\rm HVP}$, as reported in the third column of Table~\rf{table:uncertainties}.
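As an illustration of this procedure, the following minimal sketch constructs the covariance matrix of the first two moments from Tables~\rf{table:teubner} and \rf{table:alex}, minimizes the corresponding $\chi^2$ for the $N=2$ ansatz of Eq.~\rf{eq:mel2QCD} and estimates the parameter covariance from the Hessian at the minimum. The value of $\alpha$ and the starting point of the minimization are assumptions of the sketch, and the final propagation to $a_{\mu}^{\rm HVP}$, which requires the derivatives of the integral in Eq.~\rf{eq:MBaRamuFQCD2} with respect to $(a_1\,,b_1)$, is not reproduced here:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma

alpha_pi = (1/137.035999)/np.pi         # assumed value of alpha/pi

# first two experimental moments: central values and errors (10^-3 units)
m_exp = np.array([0.7176e-3, 0.11644e-3])
sig   = np.array([0.0026e-3, 0.00063e-3])
rho   = np.array([[1.0, 0.83],          # correlation of M(0) and M(-1)
                  [0.83, 1.0]])
Cinv  = np.linalg.inv(rho*np.outer(sig, sig))

def M2(s, a1, b1):                      # N=2 Mellin approximant
    return alpha_pi*(5/3)*gamma(1 - s)/gamma(2 - s) \
           *gamma(a1 - s)/gamma(a1 - 1)*gamma(b1 - 1)/gamma(b1 - s)

def chi2(p):
    r = np.array([M2(0, *p), M2(-1, *p)]) - m_exp
    return r @ Cinv @ r

res = minimize(chi2, x0=[1.9, 5.9], method='BFGS')
print(res.x)                            # close to (1.900, 5.855)
cov_ab = 2.0*res.hess_inv               # rough parameter covariance; a
                                        # finite-difference Hessian is safer
\end{verbatim}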
The fact that all the approximants have a similar uncertainty, which coincides with that of the complete evaluation of $a_{\mu}^{\rm HVP}$~\cite{KNT17}, is a sign that all our MBa's saturate the available statistical information.
\begin{table*}[h]
\caption{ Numerical results on the determination of $a_{\mu}^{\rm HVP}$ ($10^{-8}$ units), for each considered MBa. }
\lbl{table:uncertainties}
\begin{center}
\begin{tabular}{|c|c|c|} \hline \hline {\bf MBa Ansatz} & {\bf Central Value} & {\bf Stat. Uncertainty}
\\
\hline \hline
Eq.~\rf{eq:mel1QCD} ($N=1$) & 6.991 & 0.023 \\
Eq.~\rf{eq:mel2QCD} ($N=2$) & 6.970 & 0.024 \\
Eq.~\rf{eq:mel21QCD} ($N=(2)+(1)$) & 6.957 & 0.025 \\
Eq.~\rf{eq:2+1+1} ($N=(2)+(1)+(1)$) & 6.932 & 0.025 \\
\hline\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}[!ht]
\begin{center}
\hspace*{-1cm}\includegraphics[width=0.75\textwidth]{figure_27.pdf}
\bf\caption{\lbl{fig:amuN}}
\vspace*{0.25cm}
{\it Results for $a_{\mu}^{\rm HVP}$ as a function of the number of input moments $N$. The blue points correspond to alternative choices of MBa's (two choices for $N=2,3,4$) with their statistical uncertainty.\\{ The pink band is the full experimental result of ref.~\cite{KNT17}.}}
\end{center}
\end{figure}
Our results would not be complete without a study of the systematic shift associated with the successive MBa's which interpolate the values of the experimental moments and reconstruct the full Mellin functions. With this aim, in addition to the MBa's discussed in detail in the previous section, we have also tested alternative parameterizations for $N=2,3,4$ which are obtained by changing the location of the poles in the superposition terms (\textit{e.g.} $\Gamma(2-s)$ instead of $\Gamma(1-s)$ in Eq.~\rf{eq:mel21QCD}). These alternative MBa's also have valid solutions for the corresponding $(a_k\,, b_k)$ parameters and, therefore, can also be considered as good alternative choices. The results of all the evaluations of $a_{\mu}^{\rm HVP}$ which we have made are plotted in Fig.~\rf{fig:amuN}, as a function of the number of input moments $N$.
We observe that the successive results converge towards the experimental value in Eq.~\rf{eq:HVPexps}.
\section{\Large Conclusions and Outlook}
\setcounter{equation}{0}
\def\Alph{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\noindent
Equation \rf{eq:momeucl} shows that moments of the hadronic spectral function are equivalent to derivatives of the hadronic self-energy function $\Pi(Q^2)$ at $Q^2 =0$. The latter are accessible to LQCD simulations as well as to possible future dedicated experiments. We have shown how, from an accurate determination of the first few moments, one could reach an evaluation of the HVP contribution to the muon anomaly with a precision competitive with, or even higher than, that of the present experimental determinations.
The method that we propose uses a new technique of Mellin-Barnes approximants which has been explained and justified in detail in the text. Essentially it is based on generic QCD properties which fix the class of Mellin transforms ${\cal M}(s)$ of the spectral function that one can use as successive approximants. The muon anomaly $a_{\mu}^{\rm HVP}$, in terms of these ${\cal M}(s)$-functions, is given by the Fourier transform in Eq.~\rf{eq:MBamuF}. The corresponding approximations to the hadronic self-energy function $\Pi(Q^2)$ are well-defined Generalized Hypergeometric Functions which we have given explicitly, and the approximations to the spectral function are also given in terms of Meijer's G-Functions. This offers the possibility of applying the same techniques developed here to the case where the information from LQCD, or from experiment, is given in terms of determinations of the self-energy function $\Pi(Q^2)$ at fixed Euclidean $Q^2$-values, as e.g. in ref.~\cite{Lellouch17}. { We plan to discuss this in
the near future.}
We have illustrated the practical application of the method with the example of the QED contribution to the muon anomaly from the vacuum polarization Feynman diagrams in Fig.~\rf{fig:QED4}. We have also discussed the case where one uses as an input the experimental values of the first moments provided to us by the collaboration of ref.~\cite{KNT17}. We find that, in this case, our approach reproduces very well their complete phenomenological analysis.
\vspace*{0.5cm}
\begin{center}
{\Large\bf Acknowledgments}
\end{center}
\vspace*{0.25cm}
\noindent
We are very grateful to Thomas Teubner and to Alex Keshavarzi for providing us with the experimental values of the first few moments and the error correlations { of their update}. We also thank Laurent Lellouch and Ruth Van de Water for their interest and informative discussions, and Alex Keshavarzi, Ruth Van de Water and the referee for a careful reading of the manuscript. D.G. thanks M.~Knecht and CPT for their hospitality during the beginning of this work.
The work of J.C. and E.deR. has been carried out thanks to the support of the OCEVU Labex (ANR-11-LABX-0060) and the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the "Investissements d'Avenir" French government program managed by the ANR.
\vspace*{1cm}
\begin{appendix}
\renewcommand{\thesection}{\normalsize \Alph {section}}
\begin{center}
{\bf\normalsize APPENDIX}
\end{center}
\vspace*{0.5cm}
\noindent In this appendix we discuss various technical details which appear in the main text.
\section{\normalsize The Basic Mellin-Barnes Identity}
\setcounter{equation}{0}
\def\Alph{section}.\arabic{equation}{\Alph{section}.\arabic{equation}}
\noindent
The identity in Eq.~\rf{eq:MBaid} is a particular case of the identity ($N=1,2,3,\dots$):
\begin{equation}\lbl{eq:MBaidn}
\frac{1}{(1+A)^{N}}=\frac{1}{2\pi i}\int\limits_{c_s-i\infty}^{c_s+i\infty}ds \left(A \right)^{-s} \frac{\Gamma(s)\Gamma(N-s)}{\Gamma(N)}\,.
\end{equation}
We shall first show how performing the integral in the r.h.s. for $N=1$ reproduces the l.h.s. For that we make a choice of $s$ with $\mbox{\rm Re}(s) \in ]0,1[$, e.g. $s=\frac{1}{2}+i\tau$. Then
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\lefteqn{\frac{1}{2\pi i}\int\limits_{c_s-i\infty}^{c_s+i\infty}ds \left(A \right)^{-s} \Gamma(s)\Gamma(1-s)} \nonumber\\
& & = \frac{1}{\sqrt{A}}\frac{1}{2\pi}\int_{-\infty}^{+\infty}
d\tau \exp{\left(-i\tau\log{A} \right)}\frac{\pi}{\cosh(\pi\tau)}\nonumber\\
& & = \frac{1}{\sqrt{A}}\frac{1}{2\pi}\frac{\pi}{\cosh\left(\frac{\log{A}}{2} \right)}= \frac{1}{\sqrt{A}}\frac{1}{2}\frac{1}{\frac{e^{\frac{1}{2}\log{A}}+e^{-\frac{1}{2}\log{A}}}{2}}\nonumber\\
& & = \frac{1}{\sqrt{A}}\frac{1}{\sqrt{A}+\frac{1}{\sqrt{A}}}=\frac{1}{1+A}\,,\quad {\rm c.q.d.}
\end{eqnarray}}
\noindent
Taking successive derivatives with respect to $A$ in this identity ($N-1$ of them for a given $N$) reproduces Eq.~\rf{eq:MBaidn}.
We shall next evaluate the Mellin transform of $\frac{1}{(1+A)^N}$ and show that
\begin{equation}
\int_0^\infty dA\ A^{s-1} \frac{1}{(1+A)^N}=\frac{\Gamma(s)\Gamma(N-s)}{\Gamma(N)}\,.
\end{equation}
We do that by applying Ramanujan's Master Theorem to the Taylor expansion:
\begin{equation}
\frac{1}{(1+A)^N}=\sum_{k=0\,,1\,,2\dots} (-1)^k \left[\frac{\Gamma(N+k)}{\Gamma(N)\Gamma(k+1)}\right] A^{k}\,,
\end{equation}
from which Ramanujan allows us to conclude that
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\int_0^\infty dA\ A^{s-1} \frac{1}{(1+A)^N} & = & \Gamma(s)\Gamma(1-s)\times \left[\frac{\Gamma(N-s)}{\Gamma(N)\Gamma(-s+1)}\right]\\
& = & \frac{\Gamma(s)\Gamma(N-s)}{\Gamma(N)}\,,\quad {\rm c.q.d.}\,.
\end{eqnarray}}
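\noindent
The identity can also be verified numerically. A minimal sketch, using \texttt{mpmath} for a few values of $N$ and a point $s$ inside the fundamental strip, is the following:
\begin{verbatim}
import mpmath as mp

def lhs(s, N):   # Mellin transform of 1/(1+A)^N, computed numerically
    return mp.quad(lambda A: A**(s - 1)/(1 + A)**N, [0, mp.inf])

def rhs(s, N):   # Gamma(s) Gamma(N-s) / Gamma(N)
    return mp.gamma(s)*mp.gamma(N - s)/mp.gamma(N)

s = mp.mpf('0.5')
for N in (1, 2, 3):
    print(N, lhs(s, N), rhs(s, N))   # the two columns agree
\end{verbatim}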
\vspace*{-0.5cm}
\section{\normalsize Positivity Properties of the Mellin Moments}
\setcounter{equation}{0}
\def\Alph{section}.\arabic{equation}{\Alph{section}.\arabic{equation}}
\noindent
Because of the positivity of the spectral function $\frac{1}{\pi}\mbox{\rm Im}\Pi(t)$, the Mellin Moments ${\cal M}(-N)$, which here, for convenience, we write as
\begin{equation}
\Sigma(N)=\int_{t_0}^\infty\frac{dt}{t_0}\left(\frac{t_0}{t} \right)^{2+N}\frac{1}{\pi}\mbox{\rm Im}\Pi(t)\,,\quad N=0,1,2,\dots\,,
\end{equation}
must satisfy certain constraints which we next discuss. Notice that with this definition:
\begin{equation}
{\cal M}(-n)\equiv \Sigma(N=n)\,.
\end{equation}
It is useful to change variables slightly: set
\begin{equation}
z=\frac{t_0}{t}\,,\quad \frac{dt}{t_0}=-\frac{dz}{z^2}\,,
\end{equation}
and, therefore,
\begin{equation}
\Sigma(N)=\int_0^1 dz z^N \frac{1}{\pi}\mbox{\rm Im}\Pi\left(\frac{1}{z}t_0\right)\,.
\end{equation}
The positivity constraints follow from the fact that
\begin{equation}
\sum_{N,N'}\left[\int_0^1 dz z^{N+N'} \frac{1}{\pi}\mbox{\rm Im}\Pi\left(\frac{1}{z}t_0\right)\right]\xi_N \xi_{N'}\ge 0\,,
\end{equation}
where $\xi_N$ and $\xi_{N'}$ are the components of arbitrary real vectors. This implies that the matrix
\begin{equation}
\Sigma(N,N')\equiv \int_0^1 dz z^{N+N'} \frac{1}{\pi}\mbox{\rm Im}\Pi\left(\frac{1}{z}t_0\right)\,,
\end{equation}
must be positive definite. The relevant constraints are then the following:
\begin{itemize}
\item $N=N'=0$:
\begin{equation}
\Sigma(0)\ge 0\,.
\end{equation}
\item $(N,N')= 0,1$
\begin{equation}
\Sigma(0)\ge 0\,,\quad \Sigma(1)\ge 0\,,\quad
\Sigma(1)\le \Sigma(0)\,.
\end{equation}
\item $(N,N')= 0,1,2$
\begin{equation}
\hspace*{-0.25cm} \Sigma(0)\ge 0\,,\quad \Sigma(1)\ge 0\,,\quad \Sigma(2)\ge 0\,,\quad\Sigma(1)\le \Sigma(0)\,,\quad\Sigma(2)\le \Sigma(1)\,,
\quad
\Sigma(0)\Sigma(2)\ge [\Sigma(1)]^2\,.
\end{equation}
\item $(N,N')= 0,1,2,3$
\begin{equation}
\Sigma(0)\ge 0\,,\quad \Sigma(1)\ge 0\,,\quad \Sigma(2)\ge 0\,,\quad\Sigma(3)\ge 0\,,
\end{equation}
\begin{equation} \quad\Sigma(1)\le \Sigma(0)\,,\quad\Sigma(2)\le \Sigma(1)\,,\quad\Sigma(3)\le \Sigma(2)\,,
\end{equation}
\begin{equation}
\Sigma(0)\Sigma(2)\ge [\Sigma(1)]^2\,,\quad\Sigma(1)\Sigma(3)\ge [\Sigma(2)]^2\,,
\end{equation}
and
\begin{equation}
[\Sigma(0)-\Sigma(1)][\Sigma(2)-\Sigma(3)]\ge
[\Sigma(1)-\Sigma(2)]^2\,.
\end{equation}
\end{itemize}
LQCD determinations of Mellin Moments should be consistent with these constraints.
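As an illustration, the central values of the experimental moments in Table~\rf{table:teubner} satisfy all the constraints listed above; a minimal numerical sketch of such a check is:
\begin{verbatim}
import numpy as np

# Sigma(N) = M(-N): experimental central values, in 10^-3 units
S = np.array([0.7176, 0.11644, 0.03041, 0.01195, 0.00625, 0.003859])

checks = {
  "positivity":    np.all(S >= 0),
  "monotonicity":  np.all(S[1:] <= S[:-1]),
  "S0 S2 >= S1^2": S[0]*S[2] >= S[1]**2,
  "S1 S3 >= S2^2": S[1]*S[3] >= S[2]**2,
  "(S0-S1)(S2-S3) >= (S1-S2)^2":
                   (S[0]-S[1])*(S[2]-S[3]) >= (S[1]-S[2])**2,
}
print(checks)   # all True
\end{verbatim}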
\end{appendix}
\vspace*{1.2cm}
\section{{Discussion, Limitations and Future Work}}
\label{sec:discussion}
We introduced a novel system GraphQ\ to perform interactive visual pattern queries on graph databases based on user-created query patterns. To facilitate interactive querying, we utilize graph representation learning to address the problems of subgraph decision and node alignment. The intuitive and explainable visual cues provided by NeuroAlign\ are paired with novel visual and interaction designs to help users navigate the retrieval results and extract insights.
Due to the complexity of the subgraph matching problem, there are still many open questions we have not addressed yet:\looseness=-1
\textbf{Node alignment for multiple subgraph isomorphism.} Currently, the training and inference of NeuroAlign\ focus on a single instance of subgraph isomorphism. However, in practice, the query nodes could be mapped to multiple sets of nodes in the same matching target graph. Counting and enumerating all these instances is a very challenging problem and requires future research. {Besides that, multiple pattern matches in a large graph bring additional challenges for interaction and scalable visual representations.} \looseness=-1
\textbf{Scalability to very large query graphs.} During training of NeuroMatch, we observe that hard negative samples are crucial to achieving high precision rate. However, sampled or perturbed queries need to be verified with exact matching algorithms to ensure the subgraph relationship does not exist. These algorithms are slow to compute especially when the query and target neighborhood graphs become larger and the connectivity becomes denser. A potential approach to alleviate the issue is to assign large weights to these hard negatives and reduce the overall need to invoke these algorithms during training. \looseness=-1
{\textbf{Handling directed or disconnected query patterns.} Currently, our algorithm works with using undirected, connected graphs as the query pattern. For directed graphs, we converted them into undirected graphs as input for NeuroMatch and NeuroAlign. To account for the direction of connectivity, the backbone GNN model needs to be modified. For example, GraphSAGE can be modified by distinguishing the in-node and out-node neighborhoods during the aggregate-update process and other GNNs specifically designed for directed graphs such as \cite{tong2020directed,shi2019skeleton} can be considered. On the other hand, for disconnected query patterns, a potential workaround is to consider each connected component separately and make an ensemble of the individual predictions. However, the performance still needs to be investigated.}\looseness=-1
In the future, besides addressing the aforementioned limitations, we plan to investigate database index applied on the embeddings of the large graph database to allow even more efficient retrieval at sub-linear time. Furthermore, considering the wide variety of graph-structured data, we plan to extend the current work to more usage scenarios including social network analysis \cite{yanardag2015deep} and $3$-D point clouds \cite{neumann2013graph}. \looseness=-1
\clearpage
\section{Evaluation}
{Our evaluation of the proposed system consists of two example usage scenarios (Section \ref{subsection:workflow_analysis} and \ref{subsection:scene_graph}), quantitative experiments on various datasets (Section \ref{subsection:experiment_results}), and interview with domain experts on both usage scenarios (Section \ref{subsection:expert_interview}).}\looseness=-1
\subsection{Example Usage Scenario: Program Workflow Analysis}
\label{subsection:workflow_analysis}
\begin{figure*}[h]
\centering
\vspace{-0.15in}
\includegraphics[width=0.9\linewidth]{figures/case_study_1_high.pdf}
\vspace{-0.15in}
\caption{The user selects a fan-like pattern (a). Exact subgraph matching returns 21 results (b). After enabling approximate search (\autoref{fig:teaser}(4)), the back-end returns 172 graphs (d) containing fan-like patterns, although some of them are simpler than the query. The query results indicate that such structure can be reused as a template to reduce the manual effort for future workflow creation.\looseness=-1}
\vspace{-0.10in}
\label{fig:case_study_1}
\end{figure*}
\begin{figure}[h]
\centering
\vspace{-0.10in}
\includegraphics[width=\linewidth]{figures/scene_graph_extraction_2.pdf}
\vspace{-0.2in}
\caption{
To obtain a semantic scene graph from an image in the MSRC-21 dataset, we use the Quickshift \cite{vedaldi2008quick} algorithm which segments the image into partitions, i.e. super-pixels; then we derive each semantic label as the most frequent ground-truth label of all pixels inside the corresponding super-pixel. Each super-pixel is mapped to a graph node with the semantic attribute. \looseness=-1}
\vspace{-0.2in}
\label{figure:scene_graph_extraction}
\end{figure}
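For reference, the super-pixel-to-node construction described in the caption above can be sketched in a few lines of Python using scikit-image and NetworkX. The Quickshift parameter values below are placeholders (they are not specified in the text), and linking super-pixels by spatial adjacency is only one possible choice for the graph edges:
\begin{verbatim}
import numpy as np
import networkx as nx
from skimage.segmentation import quickshift

def image_to_scene_graph(rgb, gt_labels, kernel_size=3, max_dist=6, ratio=0.5):
    # rgb: HxWx3 image, gt_labels: HxW non-negative integer ground-truth map
    sp = quickshift(rgb, kernel_size=kernel_size, max_dist=max_dist, ratio=ratio)
    G = nx.Graph()
    for s in np.unique(sp):
        # semantic attribute = most frequent ground-truth label in the super-pixel
        G.add_node(int(s), semantic=int(np.bincount(gt_labels[sp == s]).argmax()))
    # link super-pixels that share a pixel boundary (assumed 4-connectivity)
    right = sp[:, :-1] != sp[:, 1:]
    down  = sp[:-1, :] != sp[1:, :]
    pairs = set(zip(sp[:, :-1][right], sp[:, 1:][right])) | \
            set(zip(sp[:-1, :][down],  sp[1:, :][down]))
    for a, b in pairs:
        G.add_edge(int(a), int(b))
    return G
\end{verbatim}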
In the first usage scenario, we apply GraphQ~to analyze a collection of graphs describing the workflows in a vehicle diagnostics software program. The software program uses prescripted workflow graphs to check the functionalities of the system and locate the problem in the vehicles. The workflows are modeled as directed graphs where each node represents an individual procedure in the workflow and the link represents their sequential orders. {We convert the graphs to undirected graphs as input for the query algorithms.} In total, there are $\sim$20 different types of procedures in the workflow, and we use node colors in the system to distinguish them (\autoref{fig:teaser}) (all the names of the nodes are anonymized). In both NeuroMatch and NeuroAlign, the type of the procedures is considered as a node attribute. \looseness=-1
The workflows are manually created and it is a time-consuming process. The goal of analyzing workflow graphs is to identify subroutines in the workflow that are reused frequently and therefore can be used as templates, or submodules in the future to facilitate the workflow editing process or to simplify the workflow descriptions. However, identifying such frequent subroutines cannot be easily automated -- substantial domain knowledge in automotive hardware and software system is needed to curate meaningful patterns, therefore a human-in-the-loop approach is well-suited. \looseness=-1
{Through an initial data exploration together with the domain experts, we found that pairwise comparison of workflows using graph editing distance \cite{gao2010surveyged} can provide an overview of the graph similarities in the dataset. This overview can help the user select interesting workflows as the starting point for exploration. Our system integrates a t-SNE projection \cite{van2008tsne} of all the graphs based on the graph editing distance matrix, which reveals several clusters (\autoref{fig:teaser}(a)). The user can use the brushing function to select one cluster, and the selected graphs will be updated in the table (\autoref{fig:teaser}(b)). The user can then select any graph from the table to be displayed in the query editor (\autoref{fig:teaser}(1)) to create example-based queries.} In \autoref{fig:teaser}(c), a subroutine with a branching structure is selected by brushing on the visualization. The user can invoke the context menu and search for the query pattern in the graph database. With approximate matching disabled (\autoref{fig:teaser}(4)), the system returns 45 matched graphs in the database. In the graph types histogram, we can see that most of the matched graphs belong to two types (\autoref{fig:teaser}(d)). For an overview of the matching results (\autoref{fig:teaser}(2.1)), the user can toggle the minimize option in the query results display (\autoref{fig:teaser}(f)) and highlight the node matches returned by NeuroAlign\ (\autoref{fig:teaser}(e)). The result shows that most of the returned graphs indeed contain the nodes in the query pattern, indicating that the algorithm returns reliable results. To view further details, the user turns off the minimize toggle; the graphs are then displayed in a similar layout as in the query panel, and the user can review more details about each graph, including the graph name and the number of nodes and links (\autoref{fig:teaser}(2.2)). {To facilitate the inspection of more details about the returned matches and aligned nodes, we design a side-by-side display of the query graph and a returned matching graph (\autoref{fig:teaser}(5)). The display is activated as a popup window when the user clicks on the zoom button (\autoref{fig:teaser}(g)).} Users can also add node attribute constraints to be matched in the query results by clicking on the corresponding node attribute (\autoref{fig:teaser}(h)). In this example, no workflow satisfies the specified attribute constraint. After verifying the results, the user can save the query pattern in a json file to be reused when manually creating workflows in the future.\looseness=-1
\autoref{fig:case_study_1} shows the query results for a fan-like structure selected from a graph (\autoref{fig:case_study_1}(a)). The system returns 21 matched results with approximate search disabled. Indeed, most of the returned graphs contain the fan-like structure (\autoref{fig:case_study_1}(b)), indicating another reusable submodule in the workflow creation process. In the t-SNE plot, the graphs with matching fan-like patterns are highlighted in orange, showing that they are scattered across different clusters according to graph editing distance (\autoref{fig:case_study_1}(c)). {This finding indicates that our method can uncover meaningful patterns in sub-regions of the graphs that are missed by graph-level similarities.} To further extend the search to graphs that may contain similar, but not exactly the same, patterns, the user toggles the button to enable approximate search (\autoref{fig:teaser}(4)); the returned result contains many more graphs (172) than exact matching (\autoref{fig:case_study_1}(d)). The user sorts the results based on the number of nodes and finds that the graphs with approximate matches contain a simpler fan-like structure with fewer nodes. Based on this analysis, the user concludes that the fan-like pattern can be used as a template in the future. \looseness=-1
\begin{figure*}[th]
\centering
\vspace{-0.10in}
\includegraphics[width=0.95\linewidth]{figures/case_study_msrc.png}
\vspace{-0.10in}
\caption{Case study 2: searching by brushing a subregion (a chain of sky, building, and road nodes) on an MSRC-21 scene graph (a) and finding the matching results (b), most of which contain the same chain of the three nodes as in (a). The relationship among the three nodes resembles a typical street-view image. \looseness=-1}
\vspace{-0.10in}
\label{figure:case_study_2}
\end{figure*}
\subsection{Example Usage Scenario: Scene Graph Search}
\label{subsection:scene_graph}
In the second usage scenario, we apply GraphQ~to semantic scene graph search in computer vision applications to find images with similar objects and relationships that resemble our query subgraph structure. {Such a search can be useful for many computer vision tasks such as image retrieval \cite{schroeder2020structured,yoon2020image}, visual question answering, relationship modeling, and image generation.} We follow the procedures described in \cite{propagationkernels} to extract a semantic scene graph from each image. Each node in the graph represents a super-pixel extracted from the image using a segmentation algorithm, and the links between nodes encode the adjacency information between those super-pixels. Each node is annotated with a semantic label as one of its attributes, and the whole graph extracted from an image is an undirected, planar graph \cite{planargraph}. In this study, we use a public image segmentation dataset (MSRC-21 \cite{msrc21}) to illustrate this approach. Each image contains ground-truth labels such as \textit{tree}, \textit{grass}, and \textit{wall}, as well as unlabeled \textit{void} regions. We illustrate the process of extracting the scene graph from each image in \autoref{figure:scene_graph_extraction}. \looseness=-1
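As a concrete illustration, the short Python sketch below builds such a graph with scikit-image's Quickshift segmentation and NetworkX. It is our own simplified rendering of the procedure, not the exact pipeline of \cite{propagationkernels}, and the segmentation parameters are purely illustrative.
\begin{verbatim}
import numpy as np
import networkx as nx
from skimage.segmentation import quickshift

def image_to_scene_graph(image, label_map):
    # image: HxWx3 RGB array; label_map: HxW ground-truth semantic labels
    segments = quickshift(image, kernel_size=3, max_dist=6, ratio=0.5)
    G = nx.Graph()
    for s in np.unique(segments):
        mask = segments == s
        # semantic attribute: most frequent ground-truth label in the super-pixel
        values, counts = np.unique(label_map[mask], return_counts=True)
        G.add_node(int(s), semantic=int(values[np.argmax(counts)]))
    # connect super-pixels that touch horizontally or vertically
    right = np.stack([segments[:, :-1].ravel(), segments[:, 1:].ravel()], axis=1)
    down = np.stack([segments[:-1, :].ravel(), segments[1:, :].ravel()], axis=1)
    for a, b in np.vstack([right, down]):
        if a != b:
            G.add_edge(int(a), int(b))
    return G
\end{verbatim}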
To perform scene graph search, the user starts with the overview of all graphs in the database. The user picks a graph to work on and brushes a subgraph, for example, three connected nodes (\autoref{figure:case_study_2}(a)) including sky, building, and road. This subgraph structure could indicate a typical city environment (with buildings and a road). The back-end, with approximate search disabled, returns 25 matched graphs, and most of them contain the same subgraph: a street view with interconnected super-pixels of sky, building, and road, as shown in \autoref{figure:case_study_2}(b). Note that in the histogram overview (\autoref{figure:case_study_2}(c)), all of these resulting images come from the same row (17th) in the MSRC-21 dataset, which belongs to the category ``road/building". \iffalse Same as workflow dataset, when we allow fuzzy search, the returned results have even more images that contains more types than sky, building, and road (\autoref{figure:case_study_2}(d)). However, they are not necessarily interconnected together due to fuzzy nature.\fi The user can also sort by different metrics and filter by different node information, such as area range or even super-pixel location. Through these interactions, the user eventually finds interesting images tailored to their needs.\looseness=-1
\subsection{Quantitative Evaluation}
\label{subsection:experiment_results}
\begin{table}[t]
\centering
\caption{Subgraph decision performance using NeuroMatch.}
\begin{tabular}{cccc}
\hline
\textbf{Dataset} & \textbf{Precision} & \textbf{Recall} & \textbf{F1} \\ \hline
\textbf{Workflow} & 87.0 & 89.9 & 88.4 \\
\textbf{MSRC-21} & 83.6 & 91.6 & 87.4 \\
{\textbf{COX2}} & 87.4 & 90.9 & 89.1 \\
{\textbf{Enzymes}} & 81.8 & 73.0 & 77.1 \\ \hline
\end{tabular}
\vspace{-0.20in}
\label{tab:subgraph}
\end{table}
\begin{table*}[t]
\centering
\vspace{-0.10in}
\caption{Node alignment performance. NeuroAlign~achieves an average 25\% improvement in the final accuracy.}
\begin{tabular}{c|ccccc|ccccc}
\hline
\textbf{Method} & \textbf{Dataset} & \textbf{\begin{tabular}[c]{@{}c@{}}top-1 \\ acc.\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}top-2 \\ acc.\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}top-3 \\ acc.\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}acc. w/ \\ assignment\end{tabular}} & \textbf{Dataset} & \textbf{\begin{tabular}[c]{@{}c@{}}top-1 \\ acc.\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}top-2 \\ acc.\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}top-3 \\ acc.\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}acc. w/ \\ assignment\end{tabular}} \\ \hline\hline
\textbf{NeuroMatch} & \multirow{2}{*}{\textbf{Workflow}} & 64.2 & 85.6 & 93.4 & 68.6 & \multirow{2}{*}{{\textbf{COX2}}} & 42.2 & 56.5 & 65.9 & 44.1 \\
\textbf{NeuroAlign (Ours)} & & \textbf{91.5} & \textbf{97.7} & \textbf{98.7} & \textbf{95.2} & & \textbf{65.3} & \textbf{81.6} & \textbf{92.0} & \textbf{70.4} \\
\hline
\textbf{NeuroMatch} & \multirow{2}{*}{\textbf{MSRC-21}} & 40.9 & 62.7 & 77.0 & 52.6 & \multirow{2}{*}{{\textbf{Enzymes}}} & 41.7 & 56.6 & 67.4 & 47.5 \\
\textbf{NeuroAlign (Ours)} & & \textbf{59.6} & \textbf{84.2} & \textbf{95.1} & \textbf{81.3} & & \textbf{53.6} & \textbf{75.3} & \textbf{86.3} & \textbf{66.7} \\
\hline
\end{tabular}
\vspace{-0.20in}
\label{tab:align}
\end{table*}
We evaluate the performance of the proposed system on {4 graph datasets from various domains}: a program workflow dataset (vehicle diagnostics), MSRC-21 (image processing), COX2 (chemistry), and Enzymes (biology). {The workflow dataset contains $\sim$500 individual workflow graphs with the number of nodes ranging from 5 to 150. The $\sim$20 different types of nodes correspond to different diagnostic procedures. MSRC-21 \cite{msrc21} contains natural scene images with 21 object semantic labels. After the super-pixel extraction and processing steps described in Section \ref{subsection:scene_graph} and \autoref{figure:scene_graph_extraction}, the resulting graph dataset includes 544 graphs with 11 to 31 nodes. COX2 \cite{sutherland2003spline,Morris+2020} consists of 467 chemical molecule graphs with the number of nodes ranging from 32 to 56. The Enzymes dataset \cite{schomburg2004brenda,Morris+2020} contains 600 graphs of protein tertiary structures with 3 to 96 nodes. The last three datasets are public.}\looseness=-1
{We utilize an 8-layer GraphSAGE in training, and the hidden dimension for node embeddings is 64. For NeuroAlign, the attention network has two hidden layers of dimensions 256 and 64. We use ReLU activations. The learning rate is fixed at 0.0001 without weight decay, and the Adam optimizer is used.}\looseness=-1
{The training data are generated on the fly by randomly sampling positive and negative pairs, as described in \autoref{sec:training}. Note that the ground-truth label for a positive pair is obtained automatically during sampling, while the label for a negative pair is verified with an exact matching algorithm \cite{cordella2004sub}. The batch size is fixed to 128. For validation data, we sample the dataset following the same process prior to training. For testing data, we sample based on the evaluation tasks as described in the following sections.}\looseness=-1
All experiments are conducted on a single GeForce GTX 1080 Ti GPU. We measure the performance of the system in terms of prediction correctness and runtime efficiency. For all evaluations, approximate query matching is turned off. A detailed description of the evaluation setup and the experimental results is presented below. \looseness=-1
\subsubsection{Prediction Accuracy}
\label{sec:results_acc}
To construct the testing dataset for the evaluation of prediction accuracy, we randomly extract $5$ queries from each graph and obtain their ground-truth subgraph-isomorphism labels. The evaluation is conducted on the problems of subgraph decision and node alignment separately. For subgraph decision, we report precision and recall, commonly used in the information retrieval domain, to quantify how well NeuroMatch retrieves the ground-truth matching target graphs from the graph database. \looseness=-1
For node alignment, the objective is to measure how well the algorithm predicts the correct matching nodes on the retrieved target graphs. Since wrong retrievals do not have ground-truth node alignments, we conduct the evaluation on the set of correctly retrieved target graphs. For this task, we compare our proposed NeuroAlign\ with NeuroMatch, which provides node correspondences through the matched anchor nodes. Greedy assignment (Section \ref{sec:assignment}) is applied to both NeuroMatch and NeuroAlign\ to improve the inference. The details on utilizing the greedy assignment with NeuroMatch can be found in the appendix. To measure the performance, we calculate the top-$k$ ($k\in\{1,2,3\}$) accuracy along with the accuracy after the greedy assignment on each query, and report the average among all queries. {In case multiple matches exist in the ground truth, we only consider the one closest to the algorithm's prediction to measure the accuracy.} The identification of multiple subgraph isomorphisms \cite{liu2020neural} is a more challenging research topic, and we provide a discussion in Section \ref{sec:discussion}.\looseness=-1
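For clarity, the following sketch shows how the reported metrics can be computed from a predicted probability matrix; it is a hypothetical implementation with variable names of our choosing, not the exact evaluation code.
\begin{verbatim}
import numpy as np

def topk_accuracy(P, y_true, ks=(1, 2, 3)):
    # P: (num_query_nodes, num_target_nodes) probability matrix
    # y_true: index of the ground-truth target node for each query node
    order = np.argsort(-P, axis=1)  # target nodes by decreasing probability
    return {k: float(np.mean([y_true[q] in order[q, :k]
                              for q in range(P.shape[0])])) for k in ks}
\end{verbatim}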
The performance of subgraph decision is shown in Table \ref{tab:subgraph}. The results show that the system is able to retrieve around $90\%$ of the matching target graphs on three of the four datasets (recall is lower on Enzymes) while maintaining high precision. Note that achieving high precision is much more challenging than high recall, since matching target graphs are rare compared to non-matching graphs. The high precision and F1 scores of the system demonstrate the model's capability to learn embeddings that correctly reflect the subgraph relationship.\looseness=-1
The comparison between NeuroMatch and our proposed algorithm NeuroAlign\ on the node alignment task is shown in Table \ref{tab:align}. NeuroMatch performs poorly on this task because it predicts multiple matches for many query nodes. NeuroAlign\ achieves significant improvements over NeuroMatch (e.g.\ $27.3\%$ on top-$1$ acc.\ and $26.6\%$ after assignment for Workflow, $18.7\%$ on top-$1$ acc.\ and $28.7\%$ after assignment for MSRC-21). We also observe that MSRC-21 is much more challenging than the Workflow dataset due to its dense connectivity and large number of similar adjacent nodes. Interestingly, although NeuroAlign\ makes many wrong decisions in its top-$1$ predictions, its top-$3$ predictions contain most of the correct labels. As a result, the simple assignment approach successfully resolves many prediction conflicts and significantly improves the accuracy. In contrast, the assignment does not bring much improvement to NeuroMatch predictions. In addition, we experimented with the optimal Hungarian assignment algorithm and observe that, compared to our greedy approach, the improvement is negligible for NeuroAlign, but higher for NeuroMatch (e.g.\ it achieves $73.1\%$ acc.\ on Workflow and $55.4\%$ acc.\ on MSRC-21) due to more conflicting predictions.\looseness=-1
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/time_comp.png}
\vspace{-0.10in}
\caption{Runtime comparison with VF2 \cite{cordella2004sub} and NeuroMatch \cite{lou2020neural} on the Workflow dataset. Runtime in seconds is shown on the $y$-axis on a logarithmic scale, and the exact number is given above each bar. Compared to VF2, our system provides a 10$\times$--100$\times$ speedup starting from 10 query nodes and therefore enables interactive queries. Our proposed NeuroAlign\ component adds little to no computational overhead compared to NeuroMatch, while providing much more accurate node-alignment results.\looseness=-1}
\vspace{-0.15in}
\label{fig:speed}
\end{figure}
\subsubsection{Runtime Efficiency}
\label{sec:results_speed}
{Next, we measure the runtime efficiency in comparison with the VF2 baseline \cite{cordella2004sub} to evaluate the speed gain. VF2 is a state-of-the-art exact matching algorithm based on a backtracking procedure. Although it calculates true subgraph-isomorphism results, the computation is expensive, especially for larger graphs.} In addition, we also compare with a variant of our system where the NeuroAlign\ component is removed, to evaluate the computational overhead added by NeuroAlign. For this evaluation, we consider numbers of query nodes ranging from $5$ to $30$ with an increment of $5$ on the Workflow dataset, and randomly extract $2000$ corresponding queries for each number. We measure the average runtime in seconds for matching against the entire database. The results are visualized in \autoref{fig:speed}. We observe that the runtime of VF2 increases exponentially with the number of query nodes and reaches close to $6$ minutes with just $25$ query nodes. As the number of query nodes increases further, the queries become larger than many target graphs and cannot be matched, which causes the runtime drop at a query size of $30$ nodes. In contrast, our runtime increases linearly with the query node size. Compared to NeuroMatch, the added NeuroAlign\ component induces little to no computational overhead. Surprisingly, it is slightly faster than NeuroMatch in some cases. We conjecture this is due to the easier assignment task generated by NeuroAlign\ (i.e.\ fewer conflicts), such that the greedy algorithm can terminate early.\looseness=-1
\subsection{Expert Interview}
\label{subsection:expert_interview}
To evaluate the usability of the system, we conducted semi-structured interviews involving three industrial experts working on program workflow construction and review for the first usage scenario, as well as {three researchers} working in the computer vision domain for the second usage scenario. We introduced the system with a walk-through of the interactive features and visual encodings, and then explored the system together through a remote call. We report a brief summary of the findings here as an initial validation of the usability and utility of the system. \looseness=-1
For the first usage scenario, {the domain experts considered the visual analytics system easy to understand and found that it fits their current usage scenario very well: identifying reusable workflow modules to simplify future workflow creation. They can easily create new patterns, search for matching graphs in the database, and validate the results in the visualization interface.} They even proposed new usages, such as using the visualization to review newly created workflows. {One of them commented, ``The abstraction and searching of custom queries open up a lot of opportunities".} In addition, they requested that the returned workflows be grouped by additional node features for fine-grained analysis. We are currently working with the experts to deploy the system for larger-scale use and are expecting more feedback after long-term usage. \looseness=-1
For the second usage scenario, {the domain experts appreciated the usefulness of the system by commenting, ``It's great to perform query so fast and see results interactively. It's certainly very powerful for many computer vision problems".} They showed great interest in applying the system to diagnose computer vision models, answering questions such as: does an object detection model perform worse when the object is placed on the road instead of in a room? {One of them is interested in retrieving images containing a similar semantic structure as some failure cases of the model, to perform further analysis and model refinement. Another expert is interested in utilizing the tool for computer vision problems with a heavy focus on object relationships, such as image captioning and visual question answering.} For improvement, they mentioned that the graph edges could encode additional information, such as the relative positions (up, down, left, right) of the super-pixels, to retrieve similar images. {In addition, a ranking of the matched images could be provided based on the closeness of their visual appearance to the query image.} \looseness=-1
\subsection{{Problem Definition}}
We first formally define the subgraph matching problems. We denote by $G=(V,E)$ an undirected, connected graph with vertex set $V$ and edge set $E$, and by $X$ the features associated with $V$ (e.g.\ categorical attributes). Given a query graph $G_Q$ and a target graph $G_T$, we consider the \textbf{\textit{decision problem}}, which determines whether there exists a subgraph $H_T\subseteq G_T$ such that $G_Q$ is isomorphic to $H_T$. When $H_T$ exists, i.e.\ $G_Q$ is subgraph-isomorphic to $G_T$, we further consider the \textbf{\textit{node alignment problem}}, which looks for an injective mapping function $f:V_{Q}\rightarrow V_T$ such that $\{f(v),f(u)\}\in E_T$ if $\{v,u\}\in E_{Q}$. When node features $X$ exist, the matching also requires equality of the features. Note that this defines \textit{edge-induced} subgraph isomorphism, which is our focus in this paper. However, the system can also be applied to \textit{node-induced} subgraph isomorphism \cite{bachl1999isomorphic}.\looseness=-1
An illustrative example is shown in \autoref{fig:subgraph}, where the colors encode the categorical node feature and the letters are the node names. The example query graph $G_Q$ is a subgraph of $G_T$ with the correct node alignment $f(a)=A,f(b)=B,f(c)=C,f(d)=D$. In this paper, we consider the practical case of a large database of target graphs, where the task is to solve the above decision and node-alignment problems for each of the target graphs.\looseness=-1
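For reference, ground-truth labels of this kind can be obtained with an exact matcher such as VF2. A minimal NetworkX sketch is given below; the node-attribute name \texttt{label} is our assumption, and \texttt{subgraph\_is\_monomorphic} (edge-preserving matching, as defined above) requires a recent NetworkX version, whereas \texttt{subgraph\_is\_isomorphic} would test the node-induced variant.
\begin{verbatim}
import networkx as nx
from networkx.algorithms import isomorphism as iso

def is_subgraph(G_Q, G_T, attr="label"):
    # True if G_Q is (edge-preserving) subgraph-isomorphic to G_T,
    # requiring equality of the categorical node attribute `attr`
    nm = iso.categorical_node_match(attr, default=None)
    gm = iso.GraphMatcher(G_T, G_Q, node_match=nm)
    return gm.subgraph_is_monomorphic()
\end{verbatim}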
\subsection{{Overall Framework}}
{Our proposed framework consists of two core components:} NeuroMatch (\autoref{fig:neuromatch}) and NeuroAlign (\autoref{fig:neuroalign}), which focus on solving the subgraph decision and node alignment problems, respectively. Given a graph database and a user-created query graph, we utilize the state-of-the-art NeuroMatch method \cite{lou2020neural} to efficiently retrieve matching target graphs that contain the query graph. NeuroMatch decomposes the graphs into small neighborhoods to make fast decisions locally and then aggregates the results. {After a matching target graph is found, the node alignment between the two graphs can still be ambiguous and misleading, as we observe in the experimental results. This is due to the fact that the learning process of NeuroMatch relies entirely on small neighborhoods within the graphs. As a result, each query node could end up matched to multiple target nodes, many of which are actually false positives. To tackle these issues, we propose a novel model, NeuroAlign, which directly predicts node alignment from the query and target graphs, without segmenting them into small neighborhoods. It computes node-to-node attention based on graph node embeddings to obtain the alignment results.} Finally, the matching target graphs and corresponding matching nodes are returned to the user for exploration and analysis. \looseness=-1
NeuroMatch and NeuroAlign both employ GraphSAGE \cite{hamilton2017inductive} as the backbone GNN for representation learning. For simplicity, we consider GraphSAGE as a general function that performs representation learning, where the input is a given graph and the output is a set of embeddings for every node in the graph. Optionally, a pooling layer can be added on top of the node embeddings to obtain a single embedding of the input graph. A more detailed description can be found in the appendix. We use $h_v$ to denote the learned representation of node $v$ at the final output layer, which will be used by NeuroMatch and NeuroAlign as described in the following sections. \looseness=-1
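A minimal PyTorch Geometric sketch of such a backbone is shown below; it is our own illustration, with the layer count and hidden dimension following the settings reported in the evaluation section, and the released implementation may differ in details.
\begin{verbatim}
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv, global_add_pool

class SAGEEncoder(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=64, num_layers=8):
        super().__init__()
        self.convs = torch.nn.ModuleList(
            [SAGEConv(in_dim if i == 0 else hidden_dim, hidden_dim)
             for i in range(num_layers)])

    def forward(self, x, edge_index, batch=None):
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
        h_nodes = x  # per-node embeddings h_v
        # optional pooling: one embedding per input graph
        h_graph = global_add_pool(x, batch) if batch is not None else x.sum(0)
        return h_nodes, h_graph
\end{verbatim}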
\subsection{{Subgraph Decision via NeuroMatch}}
\label{sec:neuromatch}
\begin{figure}[t]
\centering
\vspace{-0.15in}
\includegraphics[width=0.8\linewidth]{figures/neuromatch.png}
\vspace{-0.15in}
\caption{NeuroMatch determines whether $G_Q$ is a subgraph of $G_T$ by looking for local matches first and then aggregating the results. In this figure, we highlight the $1$-hop local neighborhoods at the anchor nodes $b,c$ in the query graph as an example (green and orange outlines). The NeuroMatch algorithm compares these $1$-hop neighborhoods with those in the target graph. It finds that the $1$-hop neighborhood graph of $b$ is a subgraph of the $1$-hop neighborhood of $B$ (highlighted in green) and that the neighborhood of $c$ is a subgraph of the neighborhood of $C$ (highlighted in orange). Since for each query node ($a$, $b$, $c$, $d$) we can find a matching $1$-hop neighborhood graph in the target graph ($A$, $B$, $C$, $D$), the algorithm concludes that $G_Q$ is indeed a subgraph of $G_T$. \looseness=-1}
\label{fig:neuromatch}
\vspace{-0.10in}
\end{figure}
{Conducting subgraph matching in the embedding space can facilitate efficient retrieval. However, considering the scale of the database and the large size of certain graphs, it is challenging to build a predictive model that encodes the subgraph relationships. NeuroMatch resolves this issue by decomposing the given query and target graphs into many small regions and learning the subgraph relationship in these small regions first.} In particular, for each node $q$ in the query graph, it extracts a small $k$-hop neighborhood graph $g_q$. For each node $t$ in the target graph, it also extracts its $k$-hop neighborhood $g_t$. The problem of determining whether $G_Q\subseteq G_T$ is then transformed into many local subgraph matching decisions about whether $g_q\subseteq g_t$. To find potential local matches, NeuroMatch compares all pairs of nodes between the query and target graphs. Finally, the ensemble decision can be made by checking whether every query neighborhood can find a matching target neighborhood. Figure \ref{fig:neuromatch} shows a simple example to illustrate the main idea of NeuroMatch. In order to determine the local subgraph relationship, i.e.\ whether the $k$-hop neighborhood graph $g_q$ is a subgraph of $g_t$, the algorithm feeds $g_q$ and $g_t$ into the GNN with the pooling layer to extract the respective anchor node embeddings at $q$ and $t$. A comparator function then takes each pair of these embeddings and predicts the subgraph relationship, as shown in \autoref{fig:neuromatch}. We describe the method in the appendix and refer readers to the NeuroMatch paper for more details \cite{lou2020neural}.\looseness=-1
Once the model is trained, we pre-compute and store the embeddings of all graphs in the database. The inference process simply iterates through all pairs of query and target nodes, and utilizes the (trained) comparator to make local subgraph decisions. The aggregated decision is then made by checking whether each query neighborhood finds a match. This process has linear complexity in terms of both the query and the target number of nodes, and thus facilitates efficient retrieval at the front-end interface. \looseness=-1
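To make the inference step concrete, the sketch below extracts a $k$-hop neighborhood with PyTorch Geometric and applies an order-embedding-style test, in which $g_q\subseteq g_t$ is predicted when the anchor embedding of $q$ is (approximately) dominated component-wise by that of $t$. This is a simplified paraphrase using an encoder like the one sketched earlier; the exact comparator and its training are described in \cite{lou2020neural}, and the threshold below is purely illustrative.
\begin{verbatim}
import torch
from torch_geometric.utils import k_hop_subgraph

def anchor_embedding(encoder, node, x, edge_index, k=3):
    # embed the anchor node of its k-hop neighborhood graph
    subset, sub_edges, mapping, _ = k_hop_subgraph(
        node, k, edge_index, relabel_nodes=True)
    h_nodes, _ = encoder(x[subset], sub_edges)
    return h_nodes[mapping].squeeze(0)

def local_violation(z_q, z_t):
    # zero iff z_q <= z_t component-wise (order-embedding constraint)
    return torch.clamp(z_q - z_t, min=0).pow(2).sum()

def is_local_match(z_q, z_t, threshold=0.1):
    return local_violation(z_q, z_t) < threshold
\end{verbatim}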
\begin{figure}[t]
\centering
\vspace{-0.15in}
\includegraphics[width=0.97\linewidth]{figures/neuroalign.png}
\vspace{-0.15in}
\caption{The NeuroAlign algorithm obtains accurate node-to-node correspondence. It extracts the embeddings of each node in the query graph and the target graph by directly feeding them through the GNN. It then uses an attention network to compare every pair of node embeddings between the query and target graphs. For the convenience of computation, these pair-wise comparison results are arranged in a matrix, whose rows correspond to query nodes and columns correspond to target nodes. The matrix is then transformed into a probability matrix through a softmax on each row. A greedy assignment algorithm resolves potential conflicts (black outlined block) during inference {(Section \ref{sec:assignment})}.\looseness=-1}
\label{fig:neuroalign}
\vspace{-0.10in}
\end{figure}
\subsection{Node Alignment via NeuroAlign}
\label{sec:neuroalign}
NeuroMatch determines whether the query is a subgraph of the target graph. When a matching target graph is retrieved and visualized, it is still difficult for the user to extract insights when the target graph is large and its topology is complex. In this case, showing the corresponding nodes can provide intuitive and explainable visual cues. We propose NeuroAlign\ to obtain improved node alignment performance. We formulate the prediction problem as a classification task, where query nodes are examples and target nodes correspond to labels. This architectural change is crucial to enable more accurate alignment by accounting for much larger areas of both graphs. However, for different target graphs, the number of classes (i.e.\ target nodes) varies. This creates a challenge for predictive models. We resolve it by employing a flexible, cross-graph attention mechanism.\looseness=-1
As shown in \autoref{fig:neuroalign}, NeuroAlign\ directly takes the node embeddings obtained from the GNN on the entire graphs $G_Q$ and $G_T$. These embeddings are denoted as $\{h_q,\forall q\in G_Q\}$ and $\{h_t,\forall t\in G_T\}$. We then compute the similarity between each query embedding and every target embedding through an attention network. This process can be considered as creating an attention matrix $\mathbf{A} \in\mathbb{R}^{|V_Q|\times|V_T|}$, where the element $\mathbf{A}_{q,t}$ contains the attention from node $q$ to node $t$. We then directly transform the similarity matrix into a probability matrix $\mathbf{P}\in\mathbb{R}^{|V_Q|\times|V_T|}$ using a row-wise softmax and use it in the cross-entropy loss. Formally,\looseness=-1
\begin{equation}
\label{eq:neuroalign}
\begin{gathered}
\mathbf{A}_{q,t}=\psi(h_q\mathbin\Vert h_t) \\
\mathbf{p}_q=\text{softmax}(\mathbf{a}_q) \\
L(G_Q,G_T)=-\sum_{q\in G_Q} \mathbf{y}_q \log(\mathbf{p}_q)
\end{gathered}
\end{equation}
where $\psi$ denotes the attention network, $\mathbf{a}_q$ is the $q$-th row of $\mathbf{A}$, and $\mathbf{y}_q$ is the one-hot ground-truth label for node $q$, indicating which node in $G_T$ is the corresponding node of $q$. The prediction $\mathbf{p}_q$ contains the probabilities of matching query node $q$ to every target node. We implement the attention network as a multi-layer perceptron, which takes a pair of embeddings produced by the GNN, concatenates them, and returns a similarity score between a node $q$ in the query graph and a node $t$ in the target graph. In case $G_T$ is too large, the computation of $\mathbf{A}_{q,t}$ could consume too much memory and needs to be constrained to a subgraph around $t$. In practice, we specify a maximum size that covers most target graphs in the database. \looseness=-1
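The PyTorch sketch below mirrors Eq.~\ref{eq:neuroalign}; the hidden dimensions follow the settings reported in the evaluation section, while the remaining details are our own assumptions rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionNetwork(nn.Module):
    # psi: concatenates a (query, target) embedding pair and scores it
    def __init__(self, embed_dim=64, hidden=(256, 64)):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], 1))

    def forward(self, h_q, h_t):
        # h_q: (|V_Q|, d), h_t: (|V_T|, d) -> A of shape (|V_Q|, |V_T|)
        nq, nt = h_q.size(0), h_t.size(0)
        pairs = torch.cat([h_q.unsqueeze(1).expand(nq, nt, -1),
                           h_t.unsqueeze(0).expand(nq, nt, -1)], dim=-1)
        return self.mlp(pairs).squeeze(-1)

def neuroalign_loss(A, y):
    # row-wise softmax + cross-entropy against ground-truth indices y
    return F.cross_entropy(A, y)
\end{verbatim}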
Similar to NeuroMatch, when the model is trained, we can pre-compute all graph embeddings generated by NeuroAlign to make the retrieval process efficient. In addition, NeuroAlign\ works subsequently to NeuroMatch and only activates when a subgraph relationship is predicted, thus creating minimal computational overhead for visualization and interaction.\looseness=-1
\subsection{{Algorithm Training}}
\label{sec:training}
The training of NeuroMatch and NeuroAlign is conducted separately. Training NeuroMatch (and its backbone GraphSAGE GNN) involves sampling large numbers of mini-batches containing both positive and negative pairs. A positive pair consists of two neighborhood graphs $g_q$ and $g_t$ that satisfy the subgraph relationship, while a negative pair consists of neighborhood graphs where the relationship is violated. To sample a positive pair, we first randomly sample a $k$-hop neighborhood as $g_t$, and then sample a subgraph within $g_t$ as the query neighborhood $g_q$. To sample negative pairs, we start with the target neighborhood $g_t$ obtained above, and sample a smaller neighborhood from a different graph as the query neighborhood $g_q$. Note that $g_q$ needs to be verified with an exact matching protocol \cite{cordella2004sub} to ensure $g_q\nsubseteq g_t$. In practice, we find that \textit{hard} negatives are necessary to achieve high precision; these are obtained by perturbing the above positive pair ($g_q\subseteq g_t$) such that the subgraph relationship no longer holds. We perturb the positive pair by randomly adding edges to $g_q$ and verify the success with exact matching \cite{cordella2004sub}. As can be seen, negative sampling extensively invokes the exact matching algorithm, which is slow to compute. To keep the training tractable, we set a small neighborhood hop $k=3$ and also limit the number of nodes to sample from the neighborhood to $30$. \looseness=-1
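The sampling loop can be sketched as follows with NetworkX; this is a schematic version with our own simplifications, and \texttt{is\_subgraph} stands for the exact-matching verification discussed above.
\begin{verbatim}
import random
import networkx as nx

def sample_positive_pair(G, k=3, max_nodes=30):
    t = random.choice(list(G.nodes))
    g_t = nx.ego_graph(G, t, radius=k)           # k-hop target neighborhood
    if g_t.number_of_nodes() > max_nodes:         # cap the neighborhood size
        keep = [t] + random.sample([n for n in g_t if n != t], max_nodes - 1)
        g_t = g_t.subgraph(keep).copy()
    q_nodes = {t}                                 # grow a connected query inside g_t
    while len(q_nodes) < max(2, g_t.number_of_nodes() // 2):
        frontier = list(nx.node_boundary(g_t, q_nodes))
        if not frontier:
            break
        q_nodes.add(random.choice(frontier))
    return g_t.subgraph(q_nodes).copy(), g_t      # g_q is a subgraph of g_t

def perturb_to_hard_negative(g_q, g_t, is_subgraph, max_tries=10):
    g_q = g_q.copy()
    for _ in range(max_tries):
        u, v = random.sample(list(g_q.nodes), 2)  # add a random edge to g_q
        g_q.add_edge(u, v)
        if not is_subgraph(g_q, g_t):             # verify with exact matching
            return g_q, g_t
    return None
\end{verbatim}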
Training NeuroAlign\ (and its backbone GraphSAGE GNN) is much simpler. It involves sampling only positive pairs, since its objective is to improve node alignment when the subgraph decision $G_Q\subseteq G_T$ has already been made. Therefore, the sampling involves extracting random queries from the graphs in the database. For each target graph $G_T$ in the database, we randomly sample a subgraph within it as $G_Q$. The ground-truth injective mapping is acquired directly in the sampling process, and it is converted to $\mathbf{y}_q$ to indicate which node in $G_T$ is the corresponding node of $q$. NeuroAlign\ can be trained efficiently through this simple sampling process, without invoking the expensive exact matching algorithm.\looseness=-1
\subsection{Greedy Assignment for Inference}
\label{sec:assignment}
{During the inference of node alignment, different nodes in the query graph could be mapped to the same node in the target graph. This is likely to occur among nodes with highly similar topological and attribute features.} The prediction conflicts can be resolved with a task assignment algorithm. Instead of resorting to the combinatorial Hungarian algorithm \cite{munkres1957algorithms}, we develop a simple greedy assignment approach. Specifically, given the predicted probability matrix $\mathbf{P}$, we iterate over the probabilities in descending order and record the corresponding matching pair only when both the query and target nodes have not yet been assigned. The iteration stops when all query nodes have been assigned. This simple process resolves conflicting assignments to the same target node and improves the overall node alignment performance (experimental results in Section \ref{sec:results_acc}).\looseness=-1
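In code, the greedy procedure amounts to the following minimal sketch (variable names are ours):
\begin{verbatim}
import numpy as np

def greedy_assignment(P):
    # P: (num_query_nodes, num_target_nodes) probability matrix
    assignment, used_targets = {}, set()
    # visit (query, target) pairs in order of decreasing probability
    for q, t in zip(*np.unravel_index(np.argsort(-P, axis=None), P.shape)):
        if q not in assignment and t not in used_targets:
            assignment[int(q)] = int(t)
            used_targets.add(t)
        if len(assignment) == P.shape[0]:
            break
    return assignment
\end{verbatim}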
\begin{figure*}[ht]
\centering
\vspace{-0.15in}
\includegraphics[width=0.95\linewidth]{figures/architecture.png}
\vspace{-0.12in}
\caption{System architecture of GraphQ. The back-end precomputes and stores the graph representations to support efficient matching graph retrieval through the NeuroMatch algorithm. After the matching graphs are obtained, we use NeuroAlign~to obtain accurate node-to-node
correspondence to be displayed in the visualization for the user to verify the results. Users can start from an overview of all the graphs in the database and select one to construct an example-based query pattern. The query pattern can be slightly perturbed to retrieve approximate matching results from the database. After the results are returned, the user can use a variety of views to explore them.\looseness=-1}
\vspace{-0.20in}
\label{figure:system_architecture}
\end{figure*}
\subsection{Approximate Query Matching}
In addition to the retrieval results obtained from the original query graph, we provide the option to perform approximate query matching. This method perturbs the query graph slightly in order to obtain similar, but different, matching graphs. Specifically, denote the set of matches obtained from the original query graph $G_Q$ as $R$. We remove one node from $G_Q$ and its associated edges to obtain the perturbed query $G'_Q$. {Then we conduct the search with NeuroMatch on $G'_Q$ and add the novel matches to $R$. We continue the iteration by removing a node from the perturbed query, until either a prespecified maximum number of steps is reached or $G'_Q$ becomes disconnected. To lower the chance of obtaining a disconnected graph, each time we remove the node with the lowest degree in $G'_Q$.}\looseness=-1
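The perturbation loop can be sketched as follows; \texttt{run\_neuromatch} is a placeholder for the retrieval step of Section \ref{sec:neuromatch}, and the function and parameter names are our own.
\begin{verbatim}
import networkx as nx

def approximate_matches(G_Q, run_neuromatch, max_steps=3):
    results = set(run_neuromatch(G_Q))       # matches of the original query
    G_p = G_Q.copy()
    for _ in range(max_steps):
        if G_p.number_of_nodes() <= 1:
            break
        # remove the lowest-degree node to limit the risk of disconnection
        node = min(G_p.degree, key=lambda pair: pair[1])[0]
        G_p.remove_node(node)
        if not nx.is_connected(G_p):
            break
        results |= set(run_neuromatch(G_p))  # keep only novel matches via set union
    return results
\end{verbatim}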
\subsection{Graph Visualization}
Graph visualization is an extensively studied topic \cite{herman2000graph, nobre2019starmultivariategraph} with applications in a wide range of domains. Open-source and commercial software for graph visualization (e.g.\ Gephi \cite{bastian2009gephi} and Neo4j Bloom \cite{neo4j}) is also available for off-the-shelf use. Researchers in graph visualization typically focus on one or more of the following aspects: developing layout algorithms to efficiently compute readable and aesthetic visualizations {(e.g. \cite{gansner1993technique, bennett2007aesthetics, diaz2002survey,hu2005efficient, jacomy2014forceatlas2, kwon2017would})}, designing new visual encodings to display nodes and edges (e.g. \cite{herman2000graph, henry2007nodetrix, van2015reducing}), developing graph simplification or sampling techniques to avoid over-plotting and visual clutter (e.g. \cite{dunne2013motif, van2014multivariate}), and designing novel user interaction schemes for exploratory analysis {(e.g. \cite{herman2000graph, tominski2006fisheye, pister2020integrating, srinivasan2017graphiti})}. Depending on the nature of the graph data, {they have developed a variety of systems and algorithms for} directed/undirected graphs, multivariate graphs (with node/edge attributes), and dynamic network visualization to support a wide range of graph analytic tasks \cite{lee2006task, pretorius2014tasks}. \looseness=-1
In this work, we focus on supporting interactive, example-based visual queries of graph patterns in a database and visualizing the results. This is a generic framework that can be applied to both directed and undirected graphs, as well as graphs with node/edge attributes, as demonstrated in the example usage scenarios. We utilize existing graph layout techniques for a detailed view of directed graphs \cite{gansner1993technique} and design a compact visualization summarizing the graph structure to provide an overview of the query results. \looseness=-1
\subsection{{Visual Graph Query}}
Graph patterns/motifs are frequently used to simplify the display of graphs and reduce visual clutter. Motif Simplification \cite{dunne2013motif} was developed to identify graph motifs, including \textit{cliques}, \textit{fans}, and \textit{d-connectors}, based on topological information and to visualize them as glyphs in the node-link display for more efficient usage of the screen space. More generally, cluster patterns, especially ``near-clique'' structures, are the most studied and visualized in the literature, and various methods have been developed to compute and visualize them \cite{vehlow2017groupstructure}. However, most of the patterns/motifs here are predefined and cannot be easily modified by users.
Graphite \cite{chau2008graphite}, Vogue \cite{bhowmick2013vogue}, and Visage \cite{pienta2016visage} support interactive, user-specified queries on graph data, and Vigor \cite{pienta2017vigor} focuses on the visualization of the querying results. In these systems, users can interactively specify node attributes as well as topological constraints in the form of a query graph, and the system searches for matching subgraphs. However, the complexity of the query is usually limited, which reduces the expressive power of the specified patterns.\looseness=-1
Our approach is also inspired by a number of existing visual query system on time series data, where the user can interactively specify the patterns they are searching for, by either drawing the pattern directly on a canvas or selecting the pattern from a data sample \cite{wattenberg2001sketch, hochheiser2003interactive, hochheiser2004dynamic, buono2005interactive, lekschas2020peax}. Supporting user-specified patterns gives the user great flexibility and power to perform exploratory analysis in various application domains. However, querying arbitrary patterns on a graph structure brings unique challenges in terms of the computation speed needed to support an interactive user experience, which we address with a graph representation learning-based approach.\looseness=-1
\subsection{Graph Representation Learning for Subgraph Pattern Matching}
Graph neural networks (GNNs) have emerged as a generic approach for graph representation learning, which can support a variety of graph analytics tasks including link prediction, node classification, and community structure identification \cite{kipf2016semi,hamilton2017inductive,velivckovic2017graph,xu2018powerful,shanthamallu2019gramme}. The recent development of GNN libraries further increases their popularity among researchers \cite{torch_geometric}. The success of GNNs on diverse graph tasks has also motivated researchers to address the comparison problem between different graphs, such as graph matching \cite{li2019graph} and graph similarity learning \cite{al2019ddgk}. A comprehensive survey on this topic is provided in \cite{ma2019deep}. Recently, GNNs have been shown to improve the performance on challenging subgraph-isomorphism problems, including subgraph matching \cite{lou2020neural}, subgraph isomorphism counting \cite{liu2020neural}, maximum common subgraph detection \cite{bai2019neural}, and {graph alignment \cite{fey2020deep}. Powered by flexible representation learning, these approaches address issues of heuristic-based solutions \cite{heimann2018regal,sun2012efficient} in terms of accuracy and query scalability. Our objective is to utilize GNNs to facilitate fast user interaction with graph queries, where the embeddings of the existing graphs can be pre-computed and stored to enable efficient retrieval during the inference stage. Compared to \cite{bai2019neural,fey2020deep}, our approach resolves subgraph isomorphism from the learned embedding space alone, without expensive iterative search \cite{bai2019neural} or embedding refinement aided by an additional network \cite{fey2020deep}.} Our proposed {framework} utilizes NeuroMatch \cite{lou2020neural} as a core component to efficiently query matching graphs, but involves a novel component, NeuroAlign, to resolve the issue of NeuroMatch in obtaining accurate node alignment. The capability to identify matching nodes is critical for intuitive user interaction with complex topologies.\looseness=-1
There are relatively few works in the visual analytics domain utilizing graph representation learning. In \cite{fujiwara2020visualconstrastive}, a contrastive learning approach is developed to visualize graph uniqueness and explain learned features. Graph representation learning-based algorithms have also been developed for graph layout/drawing \cite{wang2019deepdrawing, kwon2019deep}, evaluating graph visualization aesthetics \cite{haleem2019evaluating}, and sampling large graphs for visualization \cite{zhou2020context}. Our framework addresses the important problem of subgraph matching and facilitates intuitive interaction. To the best of our knowledge, this is the first approach based on representation learning for interactive visual graph queries. \looseness=-1
\section{{Visualization and Interaction}}
{In this section, we first describe the design goals of GraphQ (Section \ref{section:design_requirements}). We then describe the GraphQ system with details on its visualization and interaction components (Section \ref{subsection:sys_components}) and its technical implementation (Section \ref{subsection:sys_implementation}).}\looseness=-1
\subsection{Design Goals}
\label{section:design_requirements}
GraphQ's principal design goal is to provide a generic solution for interactive graph pattern search in a graph database based on user-specified examples. The basic requirement is that the user needs to be able to interactively select and refine graph patterns and analyze the retrieved results. Meanwhile, the system should display the matching instances and explain the results by highlighting the node correspondences. \looseness=-1
We further enrich and refine the design goals by collecting requirements from domain-specific usage scenarios. We analyzed two example usage scenarios: workflow graph pattern analysis and semantic scene graph analysis in image understanding. For the first usage scenario (details in Section~\ref{subsection:workflow_analysis}), we worked closely with the domain experts who provided the workflow graph data and who are also the end-users of the system. For the second usage scenario, we refer to the relevant literature in computer vision on semantic scene graphs. A semantic scene graph is a commonly used graph structure that describes not only the objects in an image but also their relations \cite{johnson2015image}. Such graphs are frequently used to retrieve images with the same semantics. By analyzing the commonalities of the two usage scenarios, we identified the following user analysis tasks to support in GraphQ: \looseness=-1
\vspace{-0.08in}
\begin{enumerate}[label=\textbf{\textit{T\arabic*}}, leftmargin=*]
\setlength\itemsep{-0.2em}
\item \label{req:t1} \textbf{Browse/search the graph database}. To start the query process, the user needs to be able to select from hundreds to thousands of graphs. Therefore, the system should provide graph search and filtering functionalities based on the category, the name, or graph statistics such as the number of nodes/links. Besides that, {a visualization showing an overview of all graphs in the database will be useful to help locate interesting graphs or clusters.} \looseness=-1
\item \textbf{Interactively construct the query pattern} by selecting on a graph visualization. To minimize user effort, the system should support both bulk selection mechanisms such as brushing the graph regions as well as query refinement methods to add/delete individual nodes/edges from the pattern. \looseness=-1
\item \textbf{Interpret and validate the matched graphs} via highlighted similarities and differences. To help users interpret the matching results, the node correspondences, as well as differences in the query results, should be highlighted. Furthermore, since the subgraph matching and node correspondence calculation algorithms are not 100\% accurate, the results need to be presented in a meaningful way for easy verification. \looseness=-1
\item \textbf{Explore the distribution of the matching instances}. {After the matched graphs are returned, the system should indicate how frequently the query pattern occurs in the entire database, and provide the distribution of the pattern among different categories of graphs in the database.}\looseness=-1
\item \textbf{Refine query results}. A flexible query system should further support query refinement mechanism where the users can apply their domain knowledge to filter the results with additional constraints, such as matching additional node attributes or limiting the results to a certain category of graphs. \looseness=-1
\end{enumerate}
\subsection{GraphQ System}
\label{section:system}
We design GraphQ~to support the user analysis tasks (\textbf{T1-5}) described in Section~\ref{section:design_requirements} with the architecture and user workflow featured in~\autoref{figure:system_architecture}. The user can start with an overview of the graph database (\textbf{T1}), then brush and select a graph to create example-based query patterns (\textbf{T2}). The query pattern (along with an optionally perturbed query pattern for approximate query matching) is sent to the back-end, where its node representations are computed and compared with the precomputed node embeddings to obtain a set of matching graphs containing the query pattern. The matching results, along with the query pattern, go through NeuroAlign~to compute the one-to-one node correspondence. The query results are displayed in the front-end with multiple levels-of-detail (\textbf{T3}) and can be refined further by adding node-attribute constraints interactively in the query panel (\textbf{T5}). {The distribution of the matching graphs is highlighted interactively in the database overview panel (\textbf{T4}).}\looseness=-1
\subsubsection{Components}
\label{subsection:sys_components}
The user interface of GraphQ~is composed of four main components:\looseness=-1
\textit{\textbf{Overview and filters}}. In the overview panel (\autoref{fig:teaser}(3)), the system displays the distribution of key graph statistics, such as the number of nodes/edges, as well as domain-specific attributes, such as the category of the graph. Both univariate and bivariate distributions can be displayed as histograms or scatterplots. Users can brush the charts and select a subset of graphs to create example-based query patterns.
To provide an overview of the graph structural information and help users navigate and select a graph to start the query {(\textbf{T1})}, we further precompute the graph editing distance \cite{gao2010surveyged}, which roughly captures the structural similarities between all pairs of graphs. The 2-D projection coordinates of the graphs can then be precomputed using t-SNE \cite{van2008tsne} based on the distance matrix and stored as additional graph attributes (\autoref{fig:teaser}(a)).\looseness=-1
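A possible precomputation step is sketched below; this is our own illustration, and since exact graph editing distance is itself expensive, a timeout (or an approximate distance) is used in practice.
\begin{verbatim}
import numpy as np
import networkx as nx
from sklearn.manifold import TSNE

def overview_coordinates(graphs, timeout=1.0):
    n = len(graphs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = nx.graph_edit_distance(graphs[i], graphs[j], timeout=timeout)
            # guard in case no edit path was found within the timeout
            D[i, j] = D[j, i] = d if d is not None else D.max() + 1.0
    # 2-D t-SNE embedding computed from the precomputed distance matrix
    return TSNE(n_components=2, metric="precomputed",
                init="random").fit_transform(D)
\end{verbatim}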
After the query result is obtained, the charts are updated to provide a contextual view of how the subgraph pattern occurs in the database. For example, the user can observe whether the pattern occurrences concentrate on a small subset of graph categories or whether it is a generic pattern that appears in many different categories (\textbf{T4}) (\autoref{fig:teaser}(d)).\looseness=-1
Furthermore, the overview panel is a customizable module that can be configured through a json file specifying the attributes to be displayed and the charts used to display them. Users can also interactively fold each chart and hide it from the display, so that the space can be used to keep important attribute information on the screen. The system also displays a popup window to show detailed information for selected charts.\looseness=-1
\textit{\textbf{Graph query panel}}. In the graph query panel (\autoref{fig:teaser}(1)), the user can interactively select from a graph instance to construct the query pattern. The color of the nodes encodes the key node attribute to be matched in the subgraph pattern query. The system currently supports categorical node attributes; this can be extended to numerical attributes by quantizing the values. Additional node attributes are displayed in attachment to the nodes or in tooltips. {As discussed in \autoref{section:design_requirements}, we need to support fast, interactive query construction (\textbf{T2}).} In this panel, the user can quickly select a group of nodes and the subgraph they induce by brushing a rectangular area on the visualization. They can also construct the pattern in a more precise manner by clicking the \textbf{+} and \textbf{-} buttons on the top right corner of each node. A minimap on the bottom right of the panel allows the user to easily navigate and explore graphs of larger size. The layout of the graph is computed with existing layout algorithms, such as the algorithm described in \cite{gansner1993technique} for directed graphs. When the nodes have inherent spatial locations, these are used directly for the display.\looseness=-1
\textit{\textbf{Query results}}. After the sub-graph pattern matching results are returned, the query results panel is updated to display all the matching graphs as a small-multiples display (\autoref{fig:teaser}(2.1) and (2.2)). Since the number of returned results can be large, the system supports sorting the returned graphs by graph attribute values such as the number of nodes (\autoref{fig:teaser}(f)). {To support \textbf{T3}, the matching nodes are highlighted based on the results returned by the node alignment module.} The graphs can be displayed either in a node-link diagram with the same layout as the graph in the query panel (\autoref{fig:teaser}(2.2)) or in a thumbnail visualization designed to display the graph in a more compact manner (\autoref{fig:teaser}(2.1)). In particular, for directed acyclic graphs we use topological sorting to order the nodes, lay them out vertically, and route the links on the right to obtain a compact view (\autoref{fig:teaser}(2.1)).\looseness=-1
\textit{\textbf{Comparison view}}. {To support \textbf{T3} and \textbf{T5}, we further visualize the query and selected matching graphs side-by-side in a popup window.} The user can click on the zoom-in button on each small multiple to bring out the comparison view (\autoref{fig:teaser}(5)) and review each matching graph in detail. The matched nodes are highlighted for verification.\looseness=-1
\subsubsection{Implementation}
\label{subsection:sys_implementation}
GraphQ's implementation uses a typical client-server architecture. The front-end UI is implemented in JavaScript with the React \cite{react} and AntD UI \cite{antd} libraries. The visualizations are drawn with D3.js \cite{d3} on SVG within the React framework. We use dagre \cite{dagre} to compute the directed graph layout in the front-end. The back-end server is implemented in Python with Flask \cite{grinberg2018flask}. The graph data are stored as json documents in the file system and modeled with NetworkX \cite{hagberg2008exploring}. We use PyTorch \cite{NEURIPS2019_9015} for graph representation learning for both subgraph matching and node correspondence learning. More specifically, we use PyTorch Geometric \cite{torch_geometric} and DeepSNAP \cite{deepsnap} to batch graph data (including their topological structures and node features) for training and inference. \looseness=-1
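As an illustration of how the pieces fit together, a minimal back-end endpoint could look like the sketch below; the route name, payload format, and placeholder functions are our assumptions and do not reflect the actual GraphQ API.
\begin{verbatim}
from flask import Flask, request, jsonify
import networkx as nx

app = Flask(__name__)

# Placeholders standing in for the trained NeuroMatch / NeuroAlign models.
def run_neuromatch(G_Q):
    return []

def run_neuroalign(G_Q, target_id):
    return {}

@app.route("/query", methods=["POST"])
def query():
    payload = request.get_json()
    # the query graph is transferred as node-link JSON
    G_Q = nx.node_link_graph(payload["query_graph"])
    matches = run_neuromatch(G_Q)
    alignments = {m: run_neuroalign(G_Q, m) for m in matches}
    return jsonify({"matches": list(matches), "alignments": alignments})
\end{verbatim}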
\section{Related Work}
\input{sections/related_work}
\section{Algorithm}
\input{sections/method}
\input{sections/vis_sys}
\input{sections/evaluation}
\input{sections/discussion_conclusion}
\bibliographystyle{abbrv-doi}
\section{Introduction}
The existence of super-massive black holes (SMBHs) with masses larger than billion solar masses at $z\gtrsim6$ \citep[e.g.,][]{Banados18a,Matsuoka18a, Wang21b}, when the Universe was $<1$ Gyr old, challenges our current understanding of SMBH and galaxy formation and evolution, and is thus one of the most pressing open issues in modern astrophysics \citep[e.g.,][]{Woods19}.
Their distance and faintness make observations of these objects difficult and strongly biased towards the most luminous and massive accreting SMBHs. A complementary approach is to use numerical simulations as tools to study the largely unknown phases of SMBH growth in the early Universe \citep[e.g.,][]{Tanaka09,Sijacki09, Habouzit16, Habouzit19}.
However, observed properties of high-redshift accreting SMBHs, or active galactic nuclei (AGN), and predictions of numerical simulations have only seldom been compared \citep[e.g.,][Zana et al. accepted]{Ni20, Habouzit21, DiMascia21a}.
An important ingredient entering numerical simulations focused on the early growth of SMBHs is the effect of AGN feedback \citep[e.g.][]{Costa14, Costa20, Barai18, Habouzit19, Valentini21}, as it is often considered to have a major role in shaping the evolution of AGN and galaxies along the whole cosmic history \citep[e.g.,][]{Fiore17}. In particular, optically-selected luminous quasi-stellar objects (QSOs) in the early Universe often present evidence for the launching of fast and massive multi-phase outflows (e.g., \citealt{Maiolino12, Cicone15,Bischetti19, Carniani19, Schindler20, Izumi21}; but see also, e.g., \citealt{Decarli18, Novak20, Meyer22}), which are expected to affect the observable properties of the QSOs themselves and their host galaxies, such as X-ray obscuration, UV extinction, and gas content \citep[e.g., ][]{Brusa15b, Ni20}.
Outflows observed in QSOs are thought to originate from fast nuclear winds, which, in turn, may be accelerated by several physical mechanisms, including radiation pressure exerted by UV photons produced in the accretion disc on dust grains or on partially ionized gas (mediated by UV transitions), and magnetic effects \citep[e.g.][]{Proga00, Murray05, Fabian08, Yuan15, RicciC17}. The physical scales involved in these processes are those of the accretion disc \citep[e.g., ][]{Giustini19}. Since such scales cannot be resolved by large-scale cosmological simulations, different authors have modeled AGN feedback
using several different recipes (e.g., \citealt{Barai18, Costa20, Ni20}).
Moreover, the effect of the outflow on the surrounding material can potentially depend on its geometry \citep[e.g.,][]{Zubovas16}.
Since the exact acceleration physics, and thus launching direction, of nuclear winds is not well understood, numerical simulations typically assume either spherical \citep[e.g., ][]{Feng16} or bi-conical \citep[e.g., ][]{Sala21} outflow geometry as study cases.
Besides the properties of the individual galaxies hosting accreting SMBHs, numerical simulations also provide information on the environment of high-redshift luminous AGN. While these objects are expected to reside in the peaks of the dark matter halo distribution, which are generally characterized by large overdensities of galaxies (e.g., \citealt{Costa14, Wise19}), although with some scatter (e.g., \citealt{Habouzit19}), observations struggle to provide us with a clear view of the typical high-redshift QSO environment. In fact, $z>6$ QSOs have been reported to reside in a variety of environments, including underdense, normal, and overdense regions (e.g. \citealt{Ota18, Mazzucchelli19,Overzier21}). The first spectroscopically confirmed galaxy overdensity around a $z>6$ QSO was presented recently by \cite{Mignoli20}, followed by a tentative confirmation of another structure by \cite{Overzier21}.
A significant fraction ($\approx40\%$) of $z\gtrsim6$ QSOs has ALMA-detected dusty companion galaxies at distances of a few kpc \citep[e.g.][]{Willott17, Decarli18, Neeleman19, Venemans20}. These satellite galaxies might host heavily reddened and buried AGN \citep[e.g., ][]{DiMascia21a}, although currently there is no strong observational evidence for the presence of accreting SMBHs in their centres \citep[e.g., ][]{Connor19,Connor20, Vito19a, Vito21}.
Such objects would typically be brighter than inactive galaxies, especially in the X-ray band. Therefore, their predicted number in numerical simulations can be tested against observational results to infer how well simulations approximate reality.
In this paper, we present a study of the effect of AGN kinetic feedback on the observable properties of $z>6$ AGN in cosmological simulations. In particular, we analyse a set of numerical simulations presented by \citet[][hereafter, \citetalias{Barai18}]{Barai18} with different kinetic feedback prescriptions, focusing on the most massive SMBH at $z=6$ and its surrounding environment. We extract multiwavelength observables such as column density and radial extent of the gas distributed in the host galaxies, UV and X-ray AGN fluxes, and number of satellite AGN detectable over small (i.e., a few kpc) distances from the central SMBH. We compare these properties with results from multiwavelength observations.
The paper is structured as follows.
In \S~\ref{Method} we describe the numerical setup of the simulations, the AGN selection, and the method used to measure the gas column density and distribution. In \S~\ref{NH_distro} we discuss the redshift evolution of the column densities for the considered AGN. In \S~\ref{comparison_obs} we present the observable properties of the simulated AGN and their host galaxies, and we compare them with empirical findings. In \S~\ref{environment} we investigate the presence of multiple AGN systems over scales of a few kpc, and we compare their detectability rates in the X-ray band with results from observations of high-redshift AGN. Finally, in \S~\ref{discussion} we discuss and interpret the results, and in \S~\ref{conclusions} we provide a summary.
All quoted distances are physical unless otherwise noted.
We adopt a flat $\Lambda$CDM cosmology with $H_0=67.7\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$ and $\Omega_m=0.307$ \citep{Planck16}.
\section{Method}\label{Method}
\subsection{Numerical model} \label{Numerical_methods}
We consider the simulation runs \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace by \citetalias{Barai18}, which include kinetic feedback. We provide here a summary of the numerical setup and refer to the original works for an in-depth discussion.
{\citetalias{Barai18}} used a modified version of the Smooth Particle Hydrodynamics (SPH) N-body code \code{GADGET-3} \citep{Springel05} to follow the evolution of a comoving volume of ($500$ Mpc)$^3$, starting from cosmological initial conditions generated with \code{music} \citep{hahn11} at $z=100$, and zooming in on the most massive (i.e., $4\times10^{12}\,\mathrm{M_\odot}$) dark matter (DM) halo, corresponding to a $\approx3\sigma$ overdensity \citep[e.g.,][]{Barkana01}, inside the box down to $z=6$. Therefore, the final zoomed-in simulations focus by construction on a highly biased cubic region, with a volume of (5.21 Mpc)$^3$. The highest level of the simulation has a mass resolution of $m_{\rm DM} = 7.54 \times 10^6$ ${\rm M}_{\odot}$ and $m_{\rm gas} = 1.41 \times 10^6$ ${\rm M}_{\odot}$ for DM and gas particles, respectively. The softening length for gravitational forces for these high-resolution DM and gas particles is $R_{\mathrm{soft}} = 1 h^{-1}$ ckpc.
The code accounts for gas heating and cooling (including metal-line cooling) depending on the gas metal content, based on eleven element species (H, He, C, Ca, O, N, Ne, Mg, S, Si, Fe) that are tracked in the simulation \citep{Tornatore07}. Star formation in the inter-stellar medium (ISM) is implemented following the multiphase effective subresolution model by \citet{Springel03}, adopting a density threshold for star formation of $n_{SF} = 0.13 \ {\rm cm}^{-3}$.
The simulations include stellar winds, supernova feedback, and metal enrichment, and assume a \citet{Chabrier03} initial mass function in the mass range $0.1-100$ ${\rm M}_{\odot}$ \citep{Tornatore07,barai13,biffi16}.
When a DM halo that is not already hosting a black hole (BH) reaches a total mass of $M_{\rm h} \geq 10^9$ ${\rm M}_{\odot}$, a $M_{\rm BH} = 10^5$ ${\rm M}_{\odot}$ BH is seeded at its centre. BHs are treated as collisionless sink particles and are allowed to grow by accretion of the surrounding gas or by mergers with other BHs. Gas accretion onto BHs is modelled via the classical Bondi-Hoyle-Lyttleton accretion rate $\dot{M}_{\rm Bondi}$ \citep{Hoyle39, Bondi44, Bondi52}, capped at the Eddington rate $\dot{M}_{\rm Edd}$:
\begin{equation}
\dot{M}_{\rm BH} = {\rm min} (\dot{M}_{\rm Bondi}, \dot{M}_{\rm Edd}).
\end{equation}
Accreting BHs radiate away a fraction $\epsilon_{\rm r}$ of the accreted rest-mass energy, with a bolometric luminosity
\begin{equation}\label{eq:luminosity_bh}
L_{\rm bol} = \epsilon_{\rm r} \dot{M}_{\rm BH} c^2,
\end{equation}
where $c$ is the speed of light. \citetalias{Barai18} fixed the radiative efficiency to $\epsilon_{\rm r} = 0.1$, a fiducial value for radiatively efficient, geometrically thin, optically thick accretion disks around a Schwarzschild BH \citep{Shakura73}.
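As an illustrative evaluation of Eq.~\ref{eq:luminosity_bh} (not part of the original pipeline), an accretion rate of $\dot{M}_{\rm BH}=0.02\,\mathrm{M_\odot\,yr^{-1}}$ with $\epsilon_{\rm r} = 0.1$ corresponds to
\begin{equation*}
L_{\rm bol} \simeq 0.1 \times \frac{0.02 \times 1.99\times10^{33}\,\mathrm{g}}{3.16\times10^{7}\,\mathrm{s}} \times \left(3\times10^{10}\,\mathrm{cm\,s^{-1}}\right)^2 \approx 1\times10^{44}\,\mathrm{erg\,s^{-1}},
\end{equation*}
which is the luminosity threshold adopted below to define AGN (\S~\ref{selection}).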
A fraction $\epsilon_{\rm f} = 0.05$ of the total output energy is distributed to the surrounding gas in a kinetic form\footnote{We refer to \citetalias{Barai18} for details about the choice of the value for $\epsilon_{\rm f}$ and the numerical implementation of the kinetic feedback.}. In \textit{AGNcone}\xspace the kinetic energy is distributed along two cones with a half-opening angle of $45\degree$. The direction of the cone axis is chosen randomly for each BH at the seeding time, and is kept fixed throughout the simulation \citep{Barai18}, similarly to what is done in \cite{Zubovas16}. Instead, the AGN feedback in \textit{AGNsphere}\xspace pushes away the gas particles along random directions, thus mimicking a spherical geometry.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/MBH_z.png}
\caption{BH masses as a function of redshift for the \textit{AGNcone}\xspace (left) and \textit{AGNsphere}\xspace (right) runs. Only SMBHs accreting at $\dot{M}>0.02\,\mathrm{M_\odot\,yr^{-1}}$ are considered. The arrows mark the mergers between BHs. AGN considered in \S~\ref{NH_distro} and \S~\ref{comparison_obs} (i.e., those that reach $z=6$ with $M_{BH}>10^8\,\mathrm{M_{\odot}}$) are plotted as filled symbols.}
\label{fig:masses}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/Mdot_z.png}
\caption{Mass accretion rate as a function of redshift for the \textit{AGNcone}\xspace (left) and \textit{AGNsphere}\xspace (right) runs. The corresponding bolometric luminosity (Eq.~\ref{eq:luminosity_bh}) is reported on the right axis. }
\label{fig:mdot}
\end{figure*}
\subsection{AGN selection}\label{selection}
We analyse the simulation snapshots in steps of $\Delta z =0.2$ from $z=10$ to $z=8$ and $\Delta z =0.1$ from $z=8$ to $z=6$. In particular, we follow the most massive SMBH at $z=6$ in each simulation set, and consider a box with a side length of 60 kpc centred on it. We refer to all of the SMBHs in the box accreting at $\dot{M}_{\rm BH}>0.02\,\mathrm{M_\odot\,yr^{-1}}$ (i.e., $L_{bol}\approx10^{44}\mathrm{~erg ~s^{-1}}$) as AGN. Fig.~\ref{fig:masses} presents the BH mass evolution of AGN in the two simulations. Each AGN is labelled with the initial letter of the run (C for \textit{AGNcone}\xspace, S for \textit{AGNsphere}\xspace).
\textit{AGNcone}\xspace forms two very massive ($>10^9\,M_\odot$) BHs at $z<7$, while only less massive BHs are formed in the \textit{AGNsphere}\xspace run. This behaviour is linked to the implementation of the feedback: \textit{AGNcone}\xspace allows the gas to accrete continuously along the equatorial directions, while the lack of a preferential direction along which the outflow is launched in \textit{AGNsphere}\xspace does not allow for a steady and efficient accretion onto the SMBH. This effect can be appreciated in Fig.~\ref{fig:mdot}: the accretion rate of \textit{AGNcone}\xspace is generally higher than that of \textit{AGNsphere}\xspace, at least up to $\dot{M}\approx1-30\,\mathrm{M_\odot\,yr^{-1}}$. At higher accretion rates, which are reached by the most accreting BHs at $z<7$, AGN feedback prevents further increase of the accretion rate.
Hereafter, we focus our analysis on the AGN that reach $z=6$ with $M_{BH}>10^8\,\mathrm{M_\odot}$ and $L_{bol}>10^{46}\,\mathrm{erg\,s^{-1}}$ (see filled symbols in Fig.~\ref{fig:masses} and Fig.~\ref{fig:mdot}), which we refer to as ``bright AGN" (i.e., C1, C2, and C3 in \textit{AGNcone}\xspace; S1 and S2 in \textit{AGNsphere}\xspace).
These BH mass and luminosity values are typical of known $z>6$ QSOs \citep[e.g., ][]{Yang21}, allowing us to compare the physical properties of simulated and observed AGN in a consistent way. We note that, since the simulations focus on a single cosmic region at high redshift, the derived expectations on the AGN observable properties might be affected by cosmic variance.
\subsection{Gas column density and radial distribution}\label{NH}
Here we describe the method that we use to derive the distribution of hydrogen, helium, and metal column densities in the ISM for galaxies hosting AGN in the considered simulations. We make use of the hydrogen column density in the remainder of the paper to derive the observational properties predicted by the two considered simulations.
We estimate the distribution of the column densities for the bright AGN in the simulations by launching 1000 randomly selected lines of sight (LOSs) toward each AGN from a distance $d= 30$ kpc. Each LOS is considered as the axis of a cylinder with a base radius equal to $R_{\mathrm{soft}}$. We note that the resolutions of the simulations do not allow us to probe structures on smaller scales, such as a dusty torus on pc scales. Then, each cylinder is divided along its length into bins of $l_{\mathrm{bin}}=0.25$ kpc width, for a total of $\frac{d}{l_\mathrm{bin}}=120$ radial bins. We compute the density of each chemical element in a bin of the cylinder from the mass carried by each particle included in that bin.
With this approach, we also obtain the radial distribution of the gas density.
Finally, we integrate along the cylinder to compute the total column density of hydrogen ($N_H$) and of the other elements. The resulting total $N_H$ is not sensitive to reasonably different values of $l_{\mathrm{bin}}$ (i.e., from 0.25 kpc to 1 kpc). Therefore, we use $l_{\mathrm{bin}}=0.25$ kpc, as this value allows us to sample well the radial distribution of the gas (see \S~\ref{radial_distro}).
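The following minimal Python sketch illustrates the cylinder-binning scheme described above for a single LOS; the particle arrays, the default cylinder radius, and the unit constants are illustrative placeholders rather than the actual analysis code, which operates on the SPH particle data of the simulations.
\begin{verbatim}
import numpy as np

def column_density_los(pos, mass_H, los_dir, r_cyl=1.0, d=30.0, l_bin=0.25):
    """Schematic N_H estimate along one line of sight (units: kpc, Msun).

    pos     : (N, 3) particle positions relative to the AGN [kpc]
    mass_H  : (N,)   hydrogen mass carried by each particle [Msun]
    los_dir : (3,)   unit vector of the line of sight
    r_cyl   : cylinder base radius, to be set to the softening length [kpc]
    """
    M_SUN_G = 1.989e33   # g
    M_P_G = 1.673e-24    # proton mass [g]
    KPC_CM = 3.086e21    # cm

    s = pos @ los_dir                                   # distance along the axis
    r_perp = np.linalg.norm(pos - np.outer(s, los_dir), axis=1)
    inside = (s > 0) & (s < d) & (r_perp < r_cyl)       # particles in the cylinder

    n_bins = int(d / l_bin)                             # 120 bins for d=30, l_bin=0.25
    mass_per_bin, _ = np.histogram(s[inside], bins=n_bins, range=(0.0, d),
                                   weights=mass_H[inside])

    area_cm2 = np.pi * (r_cyl * KPC_CM) ** 2
    nh_bins = mass_per_bin * M_SUN_G / M_P_G / area_cm2  # N_H per bin [cm^-2]
    return nh_bins, nh_bins.sum()
\end{verbatim}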
Fig.~\ref{fig:Mollweide} (upper panel) presents an example of the derived column-density map centred on the QSO C1 in \textit{AGNcone}\xspace. Each circle represents one of the 1000 random LOSs, which sample homogeneously the entire solid angle as seen from C1.
To assess the effect of feedback on the column density (\S~\ref{NH_distro}), we also consider an additional simulation run presented in \cite{Barai18}, which is identical in terms of initial conditions and physical prescriptions to \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace, except that BHs are not seeded. The only type of feedback in this run, which we refer to as \textit{noAGN}\xspace, is due to supernova explosions (see \citealt{Barai18} for a detailed discussion).
We associate each AGN in a simulation with the corresponding galaxy in the \textit{noAGN}\xspace runs following a method similar to that described in Zana et al. (accepted): first, we identify the DM halo hosting the AGN as the one having its centre of mass closest to the position of the SMBH. Then, we identify the corresponding halo in the \textit{noAGN}\xspace run by cross-matching the DM particle IDs in the two runs, and selecting the halo in \textit{noAGN}\xspace which shares the largest fraction of particles with the initial AGN halo, further imposing that the mass difference be within 10--50\%.\footnote{The exact threshold is adjusted at each time step in order to find at least one halo counterpart.} Finally, we repeat the procedure described above on the selected halo in \textit{noAGN}\xspace, and derive the column density distribution in the absence of AGN feedback. At $z>8$, the redshifts at which the \textit{noAGN}\xspace snapshots are taken are significantly different from those of the runs including AGN, making the DM-halo matching procedure highly uncertain. Thus, we limit the identification of the counterparts of the AGN-hosting galaxies in the \textit{noAGN}\xspace run to $z<8$.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/NH_Mollweide.png}
\includegraphics[width=1\textwidth ]{figures/vr_Mollweide.png}
\caption{\textit{Upper panel}: Mollweide projection of the column density along 1000 random lines of sight centred on the QSO C1 at $z=7.1$. \textit{Lower panel}: Mollweide projection of the radial velocities of all particles within 10 kpc from C1 at $z=7.1$. The different sampling of the maps is intended to show the homogeneity of the 1000 LOSs used to compute $N_H$ in the upper panel, and the velocity of the individual gas particles in the lower panel. The map is aligned with the outflow cone direction. Regions where the particles have high positive velocities correspond to the two cones along which the kinetic energy is distributed by the AGN feedback in the \textit{AGNcone}\xspace simulation. Such cones are characterised by the lowest values of column densities.}
\label{fig:Mollweide}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/NH_z_analysis.png}
\caption{Evolution of column density for bright AGN in the \textit{AGNcone}\xspace (C1, C2, C3) and \textit{AGNsphere}\xspace (S1, S2) simulations. We show the median value (solid line, color coded according to the AGN bolometric luminosity and accretion rate), and the 10\% and 90\% percentiles (dashed lines) computed by launching 1000 lines of sight. The gray stripes enclose the 10\% to 90\% percentiles of the column densities of matched galaxies in the same simulation sets where, however, BHs have not been seeded (i.e., the \textit{noAGN}\xspace case). To compare with observational results (\S~\ref{Xray_obsc}), the red arrows mark the 3$\sigma$ upper limits derived for X-ray detected QSOs with $>10$ counts from \citet{Nanni18} and \citet{Connor19}.}
\label{fig:NH_z_all}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/fLOS_z_analysis.png}
\caption{Fraction of lines of sight with column densities $<10^{22}\,\mathrm{cm^{-2}}$ (solid lines) and $<10^{23}\,\mathrm{cm^{-2}}$ (dashed lines) as a function of redshift for the bright AGN in the \textit{AGNcone}\xspace (C1, C2, C3) and \textit{AGNsphere}\xspace (S1, S2) simulations. The symbols are color coded according to the AGN bolometric luminosity and accretion rate.}
\label{fig:fLOS_all}
\end{figure*}
\section{Column density evolution}\label{NH_distro}
Fig.~\ref{fig:NH_z_all} presents the evolution of the column density for bright AGN in the \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace simulations.
Considering the \textit{AGNcone}\xspace simulation, the AGN column densities are similar to, or slightly lower than, those derived for the corresponding galaxies in the \textit{noAGN}\xspace run until the AGN accretion rate reaches $\dot{M}\approx10-30\,\mathrm{M_\odot\,yr^{-1}}$. This happens at $z\approx7$ for C1 and C2, and $z\approx6.3$ for C3 (see Fig.~\ref{fig:mdot}). At later times, the AGN column density drops significantly by up to $\approx1$ dex and the accretion rate starts to oscillate. The 10\% and 90\% percentiles span up to one order of magnitude, especially at $z=6-7$, when the accretion rates reach the maximum values, producing the most powerful conical outflows.
Instead, the column densities of the corresponding galaxies in \textit{noAGN}\xspace (grey stripes in Fig.~\ref{fig:NH_z_all}) keep on increasing relatively smoothly. This comparison allows us to ascribe the AGN $N_H$ drop and the appearance of unobscured LOSs to the effect of the conical kinetic feedback. At low accretion rates the produced outflow cannot stop the infall of material, but once the accretion rate reaches high enough values, the energy carried by the outflow impacts a significant part of the gas in the halo, hindering further infall, especially along the conical outflow directions. As a result, the $N_H$ decreases, as well as the AGN accretion rate, until more material is allowed to accrete, producing a new burst of powerful feedback. Such a cyclic activity explains the decreasing median $N_H$, the wider $N_H$ distribution, and the oscillating $\dot{M}$ behaviour at later cosmic times.
This result is in qualitative agreement with the self-regulation scenario discussed by, e.g., \cite{Sijacki09, Dubois13, Costa14, Feng14,Richardson16, Trebitsch19}, according to which the AGN feedback controls the growth of the black hole and limits the duration of high accretion episodes by emptying the host galaxy gas reservoir, provided that the accretion rate is sufficiently high.
However, we note that the physical interpretation of our results is complicated by the effect that one AGN may have on other AGN-hosting galaxies passing through its feedback cone. In fact, C1, C2, and C3 in \textit{AGNcone}\xspace at $z<7$ are always closer than 30 kpc, and reach minimum distances as small as 4 kpc. At these distances, powerful outflows launched from one AGN may affect nearby galaxies (e.g., Zana et al. accepted).
As an example of the feedback effect on the column density, in Fig.~\ref{fig:Mollweide} we compare the $N_H$ map centred on C1 with the radial velocity map of all particles within 10 kpc from C1. The maps correspond to $z=7.1$, when C1 reaches a local maximum in accretion rate before the strong AGN feedback starts to impact significantly the $N_H$ (Fig.~\ref{fig:NH_z_all}) and $\dot{M}$ starts to oscillate. Comparing the column density map (upper panel) with the map of the radial velocity of individual particles (lower panel), we notice that the two conical outflows, identified as regions with positive radial velocities, correspond to LOSs with low column densities. Such LOSs are those along which high-redshift AGN are most easily detected in the rest-frame UV band, as we investigate in detail in \S~\ref{UV}.
Fig.~\ref{fig:fLOS_all} presents the fraction of LOSs along which $N_H<10^{22}\,\mathrm{ ~cm^{-2}}$ (solid lines) and $N_H<10^{23}\,\mathrm{ ~cm^{-2}}$ (dashed lines) for each bright AGN.
Hereafter, we use the widely used threshold $N_H=10^{22}\,\mathrm{ ~cm^{-2}}$ to separate obscured and unobscured AGN.\footnote{ However, we note that we consider the dust extinction as a more relevant quantity when we study the AGN rest-frame UV emission in \S~\ref{UV}.} For instance, \cite{Merloni14} found that such a value returns the best agreement between samples of obscured AGN as defined in optical (e.g., narrow emission-line AGN) and X-ray bands.
From Fig.~\ref{fig:fLOS_all} we infer that only at $z\lesssim7$ a fraction of the LOSs would appear as unobscured. In particular, at $z\lesssim7$ C1 presents unobscured LOSs over $10-40\%$ of the solid angle, while this fraction is much more variable with redshift (i.e., $0-80\%$) for C2 and C3.
The most massive BH in the \textit{AGNsphere}\xspace simulation, S1, follows a somewhat similar $N_H$ evolution to that of the AGN in \textit{AGNcone}\xspace: a roughly constant median $N_H$ value up to $z\approx7$ followed by a slightly decreasing and wider $N_H$ distribution (Fig.~\ref{fig:NH_z_all}), and the appearance of unobscured LOSs (Fig.~\ref{fig:fLOS_all}) at later cosmic times. However, some differences exist: first, the AGN $N_H$ is always significantly lower than that of the corresponding galaxy in the \textit{noAGN}\xspace run (grey stripe), even at $z>7$. Secondly, the column density drop at $z<7$ is not as strong as in the \textit{AGNcone}\xspace case. Finally, the accretion rate of S1 at $z>7$ is not as smooth as in the \textit{AGNcone}\xspace case, and it keeps on increasing even at $z<7$.
These differences may be due to the prescribed geometry of the kinetic feedback in the \textit{AGNsphere}\xspace case, in which gas particles are accelerated in a random direction during every accretion event. Therefore, in contrast with the \textit{AGNcone}\xspace case, there is no preferential direction (i.e., the equatorial plane of the conical outflow) along which material can keep on accreting undisturbed for long periods of time at $z>7$. In particular, the accretion rate of S1 never exceeds $\approx10\,\mathrm{M_\odot\,yr^{-1}}$, which is the approximate threshold above which the AGN kinetic feedback affects more evidently the $N_H$ distribution and the accretion rate of AGN in the \textit{AGNcone}\xspace run.
The column density evolution of S2, instead, does not appear to be strongly influenced by the AGN feedback. Although the median $N_H$ is slightly lower than the values found in the \textit{noAGN}\xspace case, it remains constant with time, and does not drop even at $z<6.5$, when S2 reaches an accretion rate similar to that of S1. As a result, S2 would never appear as an unobscured AGN. We note that the typical column density of S2 is a factor $\approx3$ higher than that of S1 at any redshift, and its accretion rate rises smoothly from $z=7$ to $z=6$. These properties suggest that higher accretion rates than the values reached by S2 are required in order to launch outflows powerful enough to sweep away the gas in the case of large column densities (e.g., \citealt{Trebitsch19}), even when kinetic energy is distributed along random directions by the AGN feedback.
The median values of $N_H$ we derive from the \cite{Barai18} simulations are consistent with typical values found by \cite{Lupi22}. However, the resolution of that work is $\approx85$ times higher than that of our simulations, and allows the authors to sample compact regions of dense gas with $N_H\gtrsim10^{24}\,\mathrm{cm^{-2}}$, especially at $z>8$, when AGN feedback has not yet affected significantly the ISM distribution and density in the host galaxies. One of the main methodological differences with that work is that we compare the ISM densities in the same galaxies in which SMBHs are actively accreting or are not seeded at all. Thus, we probe directly the effect of AGN feedback on the ISM in the host galaxy.
\section{Comparison with observations}\label{comparison_obs}
In this section, we compare the observable properties derived from the $N_H$ distributions of the AGN predicted by the simulations (\S~\ref{NH_distro}) with observational results. In particular, we focus
on the comparison with constraints from X-ray observations (\S~\ref{Xray_obsc}),
the radial distribution of the gas reservoirs (\S~\ref{radial_distro}), and the observed UV magnitudes (\S~\ref{UV}).
\subsection{X-ray obscuration}\label{Xray_obsc}
X-ray observations are routinely used to constrain the column density of obscuring material along the LOSs of AGN.
Low and moderate values of column densities (\mbox{$N_H\lesssim10^{22}\,\mathrm{ ~cm^{-2}}$}) can absorb soft X-ray photons (rest-frame energies $\lesssim2$ keV), whereas larger column densities are required to absorb a high fraction of more energetic photons.
However, X-ray observations of high-redshift QSOs \citep[e.g.][]{Vito19b,Wang21a} sample rest-frame energies $E>3$ keV, and are thus sensitive only to high column densities ($N_H\gtrsim 3\times10^{23}\,\mathrm{ ~cm^{-2}}$), at least at the sensitivities of currently available facilities. Moreover, all of the known $z>6$ QSOs have been selected based on their unobscured rest-frame UV emission (i.e., they are optically classified as type 1 QSOs), and thus are not expected to be heavily obscured in the X-ray band. For these reasons, existing X-ray observations of bright $z>6$ QSOs provide us with only loose upper limits of $N_H$. The downward-pointing red arrows in Fig.~\ref{fig:NH_z_all} are the observed upper limits on $N_H$ derived for a sample of $z>6$ QSOs by \citet{Nanni17} and \citet{Connor19}, with typical luminosities $L_{bol}=10^{46}-10^{47}\,\mathrm{erg\,s^{-1}}$. The column densities derived for bright AGN in all of the considered simulations are lower than, or consistent with, such loose upper limits. Although the $N_H$ values found for the \textit{noAGN}\xspace case are typically higher, they are still consistent with some measured upper limits. Therefore, the constraints on $N_H$ obtained from X-ray observations of high-redshift QSOs only marginally favour the presence of kinetic feedback.
We note that constraining AGN obscuration using X-ray observations requires an assumption on gas metallicity, as X-ray photons are mainly absorbed by metal atoms. Typically, solar metallicity is assumed, whereas the ISM metallicity of the host galaxies of the AGN in the \citetalias{Barai18} simulations is sub-solar (e.g., by factors of $\approx2-3$ at $z=6$; e.g., Zana et al. in prep.). This consideration reinforces the overall consistency between the $N_H$ values constrained from X-ray observations and found in the simulations, as significantly larger column densities would be required in the case of sub-solar metallicities to produce X-ray obscuration in excess of that observed in real QSOs. In \S~\ref{environment} we discuss the X-ray detectability of the QSOs in the simulations.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/NH_radii_analysis.png}
\caption{Median radius ($R_{90}$, computed over all of the lines of sight) containing $90\%$ of the total gas as a function of redshift. The color code indicates the median total $N_H$, averaged over all of the lines of sight. The dashed grey lines mark the same quantity computed for matched galaxies in the same simulation sets where, however, BHs have not been seeded (i.e., the \textit{noAGN}\xspace case). The black ticks mark $R_{90}$ for 25 $z>6$ QSOs, as estimated from the [C II] emission beam-deconvolved sizes presented by \citet{Venemans20}.}
\label{fig:NH_radius}
\end{figure*}
\subsection{Gas radial distribution}\label{radial_distro}
We investigate the effect of kinetic feedback on the observable sizes of the gas reservoirs in high-redshift QSOs. From the radial distribution of $N_H$ derived
for each LOS in \S~\ref{Method}, we computed the radius from the centre of the galaxy which includes $90\%$ of the gas contributing to the total $N_H$. Then, for each galaxy, we computed the median value considering all of the 1000 LOSs, and define it as $R_{90}$. We use this quantity to characterise the size of the gas reservoir in a galaxy.
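As a sketch of this step (with illustrative variable names), $R_{90}$ for one LOS can be obtained directly from the binned radial $N_H$ profile computed in \S~\ref{NH}:
\begin{verbatim}
import numpy as np

def r90_from_bins(nh_bins, l_bin=0.25):
    """Radius enclosing 90% of the total column density along one LOS [kpc]."""
    cum = np.cumsum(nh_bins) / nh_bins.sum()
    idx = np.searchsorted(cum, 0.9)       # first bin reaching the 90% level
    return (idx + 1) * l_bin              # outer edge of that bin

# the median over the 1000 LOSs of a galaxy gives the R_90 used in the text:
# r90 = np.median([r90_from_bins(b) for b in nh_bins_all_los])
\end{verbatim}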
Fig.~\ref{fig:NH_radius} presents $R_{90}$ as a function of redshift for every bright AGN in the \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace simulations, as well as for the matched galaxies in the \textit{noAGN}\xspace runs. All of the bright AGN in the \textit{AGNcone}\xspace simulation (C1, C2, C3) have a similar evolution of $R_{90}$: their gas reservoir sizes are constant ($\approx1$ kpc) at $z\gtrsim7$. At lower redshift, where $N_H$ decreases due to the strong effect of the kinetic feedback, which is proportional to $\dot{M}$ and $L_{bol}$ (see the color-code of the circles in Fig.~\ref{fig:NH_z_all} and Fig.~\ref{fig:NH_radius}), $R_{90}$ increases up to several kpc. This behaviour is expected considering that the AGN feedback applies a mechanical push to the surrounding gas particles. In fact, the size of the gas reservoir in the \textit{noAGN}\xspace run, where AGN feedback is absent (grey dashed lines in Fig.~\ref{fig:NH_radius}), remains constant or even tends to decrease at later cosmic times.
The evolution of $R_{90}$ for S1 in the \textit{AGNsphere}\xspace simulation is similar to that of the AGN in the \textit{AGNcone}\xspace simulation. However, the increase of $R_{90}$ is stronger and begins at earlier cosmic times. We recall that the accretion rate of S1 is typically lower than that of the AGN in \textit{AGNcone}\xspace (see Fig.~\ref{fig:mdot}), and therefore the stronger evolution of $R_{90}$ is not due to intrinsically stronger outflows launched by the AGN, but, as discussed in \S~\ref{NH_distro}, to the different geometry of the outflow: being launched along random directions at every accretion event, the outflow is more likely to transmit the kinetic energy to the gas particles in the galaxy even at low or moderate accretion rates. Instead, S2 does not follow the same evolution as S1. On the contrary, $R_{90}$ decreases to sub-kpc values approaching $z=6$. As discussed in \S~\ref{NH_distro}, we ascribe this behaviour to the relatively low accretion rate, which does not produce feedback strong enough to efficiently affect the gas distribution in the host galaxy.
We compare our findings with the observed extent of the [C II] emission of 25 $z>6$ QSOs presented by \cite{Venemans20}, assuming that the [C II] emission line is a good tracer of the spatial extent of the total gas reservoir \citep[e.g.,][]{Zanella18, Sommovigo21}. We used the major axis of the deconvolved [C II] emission size (Tab. 3 of \citealt{Venemans20}), which represents the FWHM of the emitting source, and converted it into the radius that includes 90\% of the [C II] light, assuming a Gaussian distribution.\footnote{We note that the conclusions hold if we use an exponential profile (e.g., \citealt{Fujimoto20}) and convert the FWHM values reported by \cite{Venemans20} into exponential scale lengths. In this case, we obtain larger radii than in the Gaussian case by a factor of $\approx1.75$.} The resulting values are reported in
Fig.~\ref{fig:NH_radius} as black ticks at the redshift of each QSO.
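For reference, assuming a circular Gaussian surface-brightness profile, the fraction of light enclosed within a radius $r$ is $1-e^{-r^2/2\sigma^2}$, which gives
\begin{equation*}
R_{90} = \sigma\sqrt{2\ln 10} = \frac{\sqrt{2\ln 10}}{2\sqrt{2\ln 2}}\,\mathrm{FWHM} \approx 0.91\,\mathrm{FWHM};
\end{equation*}
this is one explicit form of the FWHM-to-$R_{90}$ conversion described above.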
The AGN in the \textit{AGNcone}\xspace simulation have $R_{90}$ consistent with the observed values, while the median gas size of S1 is larger at nearly every redshift. S2 has a size consistent with the most compact QSOs in the \cite{Venemans20} sample. However, this comparison is not fair: the ISM in S2 produces very large column densities at all redshifts and all LOSs (Fig.~\ref{fig:NH_z_all}), and thus large expected values of dust extinction. All of the QSOs studied in \cite{Venemans20} are instead rest-frame UV selected objects: we lack observational information about the extent of the gas reservoirs of buried high-redshift QSOs, such as S2. In all cases, the median gas sizes of the \textit{noAGN}\xspace control galaxies are smaller than the observed values for QSOs, suggesting that kinetic feedback is required to produce the gas extents observed in real QSOs.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/m1450_z_stripe_analysis.png}
\caption{ Apparent magnitude at rest-frame $\lambda = 1450\,\text{\normalfont\AA}$ as a function of redshift for bright AGN in the \textit{AGNcone}\xspace (C1, C2, C3) and \textit{AGNsphere}\xspace (S1, S2) simulations. The purple regions encompass the 50\% least extincted LOSs, while the grey hatched regions represent the 50\% most extincted LOSs. The grey circles are $z>6$ QSOs collected from \citet{Banados16, Banados18a}, \citet{Chehade18}, \citet{Matsuoka18a,Matsuoka18b, Matsuoka19a,Matsuoka19b}, \citet{Mazzucchelli17b}, \citet{Reed17}, \citet{Tang17}, \citet{Wang17, Wang18a,Wang18b, Wang19, Wang21b}, and \citet{Yang19,Yang20}.}
\label{fig:m1450_z}
\end{figure*}
\subsection{UV magnitudes}\label{UV}
In \S~\ref{Xray_obsc} we discussed how the available X-ray observations of $z>6$ QSOs are not sensitive to the column density values that we derived for bright AGN in the simulations. Instead, the rest-frame UV emission of high-redshift AGN is expected to be severely affected by dust extinction even for low values of $N_H$. In this section, we compare the expected rest-frame UV magnitudes of bright AGN in the simulations with the observed values of known $z>6$ QSOs.
We assumed that the intrinsic (i.e., unextincted) rest-frame UV spectra of the AGN-hosting galaxies in the simulations are dominated by the AGN (i.e., we do not include stellar emission) and are well represented by the \cite{VandenBerk01} composite spectrum of type 1 QSOs, rescaled to their bolometric luminosity via the bolometric correction of \cite{Venemans16} and \cite{Decarli18}:
\begin{equation}
\mathrm{log}\left(\frac{L_{bol}}{\mathrm{erg\,s^{-1}}}\right)=4.553+0.911\times \mathrm{log}\left(\frac{\lambda L_{\lambda}(1450\text{\normalfont\AA})}{\mathrm{erg\,s^{-1}}}\right).
\end{equation}
We assumed a simple uniform slab of dust located in front of each AGN and an SMC extinction curve, and computed the measured rest-frame UV flux as
\begin{equation}
F_\lambda^\mathrm{obs}=F_\lambda^\mathrm{intr}e^{-\tau_\lambda},
\end{equation}
where $\tau_\lambda=k_\lambda\Sigma_m f_{dust}$, $k_\lambda$ is the extinction cross section at wavelength $\lambda$, $\Sigma_m$ is the mass column density of metals, which we computed in \S~\ref{NH}, and the fraction of metal mass locked into dust is assumed to be $f_{dust}=0.15$ as in \cite{DiMascia21b}. Finally, we computed the apparent magnitude at the wavelength corresponding to rest-frame 1450 \text{\normalfont\AA}, that is $m_{1450}$.
For all considered AGN, the metal mass is computed from the column densities of metals derived in \S~\ref{NH} for 1000 LOSs at every simulation snapshot. Thus, we obtain a distribution of 1000 values of $m_{1450}$ at every redshift. In Fig.~\ref{fig:m1450_z} we show the magnitudes obtained for the 50\% least (purple regions) and most (grey hatched regions) extincted LOSs.
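A minimal sketch of this calculation is given below; the extinction cross section \texttt{k\_1450} is an illustrative placeholder value (the actual analysis adopts the SMC extinction curve), and stellar emission is neglected as stated above.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.7, Om0=0.307)   # cosmology adopted in this paper

def m1450_observed(L_bol, sigma_metals, z, k_1450=1.0e4, f_dust=0.15):
    """Apparent AB magnitude at rest-frame 1450 A for a dust-extincted AGN.

    L_bol        : bolometric luminosity [erg/s]
    sigma_metals : metal mass column density along the LOS [g/cm^2]
    k_1450       : extinction cross section per unit dust mass at 1450 A
                   [cm^2/g] -- illustrative placeholder value
    """
    # invert the bolometric correction given above: lambda*L_lambda(1450 A)
    lam_L_lam = 10 ** ((np.log10(L_bol) - 4.553) / 0.911)
    L_nu = lam_L_lam / (3e18 / 1450.0)                  # erg/s/Hz

    tau = k_1450 * sigma_metals * f_dust                # uniform-slab optical depth
    L_nu_obs = L_nu * np.exp(-tau)

    d_L = cosmo.luminosity_distance(z).to(u.cm).value
    f_nu = L_nu_obs * (1.0 + z) / (4.0 * np.pi * d_L**2)
    return -2.5 * np.log10(f_nu) - 48.6                 # AB magnitude
\end{verbatim}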
To allow for a comparison with observations, we add the magnitudes of a sample of $z>6$ QSOs collected from \cite{Banados16, Banados18a}, \cite{Chehade18}, \cite{Matsuoka18a,Matsuoka18b, Matsuoka19a,Matsuoka19b}, \cite{Mazzucchelli17b}, \cite{Reed17}, \cite{Tang17}, \cite{Wang17, Wang18a,Wang18b, Wang19, Wang21b}, \cite{Yang19,Yang20}, with typical magnitudes of $19\lesssim m_{1450}\lesssim 24$.
Among the considered simulations, \textit{AGNcone}\xspace produces the UV-brightest AGN, which are consistent with the magnitudes of known QSOs at $z\lesssim7$. As discussed in \S~\ref{NH_distro}, such a redshift range corresponds to the period when the strong AGN kinetic feedback affects the gas column density in the host galaxy, strongly suggesting that known, optically selected $z>6$ AGN are indeed observed preferentially along directions where AGN feedback has cleared the LOS of most of the gas and dust.
This prediction is hard to test observationally. Not only is estimating the outflow direction a difficult task, but the incidence of outflows in high-redshift AGN is itself still a matter of debate (e.g., \citealt{Maiolino12, Cicone15, Bischetti19, Novak20, Izumi21, Meyer22}). Moreover, $z>6$ QSOs might have been detected along LOSs which have been previously cleared of most of the gas and dust by past outflows.
In this respect, a caveat arises from the numerical implementation of the ISM properties in the \cite{Barai18} simulations, which, as described in \S~\ref{Numerical_methods}, follow the prescription of \cite{Springel03}. This model does not capture the ISM porosity and therefore is not able to resolve clumpy structures on $\sim$pc scales. Resolving such structures might decrease the effective opacity of the medium and possibly produce more unobscured lines of sight, even in the absence of AGN feedback.
In Fig.~\ref{fig:m1450_z}, only $<50\%$ of the LOSs of an individual AGN have extinction values small enough to reproduce the observed magnitudes.
We computed the probability that multiple AGN appear as UV bright (i.e., $m_{1450}\lesssim24$) sources along the same LOS, and found that it is negligible. This result is consistent with observations, according to which, to date, no such system of multiple UV-bright AGN has been discovered at high redshift.
The most luminous AGN in the \textit{AGNsphere}\xspace run, S1, reaches magnitudes as bright as the observed values only at $z\approx6.5$, while it fails to reproduce the magnitudes of $z>6.5$ QSOs. This is due to the lower accretion rate, and thus lower intrinsic luminosity, of S1 compared with the accretion rates of bright AGN in \textit{AGNcone}\xspace. The large column density of S2 results in dramatic extinction levels along all of the LOSs, such that S2 has an apparent magnitude consistent with those of observed high-redshift QSOs only along a small fraction of the LOSs, despite its intrinsic luminosity being similar to that of S1 at $z<6.5$ (Fig.~\ref{fig:mdot}).
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/Flux_z.png}
\caption{Expected X-ray flux in the 0.5--7 keV band as a function of redshift for the \textit{AGNcone}\xspace (\textit{left panel}), and \textit{AGNsphere}\xspace (\textit{right panel}) runs. For each AGN at each redshift, we assumed the median $N_H$ computed for 1000 lines of sight. The horizontal dotted lines mark the flux limit computed for \textit{Chandra}\xspace (50 ks observation), \textit{Athena}\xspace (10 ks), \textit{AXIS}\xspace (10 ks), and \textit{Lynx}\xspace (10 ks). }
\label{fig:Fx}
\end{figure*}
\section{Multiple high-redshift AGN on 1-10 arcsec scales}\label{environment}
Typical separations between AGN in the \cite{Barai18} simulations are $\approx5-50$ kpc, corresponding to only a few arcseconds in projection. To date, no multiple AGN system has been discovered observationally at $z>6$ (e.g., \citealt{Greiner21}), with the highest-redshift AGN pair being recently discovered at $z=5.7$ \citep{Yue21}. This result could be due to dust extinction preventing the detection of other possible accreting SMBHs close to high-redshift QSOs, as we found in our simulations (\S~\ref{UV}). Alternatively, QSOs observed at $z\gtrsim6$ intrinsically have no AGN satellites. The latter hypothesis implies that the simulations overpredict the number of bright AGN, due to, e.g., the specific numerical setup and seeding prescription. In addition, as discussed in \S~\ref{Numerical_methods}, the simulations focus on an overdense region, which maximizes the probability of forming multiple SMBHs, and thus bright AGN, in a small volume.
To investigate better the relation between the predicted and observed number of systems of multiple AGN at high redshift,
in \S~\ref{mock_Xray} we produce mock X-ray observations with the \textit{Chandra}\xspace X-ray observatory\footnote{\url{https://cxc.harvard.edu/}} based on the \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace simulations. Then, in \S~\ref{multiple_AGN} we compute the probability of detecting multiple AGN on small angular separations, and compare the findings with observational results. Finally, in \S~\ref{future_facilities} we investigate the potential of future X-ray facilities in detecting possible multiple faint AGN over small scales around bright high-redshift QSOs.
\subsection{Mock X-ray observations}\label{mock_Xray}
As discussed in \S~\ref{Xray_obsc}, the column densities that we derived in \S~\ref{NH_distro} for simulated $z>6$ AGN have a negligible effect on the X-ray emission at the observed-frame energies probed by X-ray telescopes, allowing us to factor out the effect of varying $N_H$ along different LOSs. However, we have to take into account another effect related to the specific choice of the LOS: the emission of different AGN might be blended along some LOSs due to projection effects, and appear as a single X-ray source. This effect might be important as the projected angular separations of the AGN in the considered simulations are comparable with the angular resolution of \textit{Chandra}\xspace (i.e., $\approx0.5^{\prime\prime}$), which is the existing X-ray observatory with the sharpest view.
We produce mock observations using the SOXS v. 3.0 software,\footnote{\url{https://hea-www.cfa.harvard.edu/soxs/}} using \textit{Chandra}\xspace response matrices and ancillary files suitable for Cycle 20. SOXS accounts for three background components: a uniform Galactic component, a cosmic background due to point-like sources, and an instrumental component. For each simulation, we produce two sets of mock images, assuming an exposure time of 30 ks or 50 ks, which are typical lengths of real \textit{Chandra}\xspace observations of $z>6$ QSOs \citep[e.g.,][]{Vito19a,Wang21a}.
For each set, we considered 100 random LOSs, along which all AGN have been projected on the sky plane according to their tri-dimensional positions in the simulations. This allows us to statistically take into account 1) the possible blending of multiple sources due to projection effects, and 2) the Poisson fluctuations of the number of detected X-ray photons at a given intrinsic flux.
We convert the bolometric luminosities of AGN in the simulations into X-ray luminosities in the rest-frame $2-10$ keV energy band using the \cite{Duras20} relation.
Then, we compute the fluxes in the 0.5-7 keV band (i.e., one of the standard energy bands used to analyse \textit{Chandra}\xspace observations) for every AGN, and use them as input
values to simulate the images. We adopt an intrinsic power-law emission with photon index $\Gamma=2$. This is a typical value for AGN up to $z\approx6.5$ \citep[e.g.][]{Nanni17,Vito19b}, although \cite{Vito19b} and \cite{Wang21a} find hints for a steepening at higher redshifts. We also include absorption due to the column density measured along the considered LOS (although, as discussed above, the resulting obscuration is negligible for our high-redshift objects) and a Galactic absorption component with $N_H=5\times10^{20}\mathrm{ ~cm^{-2}}$. These computations have been performed with XSPEC v.12.11 (\citealt{Arnaud96}; model $phabs\times zvphabs\times powerlaw$)\footnote{\url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/}}. Fig.~\ref{fig:Fx} presents the expected X-ray flux of every AGN in the simulations as a function of redshift.
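A sketch of the band conversion underlying Fig.~\ref{fig:Fx} is given below: for a power law with $\Gamma=2$, the rest-frame band sampled by the observed 0.5--7 keV band is simply rescaled by $(1+z)$, and absorption (negligible here, as discussed above) is omitted. The \cite{Duras20} bolometric correction, used to obtain the rest-frame $2-10$ keV luminosity, is not reproduced here.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.7, Om0=0.307)

def fx_05_7_obs(L_2_10, z, gamma=2.0):
    """Observed-frame 0.5-7 keV flux [erg/s/cm^2] of a power law with photon
    index gamma, given the rest-frame 2-10 keV luminosity L_2_10 [erg/s]."""
    d_L = cosmo.luminosity_distance(z).to(u.cm).value

    def band_lum(norm, e1, e2):
        # integral of E * norm * E**(-gamma) dE between e1 and e2 [keV]
        if np.isclose(gamma, 2.0):
            return norm * np.log(e2 / e1)
        return norm * (e2 ** (2.0 - gamma) - e1 ** (2.0 - gamma)) / (2.0 - gamma)

    norm = L_2_10 / band_lum(1.0, 2.0, 10.0)                # power-law normalisation
    L_band = band_lum(norm, 0.5 * (1 + z), 7.0 * (1 + z))   # rest band seen at 0.5-7 keV
    return L_band / (4.0 * np.pi * d_L ** 2)
\end{verbatim}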
\subsection{X-ray detection of multiple AGN}\label{multiple_AGN}
We ran a blind source detection procedure on the \textit{Chandra}\xspace mock observations in the 0.5-7 keV band using the \textit{wavdetect} tool in CIAO v.4.12\footnote{\url{https://cxc.harvard.edu/ciao4.12/}} \citep{Fruscione06}, with a significance threshold of $10^{-5}$, over an area corresponding to $<30$ kpc from the central QSO, to be consistent with the volume considered throughout this work (see \S~\ref{Method}). We repeated this procedure for all snapshots in the $z=6-7$ range, which includes most of the $z>6$ QSOs observed with \textit{Chandra}\xspace, thus allowing for a fair comparison with real observations.
Fig.~\ref{fig:Ndet} presents the number of AGN detected in the mock \textit{Chandra}\xspace observations with 30 ks and 50 ks exposures, averaged over the 100 LOSs, for each simulation. \textit{AGNcone}\xspace predicts an average of $\approx1$ detectable AGN already with relatively short exposures (30 ks) and multiple detected X-ray sources using slightly longer observations (50 ks) over all of the considered redshift range. Instead, according to the \textit{AGNsphere}\xspace run, 30 ks (50 ks) \textit{Chandra}\xspace observations of $z\gtrsim6.2$ ($z\gtrsim6.5$) should typically return no detected source, but the probability to detect one or more AGN increases quickly approaching $z=6$.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth ]{figures/Ndet_z_analysis.png}
\caption{Average number of X-ray sources, averaged over 100 LOSs, detectable in the two simulations within $<30$ kpc from the central AGN with 30 ks (left) and 50 ks (right) \textit{Chandra}\xspace observations. The black dashed line marks the average number of detected sources in real observations of $z>6$ QSOs. }
\label{fig:Ndet}
\end{figure}
\begin{table*}
\caption{Comparison sample of \textit{Chandra}\xspace observations of $z=6-7$ QSOs (see \S~\ref{multiple_AGN}).}
\begin{tabular}{cccccc}
\hline
\multicolumn{1}{c}{{ ID }} &
\multicolumn{1}{c}{{ z}} &
\multicolumn{1}{c}{{ Ref}} &
\multicolumn{1}{c}{{ ObsID}} &
\multicolumn{1}{c}{{ $t_{exp}$ [ks] }} &
\multicolumn{1}{c}{{ $N_{det}$}} \\
\multicolumn{1}{c}{{ (1) }} &
\multicolumn{1}{c}{{ (2)}} &
\multicolumn{1}{c}{{ (3)}} &
\multicolumn{1}{c}{{ (4)}} &
\multicolumn{1}{c}{{ (5)}} &
\multicolumn{1}{c}{{ (6)}} \\
\hline
\multicolumn{6}{c}{{ 20-40 ks sample}} \\
J002429.77+391319.0 & 6.621 & W21 & 20416 & 20 & 0 \\
J005006.67+344521.6 & 6.253 & V19 & 20393 & 34 & 1 \\
J022601.87+030259.4 & 6.541 & V19 & 20390 & 26 & 1 \\
J084229.43+121850.5 & 6.076 & V19 & 20392 & 29 & 0 \\
J104819.09-010940.2 & 6.676 & W21 & 20415 & 35 & 0 \\
J150941.78-174926.8 & 6.122 & V19 & 20391 & 27 & 1 \\
J152637.84-205000.7$^*$ & 6.586 & C20 & 22165 & 33 & 0\\
J163033.90+401209.7 & 6.065 & V19 & 5618 & 27 & 1 \\
\multicolumn{6}{c}{{ 40-80 ks sample}} \\
J010953.13-304726.3 & 6.791 & V19 & 20398,22214 & 66 & 0\\
J030516.92-315055.9 & 6.614 & V19 & 20394 & 50 & 0 \\
J103027.11+052455.1$^*$ & 6.308 & N17 & 19926 & 50 & 1 \\
J111033.98-132945.6$^*$ & 6.515 & V19 & 20397 & 54 & 0\\
J114816.65+525150.4 & 6.419 & G17 & 17127 & 78 & 1 \\
J164121.73+375520.2 & 6.047 & V19 & 20396,21961 & 54 & 1 \\
J203210.0-211402.3$^*$ &6.24& C19 & 20470 & 45 & 1 \\
J223255.14+293032.3 & 6.666 & V19 & 20395 & 54 & 1 \\
J234833.34-305410.0 & 6.902 & W21 & 20414 & 42 & 0 \\
\hline
\end{tabular} \\\label{tab:highz_QSOs}
(1) ID of targeted QSO; (2) redshift of targeted QSO; (3) reference for published X-ray data. C19: \cite{Connor19}. C20: \cite{Connor20}. G17: \cite{Gallerani17}. N17: \cite{Nanni17}. V19: \cite{Vito19b}. W21: \cite{Wang21a}. (4) \textit{Chandra}\xspace observation ID considered in this work; (5) Exposure time; (6) number of detected X-ray sources according to the procedure described in \S~\ref{multiple_AGN}. $^*$ These QSOs have been observed with multiple ObsIDs, resulting in longer total exposure times than those reported here. We only consider the reported ObsIDs to allow for a fair comparison with our 30 ks and 50 ks mock observations.
\end{table*}
In order to compare these results with real data, we collected all of the available \textit{Chandra}\xspace observations of $z=6-7$ QSOs with exposure times of 20-40 ks and 40-80 ks (Tab.~\ref{tab:highz_QSOs}). The median exposure time of the 20-40 ks (40-80 ks) observations is 38 ks (54 ks) and the median redshift of the targeted QSOs is $z=6.4$ ($z=6.5$). These values are well matched to our sets of 30 ks and 50 ks mock images, respectively. We repeated the detection procedure described above on the real \textit{Chandra}\xspace observations, considering only an area of $R<30$ kpc from the targeted QSO, to allow for a fair comparison with the mock image results. We stress that the blind detection procedure prevents any bias related to rest-frame UV pre-selection of possible X-ray sources.
The last column of Tab.~\ref{tab:highz_QSOs} reports the number of detected sources in the real observations,\footnote{We note that for almost all of the QSOs considered here, the results of the blind detection procedure agree with what is reported in the literature, except for J084229.43+121850.5. \cite{Vito19b} reported a detection of X-ray emission from this QSO, while here we report it as undetected. This apparent discrepancy is due to the different detection procedure (i.e., blind detection vs. rest-frame UV pre-selection of the target position) and significance threshold.} which are almost equally split between no detected source and one detected source (i.e., the targeted QSO): the average numbers of detected X-ray sources in one observation are 0.50 and 0.56 for the 20-40 ks and 40-80 ks samples, respectively. Similar values are obtained by splitting each sample according to its median redshift. Comparing these results with the expected numbers of detected sources in the simulations (Fig.~\ref{fig:Ndet}), we find that \textit{AGNcone}\xspace overestimates the number of detectable AGN at all redshifts, assuming both 30 ks and 50 ks exposure times. Instead, \textit{AGNsphere}\xspace underestimates such a number assuming 30 ks observations, while it shows a strong dependence on redshift for longer exposures: at $z>6.5$ and $z<6.5$ it underestimates and overestimates, respectively, the average number of detected X-ray sources.
Due to the small sample sizes of real QSO observations and the narrow range covered by the number of detectable X-ray sources, it is difficult to provide a quantitatively robust comparison with the predictions from simulations. Nonetheless, we attempt to do so by comparing the normalized histograms of detected sources in the mock and real observations over the entire $z=6-7$ range (Fig.~\ref{fig:Ndet_hist}). This is justified by the relatively flat redshift distribution of the QSOs targeted by real observations (Tab.~\ref{tab:highz_QSOs}). For each set of mock images, we computed the two-sample Anderson-Darling test.\footnote{We used the \textit{anderson\_ksamp} method of the SciPy package \citep{Scipy20}.} The null hypothesis is that the mock and real observations are drawn from the same parent population, as far as the number of detected X-ray sources is concerned.
We found that the null hypothesis can be rejected with high significance (i.e., Anderson-Darling test significance level $\lesssim0.001$) for almost all combinations of simulations and exposure times: Fig.~\ref{fig:Ndet_hist} confirms that \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace overestimate and underestimate, respectively, the number of detectable X-ray sources. The \textit{AGNsphere}\xspace mock observations with $t_{exp}=50$ ks are the only set for which the null hypothesis cannot be rejected, although this simulation is not consistent with real observations for $t_{exp}=30$ ks.
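As a usage sketch of the test quoted in the footnote above (the count arrays below are illustrative values, not the actual measurements):
\begin{verbatim}
import numpy as np
from scipy.stats import anderson_ksamp

# number of detected X-ray sources per image (illustrative values only;
# in practice the 100 mock LOSs and the per-QSO counts of Tab. 1 are used)
n_det_mock = np.array([2, 1, 3, 2, 2, 1, 3, 2])
n_det_real = np.array([0, 1, 1, 0, 0, 1, 0, 1])

res = anderson_ksamp([n_det_mock, n_det_real])
# SciPy caps the returned significance level to the [0.001, 0.25] range
print(res.statistic, res.significance_level)
\end{verbatim}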
It is worth noting that only a few $z>6$ QSOs have been targeted with long \textit{Chandra}\xspace exposures (100--500 ks; e.g. \citealt{Nanni18}, \citealt{Connor20}, \citealt{Vito21}). Some of these observations were performed to check for the presence of faint and possibly obscured AGN around $z>6$ QSOs, for which companion galaxies have been detected with ALMA and HST. However, to date, no solid detection of such satellite AGN has been obtained (\citealt{Vito19a,Vito21, Connor19,Connor20}).
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/Ndet_hist_analysis.png}
\caption{Normalized histograms of the number of detected X-ray sources in the mock and real \textit{Chandra}\xspace observations of $z=6-7$ AGN, for $t_{exp}=$ 30 ks (left) and 50 ks (right).}
\label{fig:Ndet_hist}
\end{figure*}
\subsection{Predictions for future X-ray facilities}\label{future_facilities}
The high sensitivities of future X-ray facilities will allow us to push the search for AGN satellites of luminous optically selected QSOs at $z>6$ down to intrinsic luminosities significantly lower than those probed with \textit{Chandra}\xspace. In Fig.~\ref{fig:Fx} we report as dotted grey lines the approximate expected sensitivity limits of future missions such as \textit{Athena}\xspace/WFI \citep{Nandra13}, \textit{AXIS}\xspace \citep{Mushotzky19, Marchesi20}, and \textit{Lynx}\xspace/HDXI \citep{Gaskin19}, each one computed assuming 10 ks exposure time, and compare them with the sensitivity of a 50 ks \textit{Chandra}\xspace observation. We computed these values by simulating X-ray observations of an X-ray source, assuming a simple power-law spectrum with photon index $\Gamma=2$ and varying flux. In particular, for each instrument, we loaded response matrices and background files\footnote{We use real response matrices and background files for \textit{Chandra}\xspace, and the preliminary files included in SOXS for \textit{Lynx}\xspace, \textit{AXIS}\xspace, and \textit{Athena}\xspace.} in XSPEC, and computed the expected source and background count rates in a region including $\approx90\%$ of the expected point spread function (PSF); i.e., $R=1^{\prime\prime}$ for \textit{Chandra}\xspace, \textit{AXIS}\xspace, and \textit{Lynx}\xspace, and $R=5^{\prime\prime}$ for \textit{Athena}\xspace. Then, we computed the flux that returns a binomial no-source detection probability \citep[i.e., $P_B$;][]{Weisskopf07} such that $(1-P_B)=0.997$, corresponding to $3\sigma$ in the Gaussian approximation.
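The detection threshold above can be evaluated with a short sketch of the binomial no-source probability of \cite{Weisskopf07}; the aperture counts and area ratio are illustrative inputs.
\begin{verbatim}
from scipy.stats import binom

def no_source_probability(src_counts, bkg_counts, area_ratio):
    """Binomial no-source probability P_B (Weisskopf et al. 2007).

    src_counts : total counts in the source aperture
    bkg_counts : counts in the background region
    area_ratio : background-to-source area (or exposure) ratio
    """
    n = src_counts + bkg_counts
    p = 1.0 / (1.0 + area_ratio)
    # probability of collecting >= src_counts in the aperture by chance
    return binom.sf(src_counts - 1, n, p)

# a source is considered detected if (1 - P_B) >= 0.997 (~3 sigma)
\end{verbatim}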
Fig.~\ref{fig:Fx} shows that all of the considered next-generation X-ray missions will provide us with a huge improvement in the capability of detecting faint AGN at $z>6$, including satellite AGN around bright QSOs at $z>6$, in a fraction of the time of a typical \textit{Chandra}\xspace observation. Fig.~\ref{fig:sim_image} presents simulated X-ray observations with \textit{Chandra}\xspace (50 ks), \textit{Lynx}\xspace (10 ks), \textit{AXIS}\xspace (10 ks), and \textit{Athena}\xspace (10 ks) of a representative snapshot (i.e., $z=6.5$) and LOS of the two simulation runs. The satellite AGN will appear as multiple X-ray sources on a few arcsec scales. This implies that, in addition to high sensitivity, excellent angular resolution, such as that provided by \textit{AXIS}\xspace and \textit{Lynx}\xspace, is required to detect them individually. To probe this issue, we performed a blind detection run with \textit{wavdetect} on these images, and compared the detected sources (black stars in Fig.~\ref{fig:sim_image}) with the input AGN (colored circles): the identification of close objects like C1 and C2 is difficult even with missions with $\approx0.5$ arcsec angular resolution. The problem is clearly more evident with \textit{Athena}\xspace, due to its PSF of a few arcsec.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/sim_Xray_image.png}
\caption{Simulated X-ray observations in the 0.5--7 keV band of the most-massive AGN at $z=6.5$ and the surrounding satellite AGN in the \textit{AGNcone}\xspace (upper row) and \textit{AGNsphere}\xspace (lower row) simulations. From the leftmost to the rightmost columns, we simulated observations with \textit{Chandra}\xspace/ACIS-S (50 ks), \textit{Lynx}\xspace/HDXI (10 ks), \textit{AXIS}\xspace (10 ks), and \textit{Athena}\xspace/WFI (10 ks). For presentation purposes, the angular scale of the \textit{Athena}\xspace image is different from the other cases, due to the larger PSF. The circles mark the location of the simulated AGN for a representative line of sight, and are color coded as in Fig.~\ref{fig:masses}. The black stars mark the position of X-ray detected sources obtained with a blind detection procedure.}
\label{fig:sim_image}
\end{figure*}
\section{Discussion}\label{discussion}
As discussed in \S~\ref{Numerical_methods}, the outflow directions in the considered simulations are assumed not to be physically related to the host-galaxy properties and to be time-independent.
In particular, the \textit{AGNcone}\xspace simulation does not assume the outflow to be perpendicular to the plane of the host galaxy, as suggested by several observations of kpc-scale outflows or radio jets in the local universe \citep[e.g.,][]{Garcia-Burillo14, Cresci15, Morganti15, Venturi21}, where the outflow geometry can be studied in detail, and by some numerical simulations \citep[e.g.,][]{Hopkins12}.
Several physical mechanisms can contribute to the acceleration of winds at sub-pc scales that eventually produce large-scale outflows, including magneto-hydrodynamic effects (e.g., \citealt{Sadowki13}), thermal driving (e.g., \citealt{Proga07}), and radiation-pressure acceleration, either applied on dust (e.g., \citealt{Ishibashi15}) or mediated by UV transitions \citep[e.g.][]{Proga04,mizumoto2021}, which might produce outflows with different geometries. Moreover, the outflow geometry might be affected by interactions with the surrounding environment as the outflow expands \citep[e.g.][]{Nelson19, talbot2021}, and might change with time. Cosmological simulations cannot describe in detail such a complex, and largely unknown, physics and evolution of outflows with relatively simple numerical recipes.
The goal of this paper is to investigate the effect of two particular large-scale outflow geometries (i.e., a spherical outflow and a bi-conical outflow parametrized as described in \S~\ref{Numerical_methods}) on the observable properties of high-redshift AGN, regardless of the sub-grid physical mechanisms responsible for their acceleration. Extensive numerical simulations with identical initial conditions and physics except for the outflow parameters would be required to check whether and how the results are sensitive to different choices of the outflow parameters.
Kinetic feedback produced during the phases of fast accretion of SMBHs in the \cite{Barai18} simulations has a significant impact on the surrounding material and is required to match the predicted observable properties of bright AGN with observational results. One of the strongest pieces of evidence is the extent of the gas in the AGN host galaxies (Fig.~\ref{fig:NH_radius}): the gas reservoirs in the \textit{noAGN}\xspace case (i.e., in the absence of AGN feedback) are always more compact than those derived from ALMA observations of $z>6$ QSOs (see also, e.g., \citealt{vanderVlugt19}). AGN feedback pushes the gas in the host galaxies to larger distances (i.e., up to a few kpc) from the centres, in agreement with observations (e.g., \citealt{Cicone15,Bischetti19, Venemans20, Izumi21}). Although other mechanisms related to AGN feedback may produce such an observational signature, for instance by preventing gas infall from large scales (e.g., \citealt{Trussler20}) or by causing fluctuations in the gravitational potential, which may lead to a radial migration of the material (e.g., \citealt{vanderVlugt19}), \cite{Barai18} found that the mechanical removal of gas from the inner region of the host galaxies is the main process that affects their gas content in their simulations.
We note that some $5<z<7$ star-forming ($1-70\,\mathrm{M_\odot\, yr^{-1}}$) galaxies have also been found to show both an extended [C II] halo \citep[e.g.,][]{Fujimoto20} and broad wings in the [C II] emission-line profile \citep[e.g.,][]{Gallerani18, Ginolfi20}, suggestive of outflows possibly powered by a yet undetected accreting MBH \citep[e.g.,][]{Orofino21}.
At $z<7$ the feedback produces a general decrease of the $N_H$ (Fig.~\ref{fig:NH_z_all}), allowing for the appearance of unobscured (i.e., $N_H<10^{22}\,\mathrm{ ~cm^{-2}}$) LOSs (Fig.~\ref{fig:fLOS_all}).
Such directions are most probably those along which known $z>6$ QSOs are preferentially observed, as the rest-frame UV selection of these objects requires low dust extinction. In fact, at $z\lesssim6.5$, when the feedback effect is the strongest, bright AGN in the \textit{AGNcone}\xspace simulation are able to reach the UV magnitudes observed for known $z>6$ QSOs (Fig.~\ref{fig:m1450_z}).
However, such LOSs represent only a fraction of the total LOSs of an AGN (see also, e.g., \citealt{Ni20, Trebitsch19, Lupi22}): more than half of the LOSs would appear too faint to be selected as high-redshift objects in current optical/near-IR surveys, suggesting that a large fraction of the high-redshift, intrinsically luminous QSO population is observationally missed due to strong UV extinction produced by the ISM only. The presence of a dusty torus on pc scales, which is not included in the simulations we have analysed, would further increase such a fraction.
The outflow geometry likely plays an important role: in the case of a conical outflow, SMBH accretion proceeds at maximum efficiency through equatorial infall of gas until $\dot{M}\approx10-30\,\mathrm{M_\odot\,yr^{-1}}$ (Fig.~\ref{fig:mdot}), producing BHs with masses of $>10^9\,\mathrm{M_\odot}$ at $z=6-7$ (Fig.~\ref{fig:masses}). At these accretion rates, the feedback regulates further accretion and reduces the typical obscuring column density, in particular along the cone direction (Fig.~\ref{fig:Mollweide}). In the case of outflows launched along random directions, the feedback can affect the growth of the SMBH and the $N_H$ distribution even at lower accretion rates, resulting in $<10^9\,\mathrm{M_\odot}$ BHs at $z=6$, provided that the gas in the host galaxy is not too dense, as in the case of S2. Thus, the ISM properties (i.e., $N_H$ and radial size of the gas) of the brightest AGN in the \textit{AGNsphere}\xspace run are in agreement with observations. However, by hindering the formation of $>10^{9}\,\mathrm{M_\odot}$ BHs, the spherical geometry of the feedback in \textit{AGNsphere}\xspace prevents AGN from reaching intrinsic luminosities comparable to known $z>6$ QSOs at most redshifts (Fig.~\ref{fig:m1450_z}).
Interestingly, even the most luminous AGN in \textit{AGNcone}\xspace cannot explain the detection of UV-bright QSOs at $z\approx7.5$ (Fig.~\ref{fig:m1450_z}), due to the combination of the relatively small BH masses (and hence low accretion rates, which are, by construction, capped at the Eddington rate) and the typically high $N_H$ at that early cosmic time in this simulation. The existence of bright QSOs at $z\approx7.5$ \citep[e.g.,][]{Banados18a,Wang21a} requires different physical conditions for the SMBH formation and mass growth from those adopted in the considered simulations.\footnote{ As mentioned in \S~\ref{selection}, cosmic variance may affect our conclusions, as the simulations focus on a single cosmic region at high redshift.} Future numerical simulations may explore such conditions as viable ways to reconcile the expected and observed properties of $z>7$ AGN. Non-mutually exclusive possibilities are:
\noindent (a) Different BH seeding mechanisms, that is, the bright and massive QSOs discovered at $z\approx7.5$ may have grown from more massive BH seeds or have been seeded at earlier redshifts than the SMBHs in the simulations.
\noindent (b) Sustained periods of super-Eddington accretion at $z>7.5$, whereas in the simulations the SMBH accretion rate is capped at the Eddington limit.
\noindent (c) Mass accretion characterized by a lower radiative efficiency than the value used in the simulations (i.e., $\epsilon_r=0.1$). In this case, the mass that is not converted into radiation contributes to the growth of the SMBH, which can reach higher masses than those found in the simulations at a given time. For instance, \cite{Davies19} report observational evidence for a possibly low radiative efficiency ($\epsilon_r\approx0.001$) in high-redshift QSOs.
\noindent (d) High-redshift AGN typically reside in regions which are even more overdense than that investigated in the \cite{Barai18} simulations, thus favouring the formation of SMBHs at earlier epochs. However, this possibility would arguably make the discrepancy between the observed and expected number of multiple X-ray detected AGN on small scales even worse. In addition, observational studies return contradictory results on the typical large-scale environment of high-redshift AGN \citep[e.g., ][]{Ota18,Mazzucchelli19, Mignoli20,Overzier21}.
The analysis that we have performed demonstrates that the comparison between several observable properties of AGN predicted by the \cite{Barai18} simulations and the observational results, including both the properties of the individual galaxies and the environment,
can help us to validate the recipes and assumptions adopted in numerical simulations. In particular, we found that AGN in the considered simulations match the gas radial distributions and apparent UV magnitudes of high-redshift QSOs. In addition, the same set of simulations has been demonstrated to reproduce well a number of physical properties of $z>6$ QSOs, such as dust properties \citep{DiMascia21b}, multi-wavelength spectral energy distribution \citep{DiMascia21a}, and the number of UV-detected and [C II]-detected satellite galaxies (Zana et al. accepted).
However, we also found that the predicted number of X-ray detectable satellite AGN located over small scales around luminous high-redshift QSOs both in the \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace simulations does not agree with the observational results.
This observable is relatively easy to estimate from simulations, as it depends primarily on the BH accretion rate, once a suitable conversion to X-ray luminosity is assumed. Moreover, gas and dust absorption does not significantly affect the observed X-ray emission from high-redshift AGN up to high column densities (log$\frac{N_\mathrm{H}}{\mathrm{cm^{-2}}}\approx23.5-24.0$), as opposed to UV emission (see \S~\ref{Xray_obsc} and \S~\ref{UV}).
The mismatch in the number of multiple X-ray detected AGN on small scales between simulations and observations may be related to numerical issues and physical prescriptions. In particular, the simplistic BH seeding recipe implemented in the considered simulations (i.e., a $10^5\,\mathrm{M_\odot}$ BH is placed in the centre of a galaxy when this reaches a given mass threshold) naturally leads to the formation of a large number of SMBHs, which would appear as bright AGN at later cosmic times. Similar seeding recipes have been commonly adopted by most cosmological simulations (e.g., \citealt{Costa14}, \citealt{DiMatteo17}, \citealt{Barai18}, \citealt{Smidt18}, \citealt{Lupi19}, \citealt{Valentini21}), and typically mimic the ``heavy seed" formation channel for SMBHs \citep[e.g.,][]{Lodato06, Ferrara14}. However, theoretical models of ``heavy seed" formation require stringent physical conditions on, e.g., metallicity, physical state of the gas, and radiation fields \citep[e.g.,][]{Ferrara14}. Accounting for such conditions in cosmological simulations is particularly difficult, but would reduce the number of formed SMBHs, and thus the discrepancy with observational results.
Another possibility is that observed QSOs at high redshift do not reside in regions as dense as those probed in the analysed simulations (but see, e.g., Zana et al. accepted). In this case, the formation of multiple SMBHs is expected to be hindered, helping us reconcile the expected number of X-ray sources with observational results. In addition, we would also expect to form less massive BHs, with direct consequences on the observational expectations discussed in this paper, as the BH mass is tightly linked with the maximum accretion rate, and thus AGN luminosity and feedback strength. Qualitatively, we would expect to derive fainter rest-frame UV and X-ray fluxes, weaker feedback, and, as a consequence (see Fig.~\ref{fig:NH_radius}), more compact gas reservoirs (i.e., similar to the \textit{noAGN}\xspace case) than the values discussed in \S~\ref{radial_distro}, \S~\ref{UV}, and \S~\ref{environment}.
Future X-ray facilities will provide us with the required sensitivity and angular resolution to investigate the presence of multiple faint AGN around bright high-redshift QSOs down to unprecedented flux limits (see \S~\ref{future_facilities}).
\section{Summary and conclusions} \label{conclusions}
We studied the observable properties of $z=6-10$ bright
AGN in a suite of zoom-in cosmological simulations by \cite{Barai18} characterized by the inclusion of AGN kinetic feedback with either a bi-conical (\textit{AGNcone}\xspace) or a spherical (\textit{AGNsphere}\xspace) outflow geometry. We focused our investigation on the gas column density and size in the host galaxies, the AGN rest-frame UV magnitudes and X-ray fluxes, and the detectability of systems of multiple AGN over scales of a few kpc in the X-ray band. We compared these quantities with a control simulation in which SMBHs are not seeded (i.e., \textit{noAGN}\xspace), and with observational results for $z>6$ AGN. We summarize our findings as follows.
\begin{itemize}
\item \textit{AGNcone}\xspace produces three bright AGN that grow to masses of $5\times10^8 < M_{\mathrm{BH}}<5\times10^9\,\mathrm{M_\odot}$ by $z=6$. These objects are characterized by a steady increase of their accretion rate up to $\approx10-30\,\mathrm{M_\odot\,yr^{-1}}$. Once such high values are reached (at $z\approx6.5-7$), the strong AGN feedback prevents further increase of the accretion rate. This behaviour is linked with the bi-conical geometry of the outflow, which allows steady infall of material along the equatorial directions, at least until the feedback grows strong enough to affect most of the gas in the galaxy halo.
In \textit{AGNsphere}\xspace, the spherical geometry of the outflow affects gas accretion already at low and moderate SMBH growth rates. For this reason, the two bright AGN produced in \textit{AGNsphere}\xspace reach lower BH masses (i.e., $2\times10^8 < M_{\mathrm{BH}}<5\times10^8\,\mathrm{M_\odot}$) and accretion rates ($\dot{M}<10\,\mathrm{M_\odot\,yr^{-1}}$) than objects in \textit{AGNcone}\xspace.
\item AGN host galaxies in \textit{AGNcone}\xspace have gas column densities of $N_H\approx10^{23}\,\mathrm{cm^{-2}}$ from their formation up to $z=6.5-7$, when $N_H$ presents a remarkable drop due to the strong AGN feedback. By contrast, the $N_H$ of matched galaxies in \textit{noAGN}\xspace continues to increase during the entire considered redshift range. The brightest AGN in \textit{AGNsphere}\xspace presents a similar behaviour to those in \textit{AGNcone}\xspace, although its $N_H$ is typically slightly lower. We interpret this difference again as due to the assumed spherical symmetry of the outflow. Instead, the second bright AGN in \textit{AGNsphere}\xspace does not reach accretion rates sufficiently high to significantly affect the gas in the host galaxy.
Our findings are consistent with the upper limits on $N_H$ recently reported for a set of $z>6$ AGN observed in the X-rays.
\item Kinetic feedback is required to match the gas extent reported for high-redshift QSOs (i.e., up to a few kpc). In fact, galaxies in \textit{noAGN}\xspace present typical gas sizes of $<1$ kpc, while the extents of the gas reservoirs of AGN in \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace increase up to the observed values of a few kpc at $z\lesssim7$. The exception is the second bright AGN in \textit{AGNsphere}\xspace, due to its relatively low accretion rate.
\item All AGN in the simulations would appear as obscured (i.e., $N_H>10^{22}\,\mathrm{cm^{-2}}$) along all lines of sight (LOSs) at $z>7$. These objects would be missed by currently employed UV-based selection methods, which are heavily affected by dust extinction, and would require observations in different bands (e.g., X-ray or infrared) to be unveiled. At later cosmic times, a fraction of LOSs (up to $\approx80\%$, depending on the specific AGN and redshift) have $N_H<10^{22}\,\mathrm{cm^{-2}}$. These are the preferential directions along which known, UV-selected $z>6$ QSOs are observed.
\item Under simple, but reasonable, assumptions on the gas-to-dust mass scaling and dust distribution, we estimate the apparent UV magnitudes ($m_{1450}$) of the AGN in the simulations along different LOSs. We found that AGN in \textit{AGNcone}\xspace have $m_{1450}$ consistent with those observed for real high-redshift QSOs (i.e., $m_{1450}<25$) along $\lesssim50\%$ of the LOSs at $z<7$. AGN in \textit{AGNsphere}\xspace, instead, have fainter magnitudes, due to the lower intrinsic luminosities, and, for the second AGN, the high extinction levels along most of the LOSs. No AGN in the simulations can reproduce the observed UV magnitudes of the few $z\approx7.5$ QSOs known to date, whose formation and accretion history are likely not well captured by the prescriptions assumed in the simulations.
\item The presence of multiple bright AGN over scales of a few kpc led us to investigate their detectability in X-ray observations with \textit{Chandra}\xspace, and to compare the results with real observations of $z>6$ QSOs. We found that the \textit{AGNcone}\xspace run significantly overpredicts the number of X-ray detected multiple AGN at high redshift. Instead, \textit{AGNsphere}\xspace produces AGN with a lower X-ray detection rate than the typical values derived from relatively shallow (i.e., $30$ ks) observations, while it is consistent with the results obtained from longer (i.e., $50$ ks) observations.
\end{itemize}
These results demonstrate that the AGN in the considered simulations have physical properties consistent with those of real QSOs as far as the column density and extent of the gas in the host galaxies and the UV magnitudes are concerned. A bi-conical geometry for the outflow is favored over a spherical geometry, as it produces AGN with the high luminosities and SMBH masses observed for $z=6-7$ QSOs. However, both simulations cannot explain the recent discovery of luminous QSOs at $z\approx7.5$, which may have formed at higher redshift than the assumed seeding time in the simulations, or may have undergone extended periods of super-Eddington accretion.
Moreover, we showed that the number of multiple AGN detectable in the X-ray band over scales of a few kpc is the observable property that the considered simulations struggle the most to reproduce. We propose that this issue may be due to the simplistic BH seeding methods generally implemented in cosmological simulations, which do not account for the complex physics related to the formation and rapid growth of massive BHs in the early Universe. Future X-ray observatories will provide us with the sensitivity required to investigate the possible presence of multiple faint AGN satellites around luminous QSOs at high redshift.
\section*{Acknowledgements}
We thank the anonymous referee for their valuable comments.
This research has made use of data obtained from the Chandra Data Archive and the Chandra Source Catalog, and software provided by the Chandra X-ray Center (CXC) in the application packages CIAO and Sherpa.
This research made use of Astropy,\footnote{\url{http://www.astropy.org}} a community-developed core Python package for Astronomy \citep{Astropy13, Astropy18}. SG acknowledges support from the PRIN-MIUR 2017 grant (PI Fabrizio Fiore).
\section*{Data Availability Statement}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section{Introduction}
The existence of super-massive black holes (SMBHs) with masses larger than a billion solar masses at $z\gtrsim6$ \citep[e.g.,][]{Banados18a,Matsuoka18a, Wang21b}, when the Universe was $<1$ Gyr old, challenges our current understanding of SMBH and galaxy formation and evolution, and is thus one of the most pressing open issues in modern astrophysics \citep[e.g.,][]{Woods19}.
Their distance and faintness make observations of these objects difficult and strongly biased towards the most luminous and massive accreting SMBHs. A complementary approach is to use numerical simulations as tools to study the largely unknown phases of SMBH growth in the early Universe \citep[e.g.,][]{Tanaka09,Sijacki09, Habouzit16, Habouzit19}.
However, the observed properties of high-redshift accreting SMBHs, or active galactic nuclei (AGN), and the predictions of numerical simulations have only seldom been compared \citep[e.g.,][Zana et al. accepted]{Ni20, Habouzit21, DiMascia21a}.
An important ingredient entering numerical simulations focused on the early growth of SMBHs is the effect of AGN feedback \citep[e.g.][]{Costa14, Costa20, Barai18, Habouzit19, Valentini21}, as it is often considered to have a major role in shaping the evolution of AGN and galaxies along the whole cosmic history \citep[e.g.,][]{Fiore17}. In particular, optically-selected luminous quasi-stellar objects (QSOs) in the early Universe often present evidence for the launching of fast and massive multi-phase outflows (e.g., \citealt{Maiolino12, Cicone15,Bischetti19, Carniani19, Schindler20, Izumi21}; but see also, e.g., \citealt{Decarli18, Novak20, Meyer22}), which are expected to affect the observable properties of the QSOs themselves and their host galaxies, such as X-ray obscuration, UV extinction, and gas content \citep[e.g., ][]{Brusa15b, Ni20}.
Outflows observed in QSOs are thought to originate from fast nuclear winds, which, in turn, may be accelerated by several physical mechanisms, including radiation pressure from UV photons produced in the accretion disc, acting on dust grains or on partially ionized gas via UV transitions, and magnetic effects \citep[e.g.][]{Proga00, Murray05, Fabian08, Yuan15, RicciC17}. The physical scales involved in these processes are those of the accretion disk \citep[e.g., ][]{Giustini19}. Since such scales cannot be resolved by large-scale cosmological simulations, different authors have modeled AGN feedback
using several different recipes (e.g., \citealt{Barai18, Costa20, Ni20}).
Moreover, the effect of the outflow on the surrounding material can potentially depend on its geometry \citep[e.g.,][]{Zubovas16}.
Since the exact acceleration physics, and thus launching direction, of nuclear winds is not well understood, numerical simulations typically assume either spherical \citep[e.g., ][]{Feng16} or bi-conical \citep[e.g., ][]{Sala21} outflow geometry as study cases.
Besides the properties of the individual galaxies hosting accreting SMBHs, numerical simulations also provide information on the environment of high-redshift luminous AGN. While these objects are expected to reside in the peaks of the dark matter halo distribution, which are generally characterized by large overdensities of galaxies (e.g., \citealt{Costa14, Wise19}), although with some scatter (e.g., \citealt{Habouzit19}), observations struggle to provide us with a clear view of the typical high-redshift QSO environment. In fact, $z>6$ QSOs have been reported to reside in a variety of environments, including underdense, normal, and overdense regions (e.g. \citealt{Ota18, Mazzucchelli19,Overzier21}). The first spectroscopically confirmed galaxy overdensity around a $z>6$ QSO was presented recently by \cite{Mignoli20}, followed by a tentative confirmation of another structure by \cite{Overzier21}.
A significant fraction ($\approx40\%$) of $z\gtrsim6$ QSOs has ALMA-detected dusty companion galaxies at distances of a few kpc \citep[e.g.][]{Willott17, Decarli18, Neeleman19, Venemans20}. These satellite galaxies might host heavily reddened and buried AGN \citep[e.g., ][]{DiMascia21a}, although currently there is no strong observational evidence for the presence of accreting SMBHs in their centres \citep[e.g., ][]{Connor19,Connor20, Vito19a, Vito21}.
Such objects would typically be brighter than inactive galaxies, especially in the X-ray band. Therefore, their predicted number in numerical simulations can be tested against observational results to infer how well simulations approximate reality.
In this paper, we present a study of the effect of AGN kinetic feedback on the observable properties of $z>6$ AGN in cosmological simulations. In particular, we analyse a set of numerical simulations presented by \citet[][hereafter, \citetalias{Barai18}]{Barai18} with different kinetic feedback prescriptions, focusing on the most massive SMBH at $z=6$ and its surrounding environment. We extract multiwavelength observables such as column density and radial extent of the gas distributed in the host galaxies, UV and X-ray AGN fluxes, and number of satellite AGN detectable over small (i.e., a few kpc) distances from the central SMBH. We compare these properties with results from multiwavelength observations.
The paper is structured as follows.
In \S~\ref{Method} we describe the numerical setup of the simulations, the AGN selection, and the method used to measure the gas column density and distribution. In \S~\ref{NH_distro} we discuss the redshift evolution of the column densities for the considered AGN. In \S~\ref{comparison_obs} we present the observable properties of the simulated AGN and their host galaxies, and we compare them with empirical findings. In \S~\ref{environment} we investigate the presence of multiple AGN systems over scales of a few kpc, and we compare their detectability rates in the X-ray band with results from observations of high-redshift AGN. Finally, in \S~\ref{discussion} we discuss and interpret the results, and in \S~\ref{conclusions} we provide a summary.
All quoted distances are physical unless otherwise noted.
We adopt a flat $\Lambda$CDM cosmology with $H_0=67.7\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$ and $\Omega_m=0.307$ \citep{Planck16}.
\section{Method}\label{Method}
\subsection{Numerical model} \label{Numerical_methods}
We consider the simulation runs \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace by \citetalias{Barai18}, which include kinetic feedback. We provide here a summary of the numerical setup and refer to the original works for an in-depth discussion.
{\citetalias{Barai18}} used a modified version of the Smooth Particle Hydrodynamics (SPH) N-body code \code{GADGET-3} \citep{Springel05} to follow the evolution of a comoving volume of ($500$ Mpc)$^3$, starting from cosmological initial conditions generated with \code{music} \citep{hahn11} at $z=100$, and zooming-in on the most massive (i.e., $4\times10^{12}\,\mathrm{M_\odot}$) dark matter (DM) halo, corresponding to a $\approx3\sigma$ overdensity \citep[e.g.,][]{Barkana01}, inside the box down to $z=6$. Therefore, the final zoomed-in simulations focus by construction on a highly biased cubic region, with a volume of (5.21 Mpc)$^3$. The highest level of the simulation has a mass resolution of $m_{\rm DM} = 7.54 \times 10^6$ ${\rm M}_{\odot}$ and $m_{\rm gas} = 1.41 \times 10^6$ ${\rm M}_{\odot}$ for DM and gas particles, respectively. The softening length for gravitational forces for these high-resolution DM and gas particles is $R_{\mathrm{soft}} = 1 h^{-1}$ ckpc.
The code accounts for gas heating and cooling (including metal-line cooling) depending on the gas metal content, based on eleven element species (H, He, C, Ca, O, N, Ne, Mg, S, Si, Fe) that are tracked in the simulation \citep{Tornatore07}. Star formation in the inter-stellar medium (ISM) is implemented following the multiphase effective subresolution model by \citet{Springel03}, adopting a density threshold for star formation of $n_{SF} = 0.13 \ {\rm cm}^{-3}$.
The simulations include stellar winds, supernovae feedback, and metal enrichment, and assume a \citet{Chabrier03} initial mass function in the mass range $0.1-100$ ${\rm M}_{\odot}$ \citep{Tornatore07,barai13,biffi16}.
When a DM halo that is not already hosting a black hole (BH) reaches a total mass of $M_{\rm h} \geq 10^9$ ${\rm M}_{\odot}$, a $M_{\rm BH} = 10^5$ ${\rm M}_{\odot}$ BH is seeded at its centre. BHs are treated as collisionless sink particles and are allowed to grow by accretion of the surrounding gas or by mergers with other BHs. Gas accretion onto BHs is modelled via the classical Bondi-Hoyle-Lyttleton accretion rate $\dot{M}_{\rm Bondi}$ \citep{Hoyle39, Bondi44, Bondi52}, capped at the Eddington rate $\dot{M}_{\rm Edd}$:
\begin{equation}
\dot{M}_{BH} = {\rm min} (\dot{M}_{\rm Bondi}, \dot{M}_{\rm Edd}).
\end{equation}
Accreting BHs radiate away a fraction $\epsilon_{\rm r}$ of the accreted rest-mass energy, with a bolometric luminosity
\begin{equation}\label{eq:luminosity_bh}
L_{\rm bol} = \epsilon_{\rm r} \dot{M}_{\rm BH} c^2,
\end{equation}
where $c$ is the speed of light. \citetalias{Barai18} fixed the radiative efficiency to $\epsilon_{\rm r} = 0.1$, a fiducial value for radiatively efficient, geometrically thin, optically thick accretion disks around a Schwarzschild BH \citep{Shakura73}.
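For reference, these prescriptions are straightforward to evaluate numerically. The following Python sketch (illustrative only, not the \code{GADGET-3} implementation; the function names are ours) computes the Eddington-limited accretion rate and Eq.~\ref{eq:luminosity_bh} for the adopted $\epsilon_{\rm r}=0.1$, and recovers the correspondence between $\dot{M}\approx0.02\,\mathrm{M_\odot\,yr^{-1}}$ and $L_{bol}\approx10^{44}\,\mathrm{erg\,s^{-1}}$ used for the AGN selection in \S~\ref{selection}.
\begin{verbatim}
# Illustrative sketch (not the simulation code): Eddington rate and
# bolometric luminosity for the adopted radiative efficiency.
import numpy as np
import astropy.units as u
import astropy.constants as const

eps_r = 0.1

def eddington_rate(M_BH):
    # Eddington luminosity for pure-hydrogen Thomson opacity
    L_edd = (4 * np.pi * const.G * M_BH * const.m_p * const.c
             / const.sigma_T)
    return (L_edd / (eps_r * const.c**2)).to(u.Msun / u.yr)

def L_bol(M_dot):
    return (eps_r * M_dot * const.c**2).to(u.erg / u.s)

print(L_bol(0.02 * u.Msun / u.yr))   # ~1.1e44 erg/s (AGN threshold)
print(eddington_rate(1e8 * u.Msun))  # ~2.2 Msun/yr
\end{verbatim}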
A fraction $\epsilon_{\rm f} = 0.05$ of the total output energy is distributed to the surrounding gas in a kinetic form\footnote{We refer to \citetalias{Barai18} for details about the choice of the value for $\epsilon_{\rm f}$ and the numerical implementation of the kinetic feedback.}. In \textit{AGNcone}\xspace the kinetic energy is distributed along two cones with a half-opening angle of $45\degree$. The direction of the cone axis is chosen randomly for each BH at the seeding time, and is kept fixed throughout the simulation \citep{Barai18}, similarly to what is done in \cite{Zubovas16}. Instead, the AGN feedback in \textit{AGNsphere}\xspace pushes away the gas particles along random directions, thus mimicking a spherical geometry.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/MBH_z.png}
\caption{BH masses as a function of redshift for the \textit{AGNcone}\xspace (left) and \textit{AGNsphere}\xspace (right) runs. Only SMBHs accreting at $\dot{M}>0.02\,\mathrm{M_\odot\,yr^{-1}}$ are considered. The arrows mark the mergers between BHs. AGN considered in \S~\ref{NH_distro} and \S~\ref{comparison_obs} (i.e., those that reach $z=6$ with $M_{BH}>10^8\,\mathrm{M_{\odot}}$) are plotted as filled symbols.}
\label{fig:masses}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/Mdot_z.png}
\caption{Mass accretion rate as a function of redshift for the \textit{AGNcone}\xspace (left) and \textit{AGNsphere}\xspace (right) runs. The corresponding bolometric luminosity (Eq.~\ref{eq:luminosity_bh}) is reported on the right axis. }
\label{fig:mdot}
\end{figure*}
\subsection{AGN selection}\label{selection}
We analyse the simulation snapshots in steps of $\Delta z =0.2$ from $z=10$ to $z=8$ and $\Delta z =0.1$ from $z=8$ to $z=6$. In particular, we follow the most massive SMBH at $z=6$ in each simulation set, and consider a box with side size of 60 kpc centred on it. We refer to all of the SMBHs in the box accreting at $\dot{M}_{BH}>0.02\,\mathrm{M_\odot\,yr^{-1}}$ (i.e., $L_{bol}\approx10^{44}\mathrm{~erg ~s^{-1}}$) as AGN. Fig.~\ref{fig:masses} presents the BH mass evolution of AGN in the two simulations. Each AGN is labelled with the initial letter of the run (C for \textit{AGNcone}\xspace, S for \textit{AGNsphere}\xspace).
\textit{AGNcone}\xspace forms two very massive ($>10^9\,M_\odot$) BHs at $z<7$, while only less massive BHs are formed in the \textit{AGNsphere}\xspace run. This behaviour is linked to the implementation of the feedback: \textit{AGNcone}\xspace allows the gas to accrete continuously along the equatorial directions, while the lack of a preferential direction along which the outflow is launched in \textit{AGNsphere}\xspace does not allow for a steady and efficient accretion onto the SMBH. This effect can be appreciated in Fig.~\ref{fig:mdot}: the accretion rate of \textit{AGNcone}\xspace is generally higher than that of \textit{AGNsphere}\xspace, at least up to $\dot{M}\approx1-30\,\mathrm{M_\odot\,yr^{-1}}$. At higher accretion rates, which are reached by the most accreting BHs at $z<7$, AGN feedback prevents further increase of the accretion rate.
Hereafter, we focus our analysis on the AGN that reach $z=6$ with $M_{BH}>10^8\,\mathrm{M_\odot}$ and $L_{bol}>10^{46}\,\mathrm{erg\,s^{-1}}$ (see filled symbols in Fig.~\ref{fig:masses} and Fig.~\ref{fig:mdot}), which we refer to as ``bright AGN" (i.e., C1, C2, and C3 in \textit{AGNcone}\xspace; S1 and S2 in \textit{AGNsphere}\xspace).
These BH mass and luminosity values are typical of known $z>6$ QSOs \citep[e.g., ][]{Yang21}, allowing us to compare the physical properties of simulated and observed AGN in a consistent way. We note that, since the simulations focus on a single cosmic region at high redshift, the derived expectations on the AGN observable properties might be affected by cosmic variance.
\subsection{Gas column density and radial distribution}\label{NH}
Here we describe the method that we use to derive the distribution of hydrogen, helium, and metal column densities in the ISM for galaxies hosting AGN in the considered simulations. We make use of the hydrogen column density in the remainder of the paper to derive the observational properties predicted by the two considered simulations.
We estimate the distribution of the column densities for the bright AGN in the simulations by launching 1000 randomly selected lines of sight (LOSs) toward each AGN from a distance $d= 30$ kpc. Each LOS is considered as the axis of a cylinder with base radius $R_{\mathrm{soft}}$. We note that the resolution of the simulations does not allow us to probe structures on smaller scales, such as a dusty torus on pc scales. Then, each cylinder is divided along its length into bins of $l_{\mathrm{bin}}=0.25$ kpc width, for a total of $\frac{d}{l_\mathrm{bin}}=120$ radial bins. We compute the density of each chemical element in a bin of the cylinder from the mass carried by each particle included in that bin.
With this approach, we also obtain the radial distribution of the gas density.
Finally, we integrate along the cylinder to compute the total column density of hydrogen ($N_H$) and of the other elements. The resulting total $N_H$ is not sensitive to reasonably different values of $l_{\mathrm{bin}}$ (i.e., from 0.25 kpc to 1 kpc). Therefore, we used $l_{\mathrm{bin}}=0.25$ kpc, as this value allows us to sample well the radial distribution of the gas (see \S~\ref{radial_distro}).
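As a concrete illustration of this procedure, the sketch below shows the binning and integration step for a single line of sight in Python. The array names, the default cylinder radius, and the particle selection are simplified placeholders rather than the actual analysis code.
\begin{verbatim}
# Illustrative sketch of the N_H estimate for one line of sight.
import numpy as np

M_P = 1.6726e-24            # proton mass [g]
KPC = 3.086e21              # kpc in cm

def column_density(pos, m_H, los_dir, r_cyl=1.0, d=30.0, l_bin=0.25):
    # pos: (N, 3) particle positions relative to the AGN [kpc]
    # m_H: (N,) hydrogen mass carried by each particle [g]
    # los_dir: unit vector of the line of sight
    s = pos @ los_dir                              # distance along the LOS
    r_perp = np.linalg.norm(pos - np.outer(s, los_dir), axis=1)
    in_cyl = (r_perp < r_cyl) & (s > 0) & (s < d)
    n_bins = int(d / l_bin)                        # 120 bins for d = 30 kpc
    m_bin, _ = np.histogram(s[in_cyl], bins=n_bins, range=(0, d),
                            weights=m_H[in_cyl])
    area = np.pi * (r_cyl * KPC)**2                # cylinder cross-section
    return m_bin.sum() / (M_P * area)              # N_H [cm^-2]

# Repeating this for 1000 random unit vectors gives the N_H distribution
# of an AGN; (NH < 1e22).mean() is then its unobscured-LOS fraction.
\end{verbatim}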
Fig.~\ref{fig:Mollweide} (upper panel) presents an example of the derived column-density map centred on the QSO C1 in \textit{AGNcone}\xspace. Each circle represents one of the 1000 random LOSs, which sample homogeneously the entire solid angle as seen from C1.
To assess the effect of feedback on the column density (\S~\ref{NH_distro}), we also consider an additional simulation run presented in \cite{Barai18}, which is identical in terms of initial conditions and physical prescriptions to the \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace runs, except that BHs are not seeded. The only type of feedback in this run, which we refer to as \textit{noAGN}\xspace, is due to supernova explosions (see \citealt{Barai18} for a detailed discussion).
We associate each AGN in a simulation to the corresponding galaxy in the \textit{noAGN}\xspace run following a method similar to that described in Zana et al. (accepted): first, we identify the DM halo hosting the AGN as the one having its centre of mass closest to the position of the SMBH. Then, we identify the corresponding halo in the \textit{noAGN}\xspace run by cross-matching the DM particle IDs in the two runs, and selecting the halo in \textit{noAGN}\xspace which shares the largest fraction of particles with the initial AGN halo, further imposing that the mass difference is within $10-50\%$.\footnote{The exact threshold is adjusted at each time step in order to find at least one halo counterpart.} Finally, we repeat the procedure described above on the selected halo in \textit{noAGN}\xspace, and derive the column density distribution in the absence of AGN feedback. At $z>8$, the redshifts at which the \textit{noAGN}\xspace snapshots are taken are significantly different from those of the runs including AGN, making the DM-halo matching procedure highly uncertain. Thus, we limit the identification of the counterparts of the AGN-hosting galaxies in the \textit{noAGN}\xspace run to $z<8$.
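A minimal sketch of the particle-ID matching step is shown below (Python; the halo data structure and the fixed mass-ratio threshold are illustrative assumptions, whereas in practice the threshold is adjusted at each snapshot as described above).
\begin{verbatim}
# Illustrative sketch of the halo cross-matching between an AGN run
# and the noAGN run.
def match_halo(agn_halo, noagn_halos, max_mass_ratio=1.5):
    ref_ids = set(agn_halo.dm_ids)
    best, best_overlap = None, 0
    for halo in noagn_halos:
        ratio = (max(halo.mass, agn_halo.mass)
                 / min(halo.mass, agn_halo.mass))
        if ratio > max_mass_ratio:
            continue                    # mass difference too large
        overlap = len(ref_ids & set(halo.dm_ids))
        if overlap > best_overlap:      # keep the halo sharing most IDs
            best, best_overlap = halo, overlap
    return best
\end{verbatim}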
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/NH_Mollweide.png}
\includegraphics[width=1\textwidth ]{figures/vr_Mollweide.png}
\caption{\textit{Upper panel}: Mollweide projection of the column density along 1000 random lines of sight centred on the QSO C1 at $z=7.1$. \textit{Lower panel}: Mollweide projection of the radial velocities of all particles within 10 kpc from C1 at $z=7.1$. The different sampling of the maps is intended to show the homogeneity of the 1000 LOSs used to compute $N_H$ in the upper panel, and the velocity of the individual gas particles in the lower panel. The map is aligned with the outflow cone direction. Regions where the particles have high positive velocities correspond to the two cones along which the kinetic energy is distributed by the AGN feedback in the \textit{AGNcone}\xspace simulation. Such cones are characterised by the lowest values of column densities.}
\label{fig:Mollweide}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/NH_z_analysis.png}
\caption{Evolution of column density for bright AGN in the \textit{AGNcone}\xspace (C1, C2, C3) and \textit{AGNsphere}\xspace (S1, S2) simulations. We show the median value (solid line, color coded according to the AGN bolometric luminosity and accretion rate), and the 10\% and 90\% percentiles (dashed lines) computed by launching 1000 lines of sight. The gray stripes enclose the 10\% to 90\% percentiles of the column densities of matched galaxies in the same simulation sets where, however, BHs have not been seeded (i.e., the \textit{noAGN}\xspace case). To compare with observational results (\S~\ref{Xray_obsc}), the red arrows mark the 3$\sigma$ upper limits derived for X-ray detected QSOs with $>10$ counts from \citet{Nanni18} and \citet{Connor19}.}
\label{fig:NH_z_all}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/fLOS_z_analysis.png}
\caption{Fraction of lines of sight with column densities $N_H<10^{22}\,\mathrm{cm^{-2}}$ (solid lines) and $N_H<10^{23}\,\mathrm{cm^{-2}}$ (dashed lines) as a function of redshift for the bright AGN in the \textit{AGNcone}\xspace (C1, C2, C3) and \textit{AGNsphere}\xspace (S1, S2) simulations. The symbols are color coded according to the AGN bolometric luminosity and accretion rate.}
\label{fig:fLOS_all}
\end{figure*}
\section{Column density evolution}\label{NH_distro}
Fig.~\ref{fig:NH_z_all} presents the evolution of the column density for bright AGN in the \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace simulations.
Considering the \textit{AGNcone}\xspace simulation, the AGN column densities are similar to, or slightly lower than, those derived for the corresponding galaxies in the \textit{noAGN}\xspace run until the AGN accretion rate reaches $\dot{M}\approx10-30\,\mathrm{M_\odot\,yr^{-1}}$. This happens at $z\approx7$ for C1 and C2, and $z\approx6.3$ for C3 (see Fig.~\ref{fig:mdot}). At later times, the AGN column density drops significantly by up to $\approx1$ dex and the accretion rate starts to oscillate. The 10\% and 90\% percentiles span up to one order of magnitude, especially at $z=6-7$, when the accretion rates reach the maximum values, producing the most powerful conical outflows.
Instead, the column densities of the corresponding galaxies in \textit{noAGN}\xspace (grey stripes in Fig.~\ref{fig:NH_z_all}) keep on increasing relatively smoothly. This finding allows us to ascribe the AGN $N_H$ drop and the presence of unobscured LOSs to the effect of the conical kinetic feedback. At low accretion rates the produced outflow cannot stop the infall of material, but once the accretion rate reaches sufficiently high values, the energy carried by the outflow impacts a significant part of the gas in the halo, hindering further infall, especially along the conical outflow directions. As a result, the $N_H$ decreases, as well as the AGN accretion rate, until more material is allowed to accrete, producing a new burst of powerful feedback. Such a cyclic activity explains the decreasing median $N_H$, the wider $N_H$ distribution, and the oscillating $\dot{M}$ behaviour at later cosmic times.
This result is in qualitative agreement with the self-regulation scenario discussed by, e.g., \cite{Sijacki09, Dubois13, Costa14, Feng14,Richardson16, Trebitsch19}, according to which the AGN feedback controls the growth of the black hole and limits the duration of high accretion episodes by emptying the host galaxy gas reservoir, provided that the accretion rate is sufficiently high.
However, we note that the physical interpretation of our results is complicated by the effect that one AGN may have on other AGN-hosting galaxies passing through its feedback cone. In fact, C1, C2, and C3 in \textit{AGNcone}\xspace at $z<7$ are always closer than 30 kpc, and reach minimum distances as small as 4 kpc. At these distances, powerful outflows launched from one AGN may affect nearby galaxies (e.g., Zana et al. accepted).
As an example of the feedback effect on the column density, in Fig.~\ref{fig:Mollweide} we compare the $N_H$ map centred on C1 with the radial velocity map of all particles within 10 kpc from C1. The maps correspond to $z=7.1$, when C1 reaches a local maximum in accretion rate, before the strong AGN feedback starts to impact significantly the $N_H$ (Fig.~\ref{fig:NH_z_all}) and $\dot{M}$ starts to oscillate. Comparing the column density map (upper panel) with the map of the radial velocity of individual particles (lower panel), we notice that the two conical outflows, identified as regions with positive radial velocities, correspond to LOSs with low column densities. Such LOSs are those along which high-redshift AGN are more easily detected in the rest-frame UV band, as we investigate in detail in \S~\ref{UV}.
Fig.~\ref{fig:fLOS_all} presents the fraction of LOSs along which $N_H<10^{22}\,\mathrm{ ~cm^{-2}}$ (solid lines) and $N_H<10^{23}\,\mathrm{ ~cm^{-2}}$ (dashed lines) for each bright AGN.
Hereafter, we adopt the widely used threshold $N_H=10^{22}\,\mathrm{ ~cm^{-2}}$ to separate obscured and unobscured AGN.\footnote{ However, we note that we consider the dust extinction as a more relevant quantity when we study the AGN rest-frame UV emission in \S~\ref{UV}.} For instance, \cite{Merloni14} found that such a value returns the best agreement between samples of obscured AGN as defined in optical (e.g., narrow emission-line AGN) and X-ray bands.
From Fig.~\ref{fig:fLOS_all} we infer that only at $z\lesssim7$ does a fraction of the LOSs appear unobscured. In particular, at $z\lesssim7$ C1 presents unobscured LOSs over $10-40\%$ of the solid angle, while this fraction is much more variable with redshift (i.e., $0-80\%$) for C2 and C3.
The most massive BH in the \textit{AGNsphere}\xspace simulation, S1, follows a somewhat similar $N_H$ evolution to that of the AGN in \textit{AGNcone}\xspace: a roughly constant median $N_H$ value up to $z\approx7$ followed by a slightly decreasing and wider $N_H$ distribution (Fig.~\ref{fig:NH_z_all}), and the appearance of unobscured LOSs (Fig.~\ref{fig:fLOS_all}) at later cosmic times. However, some differences exist: first, the AGN $N_H$ is always significantly lower than that of the corresponding galaxy in the \textit{noAGN}\xspace run (grey stripe), even at $z>7$. Secondly, the column density drop at $z<7$ is not as strong as in the \textit{AGNcone}\xspace case. Finally, the accretion rate of S1 is not as smooth as in the \textit{AGNcone}\xspace case at $z>7$, and keeps on increasing even at $z<7$.
These differences may be due to the prescribed geometry of the kinetic feedback in the \textit{AGNsphere}\xspace case, in which gas particles are accelerated along a random direction during every accretion event. Therefore, in contrast with the \textit{AGNcone}\xspace case, there is no preferential direction (i.e., the equatorial plane of the conical outflow) along which material can keep on accreting undisturbed for long periods of time at $z>7$. In particular, the accretion rate of S1 never exceeds $\approx10\,\mathrm{M_\odot\,yr^{-1}}$, which is the approximate threshold above which the AGN kinetic feedback affects more evidently the $N_H$ distribution and the accretion rate of AGN in the \textit{AGNcone}\xspace run.
The column density evolution of S2, instead, does not appear to be strongly influenced by the AGN feedback. Although the median $N_H$ is slightly lower than the values found in the \textit{noAGN}\xspace case, it remains constant with time, and does not drop even at $z<6.5$, when S2 reaches an accretion rate similar to that of S1. As a result, S2 would never appear as an unobscured AGN. We note that the typical column density of S2 is a factor of $\approx3$ higher than that of S1 at any redshift, and its accretion rate rises smoothly from $z=7$ to $z=6$. These properties suggest that accretion rates higher than the values reached by S2 are required in order to launch outflows powerful enough to sweep away the gas in the case of large column densities (e.g., \citealt{Trebitsch19}), even when kinetic energy is distributed along random directions by the AGN feedback.
The median values of $N_H$ that we derive from the \cite{Barai18} simulations are consistent with the typical values found by \cite{Lupi22}. However, the resolution of that work is $\approx85$ times higher than that of our simulations, and allows the authors to sample compact regions of dense gas with $N_H\gtrsim10^{24}\,\mathrm{cm^{-2}}$, especially at $z>8$, when AGN feedback has not yet affected significantly the ISM distribution and density in the host galaxies. One of the main methodological differences with that work is that we compare the ISM densities in the same galaxies in which SMBHs are actively accreting or are not seeded at all. Thus, we probe directly the effect of AGN feedback on the ISM in the host galaxy.
\section{Comparison with observations}\label{comparison_obs}
In this section, we compare the observable properties derived from the $N_H$ distributions of the AGN predicted by the simulations (\S~\ref{NH_distro}) with observational results. In particular we focus
on the comparison with constraints from X-ray observations (\S~\ref{Xray_obsc}),
the radial distribution of the gas reservoirs (\S~\ref{radial_distro}), and the observed UV magnitudes (\S~\ref{UV}).
\subsection{X-ray obscuration}\label{Xray_obsc}
X-ray observations are routinely used to constrain the column density of obscuring material along the LOSs of AGN.
Low and moderate values of column densities (\mbox{$N_H\lesssim10^{22}\,\mathrm{ ~cm^{-2}}$}) can absorb soft X-ray photons (rest-frame energies $\lesssim2$ keV), whereas larger column densities are required to absorb a high fraction of more energetic photons.
However, X-ray observations of high-redshift QSOs \citep[e.g.][]{Vito19b,Wang21a} sample rest-frame energies $E>3$ keV, and are thus sensitive only to high column densities ($N_H\gtrsim 3\times10^{23}\,\mathrm{ ~cm^{-2}}$), at least at the sensitivities of currently available facilities. Moreover, all of the known $z>6$ QSOs have been selected based on their unobscured rest-frame UV emission (i.e., they are optically classified as type 1 QSOs), and thus are not expected to be heavily obscured in the X-ray band. For these reasons, existing X-ray observations of bright $z>6$ QSOs provide us with only loose upper limits on $N_H$. The downward-pointing red arrows in Fig.~\ref{fig:NH_z_all} are the observed upper limits on $N_H$ derived for a sample of $z>6$ QSOs by \citealt{Nanni17} and \citealt{Connor19}, with typical luminosities $L_{bol}=10^{46}-10^{47}\,\mathrm{erg\,s^{-1}}$. The column densities derived for bright AGN in all of the considered simulations are lower than, or consistent with, such loose upper limits. Although the $N_H$ values found for the \textit{noAGN}\xspace case are typically higher, they are still consistent with some measured upper limits. Therefore, the constraints on $N_H$ obtained from X-ray observations of high-redshift QSOs only marginally favour the presence of kinetic feedback.
We note that constraining AGN obscuration using X-ray observations requires an assumption on gas metallicity, as X-ray photons are mainly absorbed by metal atoms. Typically, solar metallicity is assumed, whereas the ISM metallicity of the host galaxies of the AGN in the \citetalias{Barai18} simulations is sub-solar (e.g., by factors of $\approx2-3$ at $z=6$; Zana et al. in prep.). This consideration reinforces the overall consistency between the $N_H$ values constrained from X-ray observations and found in the simulations, as significantly larger column densities would be required in the case of sub-solar metallicities to produce X-ray obscuration in excess of that observed in real QSOs. In \S~\ref{environment} we discuss the X-ray detectability of the QSOs in the simulations.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/NH_radii_analysis.png}
\caption{Median radius ($R_{90}$, computed over all of the lines of sight) containing $90\%$ of the total gas as a function of redshift. The color code indicates the median total $N_H$, averaged over all of the lines of sight. The dashed grey lines mark the same quantity computed for matched galaxies in the same simulation sets where, however, BHs have not been seeded (i.e., the \textit{noAGN}\xspace case). The black ticks mark $R_{90}$ for 25 $z>6$ QSOs, as estimated from the [C II] emission beam-deconvolved sizes presented by \citet{Venemans20}.}
\label{fig:NH_radius}
\end{figure*}
\subsection{Gas radial distribution}\label{radial_distro}
We investigate the effect of kinetic feedback on the observable sizes of the gas reservoirs in high-redshift QSOs. From the radial distribution of $N_H$ derived
for each LOS in \S~\ref{Method}, we computed the radius from the centre of the galaxy which includes $90\%$ of the gas contributing to the total $N_H$. Then, for each galaxy, we computed the median value considering all of the 1000 LOSs, and defined it as $R_{90}$. We use this quantity to characterize the size of the gas reservoir in a galaxy.
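In practice, for each LOS we work on the binned $N_H$ profile of \S~\ref{NH}; a minimal sketch of the estimate (illustrative, assuming the per-bin $N_H$ contributions are stored in an array) is the following.
\begin{verbatim}
# Illustrative sketch of the R_90 estimate for one line of sight.
import numpy as np

def r90_single_los(nh_bins, l_bin=0.25):
    # nh_bins: N_H contribution of each radial bin along one LOS
    cum = np.cumsum(nh_bins) / np.sum(nh_bins)
    i90 = np.searchsorted(cum, 0.9)     # first bin enclosing 90% of N_H
    return (i90 + 1) * l_bin            # outer edge of that bin [kpc]

# R_90 of a galaxy = median of r90_single_los over the 1000 LOSs.
\end{verbatim}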
Fig.~\ref{fig:NH_radius} presents $R_{90}$ as a function of redshift for every bright AGN in the \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace simulations, as well as for the matched galaxies in the \textit{noAGN}\xspace runs. All of the bright AGN in the \textit{AGNcone}\xspace simulation (C1, C2, C3) have a similar evolution of $R_{90}$: their gas reservoir sizes are constant ($\approx1$ kpc) at $z\gtrsim7$. At lower redshift, where $N_H$ decreases due to the strong effect of the kinetic feedback, which is proportional to $\dot{M}$ and $L_{bol}$ (see the color-code of the circles in Fig.~\ref{fig:NH_z_all} and Fig.~\ref{fig:NH_radius}), $R_{90}$ increases up to several kpc. This behaviour is expected considering that the AGN feedback applies a mechanical push to the surrounding gas particles. In fact, the size of the gas reservoir in the \textit{noAGN}\xspace run, where AGN feedback is absent (grey dashed lines in Fig.~\ref{fig:NH_radius}), remains constant or even tends to decrease at later cosmic times.
The evolution of $R_{90}$ for S1 in the \textit{AGNsphere}\xspace simulation is similar to that of the AGN in the \textit{AGNcone}\xspace simulation. However, the increase of $R_{90}$ is stronger and begins at earlier cosmic times. We recall that the accretion rate of S1 is typically lower than that of the AGN in \textit{AGNcone}\xspace (see Fig.~\ref{fig:mdot}), and therefore the stronger evolution of $R_{90}$ is not due to intrinsically stronger outflows launched by the AGN, but, as discussed in \S~\ref{NH_distro}, to the different geometry of the outflow: being launched along random directions at every accretion event, the outflow is more likely to transmit kinetic energy to the gas particles in the galaxy even at low or moderate accretion rates. Instead, S2 does not follow the same evolution as S1. On the contrary, $R_{90}$ decreases to sub-kpc values approaching $z=6$. As discussed in \S~\ref{NH_distro}, we ascribe this behaviour to the relatively low accretion rate, which does not produce feedback strong enough to efficiently affect the gas distribution in the host galaxy.
We compare our findings with the observed extent of the [C II] emission of 25 $z>6$ QSOs presented by \cite{Venemans20}, assuming that the [C II] emission line is a good tracer of the spatial extent of the total gas reservoir \citep[e.g.,][]{Zanella18, Sommovigo21}. We used the major axis of the deconvolved [C II] emission size (Tab. 3 of \citealt{Venemans20}), which represents the FWHM of the emitting source, and converted it into the radius that includes 90\% of the [C II] light, assuming a Gaussian distribution.\footnote{We note that the conclusions hold if we use an exponential profile (e.g., \citealt{Fujimoto20}) and convert the FWHM values reported by \cite{Venemans20} into exponential scale lengths. In this case, we obtain larger radii than in the Gaussian case by a factor of $\approx1.75$.} The resulting values are reported in
Fig.~\ref{fig:NH_radius} as black ticks at the redshift of each QSO.
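For the Gaussian case this conversion reduces to a fixed multiplicative factor; a minimal sketch, assuming an azimuthally symmetric two-dimensional Gaussian surface-brightness profile (the exponential profile mentioned in the footnote leads to a different factor), is given below.
\begin{verbatim}
# Illustrative FWHM -> R_90 conversion for a circular 2D Gaussian.
import numpy as np

def r90_from_fwhm(fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return sigma * np.sqrt(2.0 * np.log(10.0))  # 90% enclosed flux

# r90_from_fwhm(1.0) ~ 0.91, i.e. R_90 ~ 0.91 x FWHM
\end{verbatim}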
The AGN in the \textit{AGNcone}\xspace simulation have $R_{90}$ consistent with the observed values, while the median gas size of S1 is larger at nearly every redshift. S2 has a size consistent with the most compact QSOs in the \cite{Venemans20} sample. However, this comparison is not entirely fair: the ISM in S2 produces very large column densities at all redshifts and along all LOSs (Fig.~\ref{fig:NH_z_all}), and thus large expected values of dust extinction. All of the QSOs studied in \cite{Venemans20} are instead rest-frame UV selected objects: we lack observational information about the extent of the gas reservoirs of buried high-redshift QSOs, such as S2. In all cases, the median gas sizes of the \textit{noAGN}\xspace control galaxies are smaller than the observed values for QSOs, suggesting that kinetic feedback is required to produce the gas extents observed in real QSOs.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/m1450_z_stripe_analysis.png}
\caption{ Apparent magnitude at rest-frame $\lambda = 1450\,\text{\normalfont\AA}$ as a function of redshift for bright AGN in the \textit{AGNcone}\xspace (C1, C2, C3) and \textit{AGNsphere}\xspace (S1, S2) simulations. The purple regions encompass the 50\% least extincted LOSs, while the grey hatched regions represent the 50\% most extincted LOSs. The grey circles are $z>6$ QSOs collected from \citet{Banados16, Banados18a}, \citet{Chehade18}, \citet{Matsuoka18a,Matsuoka18b, Matsuoka19a,Matsuoka19b}, \citet{Mazzucchelli17b}, \citet{Reed17}, \citet{Tang17}, \citet{Wang17, Wang18a,Wang18b, Wang19, Wang21b}, and \citet{Yang19,Yang20}.}
\label{fig:m1450_z}
\end{figure*}
\subsection{UV magnitudes}\label{UV}
In \S~\ref{Xray_obsc} we discussed how the available X-ray observations of $z>6$ QSOs are not sensitive to the column density values that we derived for bright AGN in the simulations. Instead, the rest-frame UV emission of high-redshift AGN is expected to be severely affected by dust extinction even for low values of $N_H$. In this section, we compare the expected rest-frame UV magnitudes of bright AGN in the simulations with the observed values of known $z>6$ QSOs.
We assumed that the intrinsic (i.e., unextincted) rest-frame UV spectra of the AGN-hosting galaxies in the simulations are dominated by the AGN (i.e., we do not include stellar emission) and are well represented by the \cite{VandenBerk01} composite spectrum of type 1 QSOs, rescaled to their bolometric luminosity via the bolometric correction of \cite{Venemans16} and \cite{Decarli18}
\begin{equation}
\mathrm{log}\left(\frac{L_{bol}}{\mathrm{erg\,s^{-1}}}\right)=4.553+0.911\times \mathrm{log}\left(\frac{\lambda L_{\lambda}(1450\text{\normalfont\AA})}{\mathrm{erg\,s^{-1}}}\right).
\end{equation}
We assumed a simple uniform slab of dust located in front of each AGN and an SMC extinction curve, and computed the measured rest-frame UV flux as
\begin{equation}
F_\lambda^\mathrm{obs}=F_\lambda^\mathrm{intr}e^{-\tau_\lambda},
\end{equation}
where $\tau_\lambda=k_\lambda\Sigma_m f_{dust}$, $k_\lambda$ is the extinction cross section at wavelength $\lambda$, $\Sigma_m$ is the mass column density of metals, which we computed in \S~\ref{NH}, and the fraction of metal mass locked into dust is assumed to be $f_{dust}=0.15$ as in \cite{DiMascia21b}. Finally, we computed the apparent magnitude at the wavelength corresponding to rest-frame 1450 \text{\normalfont\AA}, that is $m_{1450}$.
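A minimal sketch of this calculation in Python is given below. The extinction cross section per unit dust mass at rest-frame 1450 \text{\normalfont\AA} is left as an input (an illustrative placeholder, not a value quoted from the works cited above), while the cosmology is the one adopted in this paper.
\begin{verbatim}
# Illustrative sketch of the m_1450 estimate behind a uniform dust slab.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.7, Om0=0.307)
f_dust = 0.15   # fraction of metal mass locked into dust

def m1450(L_bol, Sigma_m, k_1450, z):
    # L_bol   : bolometric luminosity [erg/s]
    # Sigma_m : metal mass column density along the LOS [g/cm^2]
    # k_1450  : extinction cross section per unit dust mass [cm^2/g]
    # invert the bolometric correction given above
    lam_L_lam = 10 ** ((np.log10(L_bol) - 4.553) / 0.911) * u.erg / u.s
    tau = k_1450 * Sigma_m * f_dust                # slab optical depth
    d_L = cosmo.luminosity_distance(z).to(u.cm)
    nuFnu = lam_L_lam * np.exp(-tau) / (4 * np.pi * d_L**2)
    nu_obs = (2.998e18 / (1450.0 * (1.0 + z))) * u.Hz   # c / lambda_obs
    f_nu = (nuFnu / nu_obs).to(u.erg / u.s / u.cm**2 / u.Hz)
    return -2.5 * np.log10(f_nu.value) - 48.6      # AB magnitude
\end{verbatim}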
For all considered AGN, $\Sigma_m$ is derived from the metal column densities computed in \S~\ref{NH} for 1000 LOSs at every simulation snapshot. Thus, we obtain a distribution of 1000 values of $m_{1450}$ at every redshift. In Fig.~\ref{fig:m1450_z} we show the magnitudes obtained for the 50\% least (purple regions) and most (grey hatched regions) extincted LOSs.
To allow for a comparison with observations, we add the magnitudes of a sample of $z>6$ QSOs collected from \cite{Banados16, Banados18a}, \cite{Chehade18}, \cite{Matsuoka18a,Matsuoka18b, Matsuoka19a,Matsuoka19b}, \cite{Mazzucchelli17b}, \cite{Reed17}, \cite{Tang17}, \cite{Wang17, Wang18a,Wang18b, Wang19, Wang21b}, \cite{Yang19,Yang20}, with typical magnitudes of $19\lesssim m_{1450}\lesssim 24$.
Among the considered simulations, \textit{AGNcone}\xspace produces the UV-brightest AGN, which are consistent with the magnitudes of known QSOs at $z\lesssim7$. As discussed in \S~\ref{NH_distro}, this redshift range corresponds to the period when the strong AGN kinetic feedback affects the gas column density in the host galaxy, strongly suggesting that known, optically selected $z>6$ AGN are indeed observed preferentially along directions where AGN feedback has cleared the LOS of most of the gas and dust.
This prediction is hard to test observationally. Not only is estimating the outflow direction a difficult task, but the incidence of outflows in high-redshift AGN is itself still a matter of debate (e.g., \citealt{Maiolino12, Cicone15, Bischetti19, Novak20, Izumi21, Meyer22}). Moreover, $z>6$ QSOs might have been detected along LOSs which have been previously cleared of most of the gas and dust by past outflows.
In this respect, a caveat arises from the numerical implementation of the ISM properties in the \cite{Barai18} simulations, which, as described in \S~\ref{Numerical_methods}, follow the prescription of \cite{Springel03}. This model does not capture the ISM porosity and is therefore not able to resolve clumpy structures on $\sim$pc scales. Resolving such structures might decrease the effective opacity of the medium and possibly produce more unobscured lines of sight, even in the absence of AGN feedback.
As shown in Fig.~\ref{fig:m1450_z}, only $<50\%$ of the LOSs of an individual AGN have extinction values small enough to reproduce the observed magnitudes.
We computed the probability that multiple AGN appear as UV-bright (i.e., $m_{1450}\lesssim24$) sources along the same LOS, and found that it is negligible. This result is consistent with observations, according to which no such system of multiple UV-bright AGN has been discovered at high redshift to date.
The most luminous AGN in the \textit{AGNsphere}\xspace run, S1, reaches magnitudes as bright as the observed values only at $z\approx6.5$, while it fails to reproduce the magnitudes of $z>6.5$ QSOs. This is due to the lower accretion rate, and thus lower intrinsic luminosity, of S1 compared with the bright AGN in \textit{AGNcone}\xspace. The large column density of S2 results in dramatic extinction levels along all of the LOSs, such that S2 has an apparent magnitude consistent with those of observed high-redshift QSOs only along a small fraction of the LOSs, despite its intrinsic luminosity being similar to that of S1 at $z<6.5$ (Fig.~\ref{fig:mdot}).
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/Flux_z.png}
\caption{Expected X-ray flux in the 0.5--7 keV band as a function of redshift for the \textit{AGNcone}\xspace (\textit{left panel}), and \textit{AGNsphere}\xspace (\textit{right panel}) runs. For each AGN at each redshift, we assumed the median $N_H$ computed for 1000 lines of sight. The horizontal dotted lines mark the flux limit computed for \textit{Chandra}\xspace (50 ks observation), \textit{Athena}\xspace (10 ks), \textit{AXIS}\xspace (10 ks), and \textit{Lynx}\xspace (10 ks). }
\label{fig:Fx}
\end{figure*}
\section{Multiple high-redshift AGN on 1-10 arcsec scales}\label{environment}
Typical separations between AGN in the \cite{Barai18} simulations are $\approx5-50$ kpc, corresponding to only a few arcseconds in projection. To date, no multiple AGN system has been discovered observationally at $z>6$ (e.g., \citealt{Greiner21}), with the highest-redshift AGN pair having been recently discovered at $z=5.7$ \citep{Yue21}. This result could be due to dust extinction preventing the detection of other possible accreting SMBHs close to high-redshift QSOs, as we found in our simulations (\S~\ref{UV}). Alternatively, QSOs observed at $z\gtrsim6$ may intrinsically have no AGN satellites. The latter hypothesis implies that the simulations overpredict the number of bright AGN, due to, e.g., the specific numerical setup and seeding prescription. In addition, as discussed in \S~\ref{Numerical_methods}, the simulations focus on an overdense region, which maximizes the probability of forming multiple SMBHs, and thus bright AGN, in a small volume.
To better investigate the relation between the predicted and observed number of systems of multiple AGN at high redshift,
in \S~\ref{mock_Xray} we produce mock X-ray observations with the \textit{Chandra}\xspace X-ray observatory\footnote{\url{https://cxc.harvard.edu/}} based on the \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace simulations. Then, in \S~\ref{multiple_AGN} we compute the probability of detecting multiple AGN on small angular separations, and compare the findings with observational results. Finally, in \S~\ref{future_facilities} we investigate the potential of future X-ray facilities in detecting possible multiple faint AGN over small scales around bright high-redshift QSOs.
\subsection{Mock X-ray observations}\label{mock_Xray}
As discussed in \S~\ref{Xray_obsc}, the column densities that we derived in \S~\ref{NH_distro} for simulated $z>6$ AGN have a negligible effect on the X-ray emission at the observed-frame energies probed by X-ray telescopes, allowing us to factor out the effect of varying $N_H$ along different LOSs. However, we have to take into account another effect related to the specific choice of the LOS: the emission of different AGN might be blended along some LOSs due to projection effects, and appear as a single X-ray source. This effect might be important as the projected angular separations of the AGN in the considered simulations are comparable with the angular resolution of \textit{Chandra}\xspace (i.e., $\approx0.5^{\prime\prime}$), which is the existing X-ray observatory with the sharpest view.
We produce mock observations using the SOXS v. 3.0 software,\footnote{\url{https://hea-www.cfa.harvard.edu/soxs/}} with \textit{Chandra}\xspace response matrices and ancillary files suitable for Cycle 20. SOXS accounts for three background components: a uniform Galactic component, a cosmic background due to point-like sources, and an instrumental component. For each simulation, we produce two sets of mock images, assuming an exposure time of 30 ks or 50 ks, which are typical lengths of real \textit{Chandra}\xspace observations of $z>6$ QSOs \citep[e.g.,][]{Vito19a,Wang21a}.
For each set, we considered 100 random LOSs, along which all AGN have been projected on the sky plane according to their tri-dimensional positions in the simulations. This allows us to statistically take into account 1) the possible blending of multiple sources due to projection effects, and 2) the Poisson fluctuations of the number of detected X-ray photons at a given intrinsic flux.
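The projection step can be sketched as follows; the positions, redshift, and cosmology call below are purely illustrative, and the actual pipeline may differ in detail.
\begin{verbatim}
# Schematic projection of 3D AGN positions (proper kpc) onto the sky plane
# for one random line of sight; positions are illustrative placeholders.
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck15 as cosmo

rng = np.random.default_rng(0)
pos = np.array([[0.0, 0.0, 0.0],      # central QSO
                [12.0, -5.0, 3.0],    # satellite AGN 1
                [-30.0, 18.0, 7.0]])  # satellite AGN 2  (proper kpc)

# random LOS direction and an orthonormal basis spanning the sky plane
los = rng.normal(size=3); los /= np.linalg.norm(los)
tmp = np.array([1.0, 0.0, 0.0]) if abs(los[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
e1 = np.cross(los, tmp); e1 /= np.linalg.norm(e1)
e2 = np.cross(los, e1)

xy_kpc = pos @ np.column_stack([e1, e2])          # projected coordinates (kpc)

z = 6.5
kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec).value
xy_arcsec = xy_kpc / kpc_per_arcsec

# pairwise angular separations (arcsec)
d = np.linalg.norm(xy_arcsec[:, None, :] - xy_arcsec[None, :, :], axis=-1)
print(np.round(d, 2))
\end{verbatim}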
We convert the bolometric luminosities of AGN in the simulations into X-ray luminosities in the rest-frame $2-10$ keV energy band using the \cite{Duras20} relation.
Then, we compute the fluxes in the 0.5-7 keV band (i.e., one of the standard energy bands used to analyse \textit{Chandra}\xspace observations) for every AGN, and use them as input
values to simulate the images. We adopt an intrinsic power-law emission with photon index $\Gamma=2$. This is a typical value for AGN up to $z\approx6.5$ \citep[e.g.][]{Nanni17,Vito19b}, although \cite{Vito19b} and \cite{Wang21a} find hints of a steepening at higher redshifts. We also include intrinsic absorption corresponding to the column density measured along the considered LOS (although, as discussed above, the resulting obscuration is negligible for our high-redshift objects), as well as a Galactic absorption component with $N_H=5\times10^{20}\mathrm{ ~cm^{-2}}$. These computations have been performed with XSPEC v.12.11 (\citealt{Arnaud96}; model $phabs\times zvphabs\times powerlaw$)\footnote{\url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/}}. Fig.~\ref{fig:Fx} presents the expected X-ray flux of every AGN in the simulations as a function of redshift.
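The sketch below illustrates this luminosity-to-flux conversion for a $\Gamma=2$ power law; the bolometric-correction coefficients are only indicative values of the general \cite{Duras20} relation (the adopted values are those of that work), and both intrinsic and Galactic absorption are omitted here for brevity.
\begin{verbatim}
# Sketch of the luminosity-to-flux conversion for a Gamma = 2 power law.
# The bolometric-correction coefficients are quoted for illustration only.
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck15 as cosmo

def f_057(L_bol, z):
    """Observed-frame 0.5-7 keV flux (erg/s/cm^2) from L_bol (erg/s)."""
    # bolometric correction K_X = L_bol / L(2-10 keV), Duras-like form
    a, b, c = 10.96, 11.93, 17.79
    K_X = a * (1.0 + (np.log10(L_bol / 3.83e33) / b)**c)
    L_210 = L_bol / K_X
    # for Gamma = 2 the energy flux in a band scales as ln(E2/E1), so the
    # K-correction reduces to a simple band ratio
    band_ratio = np.log(7.0 / 0.5) / np.log(10.0 / 2.0)
    d_L = cosmo.luminosity_distance(z).to(u.cm).value
    return L_210 * band_ratio / (4.0 * np.pi * d_L**2)

print(f_057(1e47, 6.5))   # of order a few 1e-15 erg/s/cm^2
\end{verbatim}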
\subsection{X-ray detection of multiple AGN}\label{multiple_AGN}
We ran a blind source detection procedure on the \textit{Chandra}\xspace mock observations in the 0.5-7 keV band using the \textit{wavdetect} tool in CIAO v.4.12\footnote{\url{https://cxc.harvard.edu/ciao4.12/}} \citep{Fruscione06}, with a significance threshold of $10^{-5}$, over an area corresponding to $<30$ kpc from the central QSO, to be consistent with the volume considered throughout this work (see \S~\ref{Method}). We repeated this procedure for all snapshots in the $z=6-7$ range, which includes most of the $z>6$ QSOs observed with \textit{Chandra}\xspace, thus allowing for a fair comparison with real observations.
Fig.~\ref{fig:Ndet} presents the number of AGN detected in the mock \textit{Chandra}\xspace observations with 30 ks and 50 ks exposures, averaged over the 100 LOSs, for each simulation. \textit{AGNcone}\xspace predicts an average of $\approx1$ detectable AGN already with relatively short exposures (30 ks) and multiple detected X-ray sources using slightly longer observations (50 ks) over the entire considered redshift range. Instead, according to the \textit{AGNsphere}\xspace run, 30 ks (50 ks) \textit{Chandra}\xspace observations of $z\gtrsim6.2$ ($z\gtrsim6.5$) should typically return no detected source, but the probability of detecting one or more AGN increases quickly as $z=6$ is approached.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth ]{figures/Ndet_z_analysis.png}
\caption{Number of X-ray sources, averaged over 100 LOSs, detectable in the two simulations within $<30$ kpc from the central AGN with 30 ks (left) and 50 ks (right) \textit{Chandra}\xspace observations. The black dashed line marks the average number of detected sources in real observations of $z>6$ QSOs. }
\label{fig:Ndet}
\end{figure}
\begin{table*}
\caption{Comparison sample of \textit{Chandra}\xspace observations of $z=6-7$ QSOs (see \S~\ref{multiple_AGN}).}
\begin{tabular}{cccccccccc}
\hline
\multicolumn{1}{c}{{ ID }} &
\multicolumn{1}{c}{{ z}} &
\multicolumn{1}{c}{{ Ref}} &
\multicolumn{1}{c}{{ ObsID}} &
\multicolumn{1}{c}{{ $t_{exp}$ [ks] }} &
\multicolumn{1}{c}{{ $N_{det}$}} \\
\multicolumn{1}{c}{{ (1) }} &
\multicolumn{1}{c}{{ (2)}} &
\multicolumn{1}{c}{{ (3)}} &
\multicolumn{1}{c}{{ (4)}} &
\multicolumn{1}{c}{{ (5)}} &
\multicolumn{1}{c}{{ (6)}} \\
\hline
\multicolumn{6}{c}{{ 20-40 ks sample}} \\
J002429.77+391319.0 & 6.621 & W21 & 20416 & 20 & 0 \\
J005006.67+344521.6 & 6.253 & V19 & 20393 & 34 & 1 \\
J022601.87+030259.4 & 6.541 & V19 & 20390 & 26 & 1 \\
J084229.43+121850.5 & 6.076 & V19 & 20392 & 29 & 0 \\
J104819.09-010940.2 & 6.676 & W21 & 20415 & 35 & 0 \\
J150941.78-174926.8 & 6.122 & V19 & 20391 & 27 & 1 \\
J152637.84-205000.7$^*$ & 6.586 & C20 & 22165 & 33 & 0\\
J163033.90+401209.7 & 6.065 & V19 & 5618 & 27 & 1 \\
\multicolumn{6}{c}{{ 40-80 ks sample}} \\
J010953.13-304726.3 & 6.791 & V19 & 20398,22214 & 66 & 0\\
J030516.92-315055.9 & 6.614 & V19 & 20394 & 50 & 0 \\
J103027.11+052455.1$^*$ & 6.308 & N17 & 19926 & 50 & 1 \\
J111033.98-132945.6$^*$ & 6.515 & V19 & 20397 & 54 & 0\\
J114816.65+525150.4 & 6.419 & G17 & 17127 & 78 & 1 \\
J164121.73+375520.2 & 6.047 & V19 & 20396,21961 & 54 & 1 \\
J203210.0-211402.3$^*$ &6.24& C19 & 20470 & 45 & 1 \\
J223255.14+293032.3 & 6.666 & V19 & 20395 & 54 & 1 \\
J234833.34-305410.0 & 6.902 & W21 & 20414 & 42 & 0 \\
\hline
\end{tabular} \\\label{tab:highz_QSOs}
(1) ID of targeted QSO; (2) redshift of targeted QSO; (3) reference for published X-ray data. C19: \cite{Connor19}. C20: \cite{Connor20}. G17: \cite{Gallerani17}. N17: \cite{Nanni17}. V19: \cite{Vito19b}. W21: \cite{Wang21a}. (4) \textit{Chandra}\xspace observation ID considered in this work; (5) Exposure time; (6) number of detected X-ray sources according to the procedure described in \S~\ref{multiple_AGN}. $^*$ These QSOs have been observed with multiple ObsIDs, resulting in longer total exposure times than those reported here. We only consider the reported ObsIDs to allow for a fair comparison with our 30 ks and 50 ks mock observations.
\end{table*}
In order to compare these results with real data, we collected all of the available \textit{Chandra}\xspace observations of $z=6-7$ QSOs with exposure times of 20-40 ks and 40-80 ks (Tab.~\ref{tab:highz_QSOs}). The median exposure time of the 20-40 ks (40-80 ks) observations is 38 ks (54 ks) and the median redshift of the targeted QSOs is $z=6.4$ ($z=6.5$). These values are well matched to our sets of 30 ks and 50 ks mock images, respectively. We repeated the detection procedure described above on the real \textit{Chandra}\xspace observations, considering only an area of $R<30$ kpc from the targeted QSO, to allow for a fair comparison with the mock image results. We stress that the blind detection procedure prevents any bias related to rest-frame UV pre-selection of possible X-ray sources.
The last column of Tab.~\ref{tab:highz_QSOs} reports the number of detected sources in the real observations,\footnote{We note that for almost all of the QSOs considered here, the results of the blind detection procedure agree with what is reported in the literature, except for J084229.43+121850.5. \cite{Vito19b} reported a detection of X-ray emission from this QSO, while here we report it as undetected. This apparent discrepancy is due to the different detection procedure (i.e., blind detection vs. rest-frame UV pre-selection of the target position) and significance threshold.} which are almost equally split between no detected source and one detected source (i.e., the targeted QSO): the average numbers of detected X-ray sources in one observation are 0.50 and 0.56 for the 20-40 ks and 40-80 ks samples, respectively. Similar values are obtained by splitting each sample according to its median redshift. Comparing these results with the expected numbers of detected sources in simulations (Fig.~\ref{fig:Ndet}), we find that \textit{AGNcone}\xspace overestimates the number of detectable AGN at all redshifts, assuming both 30 ks and 50 ks exposure times. Instead, \textit{AGNsphere}\xspace underestimates this number assuming 30 ks observations, while it shows a strong dependence on redshift for longer exposures: at $z>6.5$ and $z<6.5$ it underestimates and overestimates, respectively, the average number of detected X-ray sources.
Due to the small sample sizes of real QSO observations and the narrow range covered by the number of detectable X-ray sources, it is difficult to provide a quantitatively robust comparison with the predictions from simulations. Nonetheless, we attempt to do so by comparing the normalized histograms of detected sources in the mock and real observations over the entire $z=6-7$ range (Fig.~\ref{fig:Ndet_hist}). This is justified by the relatively flat redshift distribution of the QSOs targeted by real observations (Tab.~\ref{tab:highz_QSOs}). For each set of mock images, we computed the two-sample Anderson-Darling test.\footnote{We used the \textit{anderson\_ksamp} method of the SciPy package \citep{Scipy20}.} The null hypothesis is that the mock and real observations are drawn from the same parent population as far as the number of detected X-ray sources is concerned.
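As an illustration, the test can be run as follows; the detection counts listed here are made-up placeholders, not our actual mock or real catalogues.
\begin{verbatim}
# Sketch of the two-sample Anderson-Darling comparison (illustrative data).
import numpy as np
from scipy.stats import anderson_ksamp

# number of detected X-ray sources per observation (placeholder arrays)
n_det_mock = np.array([2, 1, 3, 2, 2, 1, 4, 2, 3, 2])   # e.g. mock, 50 ks
n_det_real = np.array([0, 1, 1, 0, 1, 0, 1, 0, 1])      # real QSO fields

res = anderson_ksamp([n_det_mock, n_det_real])
print(res.statistic, res.significance_level)
# note: SciPy clips the returned significance level to [0.001, 0.25],
# hence values are quoted as <~0.001 in the text
\end{verbatim}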
We found that the null hypothesis can be rejected with high significance (i.e., Anderson-Darling test significance level $\lesssim0.001$) for almost all combinations of simulations and exposure times: Fig.~\ref{fig:Ndet_hist} confirms that \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace overestimate and underestimate, respectively, the number of detectable X-ray sources. The mock \textit{AGNsphere}\xspace observations with $t_{exp}=50$ ks are the only set for which the null hypothesis cannot be rejected, although this simulation is not consistent with real observations for $t_{exp}=30$ ks.
It is worth noting that only a few $z>6$ QSOs have been targeted with long \textit{Chandra}\xspace exposures (100--500 ks; e.g. \citealt{Nanni18}, \citealt{Connor20}, \citealt{Vito21}). Some of these observations were performed to search for faint and possibly obscured AGN around $z>6$ QSOs, for which companion galaxies have been detected with ALMA and HST. However, to date, no solid detection of such satellite AGN has been obtained (\citealt{Vito19a,Vito21, Connor19,Connor20}).
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/Ndet_hist_analysis.png}
\caption{Normalized histograms of the number of detected X-ray sources in the mock and real \textit{Chandra}\xspace observations of $z=6-7$ AGN, for $t_{exp}=$ 30 ks (left) and 50 ks (right).}
\label{fig:Ndet_hist}
\end{figure*}
\subsection{Predictions for future X-ray facilities}\label{future_facilities}
The high sensitivities of future X-ray facilities will allow us to push the search for AGN satellites of luminous optically selected QSOs at $z>6$ down to intrinsic luminosities significantly lower than those probed with \textit{Chandra}\xspace. In Fig.~\ref{fig:Fx} we report as dotted grey lines the approximate expected sensitivity limits of future missions such as \textit{Athena}\xspace/WFI \citep{Nandra13}, \textit{AXIS}\xspace \citep{Mushotzky19, Marchesi20}, and \textit{Lynx}\xspace/HDXI \citep{Gaskin19}, each one computed assuming 10 ks exposure time, and compare them with the sensitivity of a 50 ks \textit{Chandra}\xspace observation. We computed these values by simulating X-ray observations of an X-ray source, assuming a simple power-law spectrum with photon index $\Gamma=2$ and varying flux. In particular, for each instrument, we loaded response matrices and background files\footnote{We use real response matrices and background files for \textit{Chandra}\xspace, and the preliminary files included in SOXS for \textit{Lynx}\xspace, \textit{AXIS}\xspace, and \textit{Athena}\xspace.} in XSPEC, and computed the expected source and background count rates in a region including $\approx90\%$ of the expected point spread function (PSF); i.e., $R=1^{\prime\prime}$ for \textit{Chandra}\xspace, \textit{AXIS}\xspace, and \textit{Lynx}\xspace, and $R=5^{\prime\prime}$ for \textit{Athena}\xspace. Then, we computed the flux that returns a binomial no-source detection probability \citep[i.e., $P_B$;][]{Weisskopf07} such that $(1-P_B)=0.997$, corresponding to $3\sigma$ in the Gaussian approximation.
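A minimal sketch of this detection criterion is shown below; the aperture counts and background-to-source area ratio are illustrative placeholders, and the formulation follows the binomial no-source probability of \cite{Weisskopf07}.
\begin{verbatim}
# Sketch of the binomial no-source probability used to define flux limits.
from scipy.stats import binom

def p_b(src_cts, bkg_cts, area_ratio):
    """Probability that >= src_cts counts in the source aperture arise from
    background alone; area_ratio = A_bkg / A_src."""
    n = src_cts + bkg_cts
    p = 1.0 / (1.0 + area_ratio)
    return binom.sf(src_cts - 1, n, p)      # P(X >= src_cts)

# minimum counts in the source aperture for (1 - P_B) >= 0.997, given the
# counts collected in a background region 100x larger (placeholder values)
bkg_cts, area_ratio = 40, 100.0
s = 1
while p_b(s, bkg_cts, area_ratio) > 0.003:
    s += 1
print(s)   # counts needed for a ~3 sigma detection
\end{verbatim}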
Fig.~\ref{fig:Fx} shows that all of the considered next-generation X-ray missions will provide us with a huge improvement in the capability of detecting faint AGN at $z>6$, including satellite AGN around bright QSOs at $z>6$, in a fraction of the time of a typical \textit{Chandra}\xspace observation. Fig.~\ref{fig:sim_image} presents simulated X-ray observations with \textit{Chandra}\xspace (50 ks), \textit{Lynx}\xspace (10 ks), \textit{AXIS}\xspace (10 ks), and \textit{Athena}\xspace (10 ks) of a representative snapshot (i.e., $z=6.5$) and LOS of the two simulation runs. The satellite AGN will appear as multiple X-ray sources on scales of a few arcsec. This implies that, in addition to high sensitivity, excellent angular resolution, such as that provided by \textit{AXIS}\xspace and \textit{Lynx}\xspace, is required to detect them individually. To investigate this issue, we performed a blind detection run with \textit{wavdetect} on these images, and compared the detected sources (black stars in Fig.~\ref{fig:sim_image}) with the input AGN (colored circles): the identification of close objects like C1 and C2 is difficult even with missions with $\approx0.5$ arcsec angular resolution. The problem is clearly more evident with \textit{Athena}\xspace, due to its PSF of a few arcsec.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth ]{figures/sim_Xray_image.png}
\caption{Simulated X-ray observations in the 0.5--7 keV band of the most-massive AGN at $z=6.5$ and the surrounding satellite AGN in the \textit{AGNcone}\xspace (upper row) and \textit{AGNsphere}\xspace (lower row) simulations. From the leftmost to the rightmost columns, we simulated observations with \textit{Chandra}\xspace/ACIS-S (50 ks), \textit{Lynx}\xspace/HDXI (10 ks), \textit{AXIS}\xspace (10 ks), and \textit{Athena}\xspace/WFI (10 ks). For presentation purposes, the angular scale of the \textit{Athena}\xspace image is different from the other cases, due to the larger PSF. The circles mark the location of the simulated AGN for a representative line of sight, and are color coded as in Fig.~\ref{fig:masses}. The black stars mark the position of X-ray detected sources obtained with a blind detection procedure.}
\label{fig:sim_image}
\end{figure*}
\section{Discussion}\label{discussion}
As discussed in \S~\ref{Numerical_methods}, the outflow directions in the considered simulations are assumed not to be physically related to the host-galaxy properties and to be time-independent.
In particular, the \textit{AGNcone}\xspace simulation does not assume the outflow to be perpendicular to the plane of the host galaxy, as suggested by several observations of kpc-scale outflows or radio jets in the local universe \citep[e.g.,][]{Garcia-Burillo14, Cresci15, Morganti15, Venturi21}, where the outflow geometry can be studied in details, and by some numerical simulations \citep[e.g.,][]{Hopkins12}.
Several physical mechanisms can concur in the acceleration of winds at sub-pc scales that eventually produce large-scale outflows, including magneto-hydrodynamic effect (e.g., \citealt{Sadowki13}), thermal driving (e.g., \citealt{Proga07}), radiation pressure acceleration, either applied on dust (e.g., \citealt{Ishibashi15}) or mediated by UV transitions \citep[e.g.][]{Proga04,mizumoto2021}, which might produce outflows with different geometries. Moreover, the outflow geometry might be affected by interactions with the surrounding environment as the outflow expands \citep[e.g.][]{Nelson19, talbot2021}, and might change with time. Cosmological simulations cannot describe in detail such a complex, and largely unknown, physics and evolution of outflows with relatively simple numerical recipes.
The goal of this paper is to investigate the effect of two particular large-scale outflow geometries (i.e., a spherical outflow and a bi-conical outflow parametrized as described in \S~\ref{Numerical_methods}) on the observable properties of high-redshift AGN, regardless of the sub-grid physical mechanisms responsible for their acceleration. Extensive numerical simulations with identical initial conditions and physics except for the outflow parameters would be required to check whether and how the results are sensitive to different choices of the outflow parameters.
Kinetic feedback produced during the phases of fast accretion of SMBHs in the \cite{Barai18} simulations has a significant impact on the surrounding material and is required to match the predicted observable properties of bright AGN with observational results. One of the strongest pieces of evidence is the study of the gas extent in the AGN host galaxies (Fig.~\ref{fig:NH_radius}): the gas reservoirs in the \textit{noAGN}\xspace case (i.e., in the absence of AGN feedback) are always more compact than those derived from ALMA observations of $z>6$ QSOs (see also, e.g., \citealt{vanderVlugt19}). AGN feedback pushes the gas in the host galaxies to larger distances (i.e., up to a few kpc) from the centres, in agreement with observations (e.g., \citealt{Cicone15,Bischetti19, Venemans20, Izumi21}). Although other mechanisms related to AGN feedback may produce such an observable, by, for instance, preventing gas infall from large scales (e.g., \citealt{Trussler20}) or causing fluctuations in the gravitational potential, which may lead to a radial migration of the material (e.g., \citealt{vanderVlugt19}), \cite{Barai18} found that the mechanical removal of gas from the inner region of the host galaxies is the main process that affects their gas content in their simulations.
We underline that some $5<z<7$ star-forming ($1-70\,\mathrm{M_\odot\, yr^{-1}}$) galaxies have also been found to show both an extended [C II] halo \citep[e.g.,][]{Fujimoto20} and broad wings in the [C II] emission-line profile \citep[e.g.,][]{Gallerani18, Ginolfi20}, suggestive of outflows possibly powered by a yet undetected accreting MBH \citep[e.g.,][]{Orofino21}.
At $z<7$ the feedback produces a general decrease of the $N_H$ (Fig.~\ref{fig:NH_z_all}), allowing for the appearance of unobscured (i.e., $N_H<10^{22}\,\mathrm{ ~cm^{-2}}$) LOSs (Fig.~\ref{fig:fLOS_all}).
Such directions are most probably those along which known $z>6$ QSOs are preferentially observed, as the rest-frame UV selection of these objects requires low dust extinction. In fact, at $z\lesssim6.5$, when the feedback effect is the strongest, bright AGN in the \textit{AGNcone}\xspace simulation are able to reach the UV magnitudes observed for known $z>6$ QSOs (Fig.~\ref{fig:m1450_z}).
However, such LOSs represent only a fraction of the total LOSs of an AGN (see also, e.g., \citealt{Ni20, Trebitsch19, Lupi22}): more than half of the LOSs would appear too faint to be selected as high-redshift objects in current optical/near-IR surveys, suggesting that a large fraction of the high-redshift, intrinsically luminous QSO population is observationally missed due to strong UV extinction produced by the ISM only. The presence of a dusty torus on pc scales, which is not included in the simulations we have analysed, would further increase such a fraction.
The outflow geometry likely plays an important role: in the case of a conical outflow, SMBH accretion proceeds at maximum efficiency through the equatorial infall of gas until $\dot{M}\approx10-30\,\mathrm{M_\odot\,yr^{-1}}$ (Fig.~\ref{fig:mdot}), producing BHs with masses of $>10^9\,\mathrm{M_\odot}$ at $z=6-7$ (Fig.~\ref{fig:masses}). At these accretion rates, the feedback regulates further accretion and reduces the typical obscuring column density, in particular along the cone direction (Fig.~\ref{fig:Mollweide}). In the case of outflows launched along random directions, the feedback can affect the growth of the SMBH and the $N_H$ distribution even at lower accretion rates, resulting in $<10^9\,\mathrm{M_\odot}$ BHs at $z=6$, provided that the gas in the host galaxy is not too dense, unlike the case of S2. Thus, the ISM properties (i.e., $N_H$ and radial size of the gas) of the brightest AGN in the \textit{AGNsphere}\xspace run are in agreement with observations. However, by hindering the formation of $>10^{9}\,\mathrm{M_\odot}$ BHs, the spherical geometry of the feedback in \textit{AGNsphere}\xspace prevents AGN from reaching intrinsic luminosities comparable to known $z>6$ QSOs at most redshifts (Fig.~\ref{fig:m1450_z}).
Interestingly, even the most luminous AGN in \textit{AGNcone}\xspace cannot explain the detection of UV-bright QSOs at $z\approx7.5$ (Fig.~\ref{fig:m1450_z}), due to the combination of the relatively small BH masses, and hence low accretion rates, which, by construction, are capped at the Eddington rate, and typically high $N_H$ at that early cosmic time in this simulation. The existence of bright QSOs at $z\approx7.5$ \citep[e.g.,][]{Banados18a,Wang21a} requires different physical conditions for the SMBH formation and mass growth from those adopted in the considered simulations.\footnote{ As mentioned in \S~\ref{selection}, cosmic variance may affect our conclusions, as the simulations focus on a single cosmic region at high redshift.} Future numerical simulations may explore such conditions as viable ways to reconcile the expected and observed properties of $z>7$ AGN. Non-mutually exclusive possibilities are:
\noindent (a) different BH seeding mechanisms, that is, bright and massive QSOs discovered at $z\approx7.5$ may be grown from more massive BH seeds or have been seeded at earlier redshift than the SMBHs in the simulations.
\noindent (b) Sustained periods of super-Eddington accretion at $z>7.5$, whereas in the simulations the SMBH accretion rate is capped at the Eddington limit.
\noindent (c) Mass accretion characterized by a lower radiative efficiency than the value used in the simulations (i.e., $\epsilon_r=0.1$). In this case, the mass that is not converted into radiation contributes to the growth of SMBH, which can reach higher masses than those found in simulations at a given time. For instance, \cite{Davies19} report observational evidence for possible low radiation efficiency ($\epsilon_r\approx0.001$) in high-redshift QSOs.
\noindent (d) High-redshift AGN typically reside in regions which are even more overdense than that investigated in the \cite{Barai18} simulations, thus favouring the formation of SMBHs at earlier epochs. However, this possibility would arguably make the discrepancy between the observed and expected number of multiple X-ray detected AGN on small scales even worse. In addition, observational studies return contradictory results on the typical large-scale environment of high-redshift AGN \citep[e.g., ][]{Ota18,Mazzucchelli19, Mignoli20,Overzier21}.
The analysis that we have performed demonstrates that the comparison between several observable properties of AGN predicted by the \cite{Barai18} simulations and the observational results, including both the properties of the individual galaxies and the environment,
can help us to validate the recipes and assumptions adopted in numerical simulations. In particular, we found that AGN in the considered simulations match the gas radial distributions and apparent UV magnitudes of high-redshift QSOs. In addition, the same set of simulations has been demonstrated to reproduce well a number of physical properties of $z>6$ QSOs, such as dust properties \citep{DiMascia21b}, multi-wavelength spectral energy distribution \citep{DiMascia21a}, and the number of UV-detected and [C II]-detected satellite galaxies (Zana et al. accepted).
However, we also found that the predicted number of X-ray detectable satellite AGN located over small scales around luminous high-redshift QSOs both in the \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace simulations does not agree with the observational results.
This observable is relatively easy to estimate from simulations as it depends primarily on the BH accretion rate, once a suitable conversion to X-ray luminosity is assumed. Moreover, gas and dust absorption does not significantly affect the observed X-ray emission from high-redshift AGN, as opposed to the UV emission, up to high column densities (log$\frac{N_\mathrm{H}}{\mathrm{cm^{-2}}}\approx23.5-24.0$; see \S~\ref{Xray_obsc} and \S~\ref{UV}).
The mismatch in the number of multiple X-ray detected AGN on small scales between simulations and observations may be related to numerical issues and physical prescriptions. In particular, the simplistic BH seeding recipe implemented in the considered simulations (i.e., a $10^5\,\mathrm{M_\odot}$ BH is placed in the centre of a galaxy when it reaches a given mass threshold) naturally leads to the formation of a large number of SMBHs, which would appear as bright AGN at later cosmic times. Similar seeding recipes have been commonly adopted by most cosmological simulations (e.g., \citealt{Costa14}, \citealt{DiMatteo17}, \citealt{Barai18}, \citealt{Smidt18}, \citealt{Lupi19}, \citealt{Valentini21}), and typically mimic the ``heavy seed'' formation channel for SMBHs \citep[e.g.,][]{Lodato06, Ferrara14}. However, theoretical models of ``heavy seed'' formation require stringent physical conditions on, e.g., metallicity, physical state of the gas, and radiation fields \citep[e.g.,][]{Ferrara14}. Accounting for such conditions in cosmological simulations is particularly difficult, but would reduce the number of formed SMBHs, and thus the discrepancy with observational results.
Another possibility is that observed QSOs at high redshift do not reside in regions as dense as those probed in the analysed simulations (but see, e.g., Zana et al. accepted). In this case, the formation of multiple SMBHs is expected to be hindered, helping us reconcile the expected number of X-ray sources with observational results. In addition, we would also expect to form less massive BHs, with direct consequences on the observational expectations discussed in this paper, as the BH mass is tightly linked with the maximum accretion rate, and thus AGN luminosity and feedback strength. Qualitatively, we would expect to derive fainter rest-frame UV and X-ray fluxes, weaker feedback, and, as a consequence (see Fig.~\ref{fig:NH_radius}), more compact gas reservoirs (i.e., similar to the \textit{noAGN}\xspace case) than the values discussed in \S~\ref{radial_distro}, \S~\ref{UV}, and \S~\ref{environment}.
Future X-ray facilities will provide us with the required sensitivity and angular resolution to investigate the presence of multiple faint AGN around bright high-redshift QSOs down to unprecedented flux limits (see \S~\ref{future_facilities}).
\section{Summary and conclusions} \label{conclusions}
We studied the observable properties of $z=6-10$ bright
AGN in a suite of zoom-in cosmological simulations by \cite{Barai18} characterized by the inclusion of AGN kinetic feedback with either a bi-conical (\textit{AGNcone}\xspace) or a spherical (\textit{AGNsphere}\xspace) outflow geometry. We focused our investigation on the gas column density and size in the host galaxies, the AGN rest-frame UV magnitudes and X-ray fluxes, and the detectability of systems of multiple AGN over a few kpc scale in the X-ray band. We compared these quantities with a control simulation in which SMBHs are not seeded (i.e., \textit{noAGN}\xspace), and with observational results of $z>6$ AGN. We summarize our findings as follows.
\begin{itemize}
\item \textit{AGNcone}\xspace produces three bright AGN that grow up to $5\times10^8 < M_{\mathrm{BH}}<5\times10^9\,\mathrm{M_\odot}$ at $z=6$. These objects are characterized by a steady increase of their accretion rate up to $\approx10-30\,\mathrm{M_\odot\,yr^{-1}}$. Once such high values are reached (at $z\approx6.5-7$), the strong AGN feedback prevents further increase of the accretion rate. This behaviour is linked to the bi-conical geometry of the outflow, which allows a steady infall of material along the equatorial directions, at least until the feedback grows strong enough to affect most of the gas in the galaxy halo.
In \textit{AGNsphere}\xspace, the spherical geometry of the outflow affects gas accretion already at low and moderate SMBH growth rate. For this reason, the two bright AGN produced in \textit{AGNsphere}\xspace reach lower values of BH masses (i.e., $2\times10^8 < M_{\mathrm{BH}}<5\times10^8\,\mathrm{M_\odot}$) and accretion rates ($\dot{M}<10\,\mathrm{M_\odot\,yr^{-1}}$) than objects in \textit{AGNcone}\xspace.
\item AGN host galaxies in \textit{AGNcone}\xspace have gas column densities of $N_H\approx10^{23}\,\mathrm{cm^{-2}}$ from their formation up to $z=6.5-7$, when $N_H$ presents a remarkable drop due to the strong AGN feedback. In fact, the $N_H$ in matched galaxies in \textit{noAGN}\xspace continues to increase during the entire considered redshift range. The brightest AGN in \textit{AGNsphere}\xspace shows a behaviour similar to that of the AGN in \textit{AGNcone}\xspace, although its $N_H$ is typically slightly lower. We interpret this difference again as due to the assumed spherical symmetry of the outflow. Instead, the second bright AGN in \textit{AGNsphere}\xspace does not reach accretion rates sufficiently high to significantly affect the gas in the host galaxy.
Our findings are consistent with the upper limits on $N_H$ recently reported for a set of $z>6$ AGN observed in the X-rays.
\item Kinetic feedback is required to match the gas extent reported for high-redshift QSOs (i.e., up to a few kpc). In fact, galaxies in \textit{noAGN}\xspace present typical gas sizes of $<1$ kpc, while the extents of the gas reservoirs of AGN in \textit{AGNcone}\xspace and \textit{AGNsphere}\xspace increase up to the observed values of a few kpc at $z\lesssim7$. The exception is the second bright AGN in \textit{AGNsphere}\xspace, due to its relatively low values of accretion rate.
\item All AGN in the simulations would appear as obscured (i.e., $N_H>10^{22}\,\mathrm{cm^{-2}}$) along all lines of sight (LOSs) at $z>7$. These objects would be missed by currently employed UV-based selection methods, which are heavily affected by dust extinction, and would require observations in different bands (e.g., X-ray or infrared) to be unveiled. At later cosmic times, a fraction of LOSs (up to $\approx80\%$, depending on the specific AGN and redshift) has $N_H<10^{22}\,\mathrm{cm^{-2}}$. These are the preferential directions along which known, UV-selected $z>6$ QSOs are observed.
\item Under simple, but reasonable, assumptions on the gas-to-dust mass scaling and dust distribution, we estimate the apparent UV magnitudes ($m_{1450}$) of the AGN in the simulations along different LOSs. We found that AGN in \textit{AGNcone}\xspace have $m_{1450}$ consistent with those observed for real high-redshift QSOs (i.e., $m_{1450}<25$) along $\lesssim50\%$ of the LOSs at $z<7$. AGN in \textit{AGNsphere}\xspace, instead, have fainter magnitudes, due to the lower intrinsic luminosities and, for the second AGN, the high extinction levels along most of the LOSs. No AGN in the simulations can reproduce the observed UV magnitudes of the few $z\approx7.5$ QSOs known to date, whose formation and accretion history are likely not well captured by the prescriptions assumed in the simulations.
\item The presence of multiple bright AGN over scales of a few kpc led us to investigate their detectability in X-ray observations with \textit{Chandra}\xspace, and to compare the results with real observations of $z>6$ QSOs. We found that the \textit{AGNcone}\xspace run significantly overpredicts the number of X-ray detected multiple AGN at high redshift. Instead, \textit{AGNsphere}\xspace produces AGN with a lower X-ray detection rate than the typical values derived from relatively shallow (i.e., $30$ ks) observations, while it is consistent with the results obtained with longer (i.e., $50$ ks) observations.
\end{itemize}
These results demonstrate that the AGN in the considered simulations have physical properties consistent with those of real QSOs as far as the column density and extent of the gas in the host galaxies and the UV magnitudes are concerned. A bi-conical geometry for the outflow is favored over a spherical geometry, as it reproduces AGN with the high luminosities and SMBH masses observed for $z=6-7$ QSOs. However, neither simulation can explain the recent discovery of luminous QSOs at $z\approx7.5$, which may have formed at higher redshift than the seeding epoch assumed in our simulations, or may have undergone extensive periods of super-Eddington accretion.
Moreover, we showed that the number of multiple AGN detectable in the X-ray band over few-kpc scales is the observable property that the considered simulations struggled the most to reproduce. We propose that this issue can be due to the simplistic BH seeding methods generally implemented in cosmological simulations, which do not account for the complex physics related to the formation and rapid growth of massive BHs in the early Universe. Future X-ray observatories will provide us with the sensitivity required to investigate the possible presence of multiple faint AGN satellites around luminous QSOs at high redshift.
\section*{acknowledgements}
We thank the anonymous referee for their valuable comments.
This research has made use of data obtained from the Chandra Data Archive and the Chandra Source Catalog, and software provided by the Chandra X-ray Center (CXC) in the application packages CIAO and Sherpa.
This research made use of Astropy,\footnote{\url{http://www.astropy.org}} a community-developed core Python package for Astronomy \citep{Astropy13, Astropy18}. SG acknowledges support from the PRIN-MIUR 2017 grant (PI Fabrizio Fiore).
\section*{Data Availability Statement}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
|
1,314,259,995,603 | arxiv | \section{Introduction}
\label{sec:intro}
Many biological and technological soft materials involve fixed and mobile charges, for which long-range electrostatic forces play a major role in their structure and function. For instance, ion channels embedded in cellular membranes enable incredibly selective and controllable transmembrane transport, vital for signal transduction in the nervous system and other processes of life~\cite{jensen2010principles, yoder2018gating, kato2018structural}. Importantly, selective transport of ions is paramount also in numerous present-day applications with synthetic materials, ranging from filtration, desalination, battery electrolytes, fuel cells, biomimetic nanochannels, drug delivery, nanocatalysis, and many more~\cite{li2016designing,lu2019tuning, kusoglu2017new, xie2018bacteriorhodopsin, xin2019high, zhu2020bioinspired, xu2020molecular, liu2020neutralization, widstrom2021water, epsztein2020towards}. A common challenge in these applications is to design and manufacture polymeric membranes in some solvent environments to achieve controllable permeability and selectivity (“permselectivity”) in the charge transport for the desired function.
Molecular mass transport in dense polymeric membranes is typically governed by the solution--diffusion mechanism~\cite{wijmans1995solution}: Small ions and molecules first partition into the polymer matrix from bulk solvent and then diffuse in the polymer under external fields or chemical-potential gradients.
In this process, ions generally move through networks of nanochannels or porous structures of various complex morphologies and topologies. These pores are often filled with high-dielectric solvents, such as water. Whereas it is clear that bulk solvent is an indispensable medium in which charged molecules are solvated and transported, it is not well known how the solvent behaves and influences charge transport inside the dense, low dielectric polymer matrix.
The poor understanding is mainly due to the lack of experimental techniques with the necessary temporal and spatial resolution to probe the kinetics of ions.
Fortunately, computer simulations are a powerful complementary tool that offers insight into those processes~\cite{epsztein2020towards}.
For instance, recent computer simulations revealed that water distributes very heterogeneously in dense polymers in fractal-like cluster structures embedded in the nanometer-sized voids of the polymer matrix~\cite{kanduc2018diffusion, mabuchi2018relationship}. The nanoclustered water was found to act as an important player in the penetrant diffusion and also to govern ion partitioning and permeability~\cite{kanduc2019aqueous, kanduc2021shape, widstrom2021water}.
Whereas these simulations provide the first unprecedented views on the cluster structure inside polymer matrices and its far-reaching effects on transport, many quantitative details still remain elusive. For example, how does cluster shape, interfaces, or connectivity affect ion partitioning or transport in detail? How is the water cluster structure affected and controlled by temperature or water volume fraction? What role does the ‘chemistry’ of ions play, that is, ionic shape, polarity, and charge structure? Are some of these aspects universal and addressable by relatively non-specific continuum concepts, even by empirical laws?
In this paper, we present our view on the role of aqueous nanoclusters (droplets and channels) in polymer networks on ion diffusion, solvation, and permeability and address a few of the above open questions. The perspective is based foremost on our recent, extensive molecular simulations of ion transport in aqueous poly($N$-isopropylacrylamide) (PNIPAM) systems~\cite{kanduc2018diffusion, kanduc2019free, kanduc2019aqueous, kanduc2021shape}. We do not provide detailed answers to all the open questions but discuss possible starting points and avenues for further quantitative developments. For this, we first take a look at polymer morphologies and describe how nanoclusters evolve in space. We then present challenges related to the low dielectric environment of polymers and the fact that those polymers that selectively transport ions are highly heterogeneous in terms of polymer and water domains. Importantly, based on the solution--diffusion mechanism, our discussion involves two types of material parameters: (i) The solvation free energy of the ion species in the material, and (ii) the diffusion coefficients of the ion species~\cite{yaroshchuk2001non}. We clarify how solvation and partitioning can be characterized and interpreted by the free energy needed to transfer the ion from bulk solvent (e.g., water) into the material. We then turn to diffusion and explain activated hopping mechanisms and the role of water. We discuss how diffusion and solvation contribute to ion permeability and selectivity in dense polymer membranes. Finally, we bring our view in relation to some contemporary applications and challenges for membrane design.
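As a minimal numerical illustration of how these two material parameters combine within the solution--diffusion picture, consider the schematic estimate below; all parameter values are arbitrary, order-of-magnitude placeholders, not results from our simulations.
\begin{verbatim}
# Minimal illustration of the solution-diffusion picture: the membrane
# permeability is the product of a partition (solubility) coefficient, set by
# the transfer free energy, and the diffusion coefficient in the material.
# All numbers are arbitrary placeholders.
import numpy as np

kBT = 4.11e-21            # J, thermal energy at ~298 K
dG  = 5.0 * kBT           # transfer free energy water -> polymer (assumed)
K   = np.exp(-dG / kBT)   # partition coefficient
D   = 1.0e-11             # m^2/s, diffusion coefficient in the polymer
P   = K * D               # permeability (m^2/s)

# steady-state flux through a membrane of thickness d with a concentration
# difference dc between the two bulk solutions: j = P * dc / d
d, dc = 100e-9, 1.0       # m, mol/m^3
print("P =", P, "m^2/s   j =", P * dc / d, "mol m^-2 s^-1")
\end{verbatim}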
\section{From nanodroplets to water channels}
Water molecules can penetrate various nanoporous materials, ranging from liquids, soft polymers, solid minerals (e.g., zeolites) to biological structures (e.g., plant vessels and wood)~\cite{kusoglu2017new, jiang2018low, santoro2019insertion, medeiros2019characterization, chen2019wood}.
Yet, a prerequisite for water to do so is a sufficient amount of hydrophilic groups in the material.
Sorption (i.e., uptake) of water strongly modifies the material properties and
is hence of great relevance for chemistry, materials science, and Earth science. However, in this article, we will focus solely on polymers.
A famous example of a neutral polymer in applications of hydrogels (hydrated polymer networks)
is PNIPAM, which will be central to our discussion. PNIPAM undergoes a sharp transition from good to poor solvent conditions upon heating, thus making it a thermoresponsive polymer~\cite{halperin2015poly}. It has a hydrophobic backbone and the polar amide group (--CO--NH--) in its sidechains, which gives the polymer its tunable hydrophilic character.
On the other hand, charged groups are found in ionomers, such as Nafion, a sulfonated tetrafluoroethylene, developed by DuPont in the '70s~\cite{kusoglu2017new}. This ionomer also has a hydrophobic backbone, whereas the sidechains are terminated by charged sulfonate groups (--SO$_3^{-}$).
Water uptake in hydrophilic polymers is typically a multistep and multiscale process, driven by complex interactions between water, the hydrophilic, and the hydrophobic domains in the polymer.
How much water a polymer takes up depends on many parameters, such as water activity (tuned by humidity or dissolved solutes), temperature, and the chemical composition of the polymer. Very generally, the uptake is larger for higher water activity and for polymers with more hydrophilic groups and fewer cross-linkers~\cite{aryal2018impact, xu2020molecular}.
\begin{figure}[h]\begin{center}
\begin{minipage}[b]{0.49\textwidth}\begin{center}
\includegraphics[width=\textwidth]{uptake.pdf}
\end{center}\end{minipage}
\caption{
Generic sorption isotherm of a hydrophilic polymer, indicating the amounts of (a) bound, (b) intermediate, and (c) free water. Depictions on the right show the corresponding growing water domains around hydrophilic groups.}
\label{fig:uptake}
\end{center}\end{figure}
\begin{figure*}[t]\begin{center}
\begin{minipage}[b]{0.96\textwidth}\begin{center}
\includegraphics[width=\textwidth]{Droplets.pdf}
\end{center}\end{minipage}
\caption{Structural idealization of polymer regimes upon hydration, showing dry polymer domains (yellow) and water domains (blue).
At very low hydration (low water packing fractions in the polymer), water forms individual nanosized domains in the form of droplets or clusters throughout the phase, which grow with increasing hydration. At even higher hydration, they connect and form a network, which goes on to an inverted structure and finally to a polymer network in water at very high hydration levels (high water volume fraction, close to unity). The figure was inspired by a similar one in Ref.~\cite{kusoglu2017new}.
}
\label{fig:regimes}
\end{center}\end{figure*}
A generic sorption isotherm---the amount of sorbed water versus the water activity---is shown in Fig.~\ref{fig:uptake}.
In the '70s, Jhon and Andrade introduced a three-state classification of the water structure inside hydrogels based on their observations, which remains a useful concept up to this day~\cite{jhon1973water}: (a) ‘‘Bound’’ water is formed by water molecules that interact directly and strongly with primary hydrophilic sites, such that it behaves dynamically and thermodynamically as a part of the polymer chains. (b) ‘‘Intermediate’’ water consists of water molecules with weaker interaction with polymeric chains. Finally, (c) ‘‘free’’ water is formed by water molecules with negligible interactions with polymer chains and retains the properties of bulk water.
A simplified and idealized view on different morphological regimes of water in polymers is sketched in Fig.~\ref{fig:regimes}.
At very low hydration levels, water molecules localize around hydrophilic groups and form small water domains---a kind of nanoclusters or nanodroplets.
With increasing hydration, the clusters grow and start connecting with each other.
The water morphology eventually undergoes a percolation transition from isolated water clusters to a three-dimensional interconnected network of water channels.
Once the water amount becomes excessive, we can speak of an inverted structure, ultimately leading to a swollen polymer network in water, as the limiting scenario.
Our understanding of the transport of small molecules (i.e., much smaller than the mesh size of the network) in swollen networks is generally better than that in poorly hydrated, collapsed states.
Transport in swollen states, featuring large amounts of water, can be more or less successfully described by various continuum, perturbative approaches~\cite{amsden1998solute}. In contrast, transport in poorly hydrated states is usually much more sensitive to the molecular architecture of the polymer and penetrants.
This means that already tiny chemical modifications in the structure can change the transport properties enormously. Yet, precisely this trait gives low hydrated materials their ability to be highly selective and favor passing certain kinds of penetrants over the others~\cite{lu2019tuning, epsztein2019activation, kanduc2021shape}.
In the rest of the paper, we focus exclusively on collapsed, poorly hydrated polymer states.
Water domains in these states have been identified in numerous computer simulations, an example of which is shown in Fig.~\ref{fig:clusters}A for a dense PNIPAM polymer structure containing around 20 wt\% of water---thus mimicking a collapsed hydrogel at high temperature~\cite{kanduc2018diffusion}.
Individual nanosized water clusters (each one depicted in a different color in Fig.~\ref{fig:clusters}B) are far from being compact structures but rather of lacy, fractal-like forms. Their radius of gyration roughly follows a square-root dependence on the number of water molecules they contain, $R_\trm{g}\propto N_\trm{w}^{1/2}$ for small clusters, which resembles a random walk~\cite{kanduc2018diffusion}. Besides, the clusters are polydisperse~\cite{kanduc2018diffusion, mabuchi2018relationship}, approximately following a power-law distribution, as shown in Fig.~\ref{fig:clusters}C.
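A schematic version of such a cluster analysis is sketched below; the coordinates are random placeholders, the O--O neighbour cutoff of 0.35 nm is only an indicative choice, and periodic boundary conditions are ignored for brevity.
\begin{verbatim}
# Sketch of a water-cluster analysis: oxygens closer than a cutoff are
# connected, clusters are the connected components of that graph, and the
# radius of gyration is computed per cluster.  Coordinates are placeholders.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(1)
xyz = rng.uniform(0.0, 5.0, size=(2000, 3))        # nm, placeholder positions
cutoff = 0.35                                       # nm, O-O neighbour cutoff

pairs = np.array(list(cKDTree(xyz).query_pairs(cutoff)))
n = len(xyz)
adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
n_cl, labels = connected_components(adj, directed=False)

sizes, rg = [], []
for c in range(n_cl):
    r = xyz[labels == c]
    if len(r) < 2:
        continue
    sizes.append(len(r))
    rg.append(np.sqrt(((r - r.mean(axis=0))**2).sum(axis=1).mean()))

# slope of log Rg versus log N (a value close to 1/2 was found for the
# simulated PNIPAM water clusters)
print(np.polyfit(np.log(sizes), np.log(rg), 1)[0])
\end{verbatim}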
The formation of individual clusters can be understood as a competition between water--water interactions (favoring two-phase separation) and the interactions between water and the hydrophilic polymer groups (favoring dispersion of water molecules).
In an entirely nonpolar material (such as oil), the water completely phase separates from the rest of the material, ending up in the form of one single water drop.
\begin{figure*}[t]\begin{center}
\begin{minipage}[b]{0.62\textwidth}\begin{center}
\includegraphics[width=\textwidth]{MD_clusters.pdf}
\end{center}\end{minipage}\hspace{2ex}
\begin{minipage}[b]{0.28\textwidth}\begin{center}
\includegraphics[width=\textwidth]{PCL-eps-converted-to.pdf}
\end{center}\end{minipage}
\caption{
(A) Molecular dynamics simulation snapshot of PNIPAM with 19 wt\% of water. (B) The same configuration showing individual water clusters distinguished by different colors (shown as connected water oxygen atoms). Panels (A) and (B) reprinted with permission from Ref.~\cite{kanduc2018diffusion}, copyright 2018 American Chemical Society.
(C) Size distribution of water clusters in the collapsed polymer at three different water fractions and temperatures (water fractions result from chemical equilibrium with bulk water). Data points taken from Ref.~\cite{kanduc2018diffusion}.
}
\label{fig:clusters}
\end{center}\end{figure*}
\section{Partitioning of ions in heterogeneous membranes}
We now turn our attention to the question of how easily ions can enter a polymer material that features low hydration and thus a low dielectric environment.
It has long been known that nonpolar or weakly polar media, such as poorly hydrated polymers or lipid bilayers, act as a barrier to the passage of ions between two aqueous solutions.
Since the electrostatic interaction is long-ranged, the leading term in the solvation free energy of a charged species in a given medium can be estimated within a continuum dielectric description~\cite{parsegian1969energy}.
For a spherical elementary charge $e$ of radius $a$
in an infinitely large medium of dielectric constant $\varepsilon_i$, the electrostatic self-energy is expressed in terms of the Born charging energy as $G_\trm{B}=e^2/(8\pi\varepsilon_i\varepsilon_0 a)$, where $\varepsilon_0$ is the vacuum permittivity.
Thus, the work needed to transfer the charge from water (of dielectric constant $\varepsilon_\trm{w}=80$) into a polymer phase (of dielectric constant $\varepsilon_\trm{p}$) is equal to
\begin{equation}
\Delta G_\trm{B}=\frac{e^2}{8\pi\varepsilon_0 a}\left(\frac{1}{\varepsilon_\trm{p}}-\frac{1}{\varepsilon_\trm{w}}\right)
\label{eq:dGsphere}
\end{equation}
and is referred to as the Born transfer free energy. To get a feeling for the energy scale, let us consider a monovalent ion of radius $a=0.25$~nm and a pure hydrocarbon material (e.g., oil) with $\varepsilon_\trm{p}=2$. Equation~\ref{eq:dGsphere} then amounts to a considerable value of $\Delta G_\trm{B}=55\,k_{\mathrm{B}} T$, where $k_{\mathrm{B}} T$ is the thermal energy---$k_{\mathrm{B}}$ being the Boltzmann constant and $T$ the temperature.
In this simple Born solvation picture, one can immediately realize that a low dielectric constant, as encountered in hydrocarbon materials, leads to notable electrostatic penalties for ions.
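This estimate is straightforward to reproduce numerically, as the following short sketch shows (room temperature and the parameter values quoted above are assumed).
\begin{verbatim}
# Continuum estimate of Eq. (1): Born transfer free energy of a bare
# monovalent ion (a = 0.25 nm) from water into a hydrocarbon-like medium,
# reproducing the ~55 kBT quoted in the text.
import numpy as np

e, eps0, kBT = 1.602e-19, 8.854e-12, 4.11e-21   # C, F/m, J (room temperature)
eps_w, eps_p = 80.0, 2.0
a = 0.25e-9                                      # m

dG = e**2 / (8.0 * np.pi * eps0 * a) * (1.0 / eps_p - 1.0 / eps_w)
print(dG / kBT)   # ~55
\end{verbatim}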
\begin{figure*}[t]\begin{center}
\begin{minipage}[b]{0.66\textwidth}\begin{center}
\includegraphics[width=\textwidth]{Parsegian-geom.pdf}
\end{center}\end{minipage}\hspace{3ex}
\begin{minipage}[b]{0.3\textwidth}\begin{center}
\includegraphics[width=\textwidth]{energies.eps}
\end{center}\end{minipage}
\caption{(A) Various ways by which an ion of radius $a$ can enter a dielectric medium (yellow) of dielectric constant $\varepsilon_\trm{p}$. The blue regions depict water with dielectric constant $\varepsilon_\trm{w}$; see text for details. (B) Calculated Born transfer free energies from bulk water into the configurations shown in (A), assuming $\varepsilon_\trm{w}=80$, $\varepsilon_\trm{p}=2$, and $a=0.25$~nm~\cite{parsegian1969energy}.
}
\label{fig:energies}
\end{center}\end{figure*}
There are, however, important factors that lower the above estimate in various cases
dealing with ions crossing low dielectric materials.
Prototypical scenarios of ion crossings were analyzed by Parsegian more than half a century ago~\cite{parsegian1969energy}, which we briefly recap in the following, see Fig.~\ref{fig:energies}.
In the first scenario, the low dielectric medium is a planar slab of thickness $b$, bounded on two sides by semi-infinite regions of water (Fig.~\ref{fig:energies}A(i)).
The finite thickness of the slab material causes the free energy penalty to decrease because of the high dielectric constant of water $\varepsilon_\trm{w}$ outside. For an ion at the center of the slab (where the penalty is the highest), the decrement relative to the infinite-medium transfer free energy (Eq.~\ref{eq:dGsphere}) due to the finite thickness is
\begin{equation}
\Delta\Delta G=-\frac{e^2}{4\pi\varepsilon_\trm{p}\varepsilon_0 b}\,{\operatorname{ln}}\frac{2\varepsilon_\trm{w}}{\varepsilon_\trm{w}+\varepsilon_\trm{p}}
\label{eq:GB}
\end{equation}
Figure \ref{fig:energies}B shows the calculated transfer free energy for several different slab thicknesses and reveals that the influence of the finite thickness is negligible for membranes more than several nanometers across.
However, an ion may not be entirely stripped of its hydration shell but may instead remain entrapped in a water shell, which acts as a ``carrier''. In the simplest view, the hydration shell can be represented as a spherical water droplet (Fig.~\ref{fig:energies}A(ii)) of radius $r_\trm{w}$, such that the transfer free energy from the water phase into the center of the droplet is
\begin{equation}
\Delta G_\trm{B}=\frac{e^2}{8\pi\varepsilon_0 r_\trm{w}}\left(\frac{1}{\varepsilon_\trm{p}}-\frac{1}{\varepsilon_\trm{w}}\right)
\label{eq:GBdroplet}
\end{equation}
This expression is essentially the same as Eq.~\ref{eq:dGsphere}, but with the bare ion radius $a$ replaced by the droplet radius $r_\trm{w}$, and precisely this detail substantially reduces the free energy. As depicted in Fig.~\ref{fig:energies}B, the hydration shell of radius 0.5~nm reduces the free energy 2-fold compared with the completely dehydrated scenario, whereas a shell of a one-nm-radius yields a 4-fold reduction.
Finally, an ion can pass a low-dielectric medium through a water pore or channel.
The channel can be represented as a very long cylinder of radius $r_\trm{w}$ and the dielectric constant of water $\varepsilon_\trm{w}$ (Fig.~\ref{fig:energies}A(iii)). With the surrounding dielectric constant of $\varepsilon_\trm{p}$, the work for transferring a charge from bulk water into the middle of the cylinder is
\begin{equation}
\Delta G_\trm{B}=\frac{e^2}{4\pi\varepsilon_\trm{p}\varepsilon_0 r_\trm{w}} F\left(\frac{\varepsilon_\trm{p}}{\varepsilon_\trm{w}}\right)
\end{equation}
The dimensionless function $F$ should be calculated numerically (see, e.g., Refs.~\cite{parsegian1969energy, cui2006electrostatic}). For $\varepsilon_\trm{p}/\varepsilon_\trm{w}=2/80$, its value is around $F\approx 0.165$~\cite{parsegian1969energy}.
For the cylindrical channel, similarly as for the droplet carrier, the transfer free energy is inversely proportional to the pore radius and is consequently much lower than that for the bare ion (see Fig.~\ref{fig:energies}B).
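In the same spirit, a minimal sketch for the droplet and channel geometries; it is purely illustrative, the value $F\approx0.165$ is the one quoted above from Ref.~\cite{parsegian1969energy}, and the radii are arbitrary choices.
\begin{verbatim}
import numpy as np

e = 1.602176634e-19
eps0 = 8.8541878128e-12
kB_T = 1.380649e-23 * 300
eps_w, eps_p = 80.0, 2.0

def dG_droplet(r_w):
    """Born transfer free energy into the center of a spherical water droplet
    of radius r_w embedded in the polymer (Eq. GBdroplet)."""
    return e**2 / (8 * np.pi * eps0 * r_w) * (1/eps_p - 1/eps_w)

def dG_channel(r_w, F=0.165):
    """Transfer free energy into the middle of a long water-filled cylinder of
    radius r_w; F is the numerically computed dimensionless function for
    eps_p/eps_w = 2/80."""
    return e**2 / (4 * np.pi * eps_p * eps0 * r_w) * F

for r in (0.25e-9, 0.5e-9, 1.0e-9):
    print(f"r_w = {r*1e9:4.2f} nm: droplet {dG_droplet(r)/kB_T:5.1f} kBT,"
          f" channel {dG_channel(r)/kB_T:5.1f} kBT")
\end{verbatim}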
Another relevant effect that reduces the electrostatic penalty is the increase of the global dielectric constant of the polymer due to water clusters. While pure hydrocarbons typically feature $\varepsilon_\trm{p}\approx2$, introducing water into the polymer matrix gradually increases the dielectric constant. Clearly, in the limit of a highly swollen network, $\varepsilon_\trm{p}$ approaches that of bulk water, $\varepsilon_\trm{w}$.
Collapsed PNIPAM hydrogels have a dielectric constant much below that of bulk water; yet already with a typical value of $\varepsilon_\trm{p}\approx10$, the transfer free energies in Fig.~\ref{fig:energies} are reduced by around a factor of 5.
Even though the above simple calculations are based on a continuum dielectric model and neglect molecular details, they are very illustrative and allow drawing this fundamental conclusion:
Removing the entire hydration shell from an ion is energetically too costly. Reasonable energies associated with the transfer of ions into low dielectric materials are inseparably linked to a hydration carrier~\cite{parsegian1969energy}.
This notion has been confirmed and studied by many computer simulations in various contexts~\cite{kanduc2019free,duvail2019uo, widstrom2021water}. Figure~\ref{fig:ions}A shows a snapshot of chloride ions, hydrated with water, in a collapsed PNIPAM phase~\cite{kanduc2019free}.
Despite the irregular shapes of the water clusters, a spherical approximation of the clusters (as depicted in Fig.~\ref{fig:ions}B top) turns out to be
good enough for simple estimates involving monatomic ions~\cite{kanduc2019aqueous}.
Things get more complicated, however, with some molecular ions that are not well hydrated, as we will discuss later on.
\begin{figure*}[t]\begin{center}
\begin{minipage}[b]{0.36\textwidth}\begin{center}
\includegraphics[width=\textwidth]{Ion-snap.pdf}
\end{center}\end{minipage}\hspace{1ex}
\begin{minipage}[b]{0.25\textwidth}\begin{center}
\includegraphics[width=\textwidth]{sketch-solvation.pdf}
\end{center}\end{minipage}\hspace{1ex}
\begin{minipage}[b]{0.35\textwidth}\begin{center}
\includegraphics[width=\textwidth]{dG.eps}
\end{center}\end{minipage}
\caption{
(A) Snapshot of chloride ions in a PNIPAM phase (only water molecules and ions are shown; the polymer is omitted). Reprinted with permission from Ref.~\cite{kanduc2019aqueous}, copyright 2019 American Chemical Society.
(B)~Continuum picture of an ion encapsulated by a water cluster (top) and a larger molecular ion that partially sticks out of the hydrating cluster (bottom).
(C)~Transfer free energies of ions from water into collapsed PNIPAM obtained from simulations (red circles)~\cite{kanduc2019aqueous}. The blue dashed lines depict the contributions from the water interface potential $\pm e\psi_\trm{s}$, the green solid lines are the added Born prediction (Eq.~\ref{eq:GBdroplet}) of $\Delta G_\trm{B}=2.6\,k_{\mathrm{B}} T$ for monatomic ions hydrated by water droplets of radius $r_\trm{w}=1$~nm. Orange lines are Born predictions~\cite{kanduc2019aqueous} for nitrophenolate in the case of full and the actual partial hydration (see text). Adapted with permission from Ref.~\cite{kanduc2019aqueous}, copyright 2019 American Chemical Society.
}
\label{fig:ions}
\end{center}\end{figure*}
A subtle ingredient to the story of ionic solvation, not considered in the Born solvation picture, is the water interface potential.
The water interface at a nonpolar medium (e.g., air or hydrocarbon) acquires an electrostatic potential stemming from the ordering of water dipoles at the boundary.
There is a lack of consensus about the value of the interface potential, yet classical simulations typically give the value of around $\psi_\trm{s} \approx -$0.5 V with respect to the surrounding nonpolar medium~\cite{vacha2011orientation, caleman2011atomistic, beck2013influence, kanduc2019free}.
It turns out that this potential is unimportant in the vast majority of cases. When an ion enters a nanocluster from a bulk water phase, it crosses two water boundaries, and with that, the two opposing contributions from the surface potential cancel one another. However, if the potential at the macroscopic water--polymer interface is screened by ions, only the potential jump at the nanocluster remains. The interface potential contribution from water nanoclusters can be observed in simulations for single-ion transfer free energies, as shown in Fig.~\ref{fig:ions}C. There, the distinction between cations and anions (red circles) is primarily due to the water potential of nanoclusters, $\pm e\psi_\trm{s}=\pm 17\,k_{\mathrm{B}} T$ (indicated by blue dashed lines)~\cite{kanduc2019aqueous}. The estimated Born free energy from Eq.~\ref{eq:GBdroplet} for a spherical droplet with $r_\trm{w}=1$~nm and $\varepsilon_\trm{p}=8.5$ (estimated from the simulations~\cite{kanduc2019aqueous}) is
$\Delta G_\trm{B}=2.6~k_{\mathrm{B}} T$, and is added on top of the interface potential contributions as green solid lines in Fig.~\ref{fig:ions}C.
Despite the significant contribution of the water interface potential to the single-ion transfer free energies, it has, on the other hand, no influence on the final concentrations of fully hydrated ions in the thermodynamic limit---namely, the same number of cations and anions are enclosed by water clusters and the opposing contributions from the interface potential cancel out.
The above notion of ion solvation also applies to the uptake of salt.
The uptake is typically quantified by the partition ratio, $K_\trm{salt}$, defined as the concentration ratio of ions inside and outside the polymer.
For the simplest case of 1:1 electrolyte (such as sodium chloride), the partition ratio is related to the transfer free energies as
\begin{equation}
K_\trm{salt}={\mathrm{e}}^{-\beta\left(\Delta G^{(+)}+\Delta G^{(-)}\right)/2}
\label{eq:Ksalt}
\end{equation}
where $\Delta G^{(+)}$ and $\Delta G^{(-)}$ are the transfer free energies of cations and anions, respectively, from water into the polymer material. As seen, salt partitioning is a collective effect, dependent on the sum of transfer free energies of the two ion species, and clearly shows how the interface potential contributions $e\psi_\trm{s}$ and $-e\psi_\trm{s}$, depicted in Fig.~\ref{fig:ions}C, cancel out.
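To make this cancellation explicit, here is a minimal numerical sketch of Eq.~\ref{eq:Ksalt}; the Born contribution used below is an arbitrary placeholder, while the interface-potential magnitude of about $17\,k_{\mathrm{B}} T$ is the value quoted above for water nanoclusters in collapsed PNIPAM.
\begin{verbatim}
import numpy as np

# Illustrative single-ion transfer free energies, in units of kB*T.
dG_Born = 3.0     # made-up placeholder for the Born part (same for both ions)
e_psi_s = 17.0    # interface-potential contribution of the water clusters

dG_cation = dG_Born + e_psi_s   # cation: Born part plus  +e*psi_s
dG_anion  = dG_Born - e_psi_s   # anion:  Born part plus  -e*psi_s

# Salt partition ratio of a 1:1 electrolyte (energies already in kB*T)
K_salt = np.exp(-(dG_cation + dG_anion) / 2)

# The +/- e*psi_s contributions cancel in the sum, so K_salt = exp(-dG_Born)
print(K_salt, np.exp(-dG_Born))   # both print the same value
\end{verbatim}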
Figure \ref{fig:Ksalt} shows the correlation between the sodium chloride salt partitioning ($K_\trm{salt}$) and water partitioning ($K_\trm{w}$; defined as the ratio of water densities inside and outside the polymer) for a few uncharged hydrogels (taken from literature, see references in Ref.~\cite{kanduc2019aqueous}).
The uptake of ions evidently depends on water amount:
Polymers containing more water in general also sorb more salt than polymers with less water.
The diagonal dashed line depicts an apparent limiting scenario of $K_\trm{salt} = K_\trm{w}$ for which the salt concentration in the sorbed water is equal to that in bulk water.
However, most data points are below the dashed line, implying that both polymer--ion and polymer--water interactions influence ion partitioning~\cite{kanduc2019aqueous}.
The single-ion transfer free energies for Na$^+$ and Cl$^-$ from our simulations in Fig.~\ref{fig:ions}C result in the partitioning (using Eq.~\ref{eq:Ksalt}) indicated by the white triangle symbol in Fig.~\ref{fig:Ksalt}, which is in the ballpark of the experiments. Moreover, the Born model (with the estimated $\Delta G_\trm{B}=2.6\,k_{\mathrm{B}} T$ for both ions; see Fig.~\ref{fig:ions}C) predicts $K_\trm{salt}=\exp(-\beta\Delta G_\trm{B})\approx 0.05$ (regardless of whether $\psi_\trm{s}$ is included or not), which is in good agreement with the simulation result.
However, things get more involved when the symmetry between positive and negative charges of the hydrated parts of the molecule is broken. This occurs, for instance, when the electron charge of at least one ion species is delocalized (i.e., the charge is smeared over several atoms), such as in ionized conjugated molecules (e.g., aromatic compounds).
In this way, the charge density is lower and attracts water less strongly, which in turn can lead to partial dehydration of the charge. In such a case, not the entire molecule's charge is enclosed by a water cluster and subjected to the water interface potential, as schematically depicted in Fig.~\ref{fig:ions}B (bottom). The net contribution from the water interface potential is therefore non-zero, and it impacts the partitioning of ions.
In our recent simulation study~\cite{kanduc2019aqueous}, we found that the molecular ion nitrophenolate (see its depiction in the circular inset of Fig.~\ref{fig:ions}C) is partially dehydrated. Consequently, its transfer free energy decreases---compare the estimated values for a hypothetically fully hydrated and the actually partially hydrated ion, indicated by the orange lines in Fig.~\ref{fig:ions}C.
This reduction of the transfer free energy by several $k_{\mathrm{B}} T$ increases the partitioning by orders of magnitude. These results suggest that ionizing a molecule can, in fact, even boost the partitioning in collapsed, poorly hydrated hydrophobic gels in some cases rather than hinder it, which challenges the traditional simplistic view on ion solvation.
Moreover, molecular ions gain a significant contribution to the free energy from their neutral parts. This contribution scales very well with the solvent-accessible surface area~\cite{kanduc2019free, kanduc2021shape}.
The nonpolar parts of the molecules preferentially sorb in dry regions of the polymer, whereas polar parts are immersed inside the water nanodroplets~\cite{kanduc2019free}. A more thorough discussion on the hydration of molecular ions can be found in Ref.~\cite{kanduc2019aqueous}.
Entrapped ions in hydrophilic nanodomains can also be found in numerous other situations; one nice example is ion extraction from aqueous phases (as in ore processing and recycling).
These procedures typically use amphiphilic extractant molecules that form self-assembled aggregates with hydrophilic nanodomains in the middle~\cite{duvail2019uo}. These nanodomains, which resemble water nanoclusters in polymers, can trap and hydrate ions from the aqueous solution and enable their removal from the aqueous phase~\cite{spadina2019synergistic,spadina2020multi}.
\begin{figure}\begin{center}
\begin{minipage}[b]{0.37\textwidth}\begin{center}
\includegraphics[width=\textwidth]{K_Kw_experiments+MD.eps}
\end{center}\end{minipage}
\caption{Partition ratio of NaCl versus the water partition ratio for several polymers
at different temperatures or with different degrees of copolymerization, as measured experimentally (for references, see Ref.~\cite{kanduc2019aqueous}) and obtained from MD simulations of PNIPAM~\cite{kanduc2019aqueous}.
Adapted with permission from Ref.~\cite{kanduc2019aqueous}, copyright 2019 American Chemical Society.
}
\label{fig:Ksalt}
\end{center}\end{figure}
\section{Diffusion in poorly hydrated polymers}
The other necessary quantity for understanding ionic transport is the diffusion coefficient.
Diffusion in dense polymer systems is a highly complex and frequently debated topic.
It differs significantly from Brownian diffusion in simple liquids and is characterized by various mechanisms and regimes, depending on material and environmental parameters, such as polymer volume fraction, penetrant size, and temperature~\cite{zhang2018coarse}.
In dense polymers, penetrants most of the time dwell in a local cavity, trapped by surrounding polymer chains.
A large enough thermal fluctuation creates a short-lived channel between the polymer chains into which the highly confined penetrant can jump and propagate to a new location, where it then dwells again for some time, as depicted in Fig.~\ref{fig:diffusion}A.
This so-called hopping mechanism was revealed in computer simulations of polymer melts (i.e., without water) in the 1990s~\cite{takeuchi1990jump, muller1991diffusion}.
\begin{figure*}[t]\begin{center}
\begin{minipage}[b]{0.5\textwidth}\begin{center}
\includegraphics[width=\textwidth]{hopping.pdf}
\end{center}\end{minipage}\hspace{3ex}
\begin{minipage}[b]{0.33\textwidth}\begin{center}
\includegraphics[width=\textwidth]{D_spherical.eps}
\end{center}\end{minipage}
\caption{(A) Schematic depiction of hopping diffusion in the presence of water on the molecular level (top) and the continuum level (bottom).
(B) MD simulation results for diffusion coefficients of spherical neutral penetrants [circles: helium (He), neon (Ne), argon (Ar), methane (Me), neopentane (NPe), and tetrachloromethane (CCl$_4$)]~\cite{kanduc2021shape} and monovalent monatomic ions (squares; \cite{kanduc2018diffusion})
in a collapsed PNIPAM polymer matrix versus their Stokes radii in bulk water.
For Na$^+$ and I$^-$, data from two different force fields were used; see Ref.~\cite{kanduc2018diffusion} for more details. Adapted with permission from Ref.~\cite{kanduc2021shape}, copyright 2021 American Chemical Society.
}
\label{fig:diffusion}
\end{center}\end{figure*}
However, it is less well known what role water plays in hopping diffusion in collapsed hydrated polymers, such as PNIPAM. Once a channel is created, water in the polymer can ``flood'' the created passage. It is known that water can wet pores and cavities of nanoscopic dimensions, such as those in nanotubes, proteins, and ion channels, sometimes even as a single-file hydrogen-bonded wire~\cite{rasaiah2008water, brewer2001formation, dellago2003proton}.
In addition to that, nonpolar pores form excellent low-friction conduits for the flow of water~\cite{rasaiah2008water}.
The reason is that as water molecules pass through the pore, they do not form strong interactions with the pore, and therefore do not transfer translational momentum to it.
These characteristics play a key role in the diffusion of penetrant molecules in dense hydrogels.
It is, therefore, no surprise that small penetrants, regardless of being polar or nonpolar, charged or neutral, travel through water channels rather than diffusing through dry parts of the polymer~\cite{kanduc2018diffusion}.
This is because the transient channels, which are the primary pathway for transportation by hopping, are inevitably filled with water.
Hopping diffusion is an activated process since the creation of a pore relies on a large enough thermal fluctuation. Consequently, the diffusion coefficient scales as $D=D_0 \exp(-\Delta F_\trm{a}/k_{\mathrm{B}} T)$, where $\Delta F_\trm{a}$ is the free energy for creating the pore.
Nonetheless, a so-far unresolved conundrum is how $\Delta F_\trm{a}$ scales with the pore radius or, equivalently, with the radius $a_\trm{w}$ of the penetrant that passes through.
Most of the established theories and computer simulations of polymer melts (i.e., without solvent) in the rubbery regime, as well as (implicit-solvent) coarse-grained simulations, suggest a quadratic dependence, $\Delta F_\trm{a}\propto a_\trm{w}^2$, or even a cubic dependence, $\Delta F_\trm{a}\propto a_\trm{w}^3$.
However, this understanding has been challenged by our recent studies of a hydrated PNIPAM polymer, which convincingly demonstrated a linear scaling, $\Delta F_\trm{a}\propto a_\trm{w}$~\cite{kanduc2018diffusion, kanduc2021shape}.
In other words, for larger penetrants, diffusion in a water-containing system is faster than in a dry one. It is, nevertheless, widely known that water in polymers acts as a plasticizer and softens the polymer matrix, which also eases the diffusion of small molecules.
A theoretical explanation in this direction is offered by recent concepts of Schweizer and coworkers, who showed that coupled dynamics in dense liquids indeed results in a linear size dependence of the free energy barrier~\cite{zhang2017correlated, mei2021activated}.
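Combining the activated form $D=D_0 \exp(-\Delta F_\trm{a}/k_{\mathrm{B}} T)$ with the linear scaling $\Delta F_\trm{a}\propto a_\trm{w}$ gives the exponential size dependence $D=D_0\exp(-a_\trm{w}/\lambda)$ used below for the neutral penetrants. The following sketch evaluates it with hypothetical values of $D_0$, $\lambda$, and the penetrant radii, chosen only for illustration and not taken from Refs.~\cite{kanduc2018diffusion, kanduc2021shape}.
\begin{verbatim}
import numpy as np

def D_hopping(a_w_nm, D0=1.0e-9, lam_nm=0.1):
    """Exponential size dependence of the hopping diffusivity,
    D = D0 * exp(-a_w / lambda).  D0 [m^2/s] and lambda [nm] are
    hypothetical placeholders used only for illustration."""
    return D0 * np.exp(-np.asarray(a_w_nm) / lam_nm)

# Rough, illustrative penetrant radii (nm)
radii = {"He": 0.13, "Ne": 0.15, "CH4": 0.21, "CCl4": 0.34}
for name, a_w in radii.items():
    print(f"{name:>4s}: a_w = {a_w:.2f} nm  ->  D ~ {D_hopping(a_w):.2e} m^2/s")
\end{verbatim}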
Our knowledge of diffusion becomes even more obscure when it comes to ions.
Figure \ref{fig:diffusion}B compares diffusion coefficients of neutral spherical molecules and monovalent monatomic ions in the collapsed PNIPAM model~\cite{kanduc2018diffusion, kanduc2021shape}. Spherical neutral penetrants (ranging by size from helium, neon, methane, neopentane, to tetrachloromethane), plotted by green circles, clearly follow the relationship $D=D_0 \exp(-a_\trm{w}/\lambda)$, depicted by a dashed line. Nonetheless, ions (plotted by red squares) significantly deviate from the trend of neutral penetrants and do not seem to exhibit a well-defined trend.
All we can conclude from the plot is that ions diffuse more slowly than neutral molecules of a similar size.
The mechanisms for ion diffusion in PNIPAM polymer membranes have so far not been scrutinized, which prevents us from offering a firm explanation. Based on the current more general understanding of ion diffusion through membranes~\cite{epsztein2020towards}, we can only speculate on several reasons for the slower diffusion.
The first one is that ions maintain a sizable hydration shell. If a transient channel is too narrow, the ion cannot readily pass through because it would need to shed a significant fraction of its hydration shell~\cite{epsztein2019activation}, as we already concluded from the continuum dielectric approach (Fig.~\ref{fig:energies}B).
The second reason is that ions additionally interact with polar groups of the polymer via strong Coulomb forces, which makes the energy landscape rougher and in turn slows down diffusion~\cite{kim2019tuning}. In contrast, Coulomb forces are absent for neutral penetrants, whose diffusion is primarily governed by steric effects between the penetrant and the polymer matrix~\cite{zhang2017correlated, kanduc2021shape}.
Another possible {\it modus operandi} is that ions are transported only by the slow diffusion of the water clusters themselves. Probably all of these mechanisms are operational and balanced by system-specific features, foremost the hydration level.
Figure \ref{fig:diffusion}B also implies a high ion specificity in membrane diffusion, that is, tiny details in the ionic interactions with water and polymers impact the diffusion, unlike for neutral solutes. This is a noteworthy observation for filtration and selective permeability in dense polymers and requires further investigation.
\section{Transport through membranes}
From the two material parameters discussed in the previous two sections, partitioning and diffusivity, it is possible to quantify the transport of ions based on established electrodiffusion theories, such as the Poisson--Nernst--Planck equation, for instance~\cite{yaroshchuk2001non, graf2000dynamic, lu2010poisson}.
The transport can be quantified in various ways, such as by conductance or, even more generally, by permeability. Permeability is a measure of how easily an ion (or any other molecule) can cross a material.
A flux of ions, driven either by a concentration gradient or electric field, is proportional to the permeability $P$~\cite{yaroshchuk2001non}.
In general, permeability can be quite a complex function of diffusivities and partitionings of all ionic species involved in the electrolyte.
Nevertheless, in simple cases in which a charged penetrant species is highly dilute and electrostatically screened by background salt, or for neutral penetrants, the solution--diffusion theory~\cite{wijmans1995solution} provides a very simple and fundamental relation, according to which the permeability $P$ is the product of the diffusion coefficient and partitioning, $P=KD$.
Based on this, the permeability of a molecule can be seen as an outcome of a complex and competing interplay between diffusion and solubility.
As it turns out, $K$ and $D$ are quite often anti-correlated for different morphologies of a polymer or for different penetrants in a given polymer. That is to say, a penetrant that tends to diffuse fast through a given material typically sorbs weakly in there. On the contrary, a penetrant that tends to sorb well generally diffuses slowly. The net effect of this trade-off is that the product $P=KD$ becomes less sensitive to various parameters than either the diffusivity or solubility~\cite{palasis1992permeability, ban2011molecular, kucukpinar2003molecular, novitski2015determination, kusoglu2017new, kim2019tuning, kim2020tuning, kanduc2021shape}.
Because of these cancellations, the remaining variations in the permeability are governed by tiny details of the polymer matrix and the penetrant.
The fact that slight differences in penetrants can result in substantial differences in their permeabilities gives rise to selectivity---the ability of a membrane to selectively pass one type of ions but not others~\cite{kanduc2021shape}.
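A minimal sketch of this trade-off within the solution--diffusion picture, $P=KD$, using invented numbers solely to illustrate how an anti-correlation between $K$ and $D$ compresses the spread of $P$:
\begin{verbatim}
# Hypothetical penetrants: partition coefficient K and diffusivity D [m^2/s].
# The numbers are invented so that K and D are anti-correlated.
penetrants = {
    "A": {"K": 0.02, "D": 5.0e-11},
    "B": {"K": 0.10, "D": 1.2e-11},
    "C": {"K": 0.50, "D": 2.5e-12},
}

for name, p in penetrants.items():
    P = p["K"] * p["D"]          # solution-diffusion permeability, P = K * D
    print(f"{name}: K = {p['K']:.2f}, D = {p['D']:.1e}, P = {P:.1e} m^2/s")

# K spans a factor of 25 and D a factor of 20, while P varies by only ~25%,
# illustrating the trade-off discussed in the text.
\end{verbatim}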
In materials science, tailoring selective transport of ions and molecules through polymer membranes and other porous materials is of utmost importance for applications ranging from water desalination and filtration to drug delivery~\cite{park2017maximizing}.
The universally low ionic permeability compared with neutral molecules, such as water, is exploited in state-of-the-art desalination membranes (e.g., polyamide), which offer great water--salt selectivity. However, their ability to discriminate between ions is fairly limited~\cite{zhou2020intrapore, epsztein2020towards}.
Yet, the demand for ion--ion selectivity is rapidly growing, for example, in the recovery of valuable ions from seawater (e.g., lithium and uranium) to mitigate resource shortages or in the development of new battery systems.
In lithium-ion batteries, solid amorphous polymers [most notably poly(ethylene oxide) (PEO)] are regarded as attractive candidates to replace today’s liquid organic electrolytes~\cite{molinari2018effect, widstrom2021water}.
Improvements in the selective transport of lithium ions over other ions often rely on plasticizing the polymer network by adding various materials.
It has been shown, for instance, that introducing water into the PEO matrix enormously improves the conductance of Li$^+$ relative to the anions. Simulations revealed that these exceptional transport properties arise from strong lithium solvation and diffusion in percolated water nanodomains~\cite{widstrom2021water}.
Of particular interest in selective ionic transport is also the transport of protons.
In an aqueous environment, a proton manifests as the hydronium ion (H$_3$O$^+$) and can diffuse via two complementary mechanisms. The first mechanism is a classical center-of-mass motion of the hydronium ion, termed the vehicular mechanism. In the second, termed the Grotthuss mechanism, the excess proton hops from the hydronium ion across the hydrogen bond network of water, which is possible because of low barriers in the proton energy landscape~\cite{kusoglu2017new, fischer2018correlated}.
This hopping ability gives protons in water an anomalously large diffusion coefficient, which is up to 7 times that of similarly sized cations~\cite{fischer2018correlated,peng2018transport, mabuchi2018relationship, okuwaki2018theoretical, vishnyakov2018coarse, huo2019molecular}.
Perhaps the most explored group of synthetic polymers in terms of perm-selective proton conductivity is that of perfluorinated sulfonic-acid (PFSA) ionomers, such as Nafion, mentioned above.
Narrow water channels in a PFSA polymer, often even single-file water wires, allow the diffusion of protons, but to a much lesser extent the diffusion of other ions. This property makes PFSA polymers widely used as perm-selective conductive membranes in various electrochemical technologies, including fuel cells, and as diffusion protection barriers against various toxic and waste chemicals~\cite{kusoglu2017new}.
Finally, we take a glance at biological ion channels, which provide incredible ion--ion selectivity.
For instance, potassium channels transport K$^+$ ions 10,000 times faster than Na$^+$ ions through the cell membrane~\cite{epsztein2020towards}. Furthermore, transmembrane proteins such as cytochrome {\sl c} oxidase, photosystem II, channelrhodopsin, and bacteriorhodopsin extremely selectively conduct protons through internal single-file water wires.
These inspirational examples from nature can offer guidelines for ultra-high ion--ion selectivity in the fabrication of modern synthetic membranes.
Despite the progress, the selectivity of synthetic membranes is often modest or limited to a particular class of ions (e.g., divalent cations)~\cite{epsztein2020towards}.
Without a doubt, to fabricate novel polymer materials with high ion--ion selectivity, there is a crucial need to obtain a better understanding of the mechanisms for diffusion and solvation.
\section{Conclusions}
\label{sec:conclusions}
The transport of ions is a ubiquitous process in a vast range of different materials. Detailed knowledge of how ions diffuse and solvate in these materials is key not only to understanding nature but also to devising desired properties of synthetic materials.
Weakly hydrated polymer membranes are more challenging to understand than highly hydrated, swollen polymers, owing to a much more intricate interplay between molecular interactions in the former and, consequently, a higher sensitivity to details of the molecular structure. Yet,
on the flip side, weakly hydrated systems offer more possibilities for fine-tuned selective transport of ions.
In weakly hydrated polymers, water organizes into isolated nanoclusters---droplets of a high dielectric constant in an otherwise hydrophobic, low-dielectric surrounding of the polymer. Already a simple continuum picture provides a clue that ions tend to strongly solvate in these water nanoclusters, which is supported by simulations. Detailed simulations and resulting analysis also reveal that the interface potential of water clusters, being unimportant for most cases with simple ions, can become critical for molecular ions that are less hydrated because of a smeared charge.
Thanks to modern simulation approaches, the field of polymer science has started unraveling the fine details of the diffusion of neutral molecules through dense polymers. However, our understanding of ion diffusion is still very limited. The diffusion of ions involves several intertwined molecular mechanisms, which increase the complexity of the problem.
General conclusions reached so far are that ions diffuse more slowly than similarly sized neutral molecules and that ion-specific effects turn out to be crucial.
The diffusivity and solvation of penetrants generally have opposite trends for different morphologies of a polymer or for different penetrants. Consequently, the resulting permeability, which is the product of the two, encounters enormous cancellation effects and depends on tiny molecular details.
Obviously, the number of chemical ways to synthesize a polymer membrane (e.g., with different combinations of copolymerization) is essentially unlimited. Synthesizing or modeling all conceivable polymer systems to optimize a desired property or functionality is thus out of reach. A fundamental understanding of the underlying phenomena is therefore a prerequisite to attain this ambitious goal.
\section{Acknowledgments}
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement No.\ 646659-NANOREACTOR).
M.K.\ acknowledges the financial support from the Slovenian Research Agency (contracts P1-0055 and J1-1701). W.K.K. acknowledges the support by a KIAS Individual Grant (CG076001) at Korea Institute for Advanced Study.
\bibliographystyle{elsarticle-num}
{#1} }
\addcontentsline{toc}{section}{Appendix \thesection\ \ \ #1}
}
\newcommand{\eq}[1]{(\ref{#1})}
\def {\textstyle {1\ov 4}}{{\textstyle {1\over 4}}}
\def \textstyle {1\ov 3} { \textstyle {1\over 3}}
\def\hbox{det}{\hbox{det}}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def \cite {\cite}
\def \footnote {\footnote}
\def \bibitem{\bibitem}
\def {\rm tr} {{\rm tr}}
\def {1 \over 2} {{1 \over 2}}
\def \td {\tilde}
\def \cite{\cite}
\def {\cal N} {{\cal N}}
\def \ww {\Omega}
\begin{document}
\null\vskip-24pt
\hfill {\tt hep-th/0508125}
\vskip0.2truecm
\begin{center}
\vskip 0.2truecm {\Large\bf
String spectrum of curved string backgrounds\\
\vskip 0.2truecm
obtained by T-duality and shifts of polar angles
}
\\
\vskip 0.7truecm
{\bf Jorge G. Russo}\\
\vskip 0.4truecm
{\it
Instituci\' o Catalana de Recerca i Estudis Avan\c{c}ats (ICREA),\\
Departament ECM,
Facultat de F\'\i sica, Universitat de Barcelona,
Spain}
\end{center}
\vskip 0.2truecm
\noindent\centerline{\bf Abstract}
A class of exactly solvable string models can be obtained
by starting with flat space and
combining T-duality and shifts of angular coordinates of several
polar planes.
The models are the analog of the Lunin-Maldacena
$\beta $-deformation of the $AdS_5\times S^5$ type IIB string
background, which is dual
to a Leigh-Strassler deformation of ${\cal N}=4$ Super Yang-Mills Theory.
We determine the complete physical string spectrum for two string models obtained in this way, by
explicitly solving the string equations and quantizing in terms of
free
creation and annihilation operators.
We also show that the 3-parameter $(b_1,\, b_2,\, b_3)$ model,
obtained by three independent TsT transformations,
has tachyons in some regions of the parameter space.
\newpage
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\section{Introduction}
Conformal theories representing strings in curved backgrounds are in
general extremely complicated and the physical string spectrum is
known
only in a few cases.
One of these cases is the class of string backgrounds obtained by a
sequence of T-duality transformations
and shifts of periodic coordinates involving other periodic
coordinates
of different periods \cite{RT}. Due to the shifts, the
new conformal field theories are not equivalent to the original flat-space starting point.
By construction, the resulting model is nevertheless an exact
conformal field theory, to all orders
in the $\alpha '$ expansion.
The models, being completely solvable,
were used to test
physical aspects of string propagation in curved spacetime, including
closed superstrings in magnetic fields \cite{magnetic},
supersymmetry breaking and
closed string tachyons \cite{magnetic,taka,flux},
D branes in
magnetic fields \cite{taka2}, decay of type 0 string vacuum
\cite{costa,RT01}, spacetime singularities \cite{RT}, strings in plane
wave backgrounds \cite{blau,hashi} and closed time-like
curves \cite{fiol}.
Finding new conformal $\sigma $ models of strings in curved
backgrounds where the string spectrum can be found exactly
is of obvious interest.
New solvable models have been recently introduced by Lunin and
Maldacena in \cite{LM}.
The models are constructed by applying a T duality
transformation, then a
shift of a periodic coordinate, and then another T-duality
transformation. These transformations were generally referred to as
TsT transformations in \cite{frt}, and here we will adopt this name.
In contrast to \cite{RT}, where the shift
involves an $S^1$ coordinate, in \cite{LM}
the shift only involves polar angles. This novel application
of TsT transformations gives rise to new
exactly solvable string models which have not been studied before.
The main motivation for the study of these models is that they are
closely related to the analogous
deformation
of $AdS_5\times S^5$ that yields a background $(AdS_5\times
S^5)_\beta$,
which is the
supergravity dual of the Leigh-Strassler $\beta $-deformation of ${\cal N}=4$
Super Yang-Mills Theory to a ${\cal N}=1$ superconformal gauge theory
\cite{LEST} (some recent interesting works on the
$(AdS_5\times S^5)_\beta$ string theory can be found in
\cite{NIPR,frolov,nunez,beisert,koch,mat,bobev,freedman,penati,frt2,Ahn,kuzenko}).
The Lunin-Maldacena deformation applies to the $S^5$ part of the
space.
The 5-sphere can be represented by three complex planes with the
restriction
$z_1z_1^*+ z_2z_2^*+ z_3z_3^*=R^2$, where $R$ is the
radius of the sphere (related to the 't Hooft coupling $\lambda=g_{\rm YM}^2N$ by
$R^2=\sqrt{\lambda }\, \alpha' $). The present models are obtained
by a similar deformation applied to a
flat space $(z_1,z_2,z_3)$.
This paper is organized as follows. In section 2 we consider
the simplest model involving two complex planes, and find the mass spectrum
of quantum superstring states.
A model involving three complex planes related to the
Lunin-Maldacena deformation is then considered in section 3.1.
In section 3.2 we consider a model which is the analog of the
three-parameter deformation of $AdS_5\times S^5$
introduced by Frolov \cite{frolov} and further studied in \cite{frt2}.
We find tachyon states in some regions of the parameter
space. Finally, section 4 contains some remarks on
the string spectrum in $\beta$-deformed $AdS$ backgrounds.
\setcounter{equation}{0}
\section{String model I: TsT on two polar planes}
This model was introduced in \cite{LM} to illustrate TsT
transformations in a simple setting. The starting Lagrangian is
\begin{equation}
L=\partial_+x_\mu \partial_-x^\mu+
\partial _+ r_1\partial _- r_1+r_1^2\partial _+\varphi_1\partial _-\varphi_1 +
\partial _+ r_2\partial _- r_2+r_2^2\partial _+\varphi_2\partial _-\varphi_2\ ,\ \ \ \ \
\mu=0,1,...,5
\ .
\label{aaa}
\end{equation}
We use the notation $\sigma_\pm=\tau\pm\sigma $.
{}For simplicity in the presentation, here we have written bosonic
fields only. Restoring fermion contributions in the formulas is
straightforward and will be done at the end. In what follows we
will omit from most formulas the contribution of the
free coordinates $x^\mu$, which are
decoupled and can be treated as in standard free superstring theory.
After T-duality in the $\varphi_1 $ coordinate
to a new coordinate $\td \varphi_1$, and a shift $\varphi_2\to\varphi_2+b\td
\varphi_1$, the Lagrangian becomes
\begin{eqnarray}
L &=& \partial _+ r_1\partial _- r_1+r_1^{-2}\partial _+\td \varphi_1\partial _-\td \varphi_1 +
\partial _+ r_2\partial _- r_2+r_2^2(\partial _+\varphi_2+b\partial _+ \td \varphi_1) (\partial _-\varphi_2+b\partial _- \td
\varphi_1)\nonumber\\
&+& {\cal R}(\phi_0- {1 \over 2} \log r_1^2)\ ,
\label{bbb}
\end{eqnarray}
where
$$
{\cal R}={1\over 4}\alpha' \sqrt{g} R^{(2)}\ ,
$$
and $\tilde \varphi_1 $ has period $2\pi\alpha '$.
Finally, by performing a T-duality back in $\td \varphi_1$,
one gets the Lagrangian
\begin{eqnarray}
L &=& \partial _+ r_1\partial _- r_1+ \partial _+ r_2\partial _- r_2
+ F (r_1^2 \partial _+ \varphi_1\partial _- \varphi_1 + r_2^2 \partial _+\varphi_2\partial _-\varphi_2)
\nonumber \\
&+&
b F\, r_1^2r_2^2(\partial _+\varphi_2\partial _- \varphi_1 - \partial _+\varphi_1\partial _- \varphi_2)
+{\cal R}(\phi_0 + {1 \over 2} \log F)\ ,
\label{bba}
\end{eqnarray}
$$
F\equiv (1+b^2 r_1^2r_2^2)^{-1}\ .
$$
{}This describes strings propagating in the supergravity background
\begin{eqnarray}
ds^ 2 & =& dr_1^2+ dr_2^2+ F(r_1^2d\varphi_1^2 +r_2^2d\varphi_2^2) \ ,
\nonumber\\
B_{12} & =& {b\, r_1^2r_2^2 F}\ ,\qquad e^{2\phi}=e^{2\phi_0}F\ .
\label{gre}
\end{eqnarray}
By construction, this background is a solution of the string equations to all
orders in $\alpha '$.
The reason is that the model is equivalent to (\ref{bbb}) as a CFT, being related by T-duality,
and (\ref{bbb}) is locally equivalent
to a background related to flat space by T-duality.
In order to determine the physical string spectrum, one can either consider
the Lagrangian (\ref{bbb}) or (\ref{bba}), since they are equivalent as CFT.
We shall follow closely ref. \cite{RT},
section 5, where a class of string models
obtained by T-duality and shifts are solved,
since there are similarities in the structure of the solution.
In general,
the solution to the string equations of motion for two T-dual $\sigma$
models are related by $(G_{\mu\nu} +B_{\mu\nu})\partial_\pm x^\nu=\mp
\partial_\pm\tilde x_\mu$. Using this relation, we find the general solution
to the string equations of motion in the curved background (\ref{gre}),
\begin{equation}
\varphi_1={1\over 2i}\log{X_1\over X_1^*} + b \td\varphi_2
\ ,\ \ \ \varphi_2={1\over 2i}\log{X_2\over X_2^*}-b \td\varphi_1\ ,
\label{khh}
\end{equation}
where
$$
X_1=X_{1+}(\sigma_+ )+X_{1-}(\sigma_-)\ ,\ \ \ X_2=X_{2+}(\sigma_+ )+X_{2-}(\sigma_-)\ ,
$$
\begin{equation}
\partial _\pm \td\varphi_i=\pm {i\over 2}\Big( X_{i}^*\partial _\pm
X_{i} - X_{i} \partial _\pm X_{i}^* \Big)\ .
\end{equation}
Hence
$$
\td\varphi_i=2\pi\alpha'\Big(J_{i-}(\sigma_-)- J_{i+}(\sigma_+)\Big) +{i\over 2}(
X_{i+}X_{i-}^*- X_{i-}X_{i+}^*),\ \ \ \ i=1,2\ ,
$$
\begin{equation}
J_{i\pm} (\sigma_\pm )\equiv {i\over 4\pi\alpha' }\int_0^{\sigma_\pm }d\sigma_\pm (X_{i\pm}\partial_\pm
X^*_{i\pm}- X^*_{i\pm}\partial_\pm X_{i\pm })\ .
\label{jjjj}
\end{equation}
Using
\begin{eqnarray}
\varphi_i(\sigma+\pi ,\tau )&=& \varphi_i(\sigma,\tau )+2\pi n\ ,
\nonumber\\
\td \varphi_i (\sigma+\pi,\tau ) &=& \td \varphi_i(\sigma,\tau )-2\pi\alpha' J_i\ ,\ \ \ \ J_i=J_{iL}+J_{iR}\ ,
\label{klop}
\end{eqnarray}
$$
J_{iL}=J_{i+}(\pi )\ ,\ \ \ \ \ J_{iR}=J_{i-}(\pi )\ ,
$$
one finds that the free fields $X_1\ ,\ X_2$ satisfy the twisted
boundary conditions
\begin{equation}
X_1(\sigma+\pi, \tau )= e^{ 2\pi i\nu _1} X_1(\sigma ,\tau )\ ,\qquad
X_2(\sigma+\pi ,\tau )= e^{ 2\pi i\nu _2} X_2(\sigma ,\tau )\ ,
\label{reww}
\end{equation}
with
\begin{equation}
\nu _1= \alpha' b J_2\ ,\ \ \ \
\nu _2= -\alpha' b J_1\ .
\end{equation}
Note that $\nu _1, \nu _2 $ are defined modulo $n$, $n=$ integer.
These boundary conditions are satisfied by writing
\begin{equation}
X_{i\pm} =e^{\pm 2i\nu _i\sigma_\pm }\chi_{i\pm }\ ,\ \ \ \ \chi_i(\sigma+\pi ,\tau
)=\chi_i(\sigma ,\tau )\ .
\label{rew}
\end{equation}
The fields $\chi_{i\pm}$ are single-valued and can be expanded as follows
\begin{equation}
\chi_{i-}=i\sqrt{\alpha'\over 2}\sum_n a_{ni} e^{-2in\sigma_-}\ ,\ \ \ \ \
\chi_{i+}=i\sqrt{\alpha'\over 2}\sum_n \td a_{ni} e^{-2in\sigma_+}\ .
\label{exde}
\end{equation}
In terms of the free fields $X_1,\ X_2$, the energy-momentum tensor of the string
model (\ref{bba}) is simply
\begin{equation}
T_{\pm\pm}=\partial _\pm X_1\partial _\pm X_1^*+\partial _\pm X_2\partial _\pm X_2^*\ .
\label{bggg}
\end{equation}
This can be checked by plugging in (see eq. (\ref{khh})~)
\begin{equation}
X_i=r_i e^{i\phi_i}\ ,\ \ \ \ \ \phi_i=\varphi_i - b
\epsilon_{ij}\td\varphi_j\ .
\end{equation}
In this way we recover the $T_{\pm\pm}$ in terms of $\varphi_1 ,\ \varphi_2$
that follows directly
from the original Lagrangian (\ref{bba}),
\begin{equation}
T_{\pm\pm}=\sum_{i=1}^2 \big[\partial _\pm r_i\partial _\pm r_i+
{r_i^2\over 1+b^2r_1^2r_2^2}
\partial _\pm \varphi_i\partial _\pm \varphi_i\big]\ .
\end{equation}
Inserting (\ref{rew}) in (\ref{bggg}),
the energy-momentum tensor components $T_{\pm\pm}$ take
the form
\begin{equation}
T_{++}=\sum_{i=1}^2 \big[
\partial _+\chi_{i+}\partial _+\chi^*_{i+} + 2i\nu _i (\chi_{i+}\partial _+\chi^*_{i+} -
\chi_{i+}^*\partial _+\chi_{i+})+4\nu ^2_i \chi_{i+}^*\chi_{i+}\big]\ ,
\end{equation}
\begin{equation}
T_{--}=\sum_{i=1}^2 \big[
\partial _-\chi_{i-}\partial _-\chi^*_{i-} - 2i\nu _i (\chi_{i-}\partial _-\chi^*_{i-} -
\chi_{i-}^*\partial _-\chi_{i-})+4\nu ^2_i \chi_{i-}^*\chi_{i-}\big]\ .
\end{equation}
Inserting the expansions
(\ref{exde}) and integrating over $\sigma $,
we find the Virasoro operators
\begin{equation}
L_0={1\over 2} \sum_{i=1}^2 \sum_n (n+\nu _i )^2a_{ni}^* a_{ni}
\ ,\qquad
\td L_0={1\over 2}\sum_{i=1}^2 \sum_n (n-\nu _i )^2
\td a_{ni}^* \td a_{ni} \ .
\end{equation}
We will also need the expression of the angular momentum components
which are conjugate to $\phi_1 $ and $\phi_2$. In terms of the
mode operators, they are given by
\begin{equation}
J_{iR}=- {1\over 2} \sum_n (n+\nu _i )a_{ni}^* a_{ni} \ ,
\qquad
J_{iL}=- {1\over 2} \sum_n (n-\nu _i )\td a_{ni}^* \td a_{ni} \ .
\end{equation}
Let us now consider the operator quantization of the model.
Canonical commutation relations for $x_i\equiv r_ie^{i\varphi_i}$ imply
\begin{equation}
[P_{X_i}(\sigma,\tau ),X^*_j(\sigma ',\tau)]=-i\delta_{ij} \delta(\sigma-\sigma ')\ .
\end{equation}
Hence
\begin{equation}
[a_{ni},a_{mj}^*]=2(n+ \nu _i )^{-1}\delta_{ij} \delta_{nm}\ ,
\ \ \ \
[\td a_{ni},\td a_{mj}^*]=2(n- \nu _i )^{-1}\delta_{ij}\delta_{nm}\ .
\end{equation}
We now introduce standard creation and annihilation operators $b_{n\pm}, \
b_{n\pm}^\dagger $, satisfying $[b,b^\dagger]=1$, by a proper rescaling
of $a_n,\ \td a_n$ as in \cite{RT},
\begin{eqnarray}
b_{n+} &=& a_{-n}^*\omega_-\ ,\ \ \ b_{n-}=a_n \omega_+\ ,\
\nonumber\\
\td b_{n+} &=& \td a_{-n}^*\omega_+\ ,\ \ \ \td b_{n-}=\td a_n
\omega_-\ ,\qquad b_0=\sqrt{\nu /2}\, a_0 \ ,\ \ \td b_0=\sqrt{\nu /2}\, \td a_0^* \ ,
\nonumber\\
\omega_{\pm }&\equiv & \sqrt{{1 \over 2} (n\pm \nu )}\ ,\ \ \ \ n=1,2,...\ ,\qquad
0<\nu <1\ ,
\end{eqnarray}
where indices 1 and 2 have been omitted.
The Virasoro operators then take the form
\begin{eqnarray}
L_0 &=& {1\over 4}\alpha' p^2_\mu +(\hat N_R- \nu _1 \hat J_{1R}- \nu _2
\hat J_{2R})\ ,
\nonumber\\
\td L_0 &=&
{1\over 4}\alpha' p^2_\mu +(\hat N_L+ \nu _1\hat J_{1L}+ \nu _2\hat J_{2L})\ ,
\end{eqnarray}
where $\hat J_{iR},\, \hat J_{iL}$ are given by
\begin{equation}
\hat J_{R} =J_R-{1 \over 2}= -b^\dagger_{0} b_{0} -{1 \over 2} +
\sum_{n=1}^\infty ( b_{n+}^\dagger b_{n+}-
b_{n-}^\dagger b_{n-}\big)+ K_{R}\ ,
\end{equation}
\begin{equation}
\hat J_{L} =J_L+{1 \over 2}= \tilde b^\dagger_{0} \td b_{0} +{1 \over 2} +
\sum_{n=1}^\infty ( \tilde b_{n+}^\dagger \tilde b_{n+}-
\tilde b_{n-}^\dagger \tilde b_{n-}\big)+ K_{L}\ ,
\end{equation}
\begin{equation}
K_{R}^{\rm (NS)}=-\sum_{r=1/2}^\infty (c_{r}^* c_{r}+c_{-r} c_{-r}^*)\ ,
\ \ \
K_R^{\rm (R)}=-[d_{0}^*,d_{0}] +
\sum_{n=1}^\infty (d_{n}^* d_{n}+d_{-n} d_{-n}^*)\ ,
\nonumber
\end{equation}
and there is an index $i=1,2$ in all mode operators that has been
omitted for the sake of clarity.
The expression for $ K_{L}$ is similar, with tildes in the mode
operators. We have restored the fermion
contributions, following the notation of \cite{magnetic}
($c_r$ and $d_n$ are the fermion mode operators in the NS and R
sector, respectively).
The eigenvalues of $\hat J_{L,R}$ are
\begin{equation}
\hat J_{L,R}=\pm (l_{L,R}+{1 \over 2} )+S_{L,R}\ ,\qquad
\hat J=\hat J_L+\hat J_R=l_L-l_R+S_L+S_R\ ,
\label{polk}
\end{equation}
where $l_L,\, l_R=0,1,2,...$ are Landau quantum numbers
and the spin $S_L+S_R$ is an integer in the
NS-NS and R-R sectors, and half-integer in the NS-R and R-NS sectors.
The operators $ \hat N_R,\ \hat N_L$ have the standard expression in
terms
of free creation and annihilation operators,
$ \hat N_{R,L}=N_{R,L}-a$, with $a^{\rm (NS)}=1/2$, $a^{\rm
(R)}=0$, where e.g. in the Ramond sector,
\begin{equation}
N_R = \sum_{n=1}^\infty n \left(
b_{ni+}^{\dagger} b_{ni+} + b_{ni-}^\dagger
b_{ni-}
+a_{n\alpha }^\dagger a_{n\alpha } +d^*_{ni} d_{ni} +d_{-ni}d^*_{-ni}+
d_{-n\alpha }d_{n\alpha }\right) \ ,
\end{equation}
where summation over $i$ is understood and $a_{n\alpha },\,
d_{n\alpha }$ stand for the remaining ($\alpha=1,...,4$) transverse mode
operators. For physical states satisfying the GSO condition, the
eigenvalues
are $\hat N_{L,R}=0,1,2,... $~.
The Hamiltonian and level matching constraints are
\begin{equation}
L_0+\td L_0=0\ ,\ \ \ \ L_0=\td L_0\ .
\end{equation}
They lead to the string spectrum:
\begin{equation}
\alpha' M^2= 2(\hat N_R-\hat \nu _1 \hat J_{1R}-\hat \nu _2 \hat J_{2R})+
2(\hat N_L+\hat \nu _1 \hat J_{1L}+\hat \nu _2 \hat J_{2L})\ ,
\label{zzz}
\end{equation}
\begin{equation}
\hat N_R=\hat N_L \ ,
\label{lmc}
\end{equation}
where $\hat \nu _i=\nu _i-[\nu _i]$, and
\begin{equation}
\nu _1= \alpha' b(\hat J_{2R}+ \hat J_{2L}) \ ,\ \ \ \
\nu _2=- \alpha' b( \hat J_{1R}+ \hat J_{1L})\ .
\end{equation}
Note that $ \alpha' b $ is dimensionless.
\medskip
A few remarks are in order:
\begin{itemize}
\item The spectrum has a periodic dependence on the twist parameters
$\nu _i $. This must be the case, since the boundary conditions
(\ref{rew}) are unchanged if we replace $\nu _i\to\nu _i +n_i$, $n_i=$
integer. Due to the presence of fermions, the actual periodicity is
$\nu _i\to\nu _i +2n_i$.
When $2n_i\leq \nu _i<2n_i+1$, $i=1,2$ one should use the standard GSO projection.
When one of the $\nu _i $ is in the interval $2n_i+1\leq \nu _i<2n_i+2$, one
should use the reversed GSO projection, meaning that only states having half-integer eigenvalues of
the operators $\hat N_{L,R}$ will survive, i.e. $\hat N_{L,R}=-{1 \over 2}, {1 \over 2}, {3\over 2},...$
(see \cite{magnetic}).
\item The fact that in (\ref{zzz}) $\hat \nu _i \hat J_{iL}$ appear with the opposite sign
of $\hat \nu _i \hat J_{iR}$ is due to our conventions. What is independent
of conventions is that the terms proportional
to the Landau numbers $l_{iL}$ and
$l_{iR}$ both contribute with positive sign to $M^2$, and this is
of course the case in eq.~(\ref{zzz}).
\item If $\alpha' b $ is irrational, then $\nu _1,\, \nu _2$ are not integer numbers
for any $\hat J_1,\hat J_2\neq 0$. When $\alpha' b $ is rational, $\alpha'
b=p/q$, there are sectors with $\hat J_{1}$ or $\hat J_2=qn$,
where one of the $\nu _i $ is an integer and the corresponding
$\hat\nu _i $ vanishes.
\item If one of the $\hat \nu _i $ vanishes,
say $\hat\nu _2$, then
the zero mode structure in the plane 2 changes. The oscillator modes
$b_{02},b_{02}^\dagger$ and $\tilde b_{02},\tilde b_{02}^\dagger$
(giving rise to Landau numbers $l_{2R}, \, l_{2L}$) are replaced by
$x_{2},\, x_2^*, \, p_2,\, p_2^*$.
\item If $\hat \nu_1$ and $\hat\nu_2$ do not vanish,
then $M^2\geq 0$ for any $b$
(see below).
\end{itemize}
As an application, let us consider states of minimal energy for a given level $N\equiv \hat N_R=\hat N_L$.
From the explicit representation in terms of creation and annihilation operators, one can see that
the $S_{iR},\ S_{iL}$ satisfy the bounds
\begin{equation}
\big|S_{1R}\pm S_{2R}\big|\leq \hat N_R+1\ ,\qquad
\big|S_{1L}\pm S_{2L}\big|\leq \hat N_L+1\ .
\label{mabo}
\end{equation}
We consider a state having $l_{iL}=l_{iR}=0$, and
$$
S_{1R}+S_{2R}=N+1\ ,\qquad S_{1L}+S_{2L}=-N-1\ .
$$
It follows that $\hat J_{1}=-\hat J_2$. We assume $\hat J_2=S_{2L}+S_{2R}>0$ and $0<\alpha' b<1$.
Then $\nu _1=\nu _2=\alpha' b\, s,\ s\equiv S_{2L}+S_{2R}$. The mass formula takes the form
\begin{equation}
\alpha' M^2=4N\big(1-(\nu _1 -[\nu _1])\big)\ .
\label{vfe}
\end{equation}
This is manifestly positive definite.
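As a quick numerical cross-check of eqs.~(\ref{zzz}) and (\ref{vfe}), the following short Python sketch evaluates the mass formula for one state of this type and compares it with $4N\big(1-(\nu _1 -[\nu _1])\big)$; the particular quantum numbers and the value of $\alpha' b$ are arbitrary illustrative choices satisfying the bounds (\ref{mabo}).
\begin{verbatim}
import math

def frac(x):
    """Fractional part, hat{nu} = nu - [nu]."""
    return x - math.floor(x)

def J_hats(l_R, l_L, S_R, S_L):
    """(J_hat_R, J_hat_L) from Landau numbers and spins, eq. (polk)."""
    return -(l_R + 0.5) + S_R, (l_L + 0.5) + S_L

def alpha_M2(N, plane1, plane2, ab):
    """alpha' M^2 of model I, eq. (zzz), with N = N_hat_R = N_hat_L and
    ab = alpha' b; plane1, plane2 = (l_R, l_L, S_R, S_L) of the two planes."""
    J1R, J1L = J_hats(*plane1)
    J2R, J2L = J_hats(*plane2)
    nu1 = ab * (J2R + J2L)     # nu_1 =  alpha' b (J_2R + J_2L)
    nu2 = -ab * (J1R + J1L)    # nu_2 = -alpha' b (J_1R + J_1L)
    return (2 * (N - frac(nu1) * J1R - frac(nu2) * J2R)
            + 2 * (N + frac(nu1) * J1L + frac(nu2) * J2L))

# Minimal-energy state: l_iR = l_iL = 0, S_1R + S_2R = N + 1,
# S_1L + S_2L = -(N + 1), with hat{J}_2 = s > 0 and 0 < alpha' b < 1.
N, s, ab = 2, 2, 0.4                   # arbitrary sample values
plane1 = (0, 0, N + 1 - s, -(N + 1))   # S_1R = N + 1 - s, S_1L = -(N + 1)
plane2 = (0, 0, s, 0)                  # S_2R = s,         S_2L = 0
print(alpha_M2(N, plane1, plane2, ab), 4 * N * (1 - frac(ab * s)))  # agree
\end{verbatim}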
It is possible to take a limit, which is similar to the limits
studied in \cite{russo},
where most string states decouple. The number of surviving states at each level is proportional to $N$.
In the present case, we consider $\alpha' b=1-\varepsilon$. Then
$$
\nu _1 -[\nu _1]= \alpha' b\, s- [\alpha' b\, s] =1-\varepsilon s\ ,
$$
where we have assumed $\varepsilon s<1$, which is a valid assumption for any given $s$,
since we are going to take the limit $\varepsilon\to 0$.
Now write $\alpha'=\varepsilon \alpha'_{\rm eff}$, and take the limit $\varepsilon\to 0$
with fixed $\alpha'_{\rm eff}$. In this limit, the masses of all states with
$S_{1R}+S_{2R}<N+1$ or $ S_{1L}+S_{2L}>-N-1$ go to infinity.
For the special states considered above,
the mass formula (\ref{vfe}) takes the form
\begin{equation}
\alpha'_{\rm eff} M^2=4N\, s\ ,\qquad 0<s\leq 2N+2\ ,
\label{vfez}
\end{equation}
Thus these states have finite mass after the limit $\varepsilon\to 0$.
Note that one can also consider a limit with $\alpha' b=p/q-\varepsilon $, where
there are surviving states.
\section{String model II: TsT on three polar planes}
\subsection{One-parameter deformation}
The starting point is the string theory Lagrangian
\begin{equation}
L=\sum_{i=1}^3 \left( \partial _+ r_i\partial _- r_i
+r_i^2\partial _+\phi_i\partial _- \phi_i \right)\ ,
\label{hhzz}
\end{equation}
or, in Cartesian coordinates,
\begin{equation}
L=\sum_{i=1}^3 \partial _+ X_i\partial _- X_i^*\ ,
\label{yyy}
\end{equation}
where
\begin{equation}
\phi_1=\psi-\varphi_1'\ ,\ \ \ \phi_2=\psi -\varphi_2'\ ,\ \ \ \
\phi_3=\psi+\varphi_1'+\varphi_2'\ .
\label{defi}
\end{equation}
Here we omit other free coordinates in the string theory Lagrangian
as well as fermion contributions. They will be incorporated later.
Now we proceed as in the model of section 2, by performing a T-duality
transformation in
the $\varphi_1 '$ variable to a new variable $\td \varphi_1$, and
a shift: $\varphi_2'=\varphi_2+b\td \varphi_1$.
After T-duality in $\td \varphi_1$ to the T-dual variable $\varphi_1$,
one obtains a
final Lagrangian which is symmetric in $\varphi_1 $, $\varphi_2$,
representing a curved string background with B-field components and dilaton.
This model is constructed in the appendix A of \cite{LM}.
Using the relation $(G_{\mu\nu} +B_{\mu\nu})\partial_\pm x^\nu=\mp
\partial_\pm \tilde x_\mu$ between
the solutions to the string equations of motion for two T-dual $\sigma$
models, we find the solution
\begin{eqnarray}
\varphi_1 &=& \varphi_1' +b\td\varphi_2=
{1\over 3} (\phi_2+\phi_3-2\phi_1)+b \td \varphi_2\ ,
\nonumber \\
\varphi_2 &=& \varphi_2' -b\td\varphi_1= {1\over 3} (\phi_1+\phi_3-2\phi_2)-b \td \varphi_1\ ,
\nonumber \\
\psi &=& {1\over 3} (\phi_1+\phi_2+\phi_3)\ ,
\label{defiz}
\end{eqnarray}
where
\begin{eqnarray}
\partial_\pm \td\varphi_1 &=& \mp \left( r_3^2 \partial_\pm \phi_3 -
r_1^2\partial_\pm \phi_1 \right) \ ,
\nonumber\\
\partial_\pm \td\varphi_2 &=& \mp \left( r_3^2 \partial_\pm \phi_3 -
r_2^2\partial_\pm \phi_2 \right) \ .
\end{eqnarray}
Using (\ref{defi}), (\ref{defiz}) and
the fact that $\varphi_1,\ \varphi_2$ and $\psi $ are $2\pi $ periodic,
we find
the boundary conditions of $\phi_i$ variables,
\begin{eqnarray}
\phi_1(\sigma +\pi )&=& \phi_1(\sigma ) +b \Delta \td \varphi_2\ ,\ \ \ \nonumber \\
\phi_2(\sigma +\pi )&=& \phi_2(\sigma ) -b \Delta \td \varphi_1\ ,\ \ \ \nonumber \\
\phi_3(\sigma +\pi )&=& \phi_3(\sigma ) - b \Delta \td \varphi_2+b \Delta \td \varphi_1
\ ,\
\end{eqnarray}
with
\begin{equation}
\Delta \td \varphi_1=2\pi\alpha' (J_1-J_3)\ ,\ \ \ \ \ \Delta \td \varphi_2=2\pi\alpha' (J_2-J_3)\ ,
\end{equation}
$$
J_i=J_{iL}+J_{iR}\ ,\ \ \ \ J_{iL}=J_{i+}(\pi )\ ,\ \ \
J_{iR}=J_{i-}(\pi )\ .
$$
The operators $J_i$ are as in (\ref{jjjj}), with $i=1,2,3$.
Thus the twists $\nu _i$ (defined as in (\ref{reww})~) in the free
fields $X_i$ are given by
\begin{equation}
\nu _1=\alpha' b (J_2-J_3)\ ,\ \ \ \nu _2=\alpha' b (J_3-J_1)\ ,\ \ \ \
\nu _3=\alpha' b (J_1-J_2)\ .
\end{equation}
We then proceed exactly as in section 2:
we redefine $X_i$ in terms of single-valued fields $\chi_i$ as in
(\ref{rew}), and the
the expressions that follow are the same as in section 2,
with the only difference that now $i=1,2,3$.
Therefore, we find the string spectrum
\begin{equation}
\alpha' M^2= 2(\hat N_R-\hat \nu _1 \hat J_{1R}- \hat \nu _2 \hat J_{2R}-\hat \nu _3 \hat J_{3R} )+
2(\hat N_L+\hat \nu _1 \hat J_{1L}+\hat \nu _2 \hat J_{2L}+ \hat \nu _3 \hat J_{3L})\ ,
\label{xxx}
\end{equation}
\begin{equation}
\hat N_R=\hat N_L\ .
\label{jjj}
\end{equation}
We recall the notation $\hat\nu _i=\nu _i-[\nu _i]$,
$\hat J_{iR}=J_{iR}-{1 \over 2} $, $\hat J_{iL}=J_{iL}+{1 \over 2} $, so that
$\hat J_i=\hat J_{iL}+\hat J_{iR}=J_i$.
The same remarks given at the end of section 2 apply to this model.
\subsection {Three independent deformations}
Here we consider three independent deformations $b_i $,
which are the analog to the 3-parameter deformation
of $AdS_5\times S^5$ studied in \cite{frolov,frt2}.
This model is obtained by a sequence of transformations,
$({\rm TsT})_{b_1}({\rm TsT})_{b_2}({\rm TsT})_{b_3}$.
Following the same procedure as in the previous sections, we now find the spectrum
\begin{equation}
\alpha' M^2= 2\Big( \hat N_R+\hat N_L-\hat \nu _1 (\hat J_{1R}-\hat J_{1L})
- \hat \nu _2( \hat J_{2R}-\hat J_{2L})
-\hat \nu _3 (\hat J_{3R}-\hat J_{3L})\Big)\ ,
\label{xxxa}
\end{equation}
\begin{equation}
\hat N_R=\hat N_L\ ,
\label{jjja}
\end{equation}
where $\hat \nu _i=\nu _i-[\nu _i]$,
\begin{equation}
\nu _1= \alpha' (b_3 \hat J_2-b_2\hat J_3)\ ,\ \ \
\nu _2= \alpha'(b_1 \hat J_3-b_3 \hat J_1)\ ,\ \ \ \
\nu _3= \alpha'(b_2\hat J_1-b_1\hat J_2)\ ,
\label{fas}
\end{equation}
or
\begin{equation}
\nu _i=\alpha' \epsilon_{ijk} b_k \hat J_j\ .
\end{equation}
In the case $b_1=b_2=b_3=b$, we recover the mass spectrum (\ref{xxx})
of the model of section 3.1.
\medskip
For generic values of $b_1,\, b_2,\, b_3$, all supersymmetries are broken.
An important question is whether the mass spectrum contains tachyons.
To look for a tachyon, we shall consider a state with $ \hat \nu _1,\, \hat \nu _2,\,
\hat \nu _3 $ different from zero, and with maximum value of $ (\hat J_{1R}-\hat J_{1L})
$.
\smallskip
The different situations that typically arise are illustrated below by
considering different regions of the parameter space.
\medskip
\noindent 1) All $\nu _i $ are in the interval $0<\nu _i<1 $:
\smallskip
{} The $S_{iR},\ S_{iL}$ satisfy the bounds
\begin{equation}
\big|S_{1R}\pm S_{2R}\pm S_{3R}\big|\leq \hat N_R+1\ ,\qquad
\big|S_{1L}\pm S_{2L}\pm S_{3L}\big|\leq \hat N_L+1\ .
\end{equation}
{}So we choose
$S_{1R}=N+1$, $S_{1L}=-N-1$, where $N\equiv \hat N_R=\hat N_L$,
$l_{1L}=l_{1R}=0$
and, in addition,
\begin{equation}
l_{2R}=1\ ,\ \ l_{2L}=S_{2R}=S_{2L}=0\ ,\ \
\label{kmmm}
\end{equation}
\begin{equation}
l_{3L}=1\ ,\ \ l_{3R}=S_{3R}=S_{3L}=0\ .\ \
\label{knnn}
\end{equation}
Hence we have (see (\ref{polk})~)
\begin{equation}
\hat J_{1R}=N+{1 \over 2}\ ,\ \ \ \ \hat J_{1L}=- N -{1 \over 2}\ ,\
\end{equation}
\begin{equation}
\hat J_{2R}=-{3\over 2}\ ,\ \ \ \ \hat J_{2L}={1 \over 2}\ ,\ \ \qquad
\hat J_{3R}=-{1 \over 2}\ ,\ \ \ \ \hat J_{3L}= {3\over 2}\ .\
\label{kbbb}
\end{equation}
For the deformation parameters, we assume $-1<b_2+b_3<0$,
$0<b_1<1$.
Note that this range already
excludes the supersymmetric case $b_1=b_2=b_3$.
In what follows we set $\alpha' =1$.
{} From eq. (\ref{fas}), we find
\begin{equation}
\hat\nu _1= \big| b_2+b_3 \big|\ ,\qquad \hat \nu _2=b_1\ ,\qquad \hat\nu _3=b_1\ .
\end{equation}
Then the mass formula takes the form
\begin{equation}
M^2=2 \Big( 2N +4b_1 - \big| b_2+b_3
\big|\ (2N+1)\Big)\ .
\label{uuuu}
\end{equation}
These states become tachyonic for
\begin{equation}
\big| b_2 +b_3 \big|> b_{\rm cr}\ ,\qquad b_{\rm
cr}\equiv {2N +4b_1\over 2N+1}\ .
\label{yyyyy}
\end{equation}
Thus, for any $N=0,1,2,...$, the states are tachyonic for sufficiently large $\big| b_2+b_3 \big|$.
The assumption $\big| b_2+b_3 \big|<1$ in the tachyonic regime
(\ref{yyyyy}) is satisfied by choosing $4b_1<1$.
This leaves a wide range of $b$-parameters where these tachyons exist.
Finally, one can show that fermions have positive mass squared, as expected.
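The tachyonic region of eq.~(\ref{yyyyy}) is easy to map out numerically. The following minimal Python sketch (the scanned parameter values and function names are ours, chosen only for illustration) evaluates eq.~(\ref{uuuu}) at a few points of the $(b_1,\, |b_2+b_3|)$ plane and flags where the above candidate state becomes tachyonic.
\begin{verbatim}
def M2_case1(N, b1, b23_abs):
    """alpha' M^2 of the candidate state of case 1), eq. (uuuu), with
    b23_abs = |b2 + b3|, assuming -1 < b2 + b3 < 0 and 0 < b1 < 1."""
    return 2 * (2 * N + 4 * b1 - b23_abs * (2 * N + 1))

def b_critical(N, b1):
    """Tachyon threshold of eq. (yyyyy)."""
    return (2 * N + 4 * b1) / (2 * N + 1)

for N in (0, 1, 2):
    for b1 in (0.05, 0.2):
        bcr = b_critical(N, b1)
        for b23 in (0.3, 0.6, 0.9):
            tag = "tachyonic" if b23 > bcr else "M^2 >= 0"
            print(f"N={N}, b1={b1:.2f}, |b2+b3|={b23:.1f}: "
                  f"M^2={M2_case1(N, b1, b23):6.2f}  ({tag}, b_cr={bcr:.2f})")
\end{verbatim}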
\medskip
\noindent 2) $\nu _1 $ is in the interval $-1 <\nu _1<0 $:
\smallskip
As pointed out in section 2, in this case the GSO projection is the reversed one.
Now $\hat N_{L,R}$ take the values $\hat N_{L,R}=-{1 \over 2},{1 \over 2},{3\over 2},...$ and we have the bound
\begin{equation}
\big|S_{1R}\pm S_{2R}\pm S_{3R}\big|\leq \hat N_R+{1 \over 2}\ ,\qquad
\big|S_{1L}\pm S_{2L}\pm S_{3L}\big|\leq \hat N_L+{1 \over 2}\ .
\end{equation}
{} We choose
$S_{1R}=N+{1 \over 2} $, $S_{1L}=-N-{1 \over 2}$, $N\equiv \hat N_R=\hat N_L$,
$l_{1L}=l_{1R}=0$,
so that
\begin{equation}
\hat J_{1R}=N\ ,\ \ \ \ \hat J_{1L}=- N \ .\
\end{equation}
In the planes 2 and 3, we choose the same quantum numbers as in
eqs.~(\ref{kmmm}), (\ref{knnn}), (\ref{kbbb}).
For the deformation parameters, we now
assume $0<b_2+b_3<1$, \ $0<b_1<1$.
{} From (\ref{fas}), we now find
\begin{equation}
\hat\nu _1= 1- b_2 -b_3 \ ,\qquad \hat \nu _2=b_1\ ,\qquad \hat\nu _3=b_1\ .
\end{equation}
Then the mass formula (\ref{xxxa}) becomes
\begin{equation}
M^2= 2 \Big( 2N +4b_1 - (1- b_2 -b_3 )\ 2N\Big)=
2\Big( 4b_1 + ( b_2+b_3 )\ 2N\Big)\ .
\end{equation}
The state $N=-{1 \over 2} $ is tachyonic for
\begin{equation}
b_2+b_3 > 4 b_1\ .
\label{uio}
\end{equation}
In the supersymmetric case, $b_1=b_2=b_3$, the condition
(\ref{uio}) is not satisfied and the state has a positive squared mass.
\medskip
\noindent 3) $\nu _1 $ is in the interval $-2 <\nu _1<-1 $:
\smallskip
Here we have again the standard GSO projection.
We choose exactly the same quantum numbers as in the case 1).
But now we assume $1<b_2+b_3<2$, $0<b_1<1$.
The supersymmetric case $b_1=b_2=b_3$ is included in the discussion, and it is interesting
to see how tachyons disappear.
{} We have
\begin{equation}
\hat\nu _1= 2- b_2-b_3 \ ,\qquad \hat \nu _2=b_1\ ,\qquad \hat\nu _3=b_1\ .
\end{equation}
Then the mass formula takes the form
\begin{equation}
M^2=2 \Big( 2N +4b_1 - (2- b_2-b_3)\ (2N+1)\Big)\ .
\label{moon}
\end{equation}
These states are tachyonic for
\begin{equation}
2-b_2-b_3>{2N+4b_1\over 2N+1}\ ,\qquad 4b_1< 1\ .
\end{equation}
Now let us specialize to the supersymmetric case $b_1=b_2=b_3$.
Then $4b_1=2(b_2+b_3)=4-2\hat \nu _1$. Hence
\begin{equation}
M^2=2\Big( 2N +(4-2\hat\nu _1) - \hat\nu _1\ (2N+1)\Big)=
2 \Big( (2N +4)(1 - \hat\nu _1) +\hat\nu _1 \Big)\ ,
\end{equation}
which is positive definite, since, by definition, $0<\hat\nu _1<1$.
\section{Energies of short strings in the Lunin-Maldacena\\ deformation of $AdS_5\times S^5$ }
Given the parallel between the models considered here
and the Lunin-Maldacena deformation $(AdS_5\times
S^5)_\beta $, an interesting question is whether there may be
a relation between the corresponding string spectra.
For strings of size much less than the $AdS_5$ or $S^5$ radius $R$,
the string dynamics is essentially as in flat spacetime.
We expect that the spectrum of such short strings in the
Lunin-Maldacena background $(AdS_5\times
S^5)_\beta $ will have a similar structure as the spectra discussed in
this paper,
with the change $\alpha'\to \alpha'/\sqrt{\lambda }$,
and $\alpha' b\to \hat \beta/\sqrt{\lambda }$,
$\hat \beta\equiv \beta \sqrt{\lambda }$.
The parameter $\beta $ is assumed to be real and it is what appears in
the Yang-Mills superpotential
${\rm Tr}\big[ e^{i\pi\beta} \Phi_1\Phi_2\Phi_3-e^{-i\pi\beta}
\Phi_1\Phi_3\Phi_2\big]$. Taking a flat-space limit of the string
spectrum in $(AdS_5\times S^5)_\beta $
necessarily requires sitting
at some point of $S^5$ (e.g. $r_3=1,\, r_1=r_2=0$),
and considering short strings. This procedure breaks the $Z_3$ symmetry
associated with exchange of the 1-2-3 planes. The resulting string spectrum
approaches a truncation of the spectrum of the model of section 3.1
(the spectrum of short strings in $(AdS_5\times S^5)_\beta $
cannot be described by the full spectrum (\ref{xxx}), because the latter involves
oscillator modes associated with six dimensions, as opposed to the
five dimensions of $S^5$).
One interesting problem would be to compare the string spectrum (\ref{xxx})
with the energy of semiclassical short strings in $(AdS_5\times
S^5)_\beta $ having $1\ll \hat J_i\ll\sqrt{\lambda }$.
The existence of tachyons in the three-parameter model of section 3.2
suggests that there could also be tachyons in the analog model of
\cite{frolov,frt2}.
It will be difficult to
see such possible tachyons in a semiclassical approximation at large $N$.
In particular, note that for the existence of the above tachyon states,
it is essential that there is a ``1" in $2N+1$ in (\ref{uuuu}) and
in
(\ref{moon}).
The origin of this 1 is a normal ordering contribution, and it is
negligible in a semiclassical approximation where $N\gg 1$.
There are tachyons at any given string level number $N$
in some regions of parameter space, including low values of $N$, in particular $N=0$.
{}From the point of view of the dual gauge theory, low $N$ means
short operators (the string level number $N$ should not, of course, be confused with the $N$ of $U(N)$).
It would be interesting to see if there is a
counterpart of the tachyon instabilities in the dual
non-supersymmetric gauge theory
of \cite{frolov,frt2}.
It would also be interesting to
understand the limit
taken at the end of section 2 (related to
$\beta\to p/q$) within the ${\cal N}=1$ superconformal gauge theory.
\smallskip
\section*{Acknowledgments}
I would like to thank A. Adams, J. Maldacena, T. Mateos, M. Spradlin and especially A.A. Tseytlin
for useful discussions and comments.
This work is
supported in part by the European
EC-RTN network MRTN-CT-2004-005104, and by MCYT FPA
2004-04582-C02-01 and CIRIT GC 2001SGR-00065.
\setcounter{section}{0}
\setcounter{subsection}{0}
\setcounter{equation}{0}
|
1,314,259,995,605 | arxiv | \section{\textbf{Introduction}}
Let $GL_n$ be the general linear group over an algebraically closed field $\mathbb{F}$. There is a much-studied decomposition of $GL_n$ into double cosets of the Borel subgroup $B\subset GL_n$ of invertible upper triangular matrices
\begin{equation}\label{E:GBdecomposition}
GL_n = \bigcup_{w\in S_n} B w B,
\end{equation}
where the union is indexed by the symmetric group $S_n$. Elements of $S_n$ are identified with
$0-1$ matrices with exactly one nonzero entry in each row and column.
The decomposition in (\ref{E:GBdecomposition}) is often referred to as the Bruhat decomposition and it holds, more generally, for reductive groups and reductive monoids (see \cite{PennellPutchaRenner,Renner86}). In the case of the monoid $M_n$ of $n\times n$ matrices, the Bruhat decomposition is given by
\begin{equation}
M_n = \bigcup_{\sigma \in R_n} B \sigma B,
\end{equation}
where the union is indexed by the rook monoid $R_n$. The elements of $R_n$ are identified with $0-1$ matrices which have at most one nonzero entry in each row and column.
The Bruhat-Chevalley order on $S_n$ is defined in terms of the inclusion relationships between double cosets in (\ref{E:GBdecomposition}). Namely,
if $v, w \in S_n$, then
\begin{equation}\label{E:BCgroup}
v \leq w\ \iff\ Bv B \subseteq \overline{Bw B},
\end{equation}
where the overline stands for the Zariski closure in $GL_n$.
There is a natural extension of this partial order to the rook monoid $R_n$ (see \cite{PennellPutchaRenner, Renner86} for more details):
\begin{equation}\label{E:BCmonoid}
\sigma \leq \tau\ \iff\ B\sigma B \subseteq \overline{B \tau B},
\end{equation}
for $\sigma,\tau\in R_n$.
In \cite{Putcha01}, Putcha describes the partial ordering
(\ref{E:BCmonoid}) for the constant-rank subsets of the rook monoid in terms of the Bruhat order
of related symmetric groups (he describes this partial order, much more generally, for any $J$-class of a Renner monoid).
In \cite{MS05}, using a partial ordering exactly like (\ref{E:BCmonoid}), Miller and Sturmfels study the poset of Zariski closures of $B\times B_+$-orbits on the space of the $k\times l$ matrices. Here $B$ denotes the group of the invertible upper triangular $k\times k$ matrices, and $B_+$ denotes the group of invertible lower triangular $l\times l$ matrices. These $B\times B_+$-orbits are indexed by the
$0-1$, $k\times l$ matrices with at most one nonzero entry in each row and column.
For computational purposes, one would like to have an efficient, combinatorial characterization of the Bruhat-Chevalley ordering on $R_n$. This characterization, in the case of the symmetric group, had been explained to us by V. Deodhar.
\subsubsection{Deodhar's characterization}
For an integer valued vector $a=(a_1,...,a_n)\in \mathbb{Z}^n$, let $\widetilde{a} = (a_{\alpha_1},....,a_{\alpha_n})$ be the rearrangement of the entries $a_1,...,a_n$ of $a$ in a non-increasing fashion;
\begin{equation*}
a_{\alpha_1} \geq a_{\alpha_2} \geq \cdots \geq a_{\alpha_n}.
\end{equation*}
The \textit{containment ordering}, ``$\leq_c$,'' on $\mathbb{Z}^n$ is then defined by
\begin{equation*}
a=(a_1,...,a_n) \leq_c b=(b_1,...,b_n) \iff a_{\alpha_j} \leq b_{\alpha_j}\ \text{for all}\ j=1,...,n.
\end{equation*}
where $\widetilde{a} = (a_{\alpha_1},....,a_{\alpha_n})$, and $\widetilde{b} = (b_{\alpha_1},....,b_{\alpha_n})$.
Let $k\in \{1,...,n\}$. The \textit{$k$'th truncation}, $a(k)$ of $a=(a_1,...,a_n)$ is defined to be
\begin{equation*}
a(k)=(a_1, a_2,...,a_k).
\end{equation*}
We represent the elements of the symmetric group $S_n$ by $n$-tuples; for $v \in S_n$ let $(v_1,...,v_n)$ be the sequence where $v_j$ is the row index of the nonzero entry in the $j$'th column of the matrix $v$. For example, the $4$-tuple associated with the permutation matrix
\begin{equation}\label{E:Pexample}
v=\begin{pmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0
\end{pmatrix}\ \text{is}\ (3142).
\end{equation}
In general, we write $v=(v_1,...,v_n)$ for the corresponding permutation matrix.
\begin{defn}
The Deodhar ordering, $\leq_D$, on $S_n$ is defined by
\begin{equation}\label{D:deodharsdef}
v=(v_1,...,v_n) \leq_D w=(w_1,...,w_n) \iff \widetilde{v(k)} \leq_c \widetilde{w(k)}\ \text{for all}\ k=1,...,n.
\end{equation}
\end{defn}
\begin{rem}
The Deodhar ordering, $\leq_D$ is equivalent to the Bruhat-Chevalley ordering on $S_n$. Although there seems to be no published proof of this fact, it follows as a corollary of our main theorem.
\end{rem}
For the rook monoid $R_n$, a combinatorial description of the Bruhat-Chevalley ordering is given in \cite{PennellPutchaRenner}. We summarize it here.
We represent the elements of $R_n$ by $n$-tuples of nonnegative integers. Given $x=(x_{ij}) \in R_n$, let $(a_1,...,a_n)$ be the sequence defined by
\begin{equation}\label{E:oneline}
a_j =
\begin{cases}
0, &\text{if the $j$th column consists of zeros;}\\
i, &\text{if $x_{ij}=1$.}
\end{cases}
\end{equation}
For example, the sequence associated with the matrix
\begin{equation*}
\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0
\end{pmatrix}
\end{equation*}
is $(3040)$.
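For concreteness, the passage from a rook matrix to the sequence (\ref{E:oneline}) can be carried out mechanically; the short Python sketch below (function name ours, matrices given as lists of rows) reproduces the sequence $(3040)$ for the matrix above.
\begin{verbatim}
def one_line(matrix):
    # column j contributes the row index of its nonzero entry, or 0
    n = len(matrix)
    seq = []
    for j in range(n):
        rows = [i + 1 for i in range(n) if matrix[i][j] == 1]
        seq.append(rows[0] if rows else 0)
    return tuple(seq)

m = [[0, 0, 0, 0],
     [0, 0, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 1, 0]]
assert one_line(m) == (3, 0, 4, 0)
\end{verbatim}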
\begin{thm}\label{T:PPR}\cite{PennellPutchaRenner}
Let $x = (a_1,...,a_n)$, $y=(b_1,...,b_n) \in R_n$. Then the Bruhat-Chevalley order on
$R_n$ is the smallest partial order on $R_n$ generated by declaring
$x \leq y$ if either
\begin{enumerate}
\item there exists an index $1 \leq i \leq n$ such that $b_i> a_i$ and $b_j = a_j$ for all $j\neq i$, or
\item there exist $1 \leq i < j \leq n$ such that $b_i=a_j,\ b_j=a_i$ with $b_i > b_j$, and for all $k\notin \{i,j\}$, $b_k = a_k$.
\end{enumerate}
\end{thm}
For example, let $x = (21403)$ and $y= (35201)$ in $R_5$. Then $x \leq_{PPR} y$ because
\begin{eqnarray*}
(21403) &\leq_{PPR}& (31402)\ \text{by Theorem \ref{T:PPR} part 2}\\
&\leq_{PPR}& (34102)\ \text{by Theorem \ref{T:PPR} part 2}\\
&\leq_{PPR}& (35102)\ \text{by Theorem \ref{T:PPR} part 1}\\
&\leq_{PPR}& (35201)\ \text{by Theorem \ref{T:PPR} part 2}.
\end{eqnarray*}
\begin{rem}
In Proposition 15.23 of \cite{MS05}, Miller and Sturmfels describe the particular case of Theorem \ref{T:PPR} where $y\in S_n$.
\end{rem}
For the sake of notation, the partial ordering defined by Theorem
\ref{T:PPR} is denoted by ``$\leq_{PPR}$,'' and referred to as the ``Pennell-Putcha-Renner'' ordering on $R_n$.
Notice that Deodhar's ordering (\ref{D:deodharsdef}) on $S_n$ can be defined verbatim on the rook monoid.
\begin{defn} \label{deodonrn.defn}
The {\em Deodhar ordering} $\leq_D$ on $R_n$ is defined as follows.
\begin{equation}\label{D:deodharonrn.def}
v=(v_1,...,v_n) \leq_D w=(w_1,...,w_n) \iff \widetilde{v(k)} \leq_c \widetilde{w(k)}\ \text{for all}\ k=1,...,n.
\end{equation}
\end{defn}
\begin{example}
Let $x=(4,0,2,3,1)$, and let $y=(4,3,0,5,1)$. Then $x \leq_D y$, because
\begin{eqnarray*}
\widetilde{x(1)}=(4) &\leq_c& \widetilde{y(1)}=(4) ,\\
\widetilde{x(2)}=(4,0) &\leq_c& \widetilde{y(2)}=(4,3), \\
\widetilde{x(3)}=(4,2,0) &\leq_c& \widetilde{y(3)}=(4,3,0), \\
\widetilde{x(4)}=(4,3,2,0) &\leq_c& \widetilde{y(4)}=(5,4,3,0), \\
\widetilde{x(5)}=(4,3,2,1,0) &\leq_c& \widetilde{y(5)}=(5,4,3,1,0).
\end{eqnarray*}
\end{example}
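Definition \ref{deodonrn.defn} translates directly into a finite check; the following Python sketch (function name ours) implements it and confirms the comparison in the example above.
\begin{verbatim}
def leq_D(x, y):
    # x <=_D y iff, for every truncation length k, the decreasingly
    # sorted truncations satisfy the entrywise containment inequality
    n = len(x)
    for k in range(1, n + 1):
        xs = sorted(x[:k], reverse=True)
        ys = sorted(y[:k], reverse=True)
        if any(a > b for a, b in zip(xs, ys)):
            return False
    return True

assert leq_D((4, 0, 2, 3, 1), (4, 3, 0, 5, 1))
assert not leq_D((4, 3, 0, 5, 1), (4, 0, 2, 3, 1))
\end{verbatim}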
The main theorem of this article is that, on $R_n$, the Deodhar ordering and the Pennell-Putcha-Renner ordering are identical.
The organization of the paper is as follows. In Section \ref{S:lengthfunction}, we study the length function on $R_n$. We show that
\begin{thm} \label{P:length}
Let $x = (a_1,...,a_n) \in R_n$. Then the dimension $\ell(x)=\dim(BxB)$ of the orbit $B x B$ is given by
\begin{equation}
\ell(x) = (\sum_{i=1}^n a_i^*) - coinv(x),\ \text{where}\
a_i^* =
\begin{cases}
a_i+n-i, & \text{if}\ a_i\neq 0, \\
0, & \text{if}\ a_i=0.
\end{cases}
\end{equation}
\end{thm}
In Section \ref{S:lemmas}, we prove two lemmas which sharpen the theorem of Pennell, Putcha and Renner. In Section \ref{S:another}, we find an equivalent description of Deodhar's ordering.
Finally, in Section \ref{S:final}, we prove that
\begin{thm}
The Deodhar ordering $\leq_D$ on $R_n$ is the same as the Pennell-Putcha-Renner $\leq_{PPR}$ ordering on $R_n$.
\end{thm}
\section{\textbf{the length function.}}\label{S:lengthfunction}
It is well known that the symmetric group $S_n$ is a graded poset, grading given by the length function
\begin{equation}\label{E:lengthsymmetric}
\ell(w)= \dim (B w B)=inv(w)+\dim(B)=inv(w)+{n+1\choose 2},
\end{equation}
where $w \in S_n$, and
\begin{equation}\label{E:invpermutation}
inv(w) = |\{ (i,j):\ 1\leq i < j \leq n,\ w_i>w_j \}|.
\end{equation}
In \cite{Renner86}, it is shown that the rook monoid is a graded poset, with respect to the length function
\begin{equation}
\ell(\sigma)= \dim (B\sigma B),\ \sigma \in R_n.
\end{equation}
In this section we give a combinatorial formula, similar to (\ref{E:lengthsymmetric}), for the length function on $R_n$.
Let $R_n^1$ be the set of all rank one elements of $R_n$. We denote the elements of $R_n^1$ by $E_{ij}=(e_{rs}) \in R_n$, where
\begin{equation*}
e_{rs} =
\begin{cases}
1, &\text{if $r=i$, and $s=j$,}\\
0, &\text{otherwise.}
\end{cases}
\end{equation*}
Let $\mathbf{T}_n$ be the set of all upper triangular matrices in $\mathbf{M}_n$.
\begin{lem}\label{L:dim1}
Let $B$ be the Borel subgroup of invertible upper triangular matrices, and let $x=(x_{rs})$ be an element of $R_n$. Then the dimension $\dim(Bx)$ is equal to the dimension of the linear subspace $\mathbf{T}_n x$ of $\mathbf{M}_n$, which is spanned by the following set:
\begin{equation*}
\{E_{ij}\in R_n^1:\ there\ exists\ a\ nonzero\ entry\ x_{rs}\ of\ x\ with\ s=j\ and\ r \geq i
\}.
\end{equation*}
\end{lem}
\begin{proof}
The linearity of $\mathbf{T}_n x \subset \mathbf{M}_n$ is clear. Since $\overline{Bx} = \overline{B} x = \mathbf{T}_n x$, and since the geometric dimension of a linear space is the same as its vector space dimension, $\dim (Bx) = \dim (\overline{B x }) = \dim (\mathbf{T}_n x)$. It is easy to see that, $\mathbf{T}_n x$ is spanned by $R_n^1 \cap \mathbf{T}_n x$. Matrix multiplication shows that $E_{i,j} \in R_n^1 \cap \mathbf{T}_n x$ if and only if there exists a nonzero entry $x_{rs}$ of $x$ with $r\geq i$ and $s=j$.
\end{proof}
\begin{lem}\label{L:dim2}
Let $B$ be the Borel subgroup of invertible upper triangular matrices, and let $x=(x_{rs})$ be an element of $R_n$. Then the dimension $\dim(xB)$ is equal to the dimension of the linear subspace $x\mathbf{T}_n$ of $\mathbf{M}_n$, which is spanned by the following set:
\begin{equation*}
\{E_{ij}\in R_n^1:\ there\ exists\ a\ nonzero\ entry\ x_{rs}\ of\ x\ with\ r=i\ and\ s \leq j
\}.
\end{equation*}
\end{lem}
\begin{proof}
Identical to the proof of Lemma \ref{L:dim1}.
\end{proof}
\begin{example}\label{e:4023}
Let $x\in R_4$ be given by the matrix
\begin{equation*}
x= \begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0
\end{pmatrix}.
\end{equation*}
Then, a generic element of $\mathbf{T}_4x$ is of the form
\begin{equation*}
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
0 & a_{22} & a_{23} & a_{24} \\
0 & 0 & a_{33} & a_{34} \\
0 & 0 & 0 & a_{44}
\end{pmatrix}
\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0
\end{pmatrix}
= \begin{pmatrix}
a_{14} & 0 & a_{12} & a_{13} \\
a_{24} & 0 & a_{22} & a_{23} \\
a_{34} & 0 & 0 & a_{33} \\
a_{44} & 0 & 0 & 0
\end{pmatrix},
\end{equation*}
for some $a_{ij}\in \mathbb{F}$. Therefore, $\dim (\mathbf{T}_4 x)= 9$.
Similarly, an arbitrary element of $x\mathbf{T}_4$ is of the form
\begin{equation*}
\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
b_{11} & b_{12} & b_{13} & b_{14} \\
0 & b_{22} & b_{23} & b_{24} \\
0 & 0 & b_{33} & b_{34} \\
0 & 0 & 0 & b_{44}
\end{pmatrix}
= \begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & b_{33} & b_{34} \\
0 & 0 & 0 & b_{44} \\
b_{11} & b_{12} & b_{13} & b_{14}
\end{pmatrix},
\end{equation*}
for some $b_{ij}\in \mathbb{F}$.
Thus $\dim (x \mathbf{T}_4)= 7$.
\end{example}
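The dimensions computed in this example follow from Lemmas \ref{L:dim1} and \ref{L:dim2} by a simple count of positions; the Python sketch below (function names ours) performs this count directly from the one line notation $(4,0,2,3)$ of the matrix above.
\begin{verbatim}
def dim_Bx(seq):
    # positions lying on or above a nonzero entry: a_i of them in column i
    return sum(a for a in seq if a > 0)

def dim_xB(seq):
    # positions lying on or to the right of a nonzero entry in column i
    n = len(seq)
    return sum(n - i for i, a in enumerate(seq) if a > 0)

x = (4, 0, 2, 3)
assert dim_Bx(x) == 9 and dim_xB(x) == 7
\end{verbatim}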
\begin{rem}\label{R:oneline}
Let $x=(a_1,...,a_n)$ be the ``one line" representation for $x=(x_{rs})\in R_n$, as in (\ref{E:oneline}).
If $a_i\neq 0$ for some $i\in \{1,...,n\}$, then $a_i$ is the row index of a nonzero entry $x_{a_i i}$ of $x$. Therefore, $E_{r,s} \in R_n^1 \cap \mathbf{T}_n x$ if and only if there exists a nonzero entry of $x$ at the position $(a_i,i)$ with $s=i$ and $r\leq a_i$. Similarly, $E_{r,s} \in R_n^1 \cap x \mathbf{T}_n$ if and only if there exists a nonzero entry of $x$ at the position $(a_j,j)$ with $r=a_j$ and $s \geq j$.
\end{rem}
\begin{defn}
Let $x=(a_1,....,a_n)\in R_n$. A pair $(i,j)$ of indices $1\leq i<j \leq n$ is called a \textit{coinversion pair} for $x$, if $0< a_i < a_j$. By abuse of notation, we use
\textit{coinv} for both the set of coinversion pairs of $x$, as well as its cardinality.
\end{defn}
\begin{example}
Let $x=(4,0,2,3)$. Then, the only coinversion pair for $x$ is $(3,4)$. Therefore, $coinv(x)=1$.
\end{example}
\begin{thm} \label{T:length}
Let $x = (a_1,...,a_n) \in R_n$. Then the dimension $\ell(x)=\dim(BxB)$ of the orbit $B x B$ is given by
\begin{equation}
\ell(x) = (\sum_{i=1}^n a_i^*) - coinv(x),\ \text{where}\
a_i^* =
\begin{cases}
a_i+n-i, & \text{if}\ a_i\neq 0 \\
0, & \text{if}\ a_i=0
\end{cases}
\end{equation}
\end{thm}
\begin{proof}
Recall from \cite{Renner95} that the dimension of the orbit $BxB$ can be calculated
by
\begin{equation}\label{E:dimformula}
\dim (BxB) = \dim (Bx) + \dim (xB) - \dim (Bx \cap xB).
\end{equation}
By Lemma \ref{L:dim1}, $\dim (Bx)$ is the number of positions on or above some nonzero entry of the matrix $x\in R_n$. In other words, by Remark \ref{R:oneline}, if $x=(a_1,...,a_n)$, then $\sum_{i=1}^n a_i$ is equal to $\dim(Bx)$.
Similarly, by Lemma \ref{L:dim2}, $\dim (xB)$ is the number of positions on or to the right of some nonzero entry of $x$. The number of positions on and to the right of the nonzero entry at the $(a_i,i)$'th position of the matrix $x$ is equal to $n-i+1$. This shows that
\begin{equation*}
\dim (Bx) + \dim (xB) = \sum_{i=1}^n \overline{a_i},
\end{equation*}
where
\begin{equation*}
\overline{a_i} =
\begin{cases}
a_i+n-i+1, & \text{if}\ a_i\neq 0, \\
0, & \text{if}\ a_i=0.
\end{cases}
\end{equation*}
The number of nonzero entries of $x$ is denoted by $rank(x)$.
Thus, we have
\begin{equation*}
\dim (Bx) + \dim (xB) = \sum_{i=1}^n a_i^* + rank(x),
\end{equation*}
where
\begin{equation*}
a_i^* =
\begin{cases}
a_i+n-i, & \text{if}\ a_i\neq 0, \\
0, & \text{if}\ a_i=0.
\end{cases}
\end{equation*}
Therefore, it is enough to prove that
\begin{equation*}
\dim (Bx \cap xB) = rank(x) + coinv((a_1,....,a_n)).
\end{equation*}
By a similar argument as in the proof of Lemma \ref{L:dim1}, the dimension of $Bx \cap x B$ is equal to $\dim( \mathbf{T}_n x \cap x \mathbf{T}_n)$, which is equal to the cardinality of the set $R_n^1 \cap \mathbf{T}_n x \cap x \mathbf{T}_n$.
Let $E_{rs}\in R_n^1 \cap \mathbf{T}_n x \cap x \mathbf{T}_n$ be a rank 1 element whose nonzero entry is at the $(r,s)$'th position. By Remark \ref{R:oneline}, $E_{rs} \in R_n^1 \cap \mathbf{T}_n x \cap x \mathbf{T}_n$ if and only if there exist nonzero entries of $x$ at the positions $(a_i,i)$ and $(a_j,j)$ such that $r\leq a_i,\ s=i$ and $r=a_j,\ s \geq j$. We have two possibilities. Either $(a_i,i)=(a_j,j)$, or not.
Clearly, the number of rank one elements for which the equality $(a_i,i)=(a_j,j)$ holds is equal to $rank(x)$. On the other hand, if $(a_i,i)\neq (a_j,j)$, then we see that $j < i$ and $0<a_j<a_i$, so that $(j,i)$ is a coinversion pair. Therefore, the number of rank one elements with $(a_i,i)\neq (a_j,j)$ is equal to the number of coinversions of the sequence $(a_1,...,a_n)$. Therefore,
\begin{equation*}
\dim (Bx \cap xB) = |R_n^1 \cap \mathbf{T}_n x \cap x \mathbf{T}_n| =rank(x) + coinv((a_1,....,a_n)).
\end{equation*}
\end{proof}
\begin{rem}
Let $x=(a_1,...,a_n) \in R_n$ be a permutation. Then
\begin{eqnarray*}
\ell(x) &=& \Big(\sum_{i=1}^n (a_i+n-i)\Big) - coinv(x)\\
&=&{n+1 \choose 2}+{n\choose 2} - coinv(x)\\
&=& {n+1 \choose 2}+inv(x),\\
\end{eqnarray*}
which agrees with the formula (\ref{E:lengthsymmetric}).
\end{rem}
\begin{example}
We continue with the notation of the example \ref{e:4023}. The generic element of $\mathbf{T}_4x \cap x\mathbf{T}_4$ has the form
\begin{equation*}
\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & * & * \\
0 & 0 & 0 & * \\
* & 0 & 0 & 0
\end{pmatrix},
\end{equation*}
where $*$ denotes an arbitrary element of $\mathbb{F}$. Therefore, $\dim(\mathbf{T}_4 x \cap x \mathbf{T}_4) = 4$, and by formula \ref{E:dimformula}, we have
$\dim(BxB) = 9+7 - 4 = 12$. On the other hand, $x$ is represented in ``one line" notation by $(4,0,2,3)$, and by Theorem \ref{P:length} we have
\begin{equation*}
\ell(x) = (4+4-1)+(2+4-3)+(3+4-4)-1=12.
\end{equation*}
\end{example}
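The computation in the last example is easy to mechanize; the Python sketch below (function name ours) evaluates the formula of Theorem \ref{T:length} and reproduces $\ell(x)=12$.
\begin{verbatim}
def length(x):
    # l(x) = sum of a_i^* minus the number of coinversion pairs,
    # where a_i^* = a_i + n - i if a_i is nonzero and 0 otherwise
    n = len(x)
    star = sum(a + n - (i + 1) for i, a in enumerate(x) if a != 0)
    coinv = sum(1 for i in range(n) for j in range(i + 1, n)
                if 0 < x[i] < x[j])
    return star - coinv

assert length((4, 0, 2, 3)) == 12
\end{verbatim}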
\section{\textbf{Two important lemmas.}}\label{S:lemmas}
Recall that we denote the Bruhat-Chevalley ordering on $R_n$, as in Theorem \ref{T:PPR}, by $\leq_{PPR}$. The following two lemmas are critical for deciding if $x \leq_{PPR} y$ is a covering relation.
\begin{lem} \label{L:PPRcovering0}
Let $x=(a_1,...,a_n)$ and $y = (b_1,...,b_n)$ be elements of $R_n$. Suppose that $a_k=b_k$ for all $k \in \{ 1,...,\widehat{i},...,n\}$ and $a_i < b_i$. Then, $\ell(y) = \ell( x)+ 1$ if and only if either
\begin{enumerate}
\item $b_i = a_i +1$, or
\item there exists a sequence of indices $1 \leq j_1< \cdots < j_s < i$ such that the set $\{a_{j_1},...,a_{j_s}\}$ is equal to $\{a_i+1,...,a_i+s\}$, and $b_i=a_i+s+1$.
\end{enumerate}
\end{lem}
\begin{proof}
Note that by the hypotheses of the lemma, Theorem \ref{T:PPR} implies that $ x \leq_{PPR} y$.
We first show that if (1) or (2) holds, then $\ell(y)= \ell(x)+1$, in other words $y$ covers $x$.
If $b_i = a_i +1$, then the lemma follows from Theorem \ref{T:length}. So, we assume that there exists a sequence of indices $1 \leq j_1< \cdots < j_s < i$ such that the set $\{a_{j_1},...,a_{j_s}\}$ is equal to $\{a_i+1,...,a_i+s\}$, and $b_i=a_i+s+1$. Then,
\begin{eqnarray*}
\ell(y) &=& \sum_{j=1}^n b_j^* - coinv(y)\\
&=& ( \sum_{j=1, j\neq i}^n a_j^* )+ b_i^* - coinv(y)\\
&=& ( \sum_{j=1, j\neq i}^n a_j^* )+ a_i+s+1+n-i - coinv(y)\\
&=& ( \sum_{j=1}^n a_j^* )+ s+1 - coinv(y).
\end{eqnarray*}
Now it suffices to show that $coinv(y)=s+coinv(x)$. Observe that, when we replace $a_i$ by $b_i$, the following set of pairs, which are not coinversion pairs for $x$,
\begin{equation*}
\{ (j_k, i)|\ k=1,....,s\},
\end{equation*}
become coinversion pairs for $y$.
Also, upon replacing the entry $a_i$ by $b_i$, a coinversion pair of $x$ of the form $(l,i)$ or $(i,l)$ (where $l\neq j_k$) remains a coinversion pair for $y$. Therefore,
\begin{equation*}
coinv(y) = s+ coinv(x),
\end{equation*}
and hence $\ell(y) = \ell(x) +1$.
We proceed to prove the converse statement. Assume that $\ell(y)=\ell(x)+1$. Since $b_i > a_i$, there exists $d>0$ such that $b_i=a_i+d$. If $d=1$ we are in case (1), so we may assume that $d>1$. Then the length of $y$ can be computed as follows.
\begin{eqnarray*}
\ell(y) &=& \sum_{j=1}^n b_j^* - coinv(y)\\
&=& ( \sum_{j=1, j\neq i}^n a_j^* )+ b_i^* - coinv(y)\\
&=& ( \sum_{j=1, j\neq i}^n a_j^* )+ a_i+d+n-i - coinv(y)\\
&=& ( \sum_{j=1}^n a_j^* )+ d - coinv(y)\\
&=& \ell(x)+d+coinv(x)-coinv(y).
\end{eqnarray*}
Hence $d+coinv(x)-coinv(y)=1$, or $coinv(y)-coinv(x)=d-1$.
We inspect the difference $coinv(x)-coinv(y)$ more closely.
If $(k,i)$ with $k<i$ is a coinversion for $x$, then it remains a coinversion for $y$.
Clearly this is also true for the pairs of the form $(k,l)$ where $k<i<l$, or $i<k<l$, or $k<l<i$.
Therefore, the difference between $coinv(y)$ and $coinv(x)$ occurs at the pairs of the form
\begin{enumerate}
\item $(k,i),\ k<i$ such that $a_i < a_k < b_i$, or
\item $(i,l),\ i<l$, such that $a_i < a_l < b_i$.
\end{enumerate}
In the first case, some new coinversions are added, and in the second case some coinversions are deleted.
Denote the number of pairs of the first type by $n_1$ and the number of pairs of the second type by $n_2$. Then $coinv(y) = coinv(x) + n_1-n_2$, or $coinv(y)-coinv(x) = n_1-n_2$. Obviously $0 \leq n_1, n_2 \leq d-1$ (because $b_i=a_i+d$). Hence, we have that $n_1=d-1$ and that $n_2=0$. Therefore, every value strictly between $a_i$ and $a_i+d=b_i$ appears as an entry $a_k$ with $k<i$. This completes the proof.
\end{proof}
\begin{example}
Let $x=(4,0,5,0,3,1)$, and let $y=(4,0,5,0,6,1)$. Then $\ell(x)= 21$, and $\ell(y)=22$, so $y$ covers $x$. Let $z=(6,0,5,0,3,1)$. Then $\ell(z)=24$, so $z$ does not cover $x$.
\end{example}
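The covering criterion of Lemma \ref{L:PPRcovering0} can also be tested directly; in the Python sketch below (function name ours, positions $0$-indexed) the first call corresponds to raising the fifth entry of $x$ from $3$ to $6$, the second to raising the first entry from $4$ to $6$.
\begin{verbatim}
def raise_is_cover(x, i, b):
    # raising x_i to b (b > x_i) gives a cover if and only if every
    # value strictly between x_i and b already occurs before position i
    return all(v in x[:i] for v in range(x[i] + 1, b))

x = (4, 0, 5, 0, 3, 1)
assert raise_is_cover(x, 4, 6)       # y = (4,0,5,0,6,1)
assert not raise_is_cover(x, 0, 6)   # z = (6,0,5,0,3,1)
\end{verbatim}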
\begin{lem} \label{L:PPRcovering1}
Let $x=(a_1,...,a_n)$ and $y = (b_1,...,b_n)$ be two elements of $R_n$. Suppose that $a_j=b_i,\ a_i = b_j$ and $b_j < b_i$ where $i < j$. Furthermore, suppose that for all $k\in \{1,...\widehat{i},...,\widehat{j},...,n\}$, $a_k=b_k$.
Then, $\ell(y) = \ell( x)+ 1$ if and only if for $s=i+1,...,j-1$, either $a_j< a_s$, or $a_s < a_i$.
\end{lem}
\begin{proof}
Suppose that $x$ and $y$ are as in the hypothesis. Suppose also that $\ell(y) = \ell(x)+1$. We proceed to show that for $s=i+1,...,j-1$, either $a_j< a_s$, or $a_s < a_i$. Clearly, the sets $\{a_1,...,a_n\}$ and $\{b_1,...,b_n\}$ are equal, hence $\sum_{t=1}^n a_t = \sum_{t=1}^n b_t $. Therefore, the difference between $\ell(x)$ and $\ell(y)$ is determined by the associated coinversion sets of $x$ and $y$.
Assume that there exists an $s \in \{i+1,...,j-1\}$ such that $a_i < a_s < a_j $. Then, upon interchanging $a_i$ with $a_j$ to get $y$ from $x$, the pairs $(i,s),\ (s,j)$ and $(i,j)$ are no longer coinversions for $y$. This shows that if there exists such an $s$ with $a_i < a_s < a_j$, then $\ell(y)\geq\ell(x)+2$. This contradicts the assumption that $\ell(y) = \ell(x)+1$. Therefore, there exists no $s \in \{i+1,...,j-1\}$ such that $a_i < a_s < a_j $.
Conversely, assume that for every $s=i+1,...,j-1$, we have $a_i > a_s$ or $a_s > a_j$. If $a_i > a_s$, then the pair $(s,j)$ is a coinversion pair for both $x$ and $y$. On the other hand, the pair $(i,s)$ is a coinversion neither for $x$ nor for $y$. Similarly, if $a_s>a_j$, then the pair $(i,s)$ is a coinversion pair for both $x$ and $y$, and the pair $(s,j)$ is a coinversion pair neither for $x$ nor for $y$. Therefore, we conclude that at any pair of the form $(k,l)$ with $i \leq k < l \leq j$, other than $(i,j)$, the coinversion status is not affected. It remains to check pairs of the form $(k,l)$ with either $k <i$, or $j< k$. In the first case, i.e., $k<i$, as $a_i$ is interchanged with $a_j$, the contribution of $(k,l)$ to the coinversion count does not change, since the relative positions of $a_k$ and $a_l$ do not alter. Similarly, in the second case, i.e., $j<k$, since the relative positions of $a_k$ and $a_l$ do not alter, their contribution to the coinversion count does not change. Therefore, the only coinversion change occurs at the pair $(i,j)$, and hence, $\ell(y)= \ell(x)+1$.
This completes the proof.
\end{proof}
\begin{example}
Let $x=(2,6,5,0,4,1,7)$, and let $y=(4,6,5,0,2,1,7)$. Then $\ell(x)= 35$, and $\ell(y)=36$. Let $z=(7,6,5,0,4,1,2)$. Then $\ell(z)=42$.
\end{example}
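Similarly, the condition of Lemma \ref{L:PPRcovering1} admits a direct test; in the Python sketch below (function name ours, positions $0$-indexed) the first call exchanges the entries $2$ and $4$ of $x$, the second the entries $2$ and $7$.
\begin{verbatim}
def swap_is_cover(x, i, j):
    # exchanging x_i < x_j (i < j) gives a cover if and only if no
    # entry strictly between them occurs at a position between i and j
    return all(not (x[i] < x[s] < x[j]) for s in range(i + 1, j))

x = (2, 6, 5, 0, 4, 1, 7)
assert swap_is_cover(x, 0, 4)       # y = (4,6,5,0,2,1,7)
assert not swap_is_cover(x, 0, 6)   # z = (7,6,5,0,4,1,2)
\end{verbatim}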
\section{\textbf{Another characterization of $\leq_D$.}}\label{S:another}
As mentioned in the introduction, our goal is to show that
the $\leq_D$ ordering on $R_n$ is the same as the $\leq_{PPR}$ ordering. In this section, we find another, useful characterization of the Deodhar ordering.
\begin{defn}
Let $x=(a_1,...,a_n) \in R_n$ and let $a \in \mathbb{Z}$. We define
\begin{equation*}
\Gamma(x,a) = \{ a_i\in x|\ a_i > a\}.
\end{equation*}
\end{defn}
\begin{rem}\label{R:cposition}
Let $a_i$ be a nonzero entry of $x=(a_1,....,a_n)\in R_n$. Then,
$|\Gamma(x,a_i)|+1$ is the position of $a_i$ in the reordering
$\widetilde{x} = (a_{\alpha_1} \geq \cdots \geq a_{\alpha_n})$ of the entries of $x$. For example,
if $x=(3,0,5,1,0,4)$, then $\widetilde{x}=(5,4,3,1,0,0)$, and $|\Gamma(x,1)|+1=4$.
\end{rem}
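In computational terms, $|\Gamma(x,a)|$ is a simple count; the Python sketch below (function name ours) reproduces the value quoted in the remark.
\begin{verbatim}
def gamma_count(x, a):
    # number of entries of x strictly larger than a
    return sum(1 for v in x if v > a)

x = (3, 0, 5, 1, 0, 4)
assert gamma_count(x, 1) + 1 == 4   # position of 1 in (5,4,3,1,0,0)
\end{verbatim}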
\begin{prop}\label{P:containment}
Let $x=(a_1,....,a_n)$ and $y=(b_1,...,b_n)$ be two elements from $R_n$. Then $x \leq_c y$ if and only if $|\Gamma(x,a_k)| \leq |\Gamma(y,a_k)|$ for all $k=1,....,n.$
\end{prop}
\begin{proof}
Let $\widetilde{y}= (b_{\alpha_1} \geq \cdots \geq b_{\alpha_n})$ and $\widetilde{x} = (a_{\alpha_1} \geq \cdots \geq a_{\alpha_n})$ be the reorderings of the entries of $y$ and of $x$ respectively. Then, by the Remark \ref{R:cposition}, $a_{\alpha_{s+1}}$ is the entry $a_k$ of $x$ for which $|\Gamma(x,a_k)|=s$. Therefore, $b_{\alpha_{s+1}} \geq a_{\alpha_{s+1}}$ if and only if the number of entries of $y$ which are larger than $a_k$ is at least the number of entries of $x$ which are larger than $a_k$. In other words, $b_{\alpha_{s+1}} \geq a_{\alpha_{s+1}}$ if and only if $|\Gamma(x,a_k)| \leq |\Gamma(y,a_k)|$. Thus $x \leq_c y$ if and only if $|\Gamma(x,a_k)| \leq |\Gamma(y,a_k)|$, for all $k=1,...,n.$
\end{proof}
As a corollary of the Proposition \ref{P:containment}, we have
\begin{cor}\label{C:containment}
Let $x=(a_1,....,a_n),$ and $y=(b_1,...,b_n)$ be two elements of $R_n$. Then $y \geq_D x$ if and only if for all $1\leq k \leq n$ and for all $m \leq k$, $|\Gamma(x(k),a_m)| \leq |\Gamma(y(k),a_m)|$.
\end{cor}
\begin{proof}
Immediate from Proposition \ref{P:containment}, and the definition of the Deodhar ordering.
\end{proof}
\begin{example}
Let $x=(a_1,a_2,a_3)=(1,0,3)$ and let $y=(b_1,b_2,b_3)=(3,0,2)$. Then
\begin{eqnarray*}
|\Gamma(x(1),a_1)|=0 &\leq& |\Gamma(y(1),a_1)|=1,\\
|\Gamma(x(2),a_1)|=0 &\leq& |\Gamma(y(2),a_1)|=1,\\
|\Gamma(x(2),a_2)|=1 &\leq& |\Gamma(y(2),a_2)|=1,\\
|\Gamma(x(3),a_1)|=1 &\leq& |\Gamma(y(3),a_1)|=2,\\
|\Gamma(x(3),a_2)|=2 &\leq& |\Gamma(y(3),a_2)|=2,\\
|\Gamma(x(3),a_3)|=0 &\leq& |\Gamma(y(3),a_3)|=0.\\
\end{eqnarray*}
Therefore, $x\leq_D y$.
\end{example}
\begin{rem}\label{R:deodhar0}
It follows from the definition of the Deodhar ordering that if $(a_1,....,a_n) \leq_D (b_1,....,b_n)$, then
$(a_1,...,a_k) \leq_D (b_1,...,b_k)$ for any $k\in \{1,....,n\}$. Also, by repeated application of Proposition \ref{P:containment}, it follows that $$(a_1,....,a_k,c_{k+1},...,c_m) \leq_D (b_1,....,b_k,c_{k+1},...,c_m)$$ for any set $\{c_{k+1},....,c_m\}$ of nonnegative integers.
\end{rem}
\section{\textbf{The Main Theorem.}}\label{S:final}
We show in this section that the covering relation for the ordering $\leq_{PPR}$ on $R_n$ is the same as the covering relation for the ordering $\leq_{D}$ on $R_n$. Our notation for these covering relations is
``$y \rightarrow_D x$,'' and ``$y \rightarrow_{PPR} x$,'' respectively.
\begin{lem}\label{L:preparation2}
Let $x=(a_1,....,a_n),\ y= (b_1,...,b_n)$ and $z=(c_1,...,c_n)$ be three elements from $R_n$ such that $a_k = b_k$ for all $k\in \{1,...,\widehat{i},...,n\}$ and $a_i < b_i$. Furthermore, suppose that $c_k=a_k$ for $k=1,...,i$. If $x \leq_D z \leq_D y$ and $\ell(y) = \ell(x)+1$, then $z=x$.
\end{lem}
\begin{proof}
Assume otherwise that $z \neq x$. Let $j > i$ be the smallest number such that $c_k=a_k$ for $k<j$ but $c_j \neq a_j$. Since $ x \leq_D z$, it cannot be true that $c_j < a_j$.
So, we have that $a_j < c_j$. This, in particular, implies that $c_j$ is nonzero.
We now compare $c_j$ with $a_i$. Observe that $c_j=a_i$ is not possible. Thus, there are two cases; either $c_j < a_i$ or $a_i < c_j$.
We proceed with the first case. Then, we have $a_j=b_j < c_j < a_i = c_i < b_i$. Recall that $\Gamma(z(j),b_j) = \{ c_k|\ b_j < c_k,\ k=1,...,j\}$, and that $\Gamma(y(j),b_j) = \{ b_k|\ b_j < b_k,\ k=1,...,j\}$.
Since,
\begin{equation*}
\{ b_1,..., b_j\} \setminus \{b_i,b_j\} = \{ c_1,...,c_j \} \setminus \{c_j,c_i\}.
\end{equation*}
and since $b_j < c_j< c_i$, we see that $|\Gamma(z(j),b_j) | = |\Gamma(y(j),b_j)| +1$. By the Remark \ref{R:cposition}, this is equal to the position of $b_j$ in $\widetilde{y(j)}$. In other words, the position of $b_j$ in $\widetilde{y(j)}$ is $\alpha_{s}=|\Gamma(z(j),b_j) |$.
On the other hand, $|\Gamma(z(j),b_j) |$ is equal to the number of entries of $z(j)$ which are larger than $b_j$. Therefore, $c_{\alpha_s}>b_{\alpha_s}=b_j$. But this is a contradiction to $z(j) \leq_c y(j)$.
Therefore, the first case, $c_j<a_i$ is not possible.
We assume that $a_i < c_j$. Since $a_j=b_j$, and since by our initial assumption $a_j < c_j$, we have that $b_j< c_j$. Since $i<j$, and since $\ell(y) = \ell(x) +1$, Lemma \ref{L:PPRcovering0} implies that $b_i \leq c_j$.
Assume for a second that $b_i < c_j$. Let $\alpha_s$ be the position of $c_j$ in $\widetilde{z(j)}$.
Since,
\begin{equation*}
\{ b_1,..., b_j\} \setminus \{b_i,b_j\} = \{ c_1,...,c_j \} \setminus \{c_j,c_i\},
\end{equation*}
and since, $c_i<c_j$, $b_i< c_j$, and $b_j < c_j$,
we see that $|\Gamma(z(j),c_j) | = |\Gamma(y(j),c_j)|$. Therefore, $b_{\alpha_s}<c_{\alpha_s}=c_j$. But this contradicts the fact that $z(j) \leq_c y(j)$.
Therefore, we assume that $b_i=c_j$. Since $b_j=a_j<c_j=b_i$, and since $\ell(y) = \ell(x) +1$, Lemma \ref{L:PPRcovering0} implies that $b_j \leq c_i=a_i < c_j$. We look at the position $\alpha_s$ of $c_i$ in $\widetilde{z(j)}$. Since,
\begin{equation*}
\{ b_1,..., b_j\} \setminus \{b_i,b_j\} = \{ c_1,...,c_j \} \setminus \{c_j,c_i\},
\end{equation*}
we see that $|\Gamma(z(j),c_i) | = |\Gamma(y(j),c_i)|$. Therefore, $b_{\alpha_s}<c_{\alpha_s}=c_i$. This contradicts the fact that $z(j) \leq_c y(j)$. We have handled all the cases, and the proof is complete.
\end{proof}
\begin{lem}\label{L:preparation3}
Let $x=(a_1,....,a_n),\ y= (b_1,...,b_n)$ and $z=(c_1,...,c_n)$ be three elements from $R_n$ such that $a_k = b_k$ for all $k\in \{1,...,\widehat{i},...,n\}$ and $a_i < b_i$. Furthermore, $c_k=b_k$ for $k=1,...,i$. If $x \leq_D z \leq_D y$ and $\ell(y) = \ell(x)+1$, then $z=y$.
\end{lem}
\begin{proof}
We proceed as in the proof of Lemma \ref{L:preparation2}. Assume otherwise that $z \neq y$, and let $j > i$ be the first position where $z$ differs from $y$. Hence, there are now two subcases; either $c_j < b_j$ or else $b_j < c_j$.
In the second case, with $b_j< c_j$, we see that $y(j) <_c z(j)$, which contradicts the fact that $ z \leq_D y$.
Therefore, we assume that $c_j < b_j = a_j$. There are now two subcases; either $c_j < a_i$, or else $a_i < c_j$. We first treat the case $c_j<a_i$.
Recall that $\Gamma(z(j),c_j) = \{ c_k|\ c_j < c_k,\ k=1,...,j\}$, and that $\Gamma(x(j),c_j) = \{ a_k|\ c_j < a_k,\ k=1,...,j\}$. Then, since
\begin{equation*}
\{ a_1,..., a_j\} \setminus \{a_i,a_j\} = \{ c_1,...,c_j \} \setminus \{c_j,c_i\},
\end{equation*}
and $c_j < a_i,\ a_j$, we see that $|\Gamma(z(j),c_j) | +1 = |\Gamma(x(j),c_j)|$.
This shows the following; if the position of $c_j$ in $\widetilde{z(j)}$ is $\alpha_{s}$, then $a_{\alpha_{s}} > c_{\alpha_{s}} = c_j$, a contradiction to $x(j) \leq_c z(j)$.
We proceed with the case that $a_i < c_j$. Since $\ell(y) = \ell(x) +1$, and $z(j-1) = y(j-1)$, we see that $c_j$ must be larger than $c_i=b_i = a_i +s +1$ (or larger than $c_i=b_i =a_i +1$). Therefore, similar to the above, since
\begin{equation*}
\{ a_1,..., a_n\} \setminus \{a_i,a_j\} = \{ c_1,...,c_n \} \setminus \{c_j,c_i\},
\end{equation*}
and $a_i < c_j < a_j$, and $c_i < c_j$, we see that $|\Gamma(z(j),c_j) | +1 = |\Gamma(x(j),c_j)|$. This shows the following; if the position of $c_j$ in $\widetilde{z(j)}$ is $\alpha_{s}$, then $a_{\alpha_{s}} > c_{\alpha_{s}} = c_j$, a contradiction to $x(j) \leq_c z(j)$.
Therefore, we conclude that $z = y$.
\end{proof}
\begin{lem}\label{L:D1}
Let $x=(a_1,....,a_n)$ and $z=(c_1,...,c_n)$ be two elements from $R_n$. Suppose that
$c_i=a_r$ and $c_r=a_i$, with $i<r$. Furthermore, suppose that
$c_k=a_k$, for $k\notin \{i,r\}$.
If $a_r > a_i$, then $z \gneq_D x$.
\end{lem}
\begin{proof}
This follows directly from Corollary \ref{C:containment}.
\end{proof}
\begin{prop}\label{P:Deodharcovering0}
Let $x=(a_1,...,a_n)$ and $y= (b_1,...,b_n)$ be two elements from $R_n$ such that
$a_k = b_k$ for all $k\in \{1,...,\widehat{i},...,n\}$ and $a_i < b_i$. Then $\ell(y) = \ell(x) +1$ if and only if $y \rightarrow_D x$.
\end{prop}
\begin{proof}
It is clear from the hypotheses that $x<_{PPR} y$, and that $x <_D y$. We first show that if $\ell(y) = \ell(x)+1$, then $y \rightarrow_D x$. Let $z=(c_1,...,c_n) \in R_n$ be such that $x \leq_D z \leq_D y$. Then, since $a_k = b_k$ for $k=1,...,i-1$, we must have $c_k=a_k$ for $k=1,...,i-1$. In other words, $x(k)=z(k)=y(k)$ for $k=1,...,i-1$. Since $x(i) \leq_c z(i) \leq_c y(i)$, we must also have $a_i \leq c_i \leq b_i$. Therefore, either $a_i=c_i$, or $a_i< c_i$. In the former case, by Lemma \ref{L:preparation2}, $z$ is equal to $x$. So assume that $a_i < c_i \leq b_i$, so that $x<_D z \leq_D y$. We are going to show that $z=y$.
As in the notation of Lemma \ref{L:PPRcovering0}, if $b_i = a_i+s+1$ for some $s\geq 0$, then we must have $c_i=b_i$. This is because $c_i$ cannot be strictly larger than $b_i$ (otherwise $z(i) > y(i)$), and $c_i$ cannot be less than $b_i$ (otherwise $c_i$ would have to be one of $\{a_{j_1},...,a_{j_s}\}$, which contradicts the fact that $z(k)=y(k)$ for all $k=1,...,i-1$). Therefore, $c_k = b_k$ for $k=1,...,i$. By Lemma \ref{L:preparation3}, we see that $z=y$. Therefore, $\ell(y) = \ell(x) +1$ implies that $y \rightarrow_D x$.
Conversely, assume that $y \rightarrow_D x$. If $b_i=a_i+1$, then it is clear that $\ell(y)=\ell(x)+1$. So, we assume that $b_i=a_i +s+1$, for some $s>0$. To finish the proof, by the Lemma \ref{L:PPRcovering0}, it is enough to show that there exists a sequence of indices $1 \leq j_1< \cdots < j_s < i$ such that $\{a_{j_1},...,a_{j_s}\} = \{a_i+1,...,a_i+s\}$, and $b_i=a_i+s+1$.
Let $d$ be a number such that $1\leq d \leq s$. If $a_i+d$ does not appear in $y$, then we define $z=(c_1,...,c_n) \in R_n$ to be the sequence such that $c_k = a_k$ for $k\in \{1,...,\widehat{i},...,n\}$ and $c_i = a_i+d$. It is clear that $x \lneq_D z \lneq_D y$. But this contradicts the hypothesis that $y \rightarrow_D x$. Therefore, the number $a_i+d$ is an entry of $y$. Assume for the moment that $a_i+d=b_t=a_t$ for some $t > i$. Then we define $z=(c_1,...,c_n) \in R_n$ to be the element such that $c_k = a_k$ for $k\in \{1,...,\widehat{i},...,\widehat{t},...,n\}$ and $c_i = a_i+d,\ c_t=a_i$. Then, using Lemma \ref{L:D1}, it is easy to check that $x \lneq_D z \lneq_D y$, which is a contradiction. Therefore, $t<i$. In other words, for any $1\leq d \leq s$, the number $a_i+d$ is an entry of $x$ at an index $<i$. This shows that there exists a sequence of indices $1 \leq j_1< \cdots < j_s < i$ such that the set $\{a_{j_1},...,a_{j_s}\}$ is equal to $\{a_i+1,...,a_i+s\}$, and $b_i=a_i+s+1$.
\end{proof}
\begin{lem}\label{L:preparation4}
Let $x=(a_1,...,a_n),\ y = (b_1,...,b_n)$ and $z=(c_1,...,c_n)$ be three elements of $R_n$, such that
$\widetilde{x} = \widetilde{y}$. If $x \leq_D z \leq_D y$, then $\widetilde{z}= \widetilde{x} = \widetilde{y}$.
\end{lem}
\begin{proof}
By definition of the Deodhar ordering, $x \leq_D z \leq_D y$ is true if and only if $x(k) \leq_c z(k) \leq_c y(k)$, for all $k=1,...,n$. Recall that $\widetilde{z}$ stands for the reordering of the entries of $z$ from largest to smallest. Therefore, if $\widetilde{z} \neq \widetilde{x}$, then there exists $1 \leq \alpha_r \leq n$ such that
$a_{\alpha_r} < c_{\alpha_r}$. But since $z(n) \leq_c y(n)$, we see that $c_{\alpha_r} \leq b_{\alpha_r} = a_{\alpha_r}$, a contradiction. Therefore $\widetilde{z}= \widetilde{x}$.
\end{proof}
\begin{lem}\label{L:preparation5}
Let $x=(a_1,....,a_n),\ y= (b_1,...,b_n)$ and $z=(c_1,...,c_n)$ be three elements from $R_n$ such that $\widetilde{x(n-1)}=\widetilde{y(n-1)}=\widetilde{z(n-1)}$, $a_n = b_n$ and $x \leq_D z \leq_D y$. Then, $c_n=a_n = b_n$.
\end{lem}
\begin{proof}
Since $\widetilde{x(n-1)}=\widetilde{y(n-1)}$, and since $a_n = b_n$, we see, by the Lemma \ref{L:preparation4}, that $\widetilde{z}=\widetilde{x}=\widetilde{y}$. This, together with the fact that $\widetilde{z(n-1)}=\widetilde{x(n-1)}=\widetilde{y(n-1)}$, forces the equality $c_n=a_n=b_n$.
\end{proof}
\begin{prop} \label{P:Deodharcovering1}
Let $x=(a_1,...,a_n)$ and $y = (b_1,...,b_n)$ be two elements of $R_n$. Suppose that for some $1 \leq i < j \leq n$, $a_j=b_i,\ a_i = b_j$ and $b_j < b_i$, and $a_k=b_k$ for all $k\in \{1,...\widehat{i},...,\widehat{j},...,n\}$. Then, $\ell(y) = \ell(x)+ 1$ if and only if $y \rightarrow_D x$.
\end{prop}
\begin{proof}
It is clear from Lemma \ref{L:D1} that $x <_D y$. Also, we know from Lemma \ref{L:PPRcovering1} that $\ell(y) = \ell(x)+ 1$ if and only if for each $s\in \{i+1,...,j-1\}$, either $a_j< a_s$, or $a_s < a_i$. Throughout the
proof, we shall make use of this.
Suppose first that $y \rightarrow_D x$. Assume that there exists $s \in \{i+1,...,j-1\}$ such that $a_i < a_s < a_j$. Then, define $z =(c_1,...,c_n) \in R_n$ such that $c_k=a_k$ for all $k\notin \{s,j\}$, and, $c_s = a_j$, $c_j= a_s$. Then, by the repeated applications of Lemma \ref{L:D1}, it is easy to see that $x \lneq_D z \lneq_D y$. But this implies that $y$ does not cover $x$ in the Deodhar ordering, which is a contradiction. Therefore, $\ell(y) = \ell(x)+1$.
Conversely, suppose that $\ell(y)=\ell(x)+1$. There are two cases; $j=i+1$, or $j> i+1$.
Suppose first that $j=i+1$. Let $z=(c_1,...,c_n)\in R_n$ be such that $x \leq_D z \leq_D y$. Notice that by Lemma \ref{L:preparation4}, the set of the entries of $z$ is equal to the set of entries of $x$, which is also equal to the set of entries of $y$. Clearly, for $k=1,...,i-1$, we have that $x(k)=z(k)=y(k)$. Since $j=i+1$, we see that $\widetilde{x(j)}=\widetilde{y(j)}$. Thus, by Lemma \ref{L:preparation4}, we see that $\widetilde{z(j)}=\widetilde{x(j)}=\widetilde{y(j)}$. This shows that
either $c_i=a_i$ and $c_j=a_j$, or $c_i=b_i$ and $c_j=b_j$. Finally, for $k>j$, Lemma \ref{L:preparation5} shows that $c_k=a_k=b_k$. Therefore, we conclude, in the case of $j=i+1$, that either $z=x$, or $z=y$.
We proceed with the case that $j>i+1$. By Lemma \ref{L:PPRcovering1}, we know that for $s=i+1,...,j-1$, either $a_j< a_s$, or $a_s < a_i$. Let $z=(c_1,...,c_n)\in R_n$ be such that $x \leq_D z \leq_D y$. Notice that by Lemma \ref{L:preparation4}, the set of the entries of $z$ is equal to the set of entries of $x$. Furthermore, for $k=1,....,i-1$, we have that $x(k)=z(k)=y(k)$.
Also, since $x(i) \leq_c z(i) \leq_c y(i)$, we must have $a_i \leq c_i \leq b_i$.
We proceed to show that for $s=i+1,...,j-1,j+1,...,n$, $c_s=a_s=b_s$. Once we show this, the proof is finished as follows. By Lemma \ref{L:preparation4}, we know that $\widetilde{z}=\widetilde{x}=\widetilde{y}$. Since $c_s=a_s=b_s$ for all $s\in \{1,...,\widehat{i},...,\widehat{j},...,n \}$, we either have
$c_i=a_i$ and $c_j=a_j$, or $c_i=b_i$ and $c_j=b_j$, in other words, either $z=x$, or $z=y$.
We start by showing that $c_{i+1}=a_{i+1}=b_{i+1}$. By Lemma \ref{L:PPRcovering1},
we know that one of the following is true.
\textit{Case 1.} $b_{i+1}= a_{i+1} < a_i$, or
\textit{Case 2.} $b_{i+1}=a_{i+1} > b_i= a_j$.
We start with the first case that $a_{i+1}< a_i \leq c_i$, and we look at the following two subcases: $c_{i+1} < a_{i+1}$ or $c_{i+1} > a_{i+1}$.
\textit{Case 1.1.} $c_{i+1} < a_{i+1}=b_{i+1}$, or
\textit{Case 1.2} $c_{i+1} > a_{i+1}=b_{i+1}$.
We first deal with the \textit{Case 1.1.}. Let $\Gamma(x(i+1), c_{i+1}) = \{ a_k|\ c_{i+1} < a_k,\ k=1,...,i+1\}$, and let $\Gamma(z(i+1),c_{i+1}) = \{ c_k|\ c_{i+1} < c_k,\ k=1,...,i+1\}$. Since
\begin{equation*}
\{ a_1,....,a_{i+1}\} \setminus \{a_i, a_{i+1}\} = \{ c_1,....,c_{i+1}\} \setminus \{c_i, c_{i+1}\},
\end{equation*}
if $c_{i+1} < a_{i+1}$, then $|\Gamma(x(i+1),c_{i+1})| = |\Gamma(z(i+1),c_{i+1})| +1$. Hence, if the position of $c_{i+1}$ in $\widetilde{z(i+1)}$ is $c_{\alpha_s}$, then $a_{\alpha_s} > c_{\alpha_s}$. This is a contradiction with $x(i+1) \leq_c z(i+1)$.
\textit{Case 1.2.} is similar; if $c_{i+1} > a_{i+1}=b_{i+1}$, then let $\Gamma(y(i+1),b_{i+1}) = \{ b_k|\ b_{i+1} < b_k,\ k=1,...,i+1\}$ and $\Gamma(z(i+1),b_{i+1}) = \{ c_k|\ b_{i+1} < c_k,\ k=1,...,i+1\}$. Since
\begin{equation*}
\{ b_1,....,b_{i+1}\} \setminus \{b_i, b_{i+1}\} = \{ c_1,....,c_{i+1}\} \setminus \{c_i, c_{i+1}\},
\end{equation*}
$|\Gamma(z(i+1),b_{i+1})| = |\Gamma(y(i+1),b_{i+1})|+1$. Therefore, if the position of $b_{i+1}$ in $\widetilde{y(i+1)}$ is $b_{\alpha_{s'}}$, then $c_{\alpha_{s'}} > b_{\alpha_{s'}}$. This is a contradiction with $z(i+1) \leq_c y(i+1)$.
We proceed with \textit{Case 2.} that $b_{i+1}=a_{i+1} > b_i=a_j$. Once again, there are two subcases;
\textit{Case 2.1.} $c_{i+1} < a_{i+1}=b_{i+1}$, or
\textit{Case 2.2.} $c_{i+1} > a_{i+1}=b_{i+1}$.
We continue with \textit{Case 2.1.}. Since,
\begin{equation*}
\{ a_1,....,a_{i+1}\} \setminus \{a_i, a_{i+1}\} = \{ c_1,....,c_{i+1}\} \setminus \{c_i, c_{i+1}\}.
\end{equation*}
we have that $|\Gamma(x(i+1),a_{i+1})| \geq |\Gamma(z(i+1),a_{i+1})|+1 $. So, if the position of $a_{i+1}$ in $\widetilde{x(i+1)}$ is $a_{\alpha_s}$, then $a_{\alpha_s} > c_{\alpha_s}$. This is a contradiction with $x(i+1) \leq_c z(i+1)$.
Finally, we look at \textit{Case 2.2.} Since
\begin{equation*}
\{ b_1,....,b_{i+1}\} \setminus \{b_i, b_{i+1}\} = \{ c_1,....,c_{i+1}\} \setminus \{c_i, c_{i+1}\},
\end{equation*}
and since, $c_i \leq b_i < b_{i+1}$ we see that
$|\Gamma(z(i+1),b_{i+1})| = |\Gamma(y(i+1),b_{i+1})|+1$. Therefore, if the position of $b_{i+1}$ in $y(i+1)$ is $b_{\alpha_{s'}}$, then $c_{\alpha_{s'}} > b_{\alpha_{s'}}$. This is a contradiction with $z(i+1) \leq_c y(i+1)$.
We have dealt with all of the cases. We conclude that $c_{i+1}=a_{i+1}=b_{i+1}$. Notice that, as long as $a_k =b_k$ and $i < k < j$, the same arguments above work. Therefore, for any $k=i+1,...,j-1$ we have $c_k = a_k = b_k$.
Note also that $\widetilde{x(j)} = \widetilde{y(j)}$. By Remark \ref{R:deodhar0}, we know that $x(j) \leq_D z(j) \leq_D y(j)$. Hence, by
Lemma \ref{L:preparation4}, $\widetilde{x(j)} = \widetilde{y(j)}= \widetilde{z(j)}$.
Since $c_k = a_k = b_k$ for $k\notin \{i,j\}$, we either have that
$c_i = a_i,\ c_j=a_j$,\ or that $c_i = a_j,\ c_j = a_i$. Therefore, we either have that $z(j) = y(j)$, or that $z(j)=x(j)$.
Finally, for $k>j$, Lemma \ref{L:preparation5} shows that $c_k=a_k=b_k$. This shows that $z=y$ or $z=x$, hence $y$ covers $x$, and hence the proof is complete.
\end{proof}
\begin{rem}
Propositions \ref{P:Deodharcovering0} and \ref{P:Deodharcovering1} show that a covering for the Pennell-Putcha-Renner ordering is a covering for the Deodhar ordering. Proposition \ref{P:Dcovers} below shows that the converse is also true.
\end{rem}
\begin{lem}\label{L:Dcovers1}
Let $x=(a_1,...,a_n),y=(b_1,...,b_n)\in R_n$. Suppose that there exists $i\in \{1,...,n-1\}$ such that
\begin{enumerate}
\item $a_k=b_k$ for $k=1,...,i-1$, and $b_i>a_i$,
\item $b_i = a_r$ for some $r>i$.
\end{enumerate}
Then, $y \rightarrow_D x$ implies that $y \rightarrow_{PPR} x$.
\end{lem}
\begin{proof}
Our strategy for proving that $y\rightarrow_D x$ implies $y\rightarrow_{PPR} x$ is as follows. We construct an element $z\in R_n$, such that $x\lneq_D z \leq_D y$ and the pair $x,z\in R_n$ satisfies the hypothesis of Proposition \ref{P:Deodharcovering1}. Thus, $z \rightarrow_D x$ implies that $\ell(z)= \ell(x)+1$, and, by Lemma \ref{L:PPRcovering1}, this implies that $z \rightarrow_{PPR} x$.
First, assume that $a_i=0$. Let $r'$ be the smallest index such that $i<r' \leq r$, and $a_{r'}$ is nonzero.
Define $z=(c_1,...,c_n)$ by setting $c_k=a_k$ if $k\notin \{i,r'\}$, and $c_i=a_{r'}$, $c_{r'}=a_i$. It is
easy to check that (see the proof of case $a_i>0$, below) $x \lneq_D z \leq_D y$, and that the pair $x,z$ satisfy the hypothesis of
Proposition \ref{P:Deodharcovering1}. Therefore, we are done in the case that $a_i=0$.
We proceed with the assumption that $a_i>0$.
Let $r'$ be the smallest integer such that
\begin{enumerate}
\item $i< r' \leq r$,
\item $a_i<a_{r'}$.
\end{enumerate}
Therefore,
\begin{equation}\label{E:observe1}
\text{if}\ i<s<r', \text{then}\ a_s < a_i.
\end{equation}
We define $z=(c_1,...,c_n)\in R_n$ as follows. Let $k\in \{1,...,\widehat{i},...,\widehat{r'},....,n\}$. Set $c_k=a_k$. Also, set $c_i=a_{r'}$, and $c_{r'}=a_i$. It is easy to check that $x \lneq_D z$. We are going to show that $z \leq_D y$. Note the following
\begin{enumerate}
\item $x(k)=y(k)=z(k)$ for $k=1,...,i-1$.
\item $\widetilde{x(i)} \leq_c \widetilde{z(i)} \leq_c \widetilde{y(i)}$.
\item $\widetilde{z(k)}=\widetilde{x(k)} \leq_c \widetilde{y(k)}$ for $k=r',...,n$.
\end{enumerate}
Therefore, it is enough to prove that $z(k) \leq_c y(k)$ for $k=i+1,...,r'-1$. To this end, let $k\in \{i+1,...,r'-1\}$, and let $1\leq m \leq k$. We are going to show that $|\Gamma(z(k),c_m)|\leq |\Gamma(y(k),c_m)|$.
There are two cases; $c_m < a_i$, or $c_m \geq a_i$. We start with the first one.
Since $c_m< a_i$, $m\notin \{i,r'\}$, hence $a_m=c_m$.
The set of entries of $z(k)$ that are larger than $c_m=a_m$ is equal to the set of entries of $x(k)$ which are larger than $a_m$. Therefore,
\begin{equation}\label{E:case11}
|\Gamma(z(k),c_m)| = |\Gamma(x(k),c_m)|\leq |\Gamma(y(k),c_m)|,\ \text{if}\ c_m < a_i.
\end{equation}
The next case we check is that $c_m \geq a_i=c_{r'}$. By the observation (\ref{E:observe1}) above,
\begin{equation}
|\Gamma(z(k),c_m)|= |\Gamma(z(i),c_m)|.
\end{equation}
On the other hand, since $z(i) \leq_c y(i)$,
\begin{equation*}
|\Gamma(z(i),c_m)| \leq |\Gamma(y(i),c_m)|,
\end{equation*}
and since $i<k$, we have
\begin{equation*}
|\Gamma(y(i),c_m)|\leq |\Gamma(y(k),c_m)|.
\end{equation*}
Therefore,
\begin{equation}\label{E:case12}
|\Gamma(z(k),c_m)|\leq |\Gamma(y(k),c_m)|, \text{if}\ c_m \geq a_i.
\end{equation}
Hence, (\ref{E:case11}) and (\ref{E:case12}) show that $z(k) \leq_c y(k)$ for
$k\leq r'-1$.
Having constructed $z\in R_n$, such that $x \lneq_D z \leq_D y$, since $y$ covers $x$ (in the Deodhar ordering), we have that $z=y$. Thus, we are exactly as in the hypotheses of the Proposition \ref{P:Deodharcovering1}. Therefore, we have that $\ell(y)= \ell(x)+1$, and that $y \rightarrow_{PPR} x$.
\end{proof}
\begin{lem}\label{L:Dcovers2}
Let $x=(a_1,...,a_n),y=(b_1,...,b_n)\in R_n$. Suppose that there exists $i\in \{1,...,n-1\}$ such that
\begin{enumerate}
\item $a_k=b_k$ for $k=1,...,i-1$, and $b_i>a_i$,
\item $b_i \notin \{a_1,...,a_n\}$.
\end{enumerate}
Then, $y \rightarrow_D x$ implies that $y \rightarrow_{PPR} x$.
\end{lem}
\begin{proof}
We make use of the following set
\begin{equation*}
\gamma(x,i)=\{ a_t:\ t>i,\ a_t>a_i\}.
\end{equation*}
There are two cases; $\gamma(x,i)=\varnothing$, or $\gamma(x,i)\neq \varnothing$. We start with the first case, that $\gamma(x,i)=\varnothing$.
Define $z=(c_1,...,c_n)$ as follows. Let $c_k=a_k$ for $k\neq i$, and let $c_i=b_i$.
Clearly $x \lneq_D z$. We are going to show that $z \leq_D y$.
It is enough to show that
\begin{eqnarray*}
|\Gamma(z(k),c_m)| \leq |\Gamma(y(k),c_m)|,
\end{eqnarray*}
for $k>i$, and $1\leq m \leq k$.
To this end, let $1\leq m \leq k$, and $i<k$. If $c_m \geq a_i$, then
\begin{eqnarray*}
|\Gamma(z(k),c_m)| = |\Gamma(z(i),c_m)| = |\Gamma(y(i),c_m)| \leq |\Gamma(y(k),c_m)|.
\end{eqnarray*}
If $c_m < a_i$, then $c_m=a_m$, and
\begin{eqnarray*}
|\Gamma(z(k),c_m)|=|\Gamma(x(k),a_m)|\leq |\Gamma(y(k),a_m)|=|\Gamma(y(k),c_m)|.
\end{eqnarray*}
Therefore, if $\gamma(x,i)=\varnothing$, then $z\leq_D y$.
Having constructed $z\in R_n$, such that $x \lneq_D z \leq_D y$, since $y$ covers $x$ (in the Deodhar ordering), we have that $z=y$. Thus, we are exactly as in the hypotheses of the Proposition \ref{P:Deodharcovering1}. Therefore, we have that $\ell(y)= \ell(x)+1$, and that $y \rightarrow_{PPR} x$.
We continue with the case where $\gamma(x,i)\neq \varnothing$. Once again, there are two subcases;
either there exists $a_t\in \gamma(x,i)$ such that $b_i>a_t$, or for every $a_t \in \gamma(x,i)$, $a_t>b_i$.
We proceed with the first one.
Then, there exists $a_t\in\gamma(x,i)$ such that $b_i>a_t$. Let $t'$ be the smallest number such that
\begin{enumerate}
\item $i<t'$,
\item $a_i<a_{t'}<b_i$.
\end{enumerate}
Therefore, if $i<s<t'$, then
\begin{equation}\label{E:observe2}
a_i > a_s.
\end{equation}
Define $z=(c_1,...,c_n)$ as follows. If $k\notin \{i,t'\}$, then $c_k=a_k$, and $c_i=a_{t'}$, $c_{t'}=a_i$.
Clearly $x \lneq_D z$. We are going to show that $z \leq_D y$. It is enough to show that
\begin{enumerate}
\item $x(k)=y(k)=z(k)$ for $k=1,...,i-1$.
\item $\widetilde{x(i)} \leq_c \widetilde{z(i)} \leq_c \widetilde{y(i)}$.
\item $\widetilde{z(k)}=\widetilde{x(k)} \leq_c \widetilde{y(k)}$ for $k=t',...,n$.
\end{enumerate}
Therefore, it is enough to prove that $z(k) \leq_c y(k)$ for $k=i+1,...,t'-1$. To this end, let $k\in \{i+1,...,t'-1\}$, and let $1\leq m \leq k$. We are going to show that $|\Gamma(z(k),c_m)|\leq |\Gamma(y(k),c_m)|$.
There are two cases; $c_m < a_i$, or $c_m \geq a_i$. We start with the first one.
Since $c_m< a_i$, $m\notin \{i,t'\}$, hence $a_m=c_m$.
The set of entries of $z(k)$ that are larger than $c_m=a_m$ is equal to the set of entries of $x(k)$ which are larger than $a_m$. Therefore,
\begin{equation}\label{E:case21}
|\Gamma(z(k),c_m)| = |\Gamma(x(k),c_m)|\leq |\Gamma(y(k),c_m)|,\ \text{if}\ c_m < a_i.
\end{equation}
To deal with the other case we check that $c_m \geq a_i=c_{t'}$. By the observation (\ref{E:observe2}) above,
\begin{equation}
|\Gamma(z(k),c_m)|= |\Gamma(z(i),c_m)|.
\end{equation}
On the other hand, since $z(i) \leq_c y(i)$,
\begin{equation*}
|\Gamma(z(i),c_m)| \leq |\Gamma(y(i),c_m)|,
\end{equation*}
and since $i<k$, we have
\begin{equation*}
|\Gamma(y(i),c_m)|\leq |\Gamma(y(k),c_m)|.
\end{equation*}
Therefore,
\begin{equation}\label{E:case22}
|\Gamma(z(k),c_m)|\leq |\Gamma(y(k),c_m)|, \text{if}\ c_m \geq a_i.
\end{equation}
Hence, (\ref{E:case21}) and (\ref{E:case22}) show that $z(k) \leq_c y(k)$ for $k\leq t'-1$.
We proceed with the case that $\gamma(x,i)\neq \varnothing$, and $a_t>b_i$, for all $a_t\in \gamma(x,i)$.
Define $z=(c_1,...,c_n)$ as follows. If $k\neq i$, then $c_k=a_k$, and $c_i=b_i$.
Clearly $x \lneq_D z$. We are going to show that $z \leq_D y$.
It is enough to show that
\begin{eqnarray*}
|\Gamma(z(k),c_m)| \leq |\Gamma(y(k),c_m)|,
\end{eqnarray*}
for $k>i$, and $1\leq m \leq k$.
To this end, let $1\leq m \leq k$, and $i<k$. If $c_m \geq b_i$, then
\begin{eqnarray*}
|\Gamma(z(k),c_m)| = |\Gamma(x(k),c_m)| \leq |\Gamma(y(k),c_m)|.
\end{eqnarray*}
If $c_m < b_i$, then $m<i$, and $c_m=a_m=b_m$. Note that the following. If $t>i$, then $b_t > b_i$. Assume otherwise. Let $i<t$ be the
smallest number such that $b_i>b_t$. Then,
\begin{eqnarray*}
|\Gamma(y(t),b_i)| < |\Gamma(x(k),b_i)|,
\end{eqnarray*}
which is a contradiction. Hence,
\begin{eqnarray*}
| \{ c_s:\ i<s\leq k,\ c_s>b_i\}| &=& | \{ b_s:\ i<s\leq k, b_s>b_i\}| = k-i+1
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
|\Gamma(z(k),c_m)| &=& |\{c_s:\ i\geq s,\ c_s>c_m \}| + |\{ c_s:\ i<s\leq k,\ c_s>c_m\}|\\
&=& |\{b_s:\ i\geq s,\ b_s>c_m\}| + | \{ b_s:\ i<s\leq k,\ b_s>b_i\}| \\
&=& |\{b_s:\ i\geq s,\ b_s>c_m\}| + | \{ b_s:\ i<s\leq k,\ b_s>c_m\}| \\
&=& |\Gamma(y(k),c_m)|.
\end{eqnarray*}
Therefore, if $\gamma(x,i)\neq \varnothing$, then $z\leq_D y$. Having constructed $z\in R_n$, such that $x \lneq_D z \leq_D y$, since $y$ covers $x$ (in the Deodhar ordering), we have that $z=y$. Thus, we are exactly as in the hypotheses of the Proposition \ref{P:Deodharcovering1}. Therefore, we have that $\ell(y)= \ell(x)+1$, and that $y \rightarrow_{PPR} x$.
We have handled all the cases, and the proof is complete.
\end{proof}
\begin{prop}\label{P:Dcovers}
Let $x=(a_1,...,a_n)$ and $y=(b_1,...,b_n)$ be two elements from $R_n$. Suppose that $y \rightarrow_D x$. Then $y \rightarrow_{PPR} x$.
\end{prop}
\begin{proof}
Let $i\in \{1,...,n-1\}$ be the smallest index such that $a_k=b_k$ for $k=1,...,i-1$ and $b_i > a_i$.
Then we have either
\textit{Case 1.} $b_i = a_r$ for some $r>i$, or
\textit{Case 2.} $b_i \notin \{a_1,...,a_n\}$.
Then, in the \textit{Case 1.}, the Lemma \ref{L:Dcovers1} shows that $y \rightarrow_{PPR} x$, and similarly, in the \textit{Case 2.}, the Lemma \ref{L:Dcovers2} shows that $y \rightarrow_{PPR} x$.
\end{proof}
\begin{thm}
The Deodhar ordering $\leq_D$ on $R_n$ is the same as the Pennell-Putcha-Renner ordering
$\leq_{PPR}$ on $R_n$.
\end{thm}
\begin{proof}
By Propositions \ref{P:Deodharcovering0} and \ref{P:Deodharcovering1}, we know that $y \rightarrow_{PPR} x$ implies $y \rightarrow_D x$. Conversely, by Proposition \ref{P:Dcovers}, if $y \rightarrow_D x$, then
$y \rightarrow_{PPR} x$. Therefore, the two orderings have the same covering relations, hence they are the same order.
\end{proof}
\begin{cor}\label{C:deodhar} (Deodhar)
Let $x=(a_1,....,a_n)$ and $y= (b_1,...,b_n)$ be two permutations. Then, $x \leq y$ in the Bruhat ordering $\leq $ on $S_n$ if and only if $x \leq_D y$ in the Deodhar ordering on $S_n$.
\end{cor}
\bibliographystyle{amsalpha}
\section{Discussion and conclusions}\label{sec:conclusion}
One of the important open questions from the observations of the first BNS merger GW170817 was whether the gamma-ray emission from GRB170817A is powered by a relativistic jet, and whether this burst belongs to the population of cosmological short GRBs. The absence of a strong gamma-ray burst associated with S190425z\,, the second BNS merger candidate from LIGO/VIRGO, increased the importance of this question.
While the evolution of the broad-band afterglow and the proper motion of the radio afterglow of GRB170817A confirm the presence of a relativistic jet, the low-energy $\gamma$-ray signal could still be argued to be different from that of cosmological short GRBs.
If so, a novel class of low-luminosity gamma-ray transients would be seen associated with BNS mergers.
Therefore, it is important to understand whether jets are associated with BNS mergers, and if so to obtain reliable constraints on their energetics.
In this analysis, we have shown that the electromagnetic observations of S190425z\, are consistent with the launch of a relativistic jet typical of short-duration GRBs. We see that a structured jet with a Gaussian profile at the distance of S190425z\, is fully consistent with the \textit{INTEGRAL} sensitivity limits. The inferred posterior of the isotropic equivalent energy for an on-axis observer is in agreement with that of typical short GRBs. Even when we take the conservative view that the \textit{INTEGRAL} observations yielded only an upper limit, thereby allowing even low-energy explosions to remain consistent, the $1 \sigma$ posterior results in $E_{\rm iso}(0) \leq 3 \times 10^{48}$~erg for a wide uniform prior in $E_{\rm tot,\gamma}$. Constraints are not very tight if a broken power-law prior distribution, which reproduces the fluence distribution of standard short GRBs, is assumed for $E_{\rm tot,\gamma}$. Nevertheless, this indicates that one need not invoke an intrinsically low-energy GRB or a shock breakout to explain the absence of a strong $\gamma$-ray signal above the \textit{INTEGRAL} limits. We do not find any significant change in the results if the $\gamma$-ray emission is assumed to arise only from those regions of the jet above a moderate threshold Lorentz factor.
A limitation of this approach to estimating the off-axis fluence is that it cannot accommodate specific energy dissipation and $\gamma$-ray production mechanisms. The approach is also not sensitive to the temporal and spectral evolution of the radiation; instead, it returns the total observed fluence. Our conclusions are sensitive to future deeper upper limits from \textit{INTEGRAL}. Future multi-messenger observations of binary NS mergers in the AdvLIGO/VIRGO third observing run will certainly provide further valuable insights into the physics of these mergers and gamma-ray bursts.
\section{Constraining the jet properties}
\label{sec:SJ}
In this section, we examine whether the \textit{INTEGRAL} observations are consistent with the BNS merger launching a structured relativistic jet similar to what is seen in GRB170817A \citep{LK17, Resmi:2018wuc, Lamb:2018qfn}.
\cite{2019GCN.24169....1M} reported a marginal ($3.7\sigma$) excess in \textit{INTEGRAL} SPI-ACS counts temporally coincident ($+6$~s) with the GW trigger. Such a delay can be accounted for within different models of jet ejection and $\gamma$-ray emission \citep{zhang2017}; hence the excess can very likely be associated with the BNS merger candidate. However, we treated this observation in two different ways. First, we
considered the fluence reported in \cite{2019GCN.24170....1M}, $(1.6 \pm
0.4) \times 10^{-7} {\rm erg}/{\rm cm}^2$ as a detection of the
associated short GRB. However, since this is a low-confidence signal, and since its spatial coincidence with the BNS merger cannot be established, we also considered $2 \times 10^{-7}~{\rm erg}/{\rm cm}^2$ as a conservative $3 \sigma$ upper limit to the GRB fluence. For a $\sim 1$~s
duration signal, this number is also consistent with the position
dependent sensitivity map for the duration of the GW candidate released
by the INTEGRAL collaboration \citep{2019GCN.24178....1S}, where the
fluence sensitivity ranges from $(1.5 - 6) \times 10^{-7} {\rm erg}/{\rm
cm}^2/s$. Since the \textit{FERMI} GBM has only seen about 55 percent of
the LIGO error circle \citep{2019GCN.24185....1F}, we consider \textit{INTEGRAL} observations in the rest of this paper.
Next, we computed the expected fluence from an underlying relativistic jet. The jet velocity ($\beta$) and energy have an angular structure. The bulk Lorentz factor ($\Gamma$) distribution across the polar angle $\theta$ is given by $\Gamma \beta (\theta) = \Gamma_0 \beta_0 \exp{\frac{-\theta^2}{2 \theta_c^2}}$, where $\theta_c$ is the jet structure parameter which determines the core of the structured jet. The normalized energy profile function is given by $\epsilon(\theta) \propto \exp{\frac{-\theta^2}{\theta_c^2}}$, with the normalization constant estimated by $2 \pi \int d(\cos{\theta}) \epsilon{(\theta)} =1$. The assumed angular profiles of energy and Lorentz factor are motivated by the afterglow of GRB170817A, where modelling studies have inferred such an angular profile for the outflow kinetic energy and bulk Lorentz factor \citep{Lazzati:2017zsj, Resmi:2018wuc, Granot:2017gwa, Lamb:2018qfn}. However, before extending the inferred angular profile of kinetic energy to the energy emitted in $\gamma$-rays, it must be noted that the $\gamma$-ray efficiency could have its own dependence on latitude, or the $\gamma$-ray emission mechanism could be suppressed for jet elements having low bulk Lorentz factors. We discuss these issues later in the paper.
Following the framework developed independently by \cite{donaghy05} and \cite{salafia2015structure}, the isotropic equivalent energy measured by an observer at a viewing angle $\theta_v$ is,
\begin{equation}
E_{\rm iso} (\theta_v)= \frac{E_{\rm tot, \gamma}}{2 \pi} \int_0^{2 \pi} d\phi \int_0^{\theta_{\rm max}} d\theta \sin(\theta) \frac{\epsilon(\theta)}{{\Gamma\!(\theta)}^4 \: \left[ 1-\beta\!( \theta) \cos{\alpha_v} \right]^3},
\label{eq1}
\end{equation}
where $E_{\rm tot, \gamma}$ is the total energy emitted in $\gamma$-rays, $\alpha_v$ is the angle between the line of sight and the direction to a jet element at ($\theta, \phi$), given by $\cos(\alpha_v) =\cos(\theta_v) \cos(\theta) + \sin(\theta_v) \sin(\theta) \cos(\phi)$, and $\theta_{\rm max}$ is the upper cut-off of integration over the polar angle of the jet. Such an upper cut-off could arise in two ways, either as the edge of the jet or as a limiting angle where the Lorentz factor or $\gamma$-ray emission efficiency drops below a certain threshold value. We numerically integrate equation~(\ref{eq1}) to estimate the fluence measured from the structured jet by an off-axis observer as $E_{\rm iso} (\theta_v)/(4 \pi d_L^2)$.
Essentially, here the energy per solid angle is integrated over the jet surface after accommodating relativistic effects due to viewing angle. Therefore, this method can not reproduce time or frequency resolved quantities, such as the temporal or spectral peak in a GRB.
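As a concrete illustration of this numerical integration, the following minimal Python sketch (our own illustration, not the production code used for the results below; it assumes NumPy and SciPy, and all function and variable names are ours) evaluates equation~(\ref{eq1}) for the Gaussian profiles defined above:
\begin{verbatim}
import numpy as np
from scipy import integrate

def e_iso(theta_v, E_tot, theta_c, Gamma0, theta_max=np.pi/2):
    # Isotropic-equivalent energy (erg) at viewing angle theta_v (rad),
    # from Eq. (1) with the Gaussian Gamma*beta and energy profiles.
    beta0 = np.sqrt(1.0 - 1.0/Gamma0**2)
    gb    = lambda th: Gamma0*beta0*np.exp(-th**2/(2.0*theta_c**2))
    gam   = lambda th: np.sqrt(1.0 + gb(th)**2)
    beta  = lambda th: gb(th)/gam(th)
    shape = lambda th: np.exp(-th**2/theta_c**2)
    # normalisation: 2*pi * int d(cos th) eps(th) = 1
    norm, _ = integrate.quad(lambda th: 2*np.pi*np.sin(th)*shape(th),
                             0.0, np.pi/2)

    def integrand(th, phi):   # first argument = inner variable of dblquad
        cos_a = (np.cos(theta_v)*np.cos(th)
                 + np.sin(theta_v)*np.sin(th)*np.cos(phi))
        boost = gam(th)**4 * (1.0 - beta(th)*cos_a)**3
        return np.sin(th)*shape(th)/norm/boost

    val, _ = integrate.dblquad(integrand, 0.0, 2.0*np.pi,
                               lambda x: 0.0, lambda x: theta_max)
    return E_tot*val/(2.0*np.pi)

# parameters of Fig. 2: E_tot = 1e49 erg, theta_c = 5 deg, Gamma0 = 100
theta_c = np.radians(5.0)
for tv in (0.0, 5.0, 10.0, 20.0):
    print(tv, e_iso(np.radians(tv), 1e49, theta_c, 100.0))
\end{verbatim}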
\begin{figure}
\label{fig:fig1}
\centering
\includegraphics[scale=0.45]{fig1.pdf}
\caption{Variation of the isotropic equivalent energy for observers at different viewing angles, for both top-hat (violet curve) and Gaussian (green curves) jets. For both jets, we assumed $E_{\rm tot, \gamma} = 10^{49}$~ergs. Both the half-opening angle of the top-hat jet and the core-angle of the structured jet are $5^{\circ}$. The bulk Lorentz factor at the axis of the Gaussian jet is the same as the bulk Lorentz factor of the top-hat jet, $100$. The horizontal dashed black line shows $E_{\rm tot,\gamma}/\left[1-\cos(5^{\circ})\right]$, the on-axis $E_{\rm iso}$. For the green solid curve the entire jet is assumed to emit $\gamma$-rays, while for the dashed and the dash-dotted green curves, the emission is restricted in the integration to $\Gamma \geq 15$ (leading to a limit of $\theta=9.74^{\circ}$, vertical dashed line) and to $\Gamma \geq 30$ (leading to a limit of $7.76^{\circ}$, vertical dash-dot line), respectively.}
\end{figure}
In Fig.~2, we show the behaviour of $E_{\rm iso}$ as a function of $\theta_v$ for both top-hat and Gaussian structured jets for $E_{\rm tot, \gamma} =10^{49}$~ergs. The bulk Lorentz factor of the top-hat jet, and that at the axis ($\Gamma_0$) of the Gaussian jet, is $100$. We assumed a core angle of $5^{\circ}$ for the Gaussian jet, and the same value is assumed to be the jet half-opening angle $\theta_j$ of the top-hat jet. We can see from the figure that the on-axis isotropic equivalent energy for a Gaussian structured jet has the same form as that of the top-hat jet, with the core angle $\theta_c$ playing the role of $\theta_j$ (see \cite{sreelakshmiEtAl} for an analytical derivation of this), i.e., $E_{\rm iso}(\theta_v=0) = E_{\rm tot,\gamma}/\left[1-\cos(\theta_c)\right]$. Therefore, equation~(\ref{eq1}) can be rewritten in terms of $E_{\rm iso}(0)$, the isotropic equivalent energy an on-axis observer would measure, and in Section~4 we present the results in terms of $E_{\rm iso}(0)$.
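As a quick numerical check (our own arithmetic), for the parameters of Fig.~2 this relation gives $E_{\rm iso}(0) = 10^{49}~{\rm erg}/\left[1-\cos(5^{\circ})\right] \simeq 2.6 \times 10^{51}$~erg, which is the level indicated by the horizontal dashed line in the figure.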
In Fig.~2, we also show how the isotropic equivalent energy (or fluence) changes for off-axis observers if a cut-off $\Gamma$ is assumed for efficient $\gamma$-ray production in the jet. To begin with, there is little strong observational or theoretical evidence for such a cut-off in the Lorentz factor below which the $\gamma$-ray production mechanism stops. In one example, for the low-energy GRB980425, \cite{Lithwick:2000kh} found from optical-depth arguments that a bulk Lorentz factor as low as $6.4$ is also consistent with the data. On the other hand, there are claims for a possible cut-off at relatively large Lorentz factors ($\Gamma \sim 50$) in long GRBs, estimated through statistical properties of prompt and afterglow emission \citep{Beniamini:2018udm}. It is not clear if this is applicable to short GRBs, where a structure in energy and bulk velocity can develop as the jet propagates through the merger ejecta \citep{2018ApJ...863...58X, Geng:2019qvn, 2019MNRAS.484L..98K}. Therefore, we arbitrarily assumed various cut-off Lorentz factors to see how they affect an off-axis observer. The solid green curve assumes emission from the entire jet ($\theta_{\rm max} = \pi/2$), while the dashed green curve assumes a cut-off Lorentz factor $\Gamma_{\rm cut} = 15$ and the dash-dotted green curve assumes $\Gamma_{\rm cut} = 30$. We can see that such a cut-off affects the detection at extreme viewing angles. According to the angular profile of the Lorentz factor we use, if $\Gamma_{\rm cut} = \Gamma(0)/\sqrt{e}$, the emission is restricted up to the core angle, and the Gaussian jet behaves more-or-less the same way as a top-hat jet, except for a gradual decrease in fluence for $\theta_v < \theta_c$ instead of the flat profile of the top-hat jet.
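For reference, inverting the assumed $\Gamma\beta(\theta)$ profile (this expression follows directly from the profile defined above; the numbers are our own check) gives the polar angle out to which the integration in equation~(\ref{eq1}) extends for a given cut-off,
\[
\theta_{\rm max} = \theta_c \sqrt{2 \ln\!\left[\frac{\Gamma_0 \beta_0}{\Gamma_{\rm cut}\, \beta_{\rm cut}}\right]},
\]
which for $\Gamma_0 = 100$ and $\theta_c = 5^{\circ}$ yields $\theta_{\rm max} \simeq 9.7^{\circ}$ for $\Gamma_{\rm cut} = 15$ and $\simeq 7.8^{\circ}$ for $\Gamma_{\rm cut} = 30$, the limits marked by the vertical lines in Fig.~2.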
In order to better understand the constraints on a possible structured Gaussian jet, we ran a Monte-Carlo simulation with $10^5$ realizations of the jet and compared the model fluence with what is observed by \textit{INTEGRAL}. First, we used a uniform distribution in log space, ranging over $44 < \log_{10}(E_{\rm tot, \gamma}/{\rm erg}) < 51$, for $E_{\rm tot, \gamma}$. A uniform prior of $3^{\circ}< \theta_c < 20^{\circ}$ is considered for the jet core angle. With these values, we were able to cover the entire range of $E_{\rm iso} (0)$ values observed for typical cosmological short GRBs \citep{2009ApJ...703.1696Z, DAvanzo:2014urr}, and also extend the prior to much lower values if an intrinsically low-energy burst is to arise from the merger. We used a wide uniform prior for the bulk Lorentz factor at the jet axis, $5 < \Gamma_0 < 500$. This was done particularly because constraints on the initial bulk Lorentz factor from GRB170817A are very weak \citep{Troja2018c, Resmi:2018wuc}, and we do not have good prior information on the kind of outflows arising from BNS mergers.
In the next step, we chose priors that best reproduce the observed short GRB fluence distribution from the Fermi 4-yr catalogue \citep{Gruber:2014iza}. We found that a broken power-law prior distribution of $E_{\rm tot, \gamma}$ along with uniform distributions $3^{\circ} < \theta_c < 20^{\circ}$ and $100 < \Gamma < 500$ are able to reproduce the observed fluence distribution above $2\times 10^{-7}$~erg/cm$^2$ relatively well \citep{sreelakshmiEtAl}. We assumed $E_{\rm tot, \gamma}$ to extend from $5 \times 10^{47}$~ergs to $10^{50}$~ergs with a power-law index of $-0.53$ and from $10^{50}$~ergs to $5 \times 10^{51}$~ergs with an index of $-3.5$. For the indices, we adopted values from the luminosity function used by \cite{ghirlanda2016short}.
We used both these distributions along with the \textit{INTEGRAL} observations to derive constraints on a possible GRB associated with the merger. The $D_L -\iota$ distribution computed in the previous section is substituted as the prior for $D_L$ and $\theta_v$. We extracted marginalized posterior distributions for $\theta_c, \Gamma_0, \theta_v,$ and $E_{\rm tot, \gamma}$, which we later converted to the posterior of $E_{\rm iso}(0)$.
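Schematically, the procedure can be illustrated by the following Python fragment (our own sketch, not the analysis code: \texttt{sample\_dl\_iota\_prior} is a hypothetical helper standing in for the $D_L-\iota$ prior of Section~\ref{sec:GW}, \texttt{e\_iso} is the routine sketched earlier, and treating the upper limit as a hard cut is itself an approximation):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
N = 5000                              # kept small: Eq. (1) is integrated per sample
logE   = rng.uniform(44, 51, N)       # flat prior on log10(E_tot,gamma / erg)
th_c   = np.radians(rng.uniform(3, 20, N))
Gamma0 = rng.uniform(5, 500, N)
th_v, D_L = sample_dl_iota_prior(N)   # hypothetical helper: GW-informed prior,
                                      # theta_v in rad, D_L in Mpc

dl_cm   = D_L*3.086e24                # Mpc -> cm
fluence = np.array([e_iso(tv, 10**le, tc, g0)
                    for tv, le, tc, g0 in zip(th_v, logE, th_c, Gamma0)])
fluence /= 4.0*np.pi*dl_cm**2

keep_ul = fluence < 2e-7              # (a) INTEGRAL excess taken as an upper limit
w_det   = np.exp(-0.5*((fluence - 1.6e-7)/0.4e-7)**2)
# w_det can be used as importance weights for the detection case (b)

E_iso0 = 10**logE/(1.0 - np.cos(th_c))   # on-axis isotropic-equivalent energy
print(np.percentile(E_iso0[keep_ul], [16, 50, 84]))
\end{verbatim}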
We find that the \textit{INTEGRAL} fluence provides a good constraint on the energy of the gamma-ray burst. The isotropic equivalent energies of cosmological short GRBs detected by FERMI GBM and SWIFT BAT range from $10^{48}$ to $10^{53}$~ergs \citep{2009ApJ...703.1696Z}. Our analysis shows that, had the observer been along the axis of the jet, a typical short GRB could have been detected along with S190425z\, (see Fig.~3). The uniform energy prior, which has a wide range, gets well constrained by the \textit{INTEGRAL} observations. When considered as a detection, $E_{\rm iso}(0)$ is tightly constrained to be between $(4.74 \times 10^{47} - 2.21 \times 10^{51})$~ergs (blue curve in the left panel of Fig.~3). On the other hand, when considered as an upper limit, the posterior (orange curve) indicates that $E_{\rm iso}(0) \leq 3 \times 10^{48}$~ergs at the $1 \sigma$ level, broadly in agreement with the range observed for standard cosmological short GRBs. The lower end of the posterior in this case is not constrained (as expected in the absence of a detection) and hence simply follows the prior on $E_{\rm iso}(0)$ (see Fig.~3, shaded grey).
On the other hand, for the second case, where we use a narrower prior distribution in energy, the observations are not able to place tight constraints on the assumed prior distribution. The $1\sigma$ posterior bounds are $1.9 \times 10^{49}$~erg $< E_{\rm iso}(0) < 6.6 \times 10^{50}$~erg. Though the posterior bounds are sensitive to the assumed prior, both prior distributions we considered here imply that the observations cannot rule out an event with typical short GRB energetics. We recall that our conclusion is sensitive to deeper limits from \textit{INTEGRAL} as well as the refined GW posteriors on $D_L - \iota$.
The $\gamma$-ray observations cannot provide any useful constraints on the jet core angle or its initial bulk Lorentz factor. Taken either as an upper limit or as a detection, the \textit{INTEGRAL} fluence is consistent with the expected emission from a relativistic jet. We also ran the simulations where $E_{\rm iso}(\theta_v)$ is calculated by assuming that the $\gamma$-ray emission stops below a bulk Lorentz factor of $15$. The posterior distribution from such a model did not show significant differences from a model where the entire jet surface is integrated to obtain $E_{\rm iso}$.
\section{Constraints from LIGO-Virgo observations}\label{sec:GW}
Even before the discovery of GW170817, it had been argued that
multi-messenger observations of binary neutron star mergers, especially the
measurement of the luminosity distance and the inclination angle, can
have profound implications for the modelling of the
associated gamma-ray burst
jet~\citep{arun2014synergy,saleem2017agparameterspace}. This is because the jets of BNS mergers are very
likely to be launched along the orbital angular momentum axis of the
binary, which relates the inclination angle to the
viewing angle of the jet. The distance and
inclination angle in the gravitational waveforms are strongly correlated,
as they both appear in the amplitude of the gravitational wave
signal~\citep{cutler1994}. Hence it is ideal to obtain two-dimensional
constraints on them using the available information and then
use these to model the $\gamma$-ray emission.
We use the following information about the binary neutron star candidate
S190425z\, that are available from the GCN~\citep{GCN1,GCN2}:
\begin{enumerate}
\item {It has a probability > 99\% of being a BNS merger.}
\item {It was observed by the network of LIGO Livingston (L1)
and Virgo (V1) detectors and since the signal to noise ratio (SNR) at Virgo was below the threshold, the candidate is considered as a single detector trigger.}
\item {The preliminary luminosity distance estimate is given by
$D_L = 155\pm41$Mpc.}
\end{enumerate}
Using the above inputs, we obtain constraints on the two-dimensional
$D_L-\iota$ space as follows. We simulate a population of BNS mergers
uniformly distributed in the comoving volume with $\cos \iota$ of the
binaries distributed uniformly between $-1$ and $1$. The NS masses are uniformly distributed between $1$ and $2\,M_{\odot}$.
{We then compute optimal signal to noise ratio for each one of
them using the restricted post-Newtonian waveform~\citep{cutler1994}}.
As the trigger is an L1 single-detector trigger, we assume SNR $< 4$ at V1, following the single-detector threshold considered by the GstLAL pipeline, and an L1-V1 network SNR $> 9$, which is motivated by the fact that the network SNRs of all the O1/O2 events were above $9$ \citep{gwtc-1}.
To compute the SNRs in L1 and V1,
we used the best reported O2 sensitivities \citep{gwtc-1} of L1 and V1 as
their representative (conservative) O3 sensitivities (see
\cite{saleem:2019} for more details). From this, we
extract a sub-population of mergers for which the luminosity distance
distribution follows a Gaussian distribution consistent with \citep{GCN2}. The 2D distribution of
$D_L-\iota$ of this sub-population is shown in Fig. 1
which we use as the prior for studying the prompt emission
from a short gamma ray burst associated with S190425z\,.
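A simplified sketch of this selection (our own illustration: the fiducial SNR normalisations and the crude antenna and inclination factors below are assumptions standing in for the restricted post-Newtonian SNRs computed with the O2 noise curves) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Assumed fiducial optimal single-detector SNRs at 100 Mpc for a
# 1.4+1.4 Msun BNS (stand-ins for the PSD-based computation).
SNR0_L1, SNR0_V1, D_REF = 30.0, 12.0, 100.0

# Population: uniform in (Euclidean) comoving volume, isotropic orientation,
# component masses uniform in [1, 2] Msun.
D_L   = 400.0 * rng.uniform(0, 1, N) ** (1.0/3.0)      # Mpc
cosi  = rng.uniform(-1, 1, N)
m1, m2 = rng.uniform(1, 2, (2, N))
Mc = (m1*m2)**0.6 / (m1 + m2)**0.2                     # chirp mass

# Sky position / polarisation folded into crude effective antenna factors.
w_L1 = rng.uniform(0.2, 1.0, N)
w_V1 = rng.uniform(0.2, 1.0, N)

# Leading-order inspiral SNR scaling with chirp mass, distance, inclination.
incl = np.sqrt(((1 + cosi**2)/2)**2 + cosi**2) / np.sqrt(2)
amp  = (Mc/1.22)**(5.0/6.0) * (D_REF/D_L) * incl
snr_L1, snr_V1 = SNR0_L1*w_L1*amp, SNR0_V1*w_V1*amp

# Selection mimicking S190425z: sub-threshold Virgo, network SNR above 9,
# and D_L consistent with the preliminary Gaussian estimate 155 +/- 41 Mpc.
keep = (snr_V1 < 4) & (np.sqrt(snr_L1**2 + snr_V1**2) > 9)
keep &= rng.uniform(0, 1, N) < np.exp(-0.5*((D_L - 155.0)/41.0)**2)

print(keep.sum(), "samples retained for the D_L-iota prior")
\end{verbatim}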
\section{Introduction}
The joint detection of GW170817~\citep{GW170817} and
GRB170817A~\citep{GRB+GW-2017} established the long standing hypothesis
that short GRBs are powered by binary neutron star (BNS) mergers~\citep{Narayan92}.
However, GRB170817A was
several orders of magnitude fainter than its cosmological counterparts \citep{Goldstein2017b}.
This led to a proposition that jets need not successfully emerge from
some, if not all, BNS mergers and the low-energy $\gamma$-ray emission
and the non-thermal afterglow could be the result of a sub-relativistic
cocoon originating from the tidally ejected merger debris \citep{Kasliwal:2017ngb, Hallinan2017b, Gottlieb:2017pju}.
However, late VLBI observations of GRB170817A provided a strong evidence
for the relativistic nature of the outflow \citep{Mooley:2018dlz, 2019Sci...363..968G}. In addition, temporal
evolution of broad-band afterglow emission showed excellent agreement
with emission from a relativistic jet with an angular structure in
energy and velocity \citep{Margutti2018a, Lazzati2017a, Lyman2018a, DAvanzo:2018zyz, Resmi:2018wuc, Lamb:2018qfn}. The low inferred energy of GRB170817A could be successfully explained by structured relativistic jet models \citep{Kathirgamaraju:2017igg,Resmi:2018wuc}. Numerical simulations of the relativistic jet piercing through the merger ejecta have shown that it successfully emerges with an angular structure \citep{2018ApJ...863...58X,2019MNRAS.484L..98K, Geng:2019qvn}. Yet, the possibility of the $\gamma$-ray emission from GRB170817A being intrinsically faint and resulting from a cocoon shock break out can still be debated \citep{2018MNRAS.475.2971B, 2018MNRAS.477.2128H}.
Future multi-messenger observations of BNS mergers would help us answer
several open questions related to the phenomenon which include: Do all the BNS mergers
produce relativistic jets and short GRBs similar to the cosmological
sample? If not what are the factors that determine the relative
fraction between the population which successfully launches a jet and
the one which does not?
Ongoing and future observing runs of advanced LIGO and Virgo
interferometers hence play a central role in deeply understanding the
phenomenon of BNS mergers and short GRBs.
The third observing run of LIGO and Virgo gravitational wave
interferometers reported the first binary neutron star merger candidate
S190425z\, on 25th April 2019~\citep{GCN1} by the real-time processing of
the data using the {GstLAL}~\citep{gstlalpaper} and PyCBC Live~\citep{pycbc-live-Nits} analysis pipelines.
This candidate, which was coincident in the LIGO Livingston and Virgo interferometers, has a false alarm rate of $4.5\times10^{-13}$ Hz (about one in $10^5$
years) from the online analysis and a probability
of BNS to be $\geq 99\%$. The preliminary estimate of the luminosity
distance to the source is $156\pm 41$~Mpc~\cite{GCN2}.
The $90$\% sky localization region corresponds to $7641$ square degrees.
Unlike GW170817, the poor sky localization
hampered extensive electromagnetic follow up efforts of S190425z\,. However,
the INTErnational Gamma-Ray Astrophysics Laboratory (\textit{INTEGRAL})
serendipitously observed the entire AdvLIGO/VIRGO localization region simultaneously with S190425z\,, and found a low signal-to-noise, short-duration ($\sim 1$~s) excess $6$~s after the merger \citep{2019GCN.24169....1M}.
Since \textit{INTEGRAL} cannot provide a localization of this excess, and
since no other confident EM counterpart has been discovered to date, the
spatial coincidence of the BNS merger and the \textit{INTEGRAL} source cannot be
firmly established. Nevertheless, as the entire localization region of
AdvLIGO/VIRGO is covered by the satellite, these observations at the
least provide an upper limit to the fluence of any $\gamma$-ray signal
associated with the merger. The GBM on board FERMI provided flux upper limits for a part of the LIGO/VIRGO localization region \citep{2019GCN.24185....1F}. \cite{Song:2019ddw} obtained
constraints on the viewing angle of the jet from FERMI observations to be
$> 0.11-0.41$ radians, assuming the GW170817 jet to be quasi-universal.
In this letter, we ask whether the \textit{INTEGRAL} observations of
S190425z\, are consistent with a relativistic jet associated with this
BNS candidate.
We combine
two observational inputs: the luminosity distance from gravitational
waves and the INTEGRAL observations (considered both as upper limit and detection), along with
a Gaussian structured jet model parametrized by the energy, core angle, and bulk velocity. As there are no
constraints on the inclination angle $\iota$ (same as observer's viewing
angle ${\theta_v}$ when the binary orbit is not precessing due to spins) of the binary from the
gravitational wave observations yet, we use a simulated population
of BNS mergers and use the luminosity distance estimate from GWs
together with some conservative signal to noise ratio limits to obtain a 2
dimensional constraint in the $\iota-D_L$ plane~\citep{saleem:2019} (see
also \citep{Schutz2011,Seto:2014iya} for an analytical treatment of the
problem).
Our results show that S190425z\, could have produced a successful relativistic jet and that the prompt gamma-ray emission could well have been missed due to relativistic de-boosting. We can derive
moderate constraints, though sensitive to the prior used, on the on-axis isotropic equivalent energy of the associated GRB (or on the total energy emitted in $\gamma$-rays), while
constraints on other parameters are weak. However, the conclusion that
the presence of a structured jet is completely consistent with the observations
is itself interesting and will help us in the future to study the statistical properties of BNS mergers with poor source localization.
The remainder of the paper is organized as follows. Sec.~\ref{sec:GW}
details the input from gravitational wave observations which goes in as
prior information in the analysis of S190425z\,, reported in
Sec.~\ref{sec:SJ}, using structured jet
model. Sec.~\ref{sec:conclusion} discusses the implications of our
findings.
\input{gw-part}
\begin{figure}
\label{fig:resultsgw}
\centering
\includegraphics[scale=0.5]{dl-iota.png}
\caption{Constraints on the $D_L - \iota$ combination
obtained from the observed properties of S190425z\, as reported in
\cite{GCN2}.}
\end{figure}
\input{GSJ-part}
\begin{figure*}
\label{fig:results2}
\centering
\includegraphics[scale=0.5]{EisoOnaxisHist_revsn.png}
\includegraphics[scale=0.5]{EisoOnaxisHist_bpl.png}
\caption{Constraints on the isotropic
on-axis energy of the short-GRB associated with S190425z\, assuming a Gaussian
structured jet. On the left panel, the grey shading indicates the prior distribution which
results from assuming uniform priors on $\log_{10} (E_{\rm tot, \gamma}/{\rm erg})$ in the range [$44 - 51$] and on $\theta_c$ in [3,20] degrees. On the right panel, the same prior is used for $\theta_c$, while for $E_{\rm tot, \gamma}$ a broken power-law function is used which reproduces the observed fluence distribution of short GRBs (see text for details). The orange curve results from considering a fluence upper limit of $2 \times 10^{-7}$ erg/cm$^2$, while the blue curve considers a detection of $(1.6 \pm 0.4) \times 10^{-7}$~erg/cm$^2$ as reported in \citep{2019GCN.24170....1M}. Treating the low S/N excess as a detection, the isotropic equivalent energy of an associated GRB, if viewed on-axis, is tightly constrained for a flat prior. In both cases, the on-axis energy of a possible associated GRB is within the range of that of the cosmological SGRB population.}
\end{figure*}
\input{conclusions}
\section*{Acknowledgements}
K. G. A. and M. S. are partially supported by a grant
from Infosys Foundation. K. G. A. acknowledges the support by the Indo-US Science and Technology Forum through
the Indo-US Center for the Exploration of Extreme Gravity
(Grant No. IUSSTF/JC-029/2016). R.L. acknowledges support from the grant EMR/2016/007127 from Department of Science and Technology, India. We thank an anonymous referee whose suggestions greatly improved this manuscript.
\bibliographystyle{apj}
\title[ On the intersection ideal graph of semigroups]{On the intersection ideal graph of semigroups}
\author[Barkha Baloda, Jitender Kumar]{Barkha Baloda, $\text{Jitender Kumar}^{^*}$}
\address{Department of Mathematics, Birla Institute of Technology and Science Pilani, Pilani, India}
\email{barkha0026@gmail.com,jitenderarora09@gmail.com}
\begin{abstract}
The intersection ideal graph $\Gamma(S)$ of a semigroup $S$ is a simple undirected graph whose vertices are all the nontrivial left ideals of $S$, and two distinct left ideals $I, J$ are adjacent if and only if their intersection is nontrivial. In this paper, we investigate the connectedness of $\Gamma(S)$. We show that if $\Gamma(S)$ is connected then $diam(\Gamma(S)) \leq 2$. Further, we classify the semigroups such that the diameter of their intersection graph is two. Other graph invariants, namely perfectness, planarity, girth, domination number, clique number, independence number, etc., are also discussed. Finally, if $S$ is the union of $n$ minimal left ideals, then we obtain the automorphism group of $\Gamma(S)$.
\end{abstract}
\subjclass[2010]{05C25}
\keywords{Semigroup, ideals, clique number, graph automorphism\\ * Corresponding author}
\maketitle
\section{Introduction}
The literature abounds with numerous remarkable results concerning a number of constructions of graphs from rings, semigroups or groups.
The intersection graph of a semigroup was introduced by Bos\'ak \cite{i.Bosak} in $1964$.
The \emph{intersection subsemigroup graph} $\Gamma(S)$ of $S$ is an undirected simple graph whose vertex set is the collection of proper subsemigroups of $S$ and two distinct vertices $A, B$ are adjacent if and only if $A \cap B \neq \emptyset$.
In \cite{i.Bosak}, it was shown that if $S$ is a nondenumerable semigroup or a periodic semigroup with more than two elements, then the graph $\Gamma(S)$ is connected. Bos\'ak then raised the following open problem: does there exist a semigroup with more than two elements whose graph is disconnected? Y. F. Lin \cite{lin1969} answered the problem posed by Bos\'ak in the negative and proved that every semigroup with more than two elements has a connected graph. Also, B. Pond\v{e}li\v{c}ek \cite{abc} proved that the diameter of the intersection graph of a semigroup with more than two elements does not exceed three.
Inspired by the work of J. Bos\'ak, Cs\'ak\'any and Poll\'ak \cite{a.Cskany1969} studied the intersection graphs of groups and showed that there is an edge between two proper subgroups if they have at least two elements in common. Further, Zelinka \cite{B.kaka} continued the work for finite abelian groups. R. Shen \cite{R.shen} characterized all finite groups whose intersection graphs are disconnected; this solves the problem posed in \cite{a.Cskany1969}.
The groups whose intersection graphs of normal subgroups are connected, complete, forests or bipartite are classified in \cite{a.jafari}. Tamizh \emph{et al.} \cite{T.Chelvam2012} continued the seminal paper of Cs\'ak\'any and Poll\'ak by introducing the subgroup intersection graph of a finite group $G$. Further, in \cite{X.Ma}, it was shown that the diameter of the intersection graph of a finite non-abelian simple group is at most $28$. Shahsavari \emph{et al.} \cite{a.Shahsavari2017} have studied the structure of the automorphism group of this graph. The intersection graph on cyclic subgroups of a group has been studied in \cite{a.Haghi2017}. Further, Kayacan \emph{et al.} \cite{kayacan2015abelian} studied the conjecture given in \cite{B.kaka} that two (noncyclic) finite abelian groups with isomorphic intersection graphs are isomorphic. In \cite{kayacan2018connectivity}, the finite solvable groups whose intersection graphs are not 2-connected and the finite nilpotent groups whose intersection graphs are not 3-connected are classified. Further, the dominating sets of the intersection graph of finite groups are investigated in \cite{a.kalyacan}.
Recently, Chakrabarty et al. \cite{a.sen2009} introduced the notion of the intersection ideal graph of rings. The \emph{intersection ideal graph} $\Gamma(R)$ of a ring $R$ is an undirected simple graph whose vertex set is the collection of nontrivial left ideals of $R$, and two distinct vertices $I, J$ are adjacent if and only if $I \cap J \neq \{0\}$. They characterized the rings $R$ for which the graph $\Gamma(R)$ is connected and obtained several necessary and sufficient conditions on a ring $R$ such that $\Gamma(R)$ is complete. The planarity of intersection graphs of ideals of rings with unity is described in \cite{MR2660547} and the domination number in \cite{jafari2011dominion}. Akbari \emph{et al.} \cite{S.Akbari2013} classified all rings whose intersection graphs of ideals are not connected and also determined all rings whose clique number is finite. The intersection graphs of ideals of direct products of rings have been discussed in \cite{MR3310566}. Pucanovic \emph{et al.} \cite{MR3190084} classified all graphs of genus two that are intersection graphs of ideals of some commutative rings and obtained some lower bounds for the genus of the intersection graph of ideals of a non-local commutative ring. In \cite{das2017}, Das characterized the positive integers $n$ for which the intersection graph of ideals of $\mathbb{Z}_n$ is perfect. The intersection graph of submodules of modules has been studied in \cite{akbari2012intersection,akbari2017some, yaraneri2013intersection}. Intersection graphs of algebraic structures have also been studied in \cite{ahmadi2016planarity, akbari2015intersectiongroup, akbari2014some, jafari2011results, laison2010subspace, xu2020automorphism}.
It is pertinent as well as interesting to associate graphs to the ideals of a semigroup, as ideals give a lot of information about the structure of semigroups.
Motivated by the work of \cite{S.Akbari2013, a.sen2009},
in this paper, we consider the intersection ideal graph associated with semigroups. The \emph{intersection ideal graph} $\Gamma(S)$ of a semigroup $S$ is an undirected simple graph whose vertex set is the set of nontrivial left ideals of $S$, and two distinct nontrivial left ideals $I, J$ are adjacent if and only if their intersection is nontrivial. The paper is arranged as follows. In Section 2, we state fundamental notions and recall some necessary results. Section 3 comprises the results concerning the connectedness of the intersection ideal graph of an arbitrary semigroup. In Section 4, we study various graph invariants of $\Gamma(S)$, viz. girth, domination number, independence number, clique number, etc. Further, if $S$ is the union of $n$ minimal left ideals, then the automorphism group of $\Gamma(S)$ is obtained.
\section{Preliminaries}
In this section, we first recall necessary definitions and results of semigroup theory from \cite{b.clifford61vol1}. A \emph{semigroup} $S$ is a non-empty set together with an associative binary operation on $S$. Green's $\mathcal{L}$-relation on a semigroup $S$ is defined by $x$ $\mathcal{L}$ $y \Longleftrightarrow S^{1}x = S^{1}y$, where $S^{1}x = Sx \cup \{x\}$.
The $\mathcal{L}$-class of an element $a \in S$ is denoted by $L_a$. A non-empty subset $I$ of $S$ is said to be a \emph{left [right] ideal} if $SI \subseteq I$ [$IS \subseteq I$], and an \emph{ideal} of $S$ if $SIS \subseteq I$. The union of two left [right] ideals of $S$ is again a left [right] ideal of $S$. A nontrivial left ideal $I$ is \emph{maximal} if it is not properly contained in any nontrivial left ideal of $S$. If $S$ has a unique maximal left ideal then it contains every nontrivial left ideal of $S$.
A left ideal $I$ of $S$ is \emph{minimal} if it does not properly contain any left ideal of $S$. It is well known that every non-zero element of a minimal left ideal of $S$ lies in the same $\mathcal{L}$-class. If $S$ has a minimal left ideal then every nontrivial left ideal contains at least one minimal left ideal. Moreover, if $I$ is a minimal left ideal and $A$ is any left ideal of $S$ other than $I$, then either $I \subset A$ or $I \cap A = \emptyset$. Thus we have the following remark.
\begin{remark}\label{disjoint intersection minimal}
Any two different minimal left ideals of a semigroup $S$ are disjoint.
\end{remark}
\begin{remark}\label{everynontrivial left ideal is union}
Let $S$ be the union of $n$ minimal left ideals. Then each nontrivial left ideal of $S$ is a union of some of these minimal left ideals.
\end{remark}
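As a simple illustration of this situation (an example of our own), one may keep in mind a right zero semigroup $S$ (that is, $xy = y$ for all $x, y \in S$) with $n$ elements: every nonempty subset of $S$ is a left ideal, the minimal left ideals are exactly the singletons, and $S$ is the union of these $n$ minimal left ideals, so every nontrivial left ideal is a union of singletons, as in Remark \ref{everynontrivial left ideal is union}.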
The following lemma is useful in the sequel and we shall use this without referring to it explicitly.
\begin{lemma}\label{S minus K is lclass}
A left ideal $K$ of $S$ is maximal if and only if $S \setminus K$ is an $\mathcal{L}$-class.
\end{lemma}
\begin{proof}
First suppose that $S \setminus K$ is an $\mathcal{L}$-class. If possible, assume that $K$ is not a maximal left ideal of $S$. Then there exists a nontrivial left ideal $K'$ of $S$ such that $K \subset K'$, and so there exists $a \in K'$ with $a \notin K$. Thus, $L_a = S \setminus K$. Since a left ideal is a union of $\mathcal{L}$-classes, $L_a \subseteq K'$, and hence $S = K \cup L_a \subseteq K'$, a contradiction.
Conversely, suppose that $K$ is a maximal left ideal of $S$. For each $a \in S \setminus K$, the maximality of $K$ implies $K \cup S^1a = S$. Consequently, $a$ $\mathcal{L}$ $b$ for every $a, b \in S \setminus K$. Thus $S \setminus K$ is contained in some $\mathcal{L}$-class, and this $\mathcal{L}$-class is disjoint from $K$. It follows that $S \setminus K$ is an $\mathcal{L}$-class.
\end{proof}
We also require the following graph theoretic notions \cite{westgraph}. A \emph{graph} $\Gamma$ is a pair $\Gamma = (V, E)$, where $V = V(\Gamma)$ and $E = E(\Gamma)$ are the set of vertices and edges of $\Gamma$, respectively. We say that two different vertices $u, v$ are $\mathit{adjacent}$, denoted by $u \sim v$ or $(u,v)$, if there is an edge between $u$ and $v$. We write $u \nsim v$, if there is no edge between $u$ and $v$. The \emph{distance} between two vertices $u, v$ in $\Gamma$ is the number of edges in a shortest path connecting them and it is denoted by $d(u, v)$. If there is no path between $u$ and $v$, we say that the distance between $u$ and $v$ is \emph{infinity} and we write as $d(u, v) = \infty$. The diameter $diam(\Gamma)$ of $\Gamma$ is the greatest distance between any pair of vertices. The \emph{degree} of the vertex $v$ in $\Gamma$ is the number of edges incident to $v$ and it is denoted by $deg(v)$.
A \emph{cycle} is a closed walk with distinct vertices except for the initial and end vertex, which are equal and a cycle of length $n$ is denoted by $C_n$. The \emph{girth} of $\Gamma$ is the length of its shortest cycle and is denoted by ${g(\Gamma)}$. A subset $X$ of $V(\Gamma)$ is said to be \emph{independent} if no two vertices of $X$ are adjacent. The \emph{independence number} of $\Gamma$ is the cardinality of the largest independent set and it is denoted by $\alpha(\Gamma)$. A graph $\Gamma$ is \emph{bipartite} if $V(\Gamma)$ is the union of two disjoint independent set. It is well known that a graph is bipartite if and only if it has no odd cycle {\cite[Theorem 1.2.18]{westgraph}}. A connected graph $\Gamma$ is Eulerian if and only if degree of every vertex is even {\cite[Theorem 1.2.26]{westgraph}}. A \emph{subgraph} of $\Gamma$ is a graph $\Gamma'$ such that $V(\Gamma') \subseteq V(\Gamma)$ and $E(\Gamma') \subseteq E(\Gamma)$. A subgraph $\Gamma'$ of $\Gamma$ is called an \emph{induced subgraph} by the elements of $V(\Gamma') \subseteq V(\Gamma)$ if for $u, v \in V(\Gamma')$, we have $u \sim v$ in $\Gamma'$ if and only if $u \sim v$ in $\Gamma$. The \emph{chromatic number} of $\Gamma$, denoted by $\chi(\Gamma)$, is the smallest number of colors needed to color the vertices of $\Gamma$ so that no two adjacent vertices share the same color. A \emph{clique} in $\Gamma$ is a set of pairwise adjacent vertices. The \emph{clique number} of $\Gamma$ is the size of maximum clique in $\Gamma$ and it is denoted by $\omega(\Gamma)$. It is well known that $\omega(\Gamma) \leq \chi(\Gamma)$ (see \cite{westgraph}). A graph $\Gamma$ is \emph{perfect} if $\omega(\Gamma') = \chi(\Gamma')$ for every induced subgraph $\Gamma'$ of $\Gamma$.
Recall that the {\em complement} $\overline{\Gamma}$ of $\Gamma$ is a graph with same vertex set as $\Gamma$ and distinct vertices $u, v$ are adjacent in $\overline{\Gamma}$ if they are not adjacent in $\Gamma$. A subgraph $\Gamma'$ of $\Gamma$ is called \emph{hole} if $\Gamma'$ is a cycle as an induced subgraph, and $\Gamma'$ is called an \emph{antihole} of $\Gamma$ if $\overline{\Gamma'}$ is a hole in $\overline{\Gamma}$.
\begin{theorem}\label{strongperfecttheorem}\cite{strongperfectgraph}
A finite graph $\Gamma$
is perfect if and only if it does not contain a hole or an antihole of odd length at least $5$.
\end{theorem}
A subset $D$ of $V(\Gamma)$ is said to be a dominating set if any vertex in $V(\Gamma) \setminus D$ is adjacent to at least one vertex in $D$. If $D$ contains only one vertex then that vertex is called dominating vertex. The \emph{domination number} $\gamma(\Gamma)$ of $\Gamma$ is the minimum size of a dominating set in $\Gamma$. A graph $\Gamma$ is said to be planar if it can be drawn on a plane without any crossing of its edges. In $\Gamma$, a vertex $z$ resolves a pair of distinct vertices $x$ and $y$ if
$d(x, z) \neq d(y, z)$. A resolving set of $\Gamma$ is a subset $R \subseteq V (\Gamma)$ such that every pair of distinct vertices of $\Gamma$ is resolved by some vertex in $R$. The metric dimension of $\Gamma$,
denoted by $\beta(\Gamma)$, is the minimum cardinality of a resolving set of $\Gamma$. For vertices $u$ and $v$ in a graph $\Gamma$, we say that $z$ \emph{strongly resolves} $u$ and $v$ if there exists a shortest path from $z$ to $u$ containing $v$, or a shortest path from $z$ to $v$ containing $u$. A subset $U$ of $V(\Gamma)$ is a \emph{strong resolving set} of $\Gamma$ if every pair of vertices of $\Gamma$ is strongly resolved by some vertex of $U$. The least cardinality of a strong resolving set of $\Gamma$ is called the \emph{strong metric dimension} of $\Gamma$ and is denoted by $\operatorname{sdim}(\Gamma)$. For vertices $u$ and $v$ in a graph $\Gamma$, we write $u\equiv v$ if $N[u] = N[v]$. Notice that $\equiv$ is an equivalence relation on $V(\Gamma)$.
We denote by $\widehat{v}$ the $\equiv$-class containing a vertex $v$ of $\Gamma$.
Consider a graph $\widehat{\Gamma}$ whose vertex set is the set of all $\equiv$-classes, and vertices $\widehat{u}$ and $\widehat{v}$ are adjacent if $u$ and $v$ are adjacent in $\Gamma$. This graph is well-defined because in $\Gamma$, $w \sim v$ for all $w \in \widehat{u}$ if and only if $u \sim v$. We observe that $\widehat{\Gamma}$ is isomorphic to the subgraph $\mathcal{R}_{\Gamma}$ of $\Gamma$ induced by a set of vertices consisting of exactly one element from each $\equiv$-class. Subsequently, we have the following result of \cite{ma2018strong} with $\omega(\mathcal{R}_{\Gamma})$ replaced by $\omega(\widehat{\Gamma})$.
\begin{theorem}[{\cite[Theorem 2.2]{ma2018strong}}]\label{strong-metric-dim}
For any graph $\Gamma$ with diameter $2$, $\operatorname{sdim}(\Gamma) = |V(\Gamma)| - \omega(\widehat{\Gamma})$.
\end{theorem}
\section{Connectivity of the Intersection graph $\Gamma(S)$}
In this section, we investigate the connectedness of $\Gamma(S)$. We show that $diam(\Gamma(S)) \leq 2$ if it is connected. Also, we classify the semigroups, in terms of left ideals, such that the diameter of $\Gamma(S)$ is two.
\begin{theorem}\label{disconnectedintersection}
The intersection ideal graph $\Gamma(S)$ is disconnected if and only if $S$ contains at least two minimal left ideals and every nontrivial left ideal of $S$ is minimal as well as maximal.
\end{theorem}
\begin{proof}
First suppose that $\Gamma(S)$ is not connected. Then $S$ has at least two nontrivial left ideals, namely $I_1, I_2$. Without loss of generality, assume that $I_1 \in C_1$ and $I_2 \in C_2$, where $C_1$ and $C_2$ are distinct components of $\Gamma(S)$. If $I_1$ is not minimal then there exists at least one nontrivial left ideal $I_k$ of $S$ such that $I_k \subset I_1$ so that their intersection is nontrivial. Therefore, $I_1 \sim I_k$. Now if the intersection of $I_2$ and $I_k$ is nontrivial then $I_1 \sim I_k \sim I_2$, a contradiction. Therefore the intersection of $I_2$ and $I_k$ is trivial. If $I_2 \cup I_k \neq S$ then $I_1 \sim I_2 \cup I_k \sim I_2$, a contradiction. Thus, $I_k \cup I_2 = S$. It follows that $I_1 \sim I_2$, again a contradiction. Thus $I_1$ is minimal. Similarly, we get $I_2$ is minimal.
Further assume that $I_1$ is not maximal. Then there exists a nontrivial left ideal $I_k$ of $S$ such that $I_1 \subset I_k$ so that $I_1 \sim I_k$. If $I_1 \cup I_2 \neq S$ then $I_1 \sim I_1 \cup I_2 \sim I_2$, a contradiction to the fact that $\Gamma(S)$ is disconnected. It follows that $I_1 \cup I_2 = S$ so that the intersection of $I_k$ and $I_2$ is nontrivial. Thus we have $I_1 \sim I_k \sim I_2$, a contradiction. Hence $I_1$ is maximal. Similarly, we observe that $I_2$ is maximal. The converse follows from the Remark \ref{disjoint intersection minimal}.
\end{proof}
\begin{corollary}\label{null graphintersection}
If the graph $\Gamma(S)$ is disconnected then it is a null graph (i.e. it has no edge).
\end{corollary}
\begin{theorem}\label{two minimalintersection}
The graph $\Gamma(S)$ is disconnected if and only if $S$ is the union of exactly two minimal left ideals.
\end{theorem}
\begin{proof}
Suppose first that $\Gamma(S)$ is disconnected. Then by Theorem \ref{disconnectedintersection}, each nontrivial left ideal of $S$ is minimal. Suppose $S$ has at least three minimal left ideals, namely $I_1, I_2$
and $I_3$. Then $I_1 \cup I_2$ is a nontrivial left ideal of $S$ which is not minimal. Consequently, by Theorem \ref{disconnectedintersection}, we get a contradiction to the fact that $\Gamma(S)$ is disconnected. Thus, $S$ has exactly two minimal left ideals. If $S \neq I_1 \cup I_2$, then $I_1 \cup I_2$ is a nontrivial left ideal which is not minimal, a contradiction (cf. Theorem \ref{disconnectedintersection}). Thus, $S = I_1 \cup I_2$.
Conversely, suppose $S = I_1 \cup I_2$ where $I_1$ and $I_2$ are minimal left ideals of $S$. If there exists another nontrivial left ideal $I_k$ of $S$ then either $I_1 \subset I_k$ or $I_2 \subset I_k$. Without loss of generality, assume that $I_1 \subset I_k$; then $I_1 \sim I_k$. Since $I_1 \cup I_2 = S$ we get $I_k \cup I_2 = S$. It follows that the intersection of $I_2$ and $I_k$ is nontrivial. By the minimality of $I_2$, we observe that $I_2 \subset I_k$. Consequently, $S \subseteq I_k$, a contradiction. Thus, by Theorem \ref{disconnectedintersection}, $\Gamma(S)$ is disconnected.
\end{proof}
\begin{theorem}\label{diameter2}
If $\Gamma(S)$ is a connected graph then $diam(\Gamma(S))$ $\leq$ $2$.
\end{theorem}
\begin{proof}
Let $I_1, I_2$ be two nontrivial left ideals of $S$. If $I_1 \sim I_2$ then $d(I_1, I_2) = 1$. If $I_1 \nsim I_2$, i.e., $I_1 \cap I_2$ is trivial, then in the following cases we show that $d(I_1, I_2) \leq 2$.
\noindent\textbf{Case 1.} $I_1 \cup I_2 \neq S$. Then $I_1 \sim (I_1 \cup I_2) \sim I_2$ so that $d(I_1, I_2)$ = 2.
\noindent\textbf{Case 2.} $I_1 \cup I_2 = S$. Since $\Gamma(S)$ is a connected graph, there exists a nontrivial left ideal $I_k$ of $S$ such that either $I_1 \cap I_k$ is nontrivial or $I_2 \cap I_k$ is nontrivial. Now we have the following subcases.
\textbf{Subcase 1.} $I_1 \not \subset I_k$ and $I_k \not \subset I_1$. Since $I_1 \not \subset I_k$ it follows that there exists $x \in I_k$ but $x \notin I_1$ so that $x \in I_2$. Consequently, $I_2 \cap I_k$ is nontrivial. Therefore, we get a path $I_1 \sim I_k \sim I_2$ of length two. Thus, $d(I_1, I_2) = 2$.
\textbf{Subcase 2.} $I_k \subset I_1$. There exists $x \in I_1$ but $x \notin I_k $. If $I_2 \cup I_k = S$ then $x \in I_2$. Thus, we get $I_1 \cap I_2$ is nontrivial, a contradiction. Consequently, $I_2 \cup I_k \neq S$. Further, we get a path $I_1 \sim (I_2 \cup I_k) \sim I_2$ of length two. Thus, $d(I_1, I_2) = 2$.
\textbf{Subcase 3.} $I_1 \subset I_k$. Since $I_1 \cup I_2 = S$ we get $I_k \cup I_2 = S$. Further, the intersection of $I_k$ and $I_2$ is nontrivial. Consequently, $I_1 \sim I_k \sim I_2$ gives a path of length two between $I_1$ and $I_2$. Thus, $d(I_1, I_2) = 2$. Hence, $diam(\Gamma(S))$ $\leq$ $2$.
\end{proof}
\begin{lemma}
Let $S$ be a semigroup having minimal left ideals. Then $\Gamma(S)$ is complete if and only if $S$ has unique minimal left ideal.
\end{lemma}
\begin{proof}
Suppose that $S$ contains a unique minimal left ideal $I_1$. Note that every nontrivial left ideal of $S$ contains at least one minimal left ideal. Since $I_1$ is unique then it must contained in every nontrivial left ideals of $S$. Thus, the graph $\Gamma(S)$ is complete.
Conversely, suppose that $\Gamma(S)$ is a complete graph. On contrary if $S$ has at least two minimal left ideals, viz. $I_1, I_2$. By Remark \ref{disjoint intersection minimal}, $I_1 \nsim I_2$, a contradiction to the fact that $\Gamma(S)$ is complete. Thus $S$ has unique minimal left ideal.
\end{proof}
\begin{lemma}
The graph $\Gamma(S)$ is regular if and only if either $\Gamma(S)$ is a null graph or a complete graph.
\end{lemma}
\begin{proof}
First suppose that $\Gamma(S)$ is not a null graph. If possible, let $S$ have at least two minimal left ideals, namely $I_1, I_2$. Since $\Gamma(S)$ is not a null graph, we have $S \neq I_1 \cup I_2$ (cf. Corollary \ref{null graphintersection} and Theorem \ref{two minimalintersection}), so that $I_1$ and $I_1 \cup I_2$ are nontrivial left ideals of $S$ and $I_1 \sim (I_1 \cup I_2)$. If $J$ is any nontrivial left ideal of $S$ such that $J \sim I_1$, then $J \sim (I_1 \cup I_2)$. It follows that every nontrivial left ideal of $S$ which is adjacent to $I_1$ is also adjacent to $(I_1 \cup I_2)$; moreover, $I_2 \sim (I_1 \cup I_2)$ but $I_2 \nsim I_1$, which implies that $deg(I_1) < deg(I_1 \cup I_2)$, a contradiction to regularity. Therefore, $S$ has at most one minimal left ideal and hence $\Gamma(S)$ is a complete graph.
\end{proof}
Next we classify the semigroups such that the diameter of intersection ideal graph $\Gamma(S)$ is two.
\begin{theorem}\label{diameter2clasification}
Let $S$ be a semigroup having minimal left ideals. Then for a connected graph $\Gamma(S)$, we have $diam(\Gamma(S)) = 2$ if and only if $S$ has at least two minimal left ideals.
\end{theorem}
\begin{proof}
Suppose that $diam(\Gamma(S)) = 2$. Assume, on the contrary, that $I_1$ is the only minimal left ideal of $S$. Since $I_1$ is the unique minimal left ideal, it is contained in all other nontrivial left ideals of $S$. Therefore, for any nontrivial left ideals $J, K$, we have $I_1 \subset (J \cap K)$. Consequently, $d(J, K) = 1$ for any $J, K \in V(\Gamma(S))$, so that $diam(\Gamma(S)) \leq 1$, a contradiction. Therefore $S$ has at least two minimal left ideals. Conversely, suppose that $S$ has at least two minimal left ideals, viz. $I_1, I_2$. Then by Remark \ref{disjoint intersection minimal}, we have $I_1 \nsim I_2$. Consequently, by Theorem \ref{diameter2}, $d(I_1, I_2) = 2$. Thus, $diam(\Gamma(S)) = 2$.
\end{proof}
\section{Invariants of $\Gamma(S)$}
In this section, first we obtain the girth of $\Gamma(S)$. Then we discuss planarity and perfectness of $\Gamma(S)$. Also we classify the semigroup $S$ such that $\Gamma(S)$ is bipartite, star graph and tree, respectively. Further, we investigate other graph invariants viz. clique number, independence number and strong metric dimension of $\Gamma(S)$.
\begin{theorem} \label{girthofintersection}
Let $S$ be a semigroup such that $\Gamma(S)$ contains a cycle. Then $g(\Gamma(S)) = 3$.
\end{theorem}
\begin{proof}
If $\Gamma(S)$ is disconnected or a tree, then clearly $g(\Gamma(S)) = \infty$. Suppose that the semigroup $S$ has $n$ minimal left ideals. Now we prove the result through following cases.
\noindent\textbf{Case 1.} $n = 0$. If $S$ has no nontrivial left ideals then there is nothing to prove. Otherwise, since $S$ has no minimal left ideal, there exists an infinite chain of nontrivial left ideals of $S$ such that $I_1 \supset I_2 \supset \cdots \supset I_k \supset \cdots$; any three members of this chain are pairwise adjacent. Thus, $g(\Gamma(S)) = 3$.
\noindent\textbf{Case 2.} $n = 1$. Suppose that $I_1$ is the only minimal left ideal of $S$. Since $I_1$ is unique minimal left ideal then it is contained in all other nontrivial left ideals of $S$. Therefore, for any nontrivial left ideals $I, J$, we have $I_1 \subset I \cap J \neq \emptyset$. If $S$ has at least three nontrivial left ideals then $g(\Gamma(S)) = 3$. Otherwise, $g(\Gamma(S)) = \infty$.
\noindent\textbf{Case 3.} $n = 2$. Let $I_1, I_2$ be two minimal left ideals of $S$. If $I_1 \cup I_2 = S$ then by Theorem \ref{two minimalintersection} and Corollary \ref{null graphintersection}, $g(\Gamma(S)) = \infty$. If $I_1 \cup I_2 \neq S$, then $J = I_1 \cup I_2$ is a nontrivial left ideal of $S$. If $I_1, I_2$ and $J$ are the only nontrivial left ideals of $S$, then we obtain $I_1 \sim J \sim I_2$, so that $g(\Gamma(S)) = \infty$. Now suppose that $S$ has a nontrivial left ideal $K$ other than $I_1, I_2$ and $J$. Since $I_1, I_2$ are minimal left ideals of $S$, we have either $I_1 \subset K$ or $I_2 \subset K$. Without loss of generality, assume that $I_1 \subset K$; then we get a triangle $I_1 \sim K \sim J \sim I_1$. It follows that $g(\Gamma(S)) = 3$.
\noindent\textbf{Case 4.} $n \geq 3$. Let $I_1, I_2, I_3$ be the minimal left ideals of $S$. Then we have a cycle $(I_1 \cup I_2) \sim (I_2 \cup I_3) \sim (I_1 \cup I_3) \sim (I_1 \cup I_2)$ of length 3. Thus, $g(\Gamma(S)) = 3$.
\end{proof}
Let ${\rm Min}(S)$ (${\rm Max}(S)$) denote the set of all minimal (maximal) left ideals of $S$. By a nontrivial left ideal $I_{{i_1}{i_2}\cdots{i_k}}$, we mean $I_{i_1} \cup I_{i_2} \cup \cdots \cup I_{i_k}$, where $I_{i_1}, I_{i_2}, \ldots, I_{i_k} \in {\rm Min}(S)$.
\begin{theorem}
For the graph $\Gamma(S)$, we have the following results:
\begin{enumerate}
\item[{\rm(i)}] If $\Gamma(S)$ is planar then $| {\rm Min}(S) | \leq 3$.
\item [{\rm(ii)}] For $S = I_{{i_1}{i_2}\cdots{i_n}}$, we have $\Gamma(S)$ is planar if and only if $n \leq 3$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) Suppose that $| {\rm Min}(S) | \geq 4$ and let $I_1, I_2, I_3, I_4$ be distinct minimal left ideals of $S$. Then note that the subgraph induced by the vertices $I_1, I_{12}, I_{123}, I_{14}$ and $I_{124}$ is isomorphic to $K_5$. Thus, $\Gamma(S)$ is nonplanar.
(ii) The proof for $\Gamma(S)$ is nonplanar for $n \geq 4$ follows from part (i).
If $n = 2$ then by Corollary \ref{null graphintersection} and Theorem \ref{two minimalintersection}, $\Gamma(S)$ is planar. For $n= 3$, $\Gamma(S)$ is planar as shown in Figure 1.
\begin{figure}[h!]
\centering
\includegraphics[width=0.2 \textwidth]{s123newone.pdf}
\caption{Planar drawing of $\Gamma(S)$ for $S = I_{123}$}
\end{figure}
\end{proof}
\begin{theorem}
For the graph $\Gamma(S)$, we have the following results:
\begin{enumerate}
\item[{\rm(i)}] If
$\Gamma(S)$ is a perfect graph then $| {\rm Min}(S) | \leq 4$.
\item [{\rm(ii)}] Let $S$ be the union of $n$ minimal left ideals. Then $\Gamma(S)$ is perfect if and only if $n \leq 4$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) Suppose that $| {\rm Min}(S) | \geq 5$ and let $I_1, I_2, I_3, I_4, I_5$ be distinct minimal left ideals of $S$. Note that $I_{12} \sim I_{23} \sim I_{34} \sim I_{45} \sim I_{15} \sim I_{12}$ is an induced cycle (hole) of length 5. Then by Theorem \ref{strongperfecttheorem}, $\Gamma(S)$ is not perfect.
(ii) The proof for $\Gamma(S)$ is not a perfect graph for $n \geq 5$ follows from part (i). If $n = 2$ then by Corollary \ref{null graphintersection} and Theorem \ref{disconnectedintersection}, $\Gamma(S)$ is disconnected. Thus, being a null graph, $\Gamma(S)$ is perfect. For $n \in \{ 3, 4\}$, we show that $\Gamma(S)$ does not contain a hole or an antihole of odd length at least five (cf. Theorem \ref{strongperfecttheorem}). If $n =3$, $\Gamma(S)$ is perfect as shown in Figure 1. If $n = 4$ then from Figure 2 note that $\Gamma(S)$ does not contain a hole or an antihole of odd length at least five.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5 \textwidth]{s1234.pdf}
\caption{$\Gamma(S)$ for $S = I_{1234}$}
\end{figure}
\end{proof}
\begin{theorem}
Let $S$ be a semigroup having minimal left ideals such that $|V(\Gamma(S))| > 1$. Then the following conditions are equivalent:
\begin{enumerate}
\item[{\rm(i)}] $\Gamma(S)$ is star graph.
\item[{\rm(ii)}] $\Gamma(S)$ is a tree.
\item[{\rm(iii)}] $\Gamma(S)$ is bipartite.
\item[{\rm(iv)}] Either $S$ has exactly three nontrivial left ideals $I_1$, $I_2$ and $I_{12}$ such that $I_1$ and $I_{2}$ are minimal or $S$ has two nontrivial left ideals $I_1, I_2$ such that $I_1 \subset I_2$.
\end{enumerate}
\end{theorem}
\begin{proof}
We prove (ii), (iii) $\Rightarrow$ (iv); the proof of the remaining parts is straightforward. Suppose $\Gamma(S)$ is a tree. Then clearly $| {\rm Min}(S) | \leq 2$; otherwise, for minimal left ideals $I_1, I_2, I_3$ we have $I_{12} \sim I_{13} \sim I_{23} \sim I_{12}$, a cycle, a contradiction. Suppose that $| {\rm Min}(S) | = 1$, and let $I_1$ be the unique minimal left ideal of $S$. Consequently, $I_1$ is contained in all other nontrivial left ideals of $S$. If $S$ has at least three nontrivial left ideals then we get a cycle, a contradiction. Thus $|V(\Gamma(S))| = 2$. Now assume that $| {\rm Min}(S) | = 2$, and let $I_1, I_2$ be the two minimal left ideals of $S$. If possible, let $S = I_{12}$. Then by Corollary \ref{null graphintersection} and Theorem \ref{two minimalintersection}, $\Gamma(S)$ is disconnected, and so it is not a tree. Thus $S \neq I_{12}$, and $J = I_{12}$ is a nontrivial left ideal of $S$.
Now suppose that $S$ has a nontrivial left ideal $K$ other than $I_1, I_2$ and $J$. Without loss of generality, assume that $I_1 \subset K$; then we get a cycle $I_1 \sim I_{12} \sim K \sim I_1$, a contradiction. Thus, for $S \neq I_{12}$, we have $V(\Gamma(S)) = \{I_1, I_2, I_{12} \}$.
(iii) $\Rightarrow$ (iv). If $\Gamma(S)$ is bipartite then we have $| {\rm Min}(S) | \leq 2$. In the similar lines of the work discussed above, (iv) holds.
\end{proof}
\begin{theorem}
Let $S$ be a semigroup with $n$ minimal left ideals. Then the following results hold:
\begin{enumerate}
\item[{\rm(i)}] If $S \neq I_{12 \cdots n}$ then $\gamma(\Gamma(S)) = 1$.
\item[{\rm(ii)}] If $S = I_{12 \cdots n}$ then $\gamma(\Gamma(S)) = 2$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) Suppose that $S \neq I_{12 \cdots n}$. It follows that $J = I_{12 \cdots n}$ is a nontrivial left ideal of $S$. It is well known that every nontrivial left ideal of $S$ contains at least one minimal left ideal. Consequently, for any nontrivial left ideal $K$ of $S$, the intersection $J \cap K$ is nontrivial. Thus, $J$ is a dominating vertex. Hence, $\gamma(\Gamma(S)) = 1$.
(ii) Suppose that $S = I_{12 \cdots n}$. Note that there is no dominating vertex in $\Gamma(S)$, so that $\gamma(\Gamma(S)) \geq 2$. Now we show that $D = \{I_1, I_{23 \cdots n}\}$ is a dominating set. Since $S$ is the union of $n$ minimal left ideals, any nontrivial left ideal of $S$ is a union of these minimal left ideals (cf. Remark \ref{everynontrivial left ideal is union}). Let $J \in V(\Gamma(S)) \setminus D$ be any nontrivial left ideal of $S$. Then $J$ is a union of $k$ minimal left ideals of $S$, where $1 \leq k \leq n-1$. If $I_1 \subset J$, then we are done. If $I_1 \not\subset J$ then $J$ must be a union of some of $I_2, I_3, \ldots, I_n$. It follows that the intersection of $J$ and $I_{23 \cdots n}$ is nontrivial. Consequently, $J \sim I_{23 \cdots n}$. Thus $D$ is a dominating set. This completes the proof.
\end{proof}
\begin{theorem}
Let $S$ be a semigroup with $n$ minimal left ideals. Then $\alpha(\Gamma(S)) = n$.
\end{theorem}
\begin{proof}
Let ${\rm Min}(S) = \{I_{i_1} : i_1 \in [n]\}$ be the set of all minimal left ideals of $S$. Then, by Remark \ref{disjoint intersection minimal}, ${\rm Min}(S)$ is an independent set of $\Gamma(S)$. It follows that $\alpha(\Gamma(S)) \geq n$. Now we prove that for any independent set $U$, we have $|U| \leq n$. Let $I \in V(\Gamma(S))$ be such that $I \in U$. Every nontrivial left ideal contains at least one minimal left ideal, so without loss of generality we may assume that $I_{{i_1}{i_2}\cdots{i_k}} \subseteq I$ for some $i_1,i_2, \ldots ,i_k \in [n]$. Then note that $|U| \leq n-k+1$; otherwise, there exist at least two nontrivial left ideals in $U$ which are adjacent, a contradiction. Consequently, we have $|U| \leq n$. Thus, $\alpha(\Gamma(S)) = n$.
\end{proof}
\begin{lemma}
Let $S$ be a semigroup with $n~ (\geq 3)$ minimal left ideals. Then there exists a clique in $\Gamma(S)$ of size $n$.
\end{lemma}
\begin{proof}
Let $I_1, I_2, \ldots, I_n$ be the $n$ minimal left ideals of $S$. Consider $\mathcal{C} = \{I_{{i_1}{i_2} \cdots {i_{n-1}}} : i_1, i_2, \ldots, i_{n-1} \in [n]\}$. Clearly, $|\mathcal{C}| = n$. Notice that for any $J , K \in \mathcal{C}$, the intersection $J \cap K$ is nontrivial, so that $J \sim K$. Thus, $\mathcal{C}$ is a clique of size $n$.
\end{proof}
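For small values of $n$ these counts can also be checked directly. The following Python sketch (an illustration only, not part of the proofs) identifies the vertices of $\Gamma(S)$ for the special case $S = I_{12 \cdots n}$ with the nonempty proper subsets of $[n]$, as in Remark \ref{everynontrivial left ideal is union}, and verifies by brute force that $\alpha(\Gamma(S)) = n$ and that the family of $(n-1)$-element unions is a clique of size $n$; all names and parameter choices in the sketch are ours.
\begin{verbatim}
from itertools import combinations

def vertices(n):
    # nontrivial left ideals of S = I_{12...n}: nonempty proper subsets of [n]
    return [frozenset(c) for k in range(1, n)
            for c in combinations(range(1, n + 1), k)]

def adjacent(J, K):
    # J ~ K iff J != K and J, K intersect nontrivially
    return J != K and bool(J & K)

def independence_number(V):
    # brute force over all subsets of vertices, largest first
    for k in range(len(V), 0, -1):
        for U in combinations(V, k):
            if all(not adjacent(a, b) for a, b in combinations(U, 2)):
                return k
    return 0

n = 4
V = vertices(n)
print(len(V))                     # 2^n - 2 = 14 vertices
print(independence_number(V))     # alpha(Gamma(S)) = n = 4
C = [frozenset(range(1, n + 1)) - {i} for i in range(1, n + 1)]
print(all(adjacent(a, b) for a, b in combinations(C, 2)))   # clique of size n
\end{verbatim}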
\begin{theorem}
Let $S$ be a semigroup with $n(>1)$ minimal left ideals. Then $\omega(\Gamma(S)) = n$ if and only if one of the following holds:
\begin{enumerate}
\item[{\rm(i)}] $S = I_{123}$.
\item[{\rm(ii)}] $S$ has only two minimal left ideals $I_1$ and $I_2$ and a unique maximal left ideal $I_{12}$.
\end{enumerate}
\end{theorem}
\begin{proof}
First suppose that $\omega(\Gamma(S)) = n$. Assume that $S$ has $n \,(\geq 4)$ minimal left ideals, namely $I_1, I_2, \ldots, I_n$. Then $\mathcal{C} = \{I_{{i_1}{i_2}\cdots{i_{n-1}}}, I_{{i_1}{i_2}} : i_1, i_2, \ldots, i_n \in [n] \}$ forms a clique of $\Gamma(S)$ of size greater than $n$. It follows that $\omega(\Gamma(S)) > n$, a contradiction. If $n = 3$, assume that $S \neq I_{123}$. Then $\mathcal{C} = \{I_{12}, I_{13}, I_{23}, I_{123}\}$ forms a clique of size four of $\Gamma(S)$, a contradiction. It follows that $S = I_{123}$.
For $n = 2$, we have either $S = I_{12}$ or $S \neq I_{12}$. If $S = I_{12}$ then, by Corollary \ref{null graphintersection} and Theorem \ref{two minimalintersection}, $\Gamma(S)$ is disconnected, so that $\omega(\Gamma(S)) < n$, a contradiction. Thus $S \neq I_{12}$. If $S$ has a nontrivial left ideal $K \notin \{I_1, I_2, I_{12}\}$ then we get a clique of size three, again a contradiction. Therefore, $I_{12}$ is the unique maximal left ideal. The converse follows trivially.
\end{proof}
\begin{lemma}\label{maximal ideal intersection nonempty}
If $\Gamma(S)$ is connected then ${\rm Max}(S)$ forms a clique of $\Gamma(S)$.
\end{lemma}
\begin{proof}
We prove the result by showing that if $J, K \in {\rm Max}(S)$ then $J \sim K$.
If possible, let $J \nsim K$. The maximality of $J$ and $K$ implies that $J \cup K = S$. By Lemma \ref{S minus K is lclass}, $S \setminus J$ and $S \setminus K$ are $\mathcal{L}-$classes of $S$. It follows that $J$ and $K$ are the only nontrivial left ideals of $S$. Thus $\Gamma(S)$ is a null graph and hence disconnected, a contradiction.
\end{proof}
\begin{theorem}
If $K$ is a maximal left ideal of $S$ such that $deg(K)$ is finite, then $\chi(\Gamma(S)) < \infty$.
\end{theorem}
\begin{proof}
Let $J$ be an arbitrary nontrivial left ideal of $S$ such that $J \nsim K$. We first note that $J$ is a minimal left ideal of $S$. Indeed, suppose on the contrary that $J$ is not a minimal left ideal of $S$. Then there exists a nontrivial left ideal $J'$ of $S$ such that $J' \subset J$. Since $K$ is a maximal left ideal of $S$, we have $J' \cup K = S$. It follows that the intersection of $J$ and $K$ is nontrivial, a contradiction. By Remark \ref{disjoint intersection minimal}, we can color all the vertices which are not adjacent to $K$ with one color. Since $deg(K)$ is finite, we have $\chi(\Gamma(S)) < \infty$.
\end{proof}
\begin{lemma}\label{chromaticnumberintersection}
For $S = I_{{i_1}{i_2}\cdots{i_n}}$, we have $\omega(\Gamma(S)) = \chi(\Gamma(S)) = 2^{n-1}-1$.
\end{lemma}
\begin{proof}
First note that $S$ has $2^n-2$ nontrivial left ideals and every nontrivial left ideal of $S$ is of the form $I_{{i_1}{i_2}\cdots{i_k}}$ for some $1 \leq k \leq n-1$ (cf. Remark \ref{everynontrivial left ideal is union}). If $n$ is odd then consider $\mathcal{C} = \{I_{{j_1}{j_2}\cdots{j_t}} : \lceil \frac{n}{2} \rceil \leq t\leq n-1\}$. Note that $\mathcal{C}$ forms a clique of size $2^{n-1}-1$. We may now suppose that $n$ is even. Consider $\mathcal{C}_1 = \{I_{{j_1}{j_2}\cdots{j_t}} :
\frac{n}{2} + 1 \leq t\leq n-1\}$. Notice that $\mathcal{C}_1$ forms a clique. Further, observe that $\mathcal{C}^{'} = \{I_{{i_1}{i_2}\cdots{i_{\frac{n}{2}}}} : i_1, i_2, \ldots, i_{\frac{n}{2}} \in [n]\}$ does not form a clique because for $j_1, j_2, \ldots, j_{\frac{n}{2}} \in [n] \setminus \{i_1, i_2, \ldots, i_{\frac{n}{2}}\}$, $I_{{i_1}{i_2}\cdots{i_{\frac{n}{2}}}} \nsim I_{{j_1}{j_2}\cdots{j_{\frac{n}{2}}}}$. However, $\mathcal{C}^{''} = \{I_{{i_1}{i_2}\cdots{i_{\frac{n}{2}}}} \in \mathcal{C}^{'} \setminus \{I_{{j_1}{j_2}\cdots{j_{\frac{n}{2}}}}\} : j_1, j_2, \ldots, j_{\frac{n}{2}} \notin \{i_1, i_2, \ldots, i_{\frac{n}{2}}\} \}$ forms a clique of size $\frac{|\mathcal{C}^{'}|}{2}$. Further note that the set $\mathcal{C}_1 \cup \mathcal{C}^{''}$ also forms a clique of size $2^{n-1}-1$. Thus, $\omega(\Gamma(S)) \geq 2^{n-1}-1$. To complete the proof, we show that $\chi(\Gamma(S)) \leq 2^{n-1}-1$. For $I = I_{{i_1}{i_2}\cdots{i_k}}$ and $J = I_{{j_1}{j_2}\cdots{j_{n-k}}}$, where $i_1, i_2, \ldots, i_k \in [n] \setminus \{j_1, j_2, \ldots, j_{n-k}\}$, the intersection $I \cap J$ is trivial. Consequently, we can color each such pair of vertices with the same color, so that all the vertices can be covered with $2^{n-1}-1$ colors. Thus $\chi(\Gamma(S)) \leq 2^{n-1}-1$. Hence $\omega(\Gamma(S)) = \chi(\Gamma(S)) = 2^{n-1}-1$.
\end{proof}
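The coloring used in the proof above pairs each nontrivial left ideal of $S = I_{12\cdots n}$ with its complementary union and assigns one color per pair. The following Python sketch (illustrative only) reproduces this coloring for a small value of $n$ and checks that it is proper and uses exactly $2^{n-1}-1$ colors; the choice $n=5$ is arbitrary.
\begin{verbatim}
from itertools import combinations

n = 5
elems = frozenset(range(1, n + 1))
V = [frozenset(c) for k in range(1, n)
     for c in combinations(sorted(elems), k)]

# Color a subset and its complement identically (they are disjoint, hence non-adjacent).
color = {}
next_color = 0
for J in V:
    if J not in color:
        color[J] = next_color
        color[elems - J] = next_color
        next_color += 1

# The coloring is proper: adjacent vertices (intersecting subsets) get different colors.
proper = all(color[J] != color[K]
             for J, K in combinations(V, 2) if J & K)
print(next_color, proper)          # expected (2^{n-1} - 1, True) = (15, True)
\end{verbatim}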
\begin{corollary}
If $S = I_{{i_1}{i_2}\cdots{i_n}}$ then $\Gamma(S)$ is a weakly perfect graph.
\end{corollary}
In order to find the upper bound of the chromatic number of $\Gamma(S)$, where $S$ is an arbitrary semigroup, first we define
\begin{align*}
X_1 & = \{I \in V(\Gamma(S)) : I_{{i_1}{i_2}\cdots{i_n}} \subseteq I \}\\
X_2 & = \{I \in V(\Gamma(S)) : I \subset I_{{i_1}{i_2}\cdots{i_n}} ~\text{and}~ I \neq I_{{i_1}{i_2}\cdots{i_n}} \}\\
X_3 & = V(\Gamma(S)) \setminus (X_1 \cup X_2).
\end{align*}
Let $\widetilde{{\text{Min}(I)}}$ be the set of all minimal left ideals contained in $I$. Further, define a relation $\rho$ on $X_3$ such that \begin{center}
$J ~~ \rho ~~K \Longleftrightarrow \widetilde{{\text{Min}(J)}} = \widetilde{{\text{Min}(K)}}$
\end{center}
Note that $\rho$ is an equivalence relation.
\begin{theorem}
Let $S$ be a semigroup with $n$ minimal left ideals and $\chi(\Gamma(S)) < \infty$. Then
\begin{center}$\chi(\Gamma(S)) \leq |X_1| + (2^{n-1}-1) + (2^{n-1}-1)m$,
\end{center}
where $m = {\rm{max}} \{|C(I)| : C(I) \
\text{is an equivalence class of} \; \rho \}$.
\end{theorem}
\begin{proof}
Note that for any $I, J \in X_1$, we have $I \sim J$. Moreover, since every nontrivial left ideal contains at least one minimal left ideal, each element of $X_1$ is a dominating vertex of $\Gamma(S)$. Therefore, the vertices of $X_1$ must receive $|X_1|$ distinct colors in any proper coloring of $\Gamma(S)$.
By the proof of Lemma \ref{chromaticnumberintersection}, the vertices of $X_2$ can be colored with $2^{n-1}-1$ colors, so that $|X_1| + (2^{n-1}-1)$ colors suffice to color $X_1 \cup X_2$.
To prove our result we need to show that the vertices of $X_3$ can be colored by using $(2^{n-1}-1)m$ colors. Now let $J, K \in X_3$ such that $I_{{i_1}{i_2}\cdots{i_k}} \subset J$ and $I_{{j_1}{j_2}\cdots{j_t}} \subset K$. Note that $J \cap K$ is nontrivial if and only if $I_{{i_1}{i_2}\cdots{i_k}} \cap I_{{j_1}{j_2}\cdots{j_t}}$ is nontrivial. It follows that $J \sim K$ in $\Gamma(S)$ if and only if either $I_{{i_1}{i_2}\cdots{i_k}} = I_{{j_1}{j_2}\cdots{j_t}}$ or $I_{{i_1}{i_2}\cdots{i_k}} \sim I_{{j_1}{j_2}\cdots{j_t}}$.
Note that the equivalence class of $I \in X_3$ under $\rho$ is $C(I) = \{J \in X_3 : \widetilde{{\text{Min}(I)}} = \widetilde{{\text{Min}(J)}} \}$. Since $\chi(\Gamma(S)) < \infty$, we get $|C(I)| < \infty$ and consequently $|C(I)| \leq m$. Since $C(I)$ forms a clique, at most $m$ colors are required to color each class under $\rho$. Note that for $J$ and $K$ lying in distinct classes $C(J)$ and $C(K)$, we have $J \sim K$ if and only if $I_{{i_1}{i_2}\cdots{i_k}} \sim I_{{j_1}{j_2}\cdots{j_t}}$ in $\Gamma(S)$. Consequently, we can color the vertices in $X_3$ by using $(2^{n-1}-1)m$ colors.
\end{proof}
\begin{theorem}
Let $S$ be a semigroup with $n$ minimal left ideals. Then
\[\operatorname{sdim}(\Gamma(S)) = \begin{cases}
|X_1| + |X_3| + 2^{n-1}-2 & \text{\rm if} ~ S \neq I_{{i_1}{i_2}\cdots{i_n}} \\
2^{n-1}-1 & \text{\rm if}~ S = I_{{i_1}{i_2}\cdots{i_n}}
\end{cases}\]
\end{theorem}
\begin{proof}
Let $I, J \in V(\Gamma(S))$ such that $I_{{i_1}{i_2}\cdots{i_k}} \subseteq I$ and $I_{{j_1}{j_2}\cdots{j_t}} \subseteq J$. Then $I \sim J$ if and only if either $I_{{i_1}{i_2}\cdots{i_k}} = I_{{j_1}{j_2}\cdots{j_t}}$ or $I_{{i_1}{i_2}\cdots{i_k}} \sim I_{{j_1}{j_2}\cdots{j_t}}$. Define a relation $\rho_1$ on $V(\Gamma(S))$ such that $I$ $\rho_1$ $J$ if and only if $\widetilde{{\text{Min}(I)}} = \widetilde{{\text{Min}(J)}}$. Clearly, $\rho_1$ is an equivalence relation on $V(\Gamma(S))$. Let $N[I_{{i_1}{i_2}\cdots{i_k}}] = \{I \in V(\Gamma(S)) : \widetilde{{\text{Min}(I)}} = I_{{i_1}{i_2}\cdots{i_k}}\}$ be equivalence class containing $I_{{i_1}{i_2}\cdots{i_k}}$. If $S \neq I_{{i_1}{i_2}\cdots{i_n}}$, then by Theorem \ref{strong-metric-dim}, we have $\mathcal{R}_{\Gamma(S)}$ whose vertex set $V(\mathcal{R}_{\Gamma(S)}) = \{I_{{i_1}{i_2}\cdots{i_k}} : i_1, i_2, \cdots, i_k \in [n] ~ \text{and} ~ 1 \leq k \leq n\}$. By using Lemma \ref{chromaticnumberintersection}, note that $\omega (\mathcal{R}_{\Gamma(S)}) = 2^{n-1}$. Then $\operatorname{sdim}(\Gamma(S)) = |X_1| + |X_3| + 2^{n-1}-2$. Next, if $S = I_{{i_1}{i_2}\cdots{i_n}}$, then $V(\mathcal{R}_{\Gamma(S)}) = \{I_{{i_1}{i_2}\cdots{i_k}} : i_1, i_2, \cdots, i_k \in [n] ~ \text{and} ~ 1 \leq k \leq n-1\}$. By using Lemma \ref{chromaticnumberintersection}, note that $\omega (\mathcal{R}_{\Gamma(S)}) = 2^{n-1}-1$. Then $\operatorname{sdim}(\Gamma(S)) = 2^{n-1}-1$.
\end{proof}
In the remainder of this section, we consider the class of semigroups which are unions of $n$ minimal left ideals; in particular, completely simple semigroups belong to this class. In what follows, the semigroup $S$ is assumed to be the union of $n$ minimal left ideals $I_{i_1}, I_{i_2}, \ldots, I_{i_n}$, i.e., $S = I_{{i_1}{i_2}\cdots{i_n}}$. The following lemma gives a lower bound on the metric dimension of $\Gamma(S)$.
\begin{lemma}[{\cite[Theorem 1]{chartrand2000resolvability}}]\label{metric dimension theorem}
For positive integers $d$ and $m$ with $d < m$, define $f(m, d)$ as the least positive integer $k$ such that $k + d^k \geq m$. Then for a connected graph $\Gamma$ of order $m \geq 2$ and diameter $d$, the metric dimension $\beta(\Gamma) \geq f(m, d)$.
\end{lemma}
\begin{theorem}
If $S = I_{{i_1}{i_2}\cdots{i_n}}$ then the metric dimension of $\Gamma(S)$ is given below:
\[\beta(\Gamma(S)) = \begin{cases}
2 & \text{\rm if} ~ n = 3 \\
n & \text{\rm if}~ n \geq 4
\end{cases}\]
\end{theorem}
\begin{proof}
For $n =3$, it is easy to observe that $\{I_{i_1}, I_{i_2}\}$ forms a minimum resolving set. If $n \geq 4$ then by Remark \ref{everynontrivial left ideal is union}, we have $|V(\Gamma(S))| = 2^n-2$. In view of Lemma \ref{metric dimension theorem}, we get \begin{center}
$n = f(2^n - 2, 2) \leq \beta(\Gamma(S))$.
\end{center}
It is easy to observe that for $k = n-1$, $2^k + k \not\geq 2^n - 2$. Therefore, the least positive integer $k$ is $n$ for which $k + 2^k \geq 2^n-2$. Thus $n \leq \beta(\Gamma(S))$.
To obtain an upper bound on $\beta(\Gamma(S))$, let $J = I_{{i_1}{i_2}\cdots{i_k}}$ and $K = I_{{j_1}{j_2}\cdots{j_t}}$ be arbitrary distinct vertices of $\Gamma(S)$. Since $J \neq K$, there exists at least one $I_{i_s} \in {\rm Min}(S)$ such that $I_{i_s} \sim J$ and $I_{i_s} \nsim K$ (interchanging $J$ and $K$ if necessary). It follows that $d(J, I_{i_s}) \neq d(K, I_{i_s})$. Thus ${\rm Min}(S) = \{I_{i_1} : i_1 \in [n]\}$ forms a resolving set for $\Gamma(S)$ of size $n$. It follows that $\beta(\Gamma(S)) \leq n$. This completes our proof.
\end{proof}
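The resolving set ${\rm Min}(S)$ used in the proof can also be checked computationally for small $n$. The sketch below (illustrative only) computes graph distances in $\Gamma(S)$ by breadth-first search, with vertices identified with the nonempty proper subsets of $[n]$, and verifies that the distance vectors to the $n$ minimal left ideals are pairwise distinct.
\begin{verbatim}
from itertools import combinations
from collections import deque

def gamma_graph(n):
    V = [frozenset(c) for k in range(1, n)
         for c in combinations(range(1, n + 1), k)]
    adj = {J: [K for K in V if K != J and J & K] for J in V}
    return V, adj

def distances_from(src, adj):
    d = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                queue.append(w)
    return d

n = 4
V, adj = gamma_graph(n)
landmarks = [frozenset({i}) for i in range(1, n + 1)]       # Min(S)
dist = {L: distances_from(L, adj) for L in landmarks}
signature = {J: tuple(dist[L][J] for L in landmarks) for J in V}
print(len(set(signature.values())) == len(V))               # Min(S) resolves Gamma(S)
\end{verbatim}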
An automorphism of a graph $\Gamma$ is a permutation $f$ on $V (\Gamma)$ with the property that, for any vertices $u$ and $v$, we
have $uf \sim vf$ if and only if $u \sim v$. The set $Aut(\Gamma)$ of all graph automorphisms of a graph $\Gamma$ forms a group with
respect to composition of mappings. The symmetric group of degree $n$ is denoted by $S_n$. We now determine the automorphism group of $\Gamma(S)$ when $S$ is the union of $n$ minimal left ideals.
\begin{lemma}\label{degree k}
Let $S = I_{{i_1}{i_2} \cdots {i_{n}}}$ and let $K = I_{{i_1}{i_2} \cdots {i_{k}}}$ be a nontrivial left ideal of $S$. Then $deg(K) = (2^k-2) + (2^{n-k}-2) + (2^{n-k}-1)(2^{k}-2)$.
\end{lemma}
\begin{proof}
Let $J$ be a nontrivial left ideal of $S$ such that $J \sim K$. Clearly $J \cap K$ is a nontrivial left ideal. Now we discuss the following cases:
\noindent\textbf{Case 1.} $J \not\subset K$ and $K \not\subset J$. Since $J \sim K$ and $K = I_{{i_1}{i_2} \cdots {i_{k}}}$, the number of nontrivial left ideals $J$ with $J \not\subset K$ and $K \not\subset J$ is
\begin{align*}
&= \left(\sum_{i=1}^{n-k} \binom{n-k}{i}\right) \left(\sum_{i=1}^{k-1} \binom{k}{i}\right) = (2^{n-k}-1)(2^k-2)
\end{align*}
\noindent\textbf{Case 2.} $J \subset K$. The number of nontrivial left ideals of $S$ which are properly contained in $K$ is $2^k-2$.
\noindent\textbf{Case 3.} $K \subset J$. The number of nontrivial left ideals of $S$ properly containing $K$ is $2^{n-k}-2$.
Thus, from the above cases we have the result.
\end{proof}
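The counting argument above is easy to confirm numerically. The following sketch (illustrative only) compares the formula of Lemma \ref{degree k} with a direct neighbor count for several values of $n$ and $k$.
\begin{verbatim}
from itertools import combinations

def degree_matches_formula(n, k):
    V = [frozenset(c) for t in range(1, n)
         for c in combinations(range(1, n + 1), t)]
    K = frozenset(range(1, k + 1))                 # K = I_{1...k}, a nontrivial ideal
    deg = sum(1 for J in V if J != K and J & K)    # neighbors of K in Gamma(S)
    formula = (2**k - 2) + (2**(n - k) - 2) + (2**(n - k) - 1) * (2**k - 2)
    return deg == formula

print(all(degree_matches_formula(n, k)
          for n in range(3, 8) for k in range(1, n)))        # True
\end{verbatim}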
\begin{corollary}
If $S = I_{{i_1}{i_2}\cdots{i_n}}$ then the graph $\Gamma(S)$ is Eulerian for $n \geq 3$.
\end{corollary}
\begin{lemma}\label{symmetric group}
For $\sigma \in S_n$, let $\phi_{\sigma} : V(\Gamma(S)) \rightarrow V(\Gamma(S))$ be defined by $\phi_{\sigma}(I_{{i_1}{i_2}\cdots {i_k}}) = I_{\sigma({i_1})\sigma({i_2})\cdots \sigma({i_k})}$. Then $\phi_{\sigma} \in Aut(\Gamma(S))$.
\end{lemma}
\begin{proof}
It is easy to verify that $\phi_{\sigma}$ is a permutation on $V(\Gamma(S))$. Now we show that $\phi_{\sigma}$ preserves adjacency. Let $I_{{i_1}{i_2}\cdots {i_t}}$ and $I_{{j_1}{j_2}\cdots {j_k}}$ be arbitrary vertices of $\Gamma(S)$ such that $I_{{i_1}{i_2}\cdots {i_t}} \sim I_{{j_1}{j_2}\cdots {j_k}}$. Then $I_{{i_1}{i_2}\cdots {i_t}} \cap I_{{j_1}{j_2}\cdots {j_k}} \neq \emptyset$.
Now
\begin{align*}
I_{{i_1}{i_2}\cdots {i_t}} \sim I_{{j_1}{j_2}\cdots {j_k}}
&\Longleftrightarrow I_{\sigma({i_1})\sigma({i_2})\cdots \sigma({i_t})} \sim I_{\sigma({j_1})\sigma({j_2})\cdots \sigma({j_k})}\\
& \Longleftrightarrow \phi_{\sigma}(I_{{i_1}{i_2}\cdots {i_t}}) \sim \phi_{\sigma}(I_{{j_1}{j_2}\cdots {j_k}}).
\end{align*}
Thus, $\phi_{\sigma} \in Aut(\Gamma(S))$.
\end{proof}
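The maps $\phi_{\sigma}$ can also be checked computationally for small $n$. The sketch below (illustrative only) verifies that, for $n = 4$, each $\phi_{\sigma}$ is a bijection of $V(\Gamma(S))$ preserving adjacency and non-adjacency, and that the $n!$ maps obtained in this way are pairwise distinct.
\begin{verbatim}
from itertools import combinations, permutations

n = 4
V = [frozenset(c) for k in range(1, n)
     for c in combinations(range(1, n + 1), k)]

def adjacent(J, K):
    return J != K and bool(J & K)

automorphisms = set()
for perm in permutations(range(1, n + 1)):
    sigma = dict(zip(range(1, n + 1), perm))
    image = {J: frozenset(sigma[i] for i in J) for J in V}   # phi_sigma
    assert set(image.values()) == set(V)                     # bijection on V
    assert all(adjacent(J, K) == adjacent(image[J], image[K])
               for J, K in combinations(V, 2))               # preserves adjacency
    automorphisms.add(frozenset(image.items()))
print(len(automorphisms))                                    # n! = 24 distinct maps
\end{verbatim}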
\begin{proposition}\label{phisigma}
For each $f \in Aut(\Gamma(S))$, we have $f = \phi_{\sigma}$ for some $\sigma \in S_n$.
\end{proposition}
\begin{proof}
In view of Lemma \ref{degree k} and Lemma \ref{symmetric group}, suppose that
$f(I_{i_1}) = I_{j_1}$, $f(I_{i_2}) = I_{j_2}$, $\ldots$, $f(I_{i_n}) = I_{j_n}$. Consider $\sigma \in S_n$ such that $\sigma(i_1) = j_1, \sigma(i_2) = j_2, \ldots, \sigma(i_n) = j_n$. Then $\phi_{\sigma}(I_{{i_1}{i_2}\cdots{i_k}}) = I_{\sigma({i_1})\sigma({i_2})\cdots \sigma({i_k})} = I_{{j_1}{j_2}\cdots{j_k}}$ (cf. Lemma \ref{symmetric group}). Clearly, $I_{i_1} \sim I_{{i_1}{i_2}\cdots{i_k}}$, $I_{i_2} \sim I_{{i_1}{i_2}\cdots{i_k}}$, $\ldots$, $I_{i_k} \sim I_{{i_1}{i_2}\cdots{i_k}}$. Also note that $I_{i_t} \cap I_{{i_1}{i_2}\cdots{i_k}}$ is trivial for ${i_t} \in \{i_{k+1}, i_{k+2}, \ldots, i_{n}\}$ where $i_{k+1}, i_{k+2}, \ldots, i_{n}\in [n] \setminus \{i_1, i_2, \ldots, i_k\}$. It follows that $I_{i_{k+1}} \nsim I_{{i_1}{i_2}\cdots{i_k}}$, $I_{i_{k+2}} \nsim I_{{i_1}{i_2}\cdots{i_k}}$, $\ldots$, $I_{i_{n}} \nsim I_{{i_1}{i_2}\cdots{i_k}}$. Thus, $f(I_{i_1}) \sim f(I_{{i_1}{i_2}\cdots{i_k}})$, $f(I_{i_2}) \sim f(I_{{i_1}{i_2}\cdots{i_k}})$, $\ldots$, $f(I_{i_k}) \sim f(I_{{i_1}{i_2}\cdots{i_k}})$ and $f(I_{i_{k+1}}) \nsim f(I_{{i_1}{i_2}\cdots{i_k}})$, $f(I_{i_{k+2}}) \nsim f(I_{{i_1}{i_2}\cdots{i_k}})$, $\ldots$, $f(I_{i_{n}}) \nsim f(I_{{i_1}{i_2}\cdots{i_k}})$. Consequently, $I_{j_1} \subset f(I_{{i_1}{i_2}\cdots{i_k}})$, $I_{j_2} \subset f(I_{{i_1}{i_2}\cdots{i_k}})$, $\ldots$, $I_{j_k} \subset f(I_{{i_1}{i_2}\cdots{i_k}})$ and $I_{j_{k+1}} \not \subset f(I_{{i_1}{i_2}\cdots{i_k}})$, $I_{j_{k+2}} \not \subset f(I_{{i_1}{i_2}\cdots{i_k}})$, $\ldots$, $I_{j_n} \not \subset f(I_{{i_1}{i_2}\cdots{i_k}})$. It follows that $f(I_{{i_1}{i_2}\cdots{i_k}}) = I_{{j_1}{j_2}\cdots{j_k}} = \phi_{\sigma}(I_{{i_1}{i_2}\cdots{i_k}})$. Thus, $f = \phi_{\sigma}$.
\end{proof}
\begin{theorem}\label{automorphism group}
Let $S$ be the union of $n$ minimal left ideals. Then for $n \geq 2$, we have $ Aut(\Gamma(S)) \cong S_n$. Moreover, $|Aut(\Gamma(S))| = n!$.
\end{theorem}
\begin{proof}
In view of Lemma \ref{symmetric group} and by Proposition \ref{phisigma},
note that the underlying set of the automorphism group of $\Gamma(S)$ is
$Aut(\Gamma(S)) = \{\phi_{\sigma} \; : \; \sigma \in S_n \}$, where $S_n$ is the symmetric group of degree $n$. The groups $Aut(\Gamma(S))$ and $S_n$ are isomorphic under the assignment $\phi_{\sigma} \mapsto \sigma$. Since the maps $\phi_{\sigma}$, $\sigma \in S_n$, are pairwise distinct, we have $|Aut(\Gamma(S))| = n!$.
\end{proof}
\section{Acknowledgement}
The first author gratefully acknowledges the financial support of CSIR (09/719(0093)/2019-EMR-I), Government of India. The second author wishes to acknowledge the support of MATRICS Grant (MTR/2018/000779) funded by SERB, India.
Beyond classic combinatorial relaxations \cite{Goem95}, semidefinite programming has recently found a new stream of applications in machine learning \cite{Lanc02}, geometry \cite{Wein06}, statistics \cite{dAsp06b} or graph theory \cite{Sun05}. All these problems have a common characteristic: they have relatively low precision targets but form very large semidefinite programs for which obtaining second order models is numerically hopeless which means that Newton based interior point solvers typically fail before completing even a single iteration. Early efforts focused on exploiting structural properties of the problem (sparsity, block patterns, etc), but this has proven particularly hard for semidefinite programs. For very large problem instances, first-order methods remain at this point the only credible alternative. This follows a more general trend in optimization which seeks to significantly reduce the \emph{granularity} of solvers, i.e. reduce the per iteration complexity of optimization algorithms rather than their total computational cost, thus allowing at least some progress to be made on problems that are beyond the reach of current algorithms.
In this work, we focus on the following spectral norm minimization problem
\begin{equation}\label{eq:min-maxeig}
\begin{array}{ll}
\mbox{minimize} & \left\|\sum_{j=1}^p y_j A_j +C\right\|_2-b^Ty\\
\mbox{subject to} & y \in Q,
\end{array}\end{equation}
in the variable $y\in {\mbox{\bf R}}^p$, with parameters $A_j\in\symm_n$, for $j=1,\ldots,p$, $b\in{\mbox{\bf R}}^p$ and $C\in \symm_n$, where $Q$ is a compact convex set. Throughout the paper, we also implicitly assume that the set $Q\subset{\mbox{\bf R}}^p$ is simple enough so that the complexity of projecting $y$ on $Q$ is relatively low compared to the other steps in the algorithm.
The idea behind this paper stems from a recent result by \cite{Judi07}, who used a mirror descent stochastic approximation algorithm for solving bilinear matrix games (see \cite{Nest09}, \cite{Poly92} or \cite{Nemi83} for more background), where subsampling is used to perform matrix vector products and produce an approximate gradient. Strikingly, the algorithm has a total complexity of $O(n\log n/\epsilon^2)$, when the problem matrix is $n \times n$, hence only requires access to a negligible proportion of the matrix coefficients as the dimension $n$ tends to infinity.
In parallel, recent advances in large deviations and random matrix theory have produced a stream of new randomization results for high dimensional linear algebra (see \cite{Frie04,Drin06,Achl07, Vemp09} among many others), motivated by the need to perform these operations on very large scale, sometimes streaming, data sets in applications such as machine learning, signal processing, etc. Similar subsampling techniques have been successfully applied to support vector machine classification \cite{Kuma08} or Fourier decomposition. Randomization results were used in \cite{Aror07} to produce complexity bounds for certain semidefinite programs arising in combinatorial relaxations of graph problems. Randomization was also used in \cite{Burk02} and \cite{Burk05} to approximate subdifferentials of functions that are only differentiable almost everywhere.
Our contribution here is to further reduce the granularity of first-order semidefinite programming solvers by combining subsampling procedures with stochastic approximation algorithms to derive stochastic gradient methods for spectral norm minimization with very low complexity per iteration. In practice, significantly larger per iteration complexity and memory requirements mean that interior point techniques often fail to complete a single iteration on very large problem instances. CPU clock also runs much faster than RAM, so operations small enough to be performed entirely in cache (which runs at full speed) are much faster than those requiring larger data sets. Solver performance on very large problem instances is then often more constrained by memory bandwidth than clock speed, hence everything else being equal, algorithms running many cheap iterations will be much faster than those requiring fewer, more complex ones. Here, subsampling techniques allow us to produce semidefinite optimization algorithms with very low cost per iteration, where all remaining $O(n^2)$ operations have a small constant and can be performed in a single pass over the data.
We also observe that the relative approximation error in computing the spectral norm (or trace norm) of a matrix using subsampling is directly proportional to the numerical rank of that matrix, hence another important consequence of using subsampling techniques to solve large-scale semidefinite programs is that the total complexity of running the algorithm becomes explicitly dependent on the complexity (i.e. rank) of its solution.
The paper is organized as follows. Section~\ref{s:random} surveys some key results on randomized linear algebra and spectral norm approximations. In Section~\ref{s:stoch-opt} we then derive a stochastic approximation algorithm for spectral norm minimization with very low cost per iteration and discuss some extensions to statistical learning problems. Finally, we present some numerical experiments in Section~\ref{s:numexp}.
\subsection*{Notation}
We write $\symm_n$ the set of symmetric matrices of dimension $n$. For a matrix $X\in{\mbox{\bf R}}^{m\times n}$, we write $\|X\|_F$ its Frobenius norm, $\|X\|_{\mathrm{tr}}$ its trace norm, $\|X\|_2$ its spectral norm, $\sigma_i(X)$ its $i$-th largest singular value and let $\|X\|_\infty=\max_{ij}|X_{ij}|$, while $X^{(i)}$ is the $i$-th column of the matrix $X$ and $X_{(i)}$ its $i$-th row. We write $\mathop{\bf vec}(X)$ the vector of ${\mbox{\bf R}}^{mn}$ obtained by stacking up the columns of the matrix $X$ and $\mathop{\bf NumRank}(X)$ the numerical rank of the matrix $X$, where $\mathop{\bf NumRank}(X)=\|X\|_F^2/\|X\|_2^2$. Finally, when $x\in{\mbox{\bf R}}^n$ is a vector, we write~$\|x\|_2$ its Euclidean norm, while $\|\cdot\|$ is a general norm on ${\mbox{\bf R}}^m$ and $\|\cdot\|_*$ its dual norm.
\section{Randomized linear algebra}
\label{s:random}
In this section, we survey several results by \cite{Drin06} which, after a single pass on the data, sample columns to approximate matrix products and produce low rank matrix approximations with a complexity of $O(sn)$ where $s$ is the sampling rate.
\subsection{Randomized matrix multiplication}
\begin{algorithm}
\caption{Matrix multiplication}
\label{alg:matrix-mult}
\begin{algorithmic} [1]
\REQUIRE $A\in{\mbox{\bf R}}^{m \times n}$, $B\in{\mbox{\bf R}}^{n \times p}$ and $s$ such that $1 \leq s \leq n$.
\STATE Define a probability vector $q\in{\mbox{\bf R}}^n$ such that
\[
q_i=\frac{\|A^{(i)}\|_2\|B_{(i)}\|_2}{\sum_{j=1}^n \|A^{(j)}\|_2\|B_{(j)}\|_2}, \quad i=1,\ldots,n.
\]
\STATE Define subsampled matrices $C\in{\mbox{\bf R}}^{m \times s}$ and $R\in{\mbox{\bf R}}^{s \times p}$ as follows.
\FOR{$i=1$ to $s$}
\STATE Pick $j\in[1,n]$ with $\mathbf{P}(j=l)=q_l$.
\STATE Set $C^{(i)}=A^{(j)}/\sqrt{sq_{j}}$ and $R_{(i)}=B_{(j)}/\sqrt{sq_{j}}$.
\ENDFOR
\ENSURE Matrix product $CR$ approximating $AB$.
\end{algorithmic}
\end{algorithm}
By construction, we have $\textstyle\mathop{\bf E{}}[CR]=AB$, and the following randomization result from \cite{Drin07} controls the precision of the approximations in algorithm \ref{alg:matrix-mult}.
\begin{lemma}\label{ref:lem-col-sample}
Let $A\in{\mbox{\bf R}}^{m \times n}$, $B\in{\mbox{\bf R}}^{n \times p}$, given a subsampling rate $s$ such that $1 \leq s \leq n$, suppose that $C\in{\mbox{\bf R}}^{m \times s}$ and $R\in{\mbox{\bf R}}^{s \times p}$ are computed according to algorithm \ref{alg:matrix-mult} above, then
\[
\textstyle\mathop{\bf E{}}[\|AB-CR\|_F^2]\leq \frac{1}{s}{\|A\|_F^2\|B\|_F^2}
\]
and if $\beta\in[0,1]$ with $\eta=1+\sqrt{8\log(1/\beta)}$ then
\[
\|AB-CR\|_F^2\leq \frac{\eta^2}{s}{\|A\|_F^2\|B\|_F^2}
\]
with probability at least $1-\beta$.
\end{lemma}
\begin{proof}
See Theorem 1 in \cite{Drin07}.
\end{proof}
Note that using the adaptive probabilities $q_i$ is crucial here. The error bounds increase by a factor $n$ when $q_i=1/n$ for example.
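For illustration, the following \texttt{numpy} sketch implements algorithm \ref{alg:matrix-mult} and compares the observed error with the bound of the lemma above; the dimensions, sampling rate and Gaussian test matrices are arbitrary choices and the variable names are ours.
\begin{verbatim}
import numpy as np

def subsampled_product(A, B, s, rng):
    # Algorithm 1: sample s column/row pairs with adaptive probabilities q_i
    q = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    q = q / q.sum()
    idx = rng.choice(A.shape[1], size=s, p=q)
    scale = 1.0 / np.sqrt(s * q[idx])
    return (A[:, idx] * scale) @ (B[idx, :] * scale[:, None])

rng = np.random.default_rng(1)
m, n, p, s = 200, 1000, 150, 100
A, B = rng.standard_normal((m, n)), rng.standard_normal((n, p))
err = np.linalg.norm(A @ B - subsampled_product(A, B, s, rng), 'fro')
# the lemma above bounds the expected squared error by ||A||_F^2 ||B||_F^2 / s
print(err, np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro') / np.sqrt(s))
\end{verbatim}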
\subsection{Randomized low-rank approximation}
\begin{algorithm}
\caption{Low-rank approximation}
\label{alg:low-rank}
\begin{algorithmic} [1]
\REQUIRE $X\in{\mbox{\bf R}}^{m \times n}$ and $k,s$ such that $1\leq k \leq s < n$.
\STATE Define a probability vector $q\in{\mbox{\bf R}}^n$ such that $q_i={\|X^{(i)}\|^2_2}/{\|X\|_F^2}$, for $i=1,\ldots,n$.
\STATE Define a subsampled matrix $S\in{\mbox{\bf R}}^{m \times s}$ as follows.
\FOR{$i=1$ to $s$}
\STATE Pick an index $j\in[1,n]$ with $\mathbf{P}(j=l)=q_l$.
\STATE Set $S^{(i)}=X^{(j)}/\sqrt{sq_{j}}$.
\ENDFOR
\STATE Form the eigenvalue decomposition $S^TS=Y\mathop{\bf diag}(\sigma)Y^T$ where $Y\in{\mbox{\bf R}}^{s \times s}$ and $\sigma \in {\mbox{\bf R}}^s$.
\STATE Form a matrix $H\in{\mbox{\bf R}}^{m\times k}$ with $H^{(i)}=SY^{(i)}/\sigma_i^{1/2}$.
\ENSURE Approximate singular vectors $H^{(i)}$, $i=1,\ldots,k$.
\end{algorithmic}
\end{algorithm}
Algorithm \ref{alg:low-rank} below computes the leading singular vectors of a smaller matrix $S$, which is a subsampled and rescaled version of $X$. Here, the computational savings come from the fact that we only need to compute singular values of a matrix of dimension~$m\times s$ with $s\leq n$. Recall that computing $k$ leading eigenvectors of a symmetric matrix of dimension $s$ only requires matrix vector products, hence can be performed in $O(ks^2\log s)$ operations using iterative algorithms such as the power method or Lanczos method (see the appendix for details, as usual we omit the precision target in linear algebra operations, implicitly assuming that it is much finer than $\epsilon$), so that the cost of computing $k$ leading singular vectors of a matrix of size~$m\times s$ is $O(ksm\log m)$.
This means that, given the probabilities $q_i$, the total cost of obtaining $k$ approximate singular vectors using algorithm~\ref{alg:low-rank} is $O(ksm\log m)$ instead of $O(knm \log m)$ for exact singular vectors. Of course, computing $q_i$ requires $mn$ operations, but can be done very efficiently in a single pass over the data. We now recall the following result from \cite{Drin06} which controls the precision of the approximations in algorithm \ref{alg:low-rank}.
\begin{lemma}\label{ref:lem-col-vec}
Let $X\in{\mbox{\bf R}}^{m \times n}$ and $1\leq k \leq s < n$. Given a precision target $\epsilon>0$, if $s\geq 4/\epsilon^2$ and $H\in{\mbox{\bf R}}^{m \times k}$ is computed as in algorithm \ref{alg:low-rank}, we have
\[
\textstyle\mathop{\bf E{}}[\|X-H_kH_k^TX\|_2^2]\leq \|X-X_k\|_2^2 + \epsilon \|X\|_F^2
\]
and if in addition $s>4\eta^2/\epsilon^2$ where $\eta=1+\sqrt{8\log(1/\beta)}$ for $\beta\in[0,1]$, then
\[
\|X-H_kH_k^TX\|_2^2 \leq \|X-X_k\|_2^2 + \epsilon \|X\|_F^2
\]
with probability at least $1-\beta$, where $X_k$ is the best rank $k$ approximation of $X$.
\end{lemma}
\begin{proof}
See Theorem 4 in \cite{Drin06}.
\end{proof}
An identical precision bound holds in the Frobenius norm when $s\geq 4k/\epsilon^2$. We now adapt these results to our setting in the following lemma, which shows how to approximate the spectral radius of a symmetric matrix $X$ using algorithm \ref{alg:low-rank}.
\begin{lemma}\label{lem:col-approx}
Let $X\in{\mbox{\bf R}}^{m \times n}$ and $\beta\in[0,1]$. Given a precision target $\epsilon>0$, construct a matrix $S\in{\mbox{\bf R}}^{m \times s}$ by subsampling the columns of $X$ as in algorithm \ref{alg:low-rank}. Let $\eta=1+\sqrt{8\log(1/\beta)}$ and
\begin{equation}\label{eq:col-samp-rate}
s=\eta^2 \frac{\|X\|_2^2}{\epsilon^2 }\mathop{\bf NumRank}(X)^2
\end{equation}
we have
\[
\textstyle\mathop{\bf E{}}[|\|S\|_2-\|X\|_2|]\leq \epsilon
\]
and
\[
|\|S\|_2-\|X\|_2|\leq \epsilon
\]
with probability at least $1-\beta$.
\end{lemma}
\begin{proof}
Using the Hoffman-Wielandt inequality (see \cite[Th. 3.1]{Stew90} or the proof of \cite[Th.2]{Drin06} for example) we get
\[
|\|S\|_2^2-\|X\|_2^2|\leq \|SS^T-XX^T\|_F
\]
hence
\[
|\|S\|_2-\|X\|_2|\leq \|SS^T-XX^T\|_F/\|X\|_2
\]
and Jensen's inequality together with the matrix multiplication result in Lemma \ref{ref:lem-col-sample} yields
\[
\textstyle\mathop{\bf E{}}[\|SS^T-XX^T\|_F] \leq \displaystyle \frac{\|X\|_F^2}{\sqrt{s}}
\]
and
\[
\|SS^T-XX^T\|_F \leq \frac{\eta \|X\|_F^2}{\sqrt{s}}
\]
with probability at least $1-\beta$. Combining these two inequalities with the sampling rate in~(\ref{eq:col-samp-rate})
\[
s=\eta^2 \frac{\|X\|_F^4}{\epsilon^2 \|X\|_2^2}
\]
yields the desired result.
\end{proof}
The subsampling rate required to achieve a precision target $\epsilon$ has a natural interpretation. Indeed
\[
s=\eta^2 \frac{\|X\|_2^2}{\epsilon^2 }\mathop{\bf NumRank}(X)^2
\]
is simply the squared ratio of the numerical rank of the matrix $X$ over the relative precision target~${\epsilon}/{\|X\|_2}$, times a factor $\eta^2$ controlling the confidence level. The numerical rank $\mathop{\bf NumRank}(X)$ always satisfies $1 \leq \mathop{\bf NumRank}(X)={\|X\|_F^2}/{\|X\|_2^2} \leq \mathop{\bf Rank}(X)$ and can be seen as a stable relaxation of the rank of the matrix $X$ (see \cite{Rude07} for a discussion). Note also that, by construction, the subsampled matrix always has lower rank than the matrix $X$. The expectation bound is still valid if we drop the factor $\eta^2$.
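The following \texttt{numpy} sketch illustrates Lemma \ref{lem:col-approx} (again, an illustration only): it estimates the spectral norm of a test matrix with low numerical rank from a few rescaled columns and compares the observed relative error with the predicted order $\mathop{\bf NumRank}(X)/\sqrt{s}$; the test matrix and sampling rates are arbitrary.
\begin{verbatim}
import numpy as np

def subsample_columns(X, s, rng):
    # pi^(s)(X): keep s rescaled columns, sampled with q_i = ||X^(i)||_2^2 / ||X||_F^2
    q = np.sum(X**2, axis=0) / np.sum(X**2)
    idx = rng.choice(X.shape[1], size=s, p=q)
    return X[:, idx] / np.sqrt(s * q[idx])

rng = np.random.default_rng(0)
n, r = 1000, 10
U = rng.standard_normal((n, r))
X = U @ U.T / n                       # symmetric test matrix, numerical rank about r
spec = np.linalg.norm(X, 2)
num_rank = np.linalg.norm(X, 'fro')**2 / spec**2
for s in (50, 200, 800):
    S = subsample_columns(X, s, rng)
    rel_err = abs(np.linalg.norm(S, 2) - spec) / spec
    # the lemma predicts a relative error of order NumRank(X) / sqrt(s)
    print(s, rel_err, num_rank / np.sqrt(s))
\end{verbatim}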
\section{Stochastic approximation algorithm}
\label{s:stoch-opt} Below, we will use a stochastic approximation algorithm to solve problem (\ref{eq:min-maxeig}) when the gradient is approximated using the subsampling algorithms detailed above. We focus on a stochastic approximation of problem (\ref{eq:min-maxeig}) written
\begin{equation}\label{eq:min-maxeig-stoch}
\min_{y\in Q} f(y)\equiv\textstyle\mathop{\bf E{}}\left[\left\|\pi^{(s)}\left(\sum_{j=1}^p y_j A_j +C\right)\right\|_2\right]-b^Ty
\end{equation}
in the variable $y\in{\mbox{\bf R}}^p$ and parameters $A_j\in\symm_n$, for $j=1,\ldots,p$, $b\in{\mbox{\bf R}}^p$ and $C\in \symm_n$, with $1 \leq s \leq n$ controlling the sampling rate, where the function $\|\pi^{(s)}(\sum_{j=1}^p y_j A_j +C)\|_2$ and a subgradient with respect to $y$ are computed using algorithms \ref{alg:matrix-mult} and \ref{alg:low-rank}. For $X\in\symm_n$, we have written $\pi^{(s)}(X)$ the subsampling/scaling operation used in algorithms \ref{alg:matrix-mult} and \ref{alg:low-rank} with
\begin{equation}\label{eq:pi-sub}
\pi^{(s)}(X)= S,
\end{equation}
where $0< s < n$ controls the sampling rate and $S\in{\mbox{\bf R}}^{n \times s}$ is the random matrix defined in algorithm~\ref{alg:low-rank} whose columns are a scaled sample of the columns of $X$. We will write $S=\pi_{(s)}(X)$ for the matrix obtained by subsampling rows as in algorithm~\ref{alg:matrix-mult}. We also define ${\cal A}\in{\mbox{\bf R}}^{n^2 \times p}$ as the matrix whose columns are given by ${\cal A}^{(j)}=\mathop{\bf vec}(A_j)$, $j=1,\ldots,p$.
\subsection{Stochastic approximation algorithm}
We show the following lemma approximating the gradient of the function $\|\pi^{(s)}(\sum_{j=1}^p y_j A_j +C)\|_2$ with respect to $y$ and bounding its quadratic variation.
\begin{lemma}\label{ref:grad-var}
Given $A_j\in\symm_n$ with ${\cal A}\in{\mbox{\bf R}}^{n^2 \times p}$ defined as above, for $j=1,\ldots,p$, $b\in{\mbox{\bf R}}^p$, $C\in \symm_n$ and sampling rates $s_1$ and $s_2$, a (stochastic) subgradient of the function $\|\pi^{(s_1)}(\sum_{j=1}^p y_j A_j +C)\|_2-b^Ty$ with respect to~$y$ is given by the vector $w\in{\mbox{\bf R}}^p$ with
\[
w={\cal A}^T\mathop{\bf vec}(vv^T)-b
\]
where $v\in{\mbox{\bf R}}^n$ is a leading singular vector of the subsampled matrix $S=\pi^{(s_1)}(X)$ formed in algorithm \ref{alg:low-rank}. Furthermore, the product ${\cal A}^T\mathop{\bf vec}(vv^T)$ can be approximated using algorithm \ref{alg:matrix-mult} to form an approximate gradient
\[
g=\pi^{(s_2)}({\cal A}^T) ~ \pi_{(s_2)}(\mathop{\bf vec}(vv^T))-b,
\]
which satisfies
\begin{equation}\label{eq:quad-var}
\textstyle\mathop{\bf E{}}[g]={\cal A}^T\mathop{\bf vec}(vv^T)-b\in\partial f(y) \quad\mbox{and}\quad \textstyle\mathop{\bf E{}}[\|g\|_2^2]\leq M_*^2 \equiv 2\frac{\|{\cal A}\|_F^2}{s_2}+2\|b\|_2^2.
\end{equation}
\end{lemma}
\begin{proof} Iterated expectations give $\textstyle\mathop{\bf E{}}[g]=\textstyle\mathop{\bf E{}}[w]\in\partial f(y)$. The sampling probabilities $q_i$ used in approximating the matrix vector product ${\cal A}^T\mathop{\bf vec}(vv^T)$ following algorithm \ref{alg:matrix-mult} are defined as
\[
q_i=\frac{\|{\cal A}_{(i)}\|_2 |\mathop{\bf vec}(vv^T)_i|}{\sum_{j=1}^n \|{\cal A}_{(j)}\|_2|\mathop{\bf vec}(vv^T)_j|}, \quad i=1,\ldots,n.
\]
As in \cite[Lemma 3]{Drin07}, the quadratic variation of the approximate product $\pi^{(s_2)}({\cal A}^T) ~ \pi_{(s_2)}(\mathop{\bf vec}(vv^T))$ is then given by
\[
\textstyle\mathop{\bf E{}}[\|\pi^{(s_2)}({\cal A}^T) ~ \pi_{(s_2)}(\mathop{\bf vec}(vv^T))\|_F^2]=\sum_{i=1}^{n^2} \frac{\|{\cal A}_{(i)}\|_2^2\mathop{\bf vec}(vv^T)_i^2}{s_2 q_i}.
\]
With $q_i$ defined as above, we get
\begin{eqnarray*}
\sum_{i=1}^{n^2} \frac{\|{\cal A}_{(i)}\|_2^2\mathop{\bf vec}(vv^T)_i^2}{s_2 q_i} & \leq & \frac{\left(\sum_{i=1}^{n^2}\|{\cal A}_{(i)}\|_2\mathop{\bf vec}(vv^T)_i\right)^2}{s_2}\\
& \leq & \frac{\|{\cal A}\|_F^2}{s_2}\\
\end{eqnarray*}
by Cauchy-Schwarz, because $\|\mathop{\bf vec}(vv^T)\|^2_2=\|vv^T\|_F^2=\|v\|_2^4=1$, hence the desired result.
\end{proof}
We now use this result to produce an explicit bound on the complexity of solving problems (\ref{eq:min-maxeig-stoch}) and (\ref{eq:min-maxeig}) by subsampling using a stochastic approximation algorithm. In this section, we let $\|\cdot\|$ be a general norm on ${\mbox{\bf R}}^p$, we write $\|\cdot\|_*$ its dual norm and define $\delta_*(p)$ as the smallest number such that $\|y\|_2\leq \delta_*(p) \|y\|_*$ for all $y\in {\mbox{\bf R}}^p$. Following the notation in \cite[\S 2.3]{Judi07}, we let $\omega(x)$ be a distance generating function, i.e. a function such that
\[
Q^o=\left\{x\in Q:~ \exists y\in {\mbox{\bf R}}^m,~x\in \mathop{\rm argmin}_{u\in Q} [y^Tu + \omega(u)]\right\}
\]
is a convex set. We assume that $\omega(x)$ is strongly convex on $Q^o$ with modulus $\alpha$ with respect to the norm $\|\cdot\|$, which means
\[
(y-x)^T(\nabla\omega(y)-\nabla\omega(x)) \geq \alpha \|y-x\|^2, \quad x,y\in Q^o.
\]
We then define a prox-function $V(x,y)$ on $Q^o \times Q$ as follows:
\[
V(x,y)\equiv \omega(y) - [ \omega(x)+\nabla \omega(x)^T(y-x)],
\]
which is nonnegative and strongly convex with modulus $\alpha$ with respect to the norm $\|\cdot\|$. The prox-mapping associated to $V$ is then defined as
\begin{equation} \label{prox-map}
P_x^{Q,\omega}(y) \equiv \mathop{\rm argmin}_{z\in Q} \{ y^T(z-x) + V(x,z)\}.
\end{equation}
Finally, we define the $\omega$ diameter of the set $Q$ as:
\begin{equation} \label{eq:diameter}
D_{\omega,Q}\equiv(\max_{z\in Q} \omega(z)-\min_{z\in Q} \omega(z))^{1/2}
\end{equation}
and we let $\gamma_l$ for $l=0,\ldots,N$ be a step size strategy.
\begin{algorithm} [h]
\caption{Spectral norm minimization using subsampling}
\label{alg:stoch-grad}
\begin{algorithmic}[1]
\REQUIRE Matrices $A_j\in\symm_n$, for $j=1,\ldots,p$, $b\in{\mbox{\bf R}}^p$ and $C\in \symm_n$, sampling rates $s_1$ and $s_2$.
\STATE Pick initial $y_0 \in Q$
\FOR{$l=1$ to $N$}
\STATE Compute $v\in{\mbox{\bf R}}^n$, the leading singular vector of the matrix $\pi^{(s_1)}(\sum_{j=1}^p y_{l,j} A_j +C)$, subsampled according to algorithm \ref{alg:low-rank} with $k=1$ and $s=s_1$.
\STATE Compute the approximate subgradient $g_l=\pi^{(s_2)}({\cal A}^T) ~ \pi_{(s_2)}(\mathop{\bf vec}(vv^T))-b$, by subsampling the matrix product using algorithm \ref{alg:matrix-mult} and $s=s_2$.
\STATE Set $y_{l+1}=P_{y_l}^{Q,\omega}(\gamma_l g_l)$.
\STATE Update the running average $\tilde y_N= \sum_{k=0}^{N} \gamma_l y_l/\sum_{k=0}^{N}\gamma_l$.
\ENDFOR
\ENSURE An approximate solution $\tilde y_N\in{\mbox{\bf R}}^p$ of problem (\ref{eq:min-maxeig-stoch}) with high probability.
\end{algorithmic}
\end{algorithm}
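For concreteness, the sketch below instantiates algorithm \ref{alg:stoch-grad} in the Euclidean setup, with $Q$ a Euclidean ball and $\omega(y)=\|y\|_2^2/2$, in which case the prox-mapping (\ref{prox-map}) reduces to a projected subgradient step. The problem data, sampling rates and step size are arbitrary illustrative choices, and the one-line sign correction applied to $vv^T$ (for the case where the spectral norm is attained by a negative eigenvalue) is our addition, not spelled out in the text.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 10
A = rng.standard_normal((p, n, n)); A = (A + A.transpose(0, 2, 1)) / 2
C = rng.standard_normal((n, n));    C = (C + C.T) / 2
b = rng.standard_normal(p)
A2 = A.reshape(p, n * n)            # rows of A2 are vec(A_j), i.e. A2 plays cal(A)^T
rowA = np.linalg.norm(A2, axis=0)   # ||cal(A)_(i)||_2 for i = 1..n^2 (precomputed)

def sample_columns(X, s):           # pi^(s)(X) as in algorithm 2
    q = np.sum(X**2, axis=0) / np.sum(X**2)
    idx = rng.choice(X.shape[1], size=s, p=q)
    return X[:, idx] / np.sqrt(s * q[idx])

def project(y, radius=1.0):         # Euclidean prox-mapping: projection onto Q
    nrm = np.linalg.norm(y)
    return y if nrm <= radius else y * (radius / nrm)

s1, s2, N, gamma = 100, 2000, 300, 0.02
y, y_sum = np.zeros(p), np.zeros(p)
for _ in range(N):
    X = np.tensordot(y, A, axes=1) + C                       # sum_j y_j A_j + C
    v = np.linalg.svd(sample_columns(X, s1), full_matrices=False)[0][:, 0]
    sgn = np.sign(v @ X @ v)                                 # sign correction
    vvec = sgn * np.outer(v, v).ravel()
    q = rowA * np.abs(vvec); q = q / q.sum()                 # algorithm 1 weights
    idx = rng.choice(n * n, size=s2, p=q)
    g = A2[:, idx] @ (vvec[idx] / (s2 * q[idx])) - b         # approximate subgradient
    y = project(y - gamma * g)
    y_sum += y
y_avg = y_sum / N
print(np.linalg.norm(np.tensordot(y_avg, A, axes=1) + C, 2) - b @ y_avg)
\end{verbatim}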
The following results control the convergence of the robust stochastic approximation algorithm~\ref{alg:stoch-grad} (see \cite{Judi07}, \cite{Nest09}, \cite{Poly92} or \cite{Nemi83} for further details). We call $\bar y$ the optimal solution of problem~(\ref{eq:min-maxeig-stoch}); the lemma below characterizes the convergence speed in expectation.
\begin{lemma}\label{lem:conv-expect}
Given $N>0$, let $M_*$ be defined as in (\ref{eq:quad-var}) by
\[
M_*^2=2\frac{\|{\cal A}\|_F^2}{s_2}+2\|b\|_2^2,
\]
using a fixed step size strategy with
\[
\gamma_l=\frac{D_{\omega,Q}}{\delta_*^2(p)M_*^2}\sqrt{\frac{2}{\alpha N}}, \quad l=1,\ldots,N
\]
we have, after $N$ iterations of algorithm \ref{alg:stoch-grad}
\[
\textstyle\mathop{\bf E{}}[f(\tilde y_N)-f(\bar y)] \leq {D_{\omega,Q}}{\delta_*^2(p)M_*^2}\sqrt{\frac{2}{\alpha N}}
\]
and
\[
f(\tilde y_N)-f(\bar y) \geq \epsilon
\]
with probability less than $\frac{D_{\omega,Q}\delta_*^2(p)M_*}{\epsilon}\sqrt{\frac{2}{\alpha N}}$.
\end{lemma}
\begin{proof}
By construction $\textstyle\mathop{\bf E{}}[\|g\|_*^2]\leq \delta_*^2(p) M_*^2$, the rest follows from \cite[\S2.3]{Judi07} for example.
\end{proof}
Lemma \ref{lem:conv-expect} means that we need at most
\[
N=\frac{2D_{\omega,Q}^2\delta_*^2(p)M_*^2}{\alpha\epsilon^2\beta^2}
\]
iterations to get an $\epsilon$ solution to problem (\ref{eq:min-maxeig-stoch}) with confidence at least $1-\beta$. Typically, the prox function $\omega$ and the norm are chosen according to the geometry of $Q$, to minimize $N$. The choice of norm also affects $\delta_*(p)$ and obtaining better bounds on $M_*$ in (\ref{eq:quad-var}) for generic norms would further tighten this complexity estimate.
We now call $y^*$ the solution to the original (deterministic) spectral norm minimization problem~(\ref{eq:min-maxeig}) and bound the suboptimality of~$\tilde y_N$ in the (true) problem~(\ref{eq:min-maxeig}) with high probability.
\begin{theorem}\label{th:conv}
If the sampling rate $s_1$ is set to
\begin{equation}\label{eq:opt-sampling}
s_1= \frac{\left\|\sum_{j=1}^p y^*_{j} A_j +C\right\|_2^2}{\epsilon^2 }\mathop{\bf NumRank}\textstyle\left(\sum_{j=1}^p y^*_{j} A_j +C\right)^2
\end{equation}
then after
\begin{equation}\label{eq:niters}
N=\frac{2D_{\omega,Q}^2\delta_*^2(p)M_*^2}{\alpha\epsilon^2\beta^2}
\end{equation}
iterations of algorithm \ref{alg:stoch-grad}, we have
\[
\left\|\sum_{j=1}^p \tilde y_{N,j} A_j +C\right\|_2-b^T\tilde y_N-\left\|\sum_{j=1}^p y^*_{j} A_j +C\right\|_2+b^T y^* \leq 2\epsilon
\]
with probability at least $1-\beta$.
\end{theorem}
\begin{proof}
Recall that we have written $y^*$ the solution to the original (deterministic) problem (\ref{eq:min-maxeig}), $\bar y$~the solution to the approximate (stochastic) problem (\ref{eq:min-maxeig-stoch}) and $\tilde y_N$ the $N$-th iterate of algorithm \ref{alg:stoch-grad} above. Lemma \ref{lem:conv-expect} on the convergence of $\tilde y_N$ to the solution of the stochastic problem in (\ref{eq:min-maxeig-stoch}) means
\[
f(\tilde y_N)-f(\bar y) \leq \epsilon
\]
with probability at least $1-\beta$. By definition, $\bar y$ minimizes the stochastic problem, so in particular $f(\bar y)\leq f(y^*)$ and we have in fact
\begin{equation}\label{eq:ineq-opt}
f(\tilde y_N)-f(y^*) \leq \epsilon.
\end{equation}
with probability at least $1-\beta$.
Now, with $s_1$ defined as above, Lemma \ref{lem:col-approx} on the quality of the subsampling approximation to $\|.\|_2$ shows that if the sampling rate is set as in (\ref{eq:opt-sampling}) then
\[
\textstyle\mathop{\bf E{}}\left[\left|\left\|\sum_{j=1}^p y^*_j A_j +C\right\|_2-\left\|\pi^{(s)}\left(\sum_{j=1}^p y_j^* A_j +C\right)\right\|_2\right|\right]\leq \epsilon
\]
and Jensen's inequality yields
\[
\textstyle\left|\left\|\left(\sum_{j=1}^p y^*_j A_j +C\right)\right\|_2-b^Ty^*-f(y^*)\right|\leq \epsilon.
\]
This bounds the difference between the minimum of the (true) problem in (\ref{eq:min-maxeig}) and the value $f(y^*)$ of its stochastic approximation in (\ref{eq:min-maxeig-stoch}). Combining this with inequality (\ref{eq:ineq-opt}), we finally get that
\[
\textstyle f(\tilde y_N) - \left\|\left(\sum_{j=1}^p y^*_j A_j +C\right)\right\|_2+b^Ty^*\leq 2\epsilon.
\]
with probability at least $1-\beta$, which is the desired result.
\end{proof}
This result allows us to bound the \emph{oracle} complexity of solving (\ref{eq:min-maxeig}) by subsampling. In practice of course, both the spectral norm and the numerical rank of the solution matrix $\sum_{j=1}^p y^*_{j} A_j +C$ are unknown. However, assuming we have a {\em stopping criterion}, i.e. a function which efficiently certifies that a given $y\in{\mbox{\bf R}}^p$ is optimal, we can {\em search} for the minimum sampling rate in (\ref{eq:opt-sampling}) by starting from a low target and doubling the sampling rate until we obtain an optimal solution. The simple lemma below explicitly summarizes the complexity of this procedure.
\begin{lemma}\label{lem:bin-search}
Suppose we start from a sampling rate $s=1$ and run algorithm \ref{alg:stoch-grad} repeatedly, doubling the sampling rate until the stopping criterion certifies the solution is optimal. Then, with probability at least $1-\beta$, algorithm \ref{alg:stoch-grad} needs to be run at most
\[
\lceil\log_2 (s_1)\rceil
\]
times, where $s_1$ is given in (\ref{eq:opt-sampling}), before it finds an optimal solution to problem (\ref{eq:min-maxeig}).
\end{lemma}
\begin{proof}
Starting from $s=1$, we need to double the sampling rate at most $\lceil\log_2 (s_1)\rceil$ times before it becomes larger than $s_1$. At the sampling rate $s=s_1$, algorithm \ref{alg:stoch-grad} will produce an optimal solution with probability $1-\beta$.
\end{proof}
In fact, we will see below that the complexity of each iteration is dominated by a term $O(sn\log n)$, where $s$ is the sampling rate, and because
\[
\sum_{i=1}^{\lceil\log_2 (s_1)\rceil} 2^i \leq 2^{\lceil\log_2 (s_1)\rceil+1} \leq 4 s_1
\]
we then observe that {\em searching} for the minimal sampling rate by repeatedly solving (\ref{eq:min-maxeig-stoch}) for increasing sampling rates will be less than four times as expensive as solving the problem in the {\em oracle} case.
Typically, producing a stopping oracle means computing a duality gap and we will show in~\S\ref{ss:gap} how this can be done efficiently here. Note that in the absence of such a stopping criterion, the minimum sampling rate in (\ref{eq:opt-sampling}) has to be enforced over all matrices $X=\sum_{j=1}^p y_{j} A_j +C$, which considerably increases overall complexity. The next section provides a detailed analysis of the complexity of algorithm \ref{alg:stoch-grad} as a function of $\epsilon,s_1$ and $s_2$.
\subsection{Complexity}
We now study in detail the complexity of algorithm \ref{alg:stoch-grad}. Suppose we are given a precision target~$\epsilon$ and fix the sampling rate $s_2$ arbitrarily between 1 and $n^2$, with the sampling rate $s_1$ set as in Theorem~\ref{th:conv}. The cost of each iteration in algorithm \ref{alg:stoch-grad} breaks down as follows.
\begin{itemize}
\item On line 3: Computing the leading singular vector $v$, using algorithm \ref{alg:low-rank} with $k=1$. This means first computing the probabilities $q_i$ at a cost of $O(n^2)$ operations. Forming the matrix $S=\pi^{(s_1)}(\sum_{j=1}^p y_{l,j} A_j +C)$ costs $O(ns_1)$ operations. It remains to compute the leading singular vector of $S$ using the Lanczos method at a cost of $O(s_1n\log n)$ (cf. the appendix for details). The total numerical cost of this step is then bounded by $c_1 n^2 + c_2 ns_1$ where $c_1$ and $c_2$ are absolute constants. Here, $c_1$ is always less than ten while $c_2$ is the number of iterations required by the Lanczos method to reach a fixed precision target (typically 1e-8 or better here) hence we have $c_1 \ll c_2$.
\item On line 4: Computing the approximate subgradient $g_l=\pi^{(s_2)}({\cal A}^T) ~ \pi_{(s_2)}(\mathop{\bf vec}(vv^T))-b$, by subsampling the matrix product using algorithm \ref{alg:matrix-mult}. This means again forming the vector $q$ at a cost of $O(n^2)$ (the row norms of ${\cal A}$ can be precomputed). Computing the subsampled matrix vector product then costs $O(ps_2)$. Both of these complexity bounds have low constants.
\item On line 5: Computing the projection $y_{l+1}=P_{y_l}^{Q,\omega}(\gamma_l g_l)$, whose numerical cost will be denoted by $c(p)$.
\end{itemize}
Let us remark in particular that all $O(n^2)$ operations above only require one pass over the data, which means that the entire data set does not need to fit in memory. Using the bound on the quadratic variation of the gradient computed in Lemma \ref{ref:grad-var}, we can then bound the number of iterations required by algorithm \ref{alg:stoch-grad} to produce an $\epsilon$-solution to problem~(\ref{eq:min-maxeig}) with probability at least $1-\beta$. Let us call $Y^*=\sum_{j=1}^p y^*_{j} A_j +C$ and recall that $\eta=1+\sqrt{8\log(1/\beta)}$; Table \ref{tab:complex-stoch} summarizes these complexity bounds and compares them with the complexity bounds for a stochastic approximation algorithm without subsampling.
\begin{table}[H]
\begin{center}
\extrarowheight 1.5ex
\begin{tabular}{r|c|c}
{\bf Complexity} & Stoch. Approx. & Stoch. Approx. with Subsampling \\
\hline
Per Iter. & $c_4n^2p+c(p)$ & $c_1n^2 + c_3 p s_2 + c_2n\log n~\eta^2 \frac{\|Y^*\|_2^2}{\epsilon^2 }\mathop{\bf NumRank}(Y^*)^2 + c(p)$ \\
Num. Iter. & $\frac{2D_{\omega,Q}^2\delta^*(p)^2({\|{\cal A}\|_F^2}+\|b\|_2^2)^2}{\alpha\epsilon^2\beta^2}$ & $\frac{2D_{\omega,Q}^2\delta^*(p)^2\left(\frac{\|{\cal A}\|_F^2}{s_2}+\|b\|_2^2\right)^2}{\alpha\epsilon^2\beta^2}$\\
\end{tabular}
\caption{Complexity of solving problem (\ref{eq:min-maxeig}) using subsampled stochastic approximation method versus original algorithm. Here $c_1,\ldots,c_4$ are absolute constants with $c_1,c_3 \ll c_2,c_4$.\label{tab:complex-stoch}}
\end{center}
\end{table}
We observe that subsampling affects the complexity of solving problem (\ref{eq:min-maxeig}) in two ways. Decreasing the (matrix product) subsampling rate $s_2\in[1,n^2]$ decreases the cost of each iteration but increases the number of iterations in the same proportion, hence has no explicit effect on the total complexity bound. In practice of course, because of higher cache memory speed and better bandwidth on smaller problems, cheaper iterations tend to run more efficiently than more complex ones. The impact of the (singular vector) subsampling rate $s_1\in[1,n]$ is much more important however, since computing the leading eigenvector of the current iterate is the most complex step in the algorithm when solving problem (\ref{eq:min-maxeig}) using stochastic approximation. Because $c_1,c_3 \ll c_2$, the per iteration complexity of solving large-scale problems essentially follows
\[
n\eta^2 \frac{\|Y^*\|_2^2}{\epsilon^2 }\mathop{\bf NumRank}(Y^*)^2
\]
hence explicitly depends on both the numerical rank of the solution matrix $Y^*=\sum_{j=1}^p y^*_{j} A_j +C$ and on the relative precision target $\epsilon/\|Y^*\|_2$. This means that problems with simpler solutions will be solved more efficiently than problems whose solutions have a high rank.
The choice of norm $\|\cdot\|$ and distance generating function also has a direct impact on complexity through $c(p)$ and $\delta_*(p)M_*$. Unfortunately here, subsampling error bounds are only available in the Frobenius and spectral norms hence part of the benefit of choosing optimal norm/distance generating function combinations is sometimes lost in the norm ratio bound $\delta_*(p)$. However, choosing a norm/prox function combination according to the geometry of $Q$ can still improve the complexity bound compared to a purely Euclidean setting.
\iffalse
\begin{table}[h]
\begin{center}
\extrarowheight 1.5ex
\begin{tabular}{r|c|c|c}
{\bf Complexity} & Interior point & Smooth first-order & Stoch. Approx. \\
\hline
Per Iter. & $O\left(pn^{3}+p^3 \right)$ & $O(pn^2+ n^3)$ & $O(n^2)$\\
Num. Iter. & $O\left(n^{0.5} \log\left(\frac{1}{\epsilon}\right)\right)$ & $\frac{4 \sqrt{pn\log n}}{\epsilon}$ & $O\left(\frac{1}{\epsilon^2}\right)$\\
Total & $O\left((pn^{3.5}+p^3n^{0.5}) \log\left(\frac{1}{\epsilon}\right)\right)$ & $O\left(\frac{(pn^2+ n^3)\sqrt{pn\log n}}{\epsilon}\right)$& $O\left(\frac{n^2}{\epsilon^2}\right)$\\
\end{tabular}
\caption{Complexity of solving problem (\ref{eq:min-maxeig}) using various algorithms, when the entries of the matrices $A_j$ are of order one. \label{tab:complexity}}
\end{center}
\end{table}
\fi
Finally, subsampling can have a more subtle effect on complexity. By construction, solutions to problem (\ref{eq:min-maxeig}) tend to have multiple leading singular values which coalesce near the optimum. Introducing noise by subsampling can potentially break this degeneracy and increase the gap between leading eigenvalues. Since the complexity of the algorithm depends in great part on the complexity of computing a leading singular vector using iterative methods such as the power method or the Lanczos method (cf. Appendix), and the complexity of these methods decreases as the gap between the two leading singular values increases, subsampling can also improve the efficiency of iterative singular value computations.
\subsection{Surrogate Duality Gap}
\label{ss:gap} In practice, we often have no a priori knowledge of $\mathop{\bf NumRank}(Y^*)^2$ and if the sampling rate~$s$ is set too low, it's possible for the algorithm to terminate at a suboptimal point $Y$ where the subsampling error is less than $\epsilon$ (if the error at the true optimal point $Y^*$ is much larger than $\epsilon$). In order to search for the optimal sampling rate $s$ as in Lemma \ref{lem:bin-search}, we first need to check for optimality in~(\ref{eq:min-maxeig}) and we now show how to track convergence in algorithm \ref{alg:stoch-grad} by computing a surrogate duality gap, at a cost roughly equivalent to that of computing a subgradient. The dual of problem (\ref{eq:min-maxeig}) is written
\begin{equation}\label{eq:gen-dual}
\begin{array}{ll}
\mbox{maximize} & \mathop{\bf Tr}(CX) - S_Q(w)\\
\mbox{subject to} & w_j=b_j-\mathop{\bf Tr}(A_jX),\quad j=1,\ldots,p\\
& \|X\|_{\mathrm{tr}} \leq 1,\\
\end{array}\end{equation}
in the variables $X\in\symm_n$ and $w\in{\mbox{\bf R}}^p$, where $S_Q(v)$ is the support function of the set $Q$, defined as
\[
S_Q(w)\equiv \max_{y\in Q} w^Ty.
\]
For instance, when $Q$ is an Euclidean ball of radius $B$, problem (\ref{eq:gen-dual}) becomes
\begin{equation}\label{eq:euc-dual}
\begin{array}{ll}
\mbox{maximize} & \mathop{\bf Tr}(CX) - B \|w\|_2 \\
\mbox{subject to} & w_j=b_j-\mathop{\bf Tr}(A_jX),\quad j=1,\ldots,p\\
& \|X\|_{\mathrm{tr}} \leq 1,\\
\end{array}\end{equation}
in the variables $X\in\symm_n$ and $w\in{\mbox{\bf R}}^p$. The leading singular vector $v$ in algorithm \ref{alg:stoch-grad} always satisfies $\|vv^T\|_{\mathrm{tr}} \leq 1$, hence we can track convergence in solving (\ref{eq:min-maxeig}) by computing the following surrogate duality gap
\begin{equation}
\left\|\sum_{j=1}^p y_j A_j +C\right\|_2-b^Ty - v^TCv + S_Q(w)
\end{equation}
where $w_j=b_j-v^TA_jv$ for $j=1,\ldots,p$.
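In the Euclidean case (\ref{eq:euc-dual}), this gap is straightforward to evaluate. The sketch below (illustrative only, with shapes and names as in the previous sketches) computes the surrogate duality gap for $Q$ a Euclidean ball of radius $B$, using an arbitrary feasible pair $(y,vv^T)$.
\begin{verbatim}
import numpy as np

def surrogate_gap(y, v, A, C, b, B):
    # primal value ||sum_j y_j A_j + C||_2 - b^T y minus the dual value
    # Tr(C vv^T) - S_Q(w) with w_j = b_j - v^T A_j v, for Q = {||y||_2 <= B}
    X = np.tensordot(y, A, axes=1) + C
    primal = np.linalg.norm(X, 2) - b @ y
    w = b - np.einsum('jab,a,b->j', A, v, v)
    dual = v @ C @ v - B * np.linalg.norm(w)
    return primal - dual

rng = np.random.default_rng(0)
n, p, B = 100, 5, 1.0
A = rng.standard_normal((p, n, n)); A = (A + A.transpose(0, 2, 1)) / 2
C = rng.standard_normal((n, n));    C = (C + C.T) / 2
b = rng.standard_normal(p)
y = np.zeros(p)
X = np.tensordot(y, A, axes=1) + C
v = np.linalg.eigh(X)[1][:, -1]        # a unit norm test vector, ||vv^T||_tr = 1
print(surrogate_gap(y, v, A, C, b, B)) # surrogate duality gap at (y, vv^T)
\end{verbatim}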
\subsection{Minimizing the sum of the $k$ largest singular values}
Motivated by applications in statistical learning, we now discuss direct extensions of the results above to the problem of minimizing the sum of the $k$ largest singular values of an affine combination of matrices, written
\begin{equation}\label{eq:min-ksigv}
\min_{y\in Q}~ \sum_{i=1}^k \textstyle \sigma_i\left(\sum_{j=1}^p y_j A_j +C\right)-b^Ty\\
\end{equation}
in the variable $y\in{\mbox{\bf R}}^p$, with parameters $A_j\in\symm_n$, for $j=1,\ldots,p$, $b\in{\mbox{\bf R}}^p$ and $C\in \symm_n$. As in the previous section, we also form its stochastic approximation
\begin{equation}\label{eq:min-ksigv-stoch}
\min_{y\in Q} f(y)\equiv\textstyle\mathop{\bf E{}}\left[\sum_{i=1}^k \sigma_i\left(\pi^{(s)}\left(\sum_{j=1}^p y_j A_j +C\right)\right)\right]-b^Ty
\end{equation}
in the variable $y\in{\mbox{\bf R}}^p$, with $1 \leq s \leq n$ controlling the sampling rate. We now prove an analog of Lemma \ref{lem:col-approx} for this new objective function.
\begin{lemma}\label{lem:col-k-approx}
Let $X\in{\mbox{\bf R}}^{m \times n}$ and $\beta\in[0,1]$. Given a precision target $\epsilon>0$, $k\geq 1$ and a matrix $S\in{\mbox{\bf R}}^{m \times s}$ constructed by subsampling the columns of $X$ as in algorithm \ref{alg:low-rank}, let $\eta=1+\sqrt{8\log(1/\beta)}$ and
\begin{equation}\label{eq:col-samp-k-rate}
s=\eta^2 \frac{(\sum_{i=1}^k \sigma_i(X))^2}{\epsilon^2}\frac{\mathop{\bf NumRank}(X)^2}{k^2}\kappa(X)^4\mathop{\bf Rank}(X)
\end{equation}
where $\kappa(X)=\sigma_1(X)/\sigma_r(X)$ with $r=\min\left\{k,\mathop{\bf Rank}(X)\right\}$, we have
\[
\textstyle\mathop{\bf E{}}\left[\sum_{i=1}^k \left|\sigma_i(X)-\sigma_i(S)\right|\right]\leq \epsilon
\]
and
\[
\sum_{i=1}^k \left|\sigma_i(X)-\sigma_i(S)\right|\leq \epsilon
\]
with probability at least $1-\beta$.
\end{lemma}
\begin{proof} Because $\mathop{\bf Rank}(SS^T)\leq \mathop{\bf Rank}(XX^T)$ by construction, we always have
\begin{eqnarray*}
\sum_{i=1}^k \left|\sigma_i^2(X)-\sigma_i^2(S)\right| & = & \sum_{i=1}^k \left|\sigma_i(X)-\sigma_i(S)\right|(\sigma_i(X)+\sigma_i(S))\\
& \geq & \sigma_r(X) \sum_{i=1}^k \left|\sigma_i(X)-\sigma_i(S)\right|
\end{eqnarray*}
where $r=\min\left\{k,\mathop{\bf Rank}(X)\right\}$. Because the sum of the $k$ largest singular values is a unitarily invariant norm on $\symm_n$ (see \cite[\S 3.4]{Horn91}), Mirsky's theorem (see \cite[Th. 4.11]{Stew90} for example) shows that
\begin{eqnarray*}
\sum_{i=1}^k \left|\sigma_i^2(X)-\sigma_i^2(S)\right| & = & \sum_{i=1}^k \left|\sigma_i(XX^T)-\sigma_i(SS^T)\right| \\
& \leq & \sum_{i=1}^k \sigma_i(XX^T-SS^T)
\end{eqnarray*}
and because, by construction, the range of $SS^T$ is included in the range of $XX^T$, we must have $\mathop{\bf Rank}(XX^T-SS^T)\leq \mathop{\bf Rank}(XX^T)$ and
\[
\sum_{i=1}^k \sigma_i(XX^T-SS^T) \leq \sqrt{\mathop{\bf Rank}(X)}~ \|XX^T-SS^T\|_F
\]
Jensen's inequality together with the matrix multiplication result in Lemma \ref{ref:lem-col-sample} yield
\[
\textstyle\mathop{\bf E{}}[\|SS^T-XX^T\|_F] \leq \displaystyle \frac{\|X\|_F^2}{\sqrt{s}}
\]
and
\[
\|SS^T-XX^T\|_F \leq \frac{\eta \|X\|_F^2}{\sqrt{s}}
\]
with probability at least $1-\beta$. Combining these inequalities with the sampling rate in~(\ref{eq:col-samp-k-rate})
\[
s=\eta^2 \frac{\|X\|_F^4\mathop{\bf Rank}(X)}{\epsilon^2 \sigma_r(X)^2}
\]
and using
\[
\frac{\|X\|_F^4}{(\sum_{i=1}^k \sigma_i(X))^2 \sigma_r(X)^2} \leq \frac{\mathop{\bf NumRank}(X)^2}{k^2}\kappa(X)^4
\]
yields the desired result.
\end{proof}
Once again, the subsampling rate in the above lemma has a clear interpretation,
\[
\eta^2 \frac{(\sum_{i=1}^k \sigma_i(X))^2}{\epsilon^2}\frac{\mathop{\bf NumRank}(X)^2}{k^2}\kappa(X)^4\mathop{\bf Rank}(X)
\]
is the product of a term representing relative precision, a term reflecting the rank of $X$ and a term in $\kappa(X)$ representing its (pseudo) condition number. Note that the bound can be further refined when $\sigma_r \leq \epsilon$. Lemma \ref{lem:col-k-approx} allows us to compute the gradient by subsampling when using algorithm \ref{alg:stoch-grad} to solve problem (\ref{eq:min-ksigv}). The remaining steps in the algorithm are identical, except that the matrix $vv^T$ is replaced by a combination of matrices formed using the $k$ leading singular vectors.
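As an illustration, one natural choice for this combination (assuming the $k$ leading singular values are attained at eigenvalues of distinct magnitudes) is $G=\sum_{i=1}^k \mathop{\rm sign}(\lambda_{(i)})\,u_{(i)}u_{(i)}^T$, where the $(\lambda_{(i)},u_{(i)})$ are the $k$ eigenpairs of largest magnitude; the corresponding (sub)gradient in $y$ then has coordinates $\mathop{\bf Tr}(A_jG)-b_j$. The NumPy sketch below is ours, uses the full matrix $M$ rather than its subsampled approximation for simplicity, and is only meant as one possible concrete implementation.
\begin{verbatim}
import numpy as np

def subgradient_sum_k_sigma(A, C, b, y, k):
    # M = sum_j y_j A_j + C, symmetric
    M = C + sum(yj * Aj for yj, Aj in zip(y, A))
    lam, U = np.linalg.eigh(M)
    idx = np.argsort(-np.abs(lam))[:k]   # k eigenvalues of largest magnitude
    G = sum(np.sign(lam[i]) * np.outer(U[:, i], U[:, i]) for i in idx)
    grad_y = np.array([np.trace(Aj @ G) for Aj in A]) - b
    return G, grad_y
\end{verbatim}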
\section{Applications \& numerical results}
\label{s:numexp} In this section, we first detail a few instances of problem (\ref{eq:min-maxeig}) arising in statistical learning. We then study the numerical performance of the methods detailed here on large scale problems.
\subsection{Spectral norm minimization}
\label{ss:nrom-min} For a given matrix $A\in\symm_n$, we begin by studying a simple instance of problem (\ref{eq:min-maxeig}) written
\begin{equation}\label{eq:min-spca}
\begin{array}{ll}
\mbox{minimize} & \|A+U\|_2\\
\mbox{subject to} & |U_{ij}| \leq \rho,\quad i,j=1,\ldots,n\\
\end{array}\end{equation}
in the matrix $U\in\symm_n$. This problem is closely related to a relaxation for sparse PCA (see \cite{dAsp04a}) and we use it in the next section to test the numerical performance of algorithm~\ref{alg:stoch-grad}. The complexity of the main step in the algorithm (i.e. computing the gradient) is controlled by the sampling rate in Lemma~\ref{lem:col-approx}, which is written
\[
s=\eta^2 \frac{\|A+U^*\|_2^2}{\epsilon^2 }\mathop{\bf NumRank}(A+U^*)^2
\]
where $U^*\in\symm_n$ is the optimal solution to problem (\ref{eq:min-spca}).
\subsection{Matrix factorization and collaborative filtering}
\label{ss:matrix-fact} Matrix factorization methods have been heavily used to solve collaborative filtering problems (e.g. the {\em Netflix} problem) and we refer the reader to \cite{Sreb04}, \cite{Bach07}, \cite{Rech07} or \cite{Cand08} for details. All these references consider a particular instance of problem (\ref{eq:min-ksigv}), written
\begin{equation}\label{eq:min-tracenorm}
\begin{array}{ll}
\mbox{minimize} & \left\|\sum_{j=1}^p y_j A_j +C\right\|_\mathrm{tr}-b^Ty\\
\mbox{subject to} & y \in Q,
\end{array}\end{equation}
in the variable $y\in{\mbox{\bf R}}^p$, where $Q$ is, for example, a low-dimensional norm ball and the matrices $A_j$ have a block format with only a few nonzero coefficients. Here, the trace norm can be understood as a convex lower bound on the rank function (as in \cite{Boyd00}), but it sometimes also has a direct interpretation in terms of learning (see \cite{Sreb04}).
In this particular case, the complexity of the main step in the algorithm (i.e. computing the gradient) is controlled by the sampling rate in Lemma~\ref{lem:col-k-approx}, which can be simplified here to
\[
s=\eta^2 \frac{ \left\|Y^*\right\|_\mathrm{tr}^2}{\epsilon^2}\kappa(Y^*)^2\mathop{\bf Rank}(Y^*)
\]
where $Y^*=\sum_{j=1}^p y^*_j A_j +C$ and $\kappa(Y^*)=\sigma_1(Y^*)/\sigma_r(Y^*)$ with $r=\mathop{\bf Rank}(Y^*)$. The bound can be further refined when $\sigma_r \leq \epsilon$. In practice, the complexity of solving problem (\ref{eq:min-tracenorm}) can often be further reduced using the simple observation that an optimal solution of (\ref{eq:min-ksigv}) will also be optimal in (\ref{eq:min-tracenorm}) whenever $\mathop{\bf Rank}(Y^*_k) < k$, where $Y^*_k$ is the optimal solution to (\ref{eq:min-ksigv}) here. Once again, the sampling rate $s$ has a natural interpretation as the product of a relative precision term, a term reflecting the condition number of the solution and the rank of the optimal solution. It means in particular that problems whose solutions have a lower rank are explicitly easier to solve than problems with more complex solutions.
\subsection{LASSO}
\label{ss:lasso} Consider a particular instance of problem (\ref{eq:min-ksigv}) written
\begin{equation}\label{eq:lasso}
\begin{array}{ll}
\mbox{minimize} & \|y\|_1\\
\mbox{subject to} & \|Ay-b\|_2 \leq \sigma
\end{array}\end{equation}
in the variable $y\in{\mbox{\bf R}}^n$, with $A\in{\mbox{\bf R}}^{m \times n}$, $b\in{\mbox{\bf R}}^m$ and $\sigma>0$. This is a (somewhat trivial) version of problem~(\ref{eq:min-ksigv}), where the matrices are diagonal and $Q$ is an ellipsoid. This problem is directly related to LASSO, i.e. $\ell_1$-penalized regression (see \cite{Tibs96}). In the diagonal case, the low rank matrix approximation produced by algorithm~\ref{alg:low-rank} simply picks $s$ coefficients of $y$ with probability proportional to their magnitude. Here, computing the gradient is trivial, but computational savings come from the fact that the bound on the quadratic variation of the gradient in (\ref{eq:quad-var}) is now equal to the sampling rate, so $M^*=s$ in the complexity estimate (\ref{eq:niters}). Since $s$ is chosen as above, with
\[
s=\eta^2 \frac{ \left\|y^*\right\|_1}{\epsilon^2}\kappa(y^*)^2\mathop{\bf Card}(y^*)
\]
where $\kappa(y^*)=y_{[1]}/y_{[r]}$ with $r=\mathop{\bf Card}(y^*)$, this means that the complexity of solving problem (\ref{eq:lasso}) is (explicitly) proportional to the cardinality of the solution $\mathop{\bf Card}(y^*)$. Of course, this algorithm is not competitive with specialized algorithms for solving (\ref{eq:lasso}), but this subsampling bound provides some theoretical support for the empirical observations made in \cite{Dono06} using homotopy methods.
\subsection{Fastest mixing Markov chain on a graph}
\label{ss:mixing} As in \cite{Boyd04}, suppose we are given a connected graph with vertex set ${\cal V}=\{1,\ldots,n\}$ and edge set ${\cal E} \subseteq {\cal V} \times {\cal V}$, with $(i,j)\in{\cal E} \Leftrightarrow (j,i) \in {\cal E}$, where all vertices have a self-loop. We define a Markov chain on this graph with transition probability matrix $P\in{\mbox{\bf R}}^{n\times n}$, where
\[
P_{ij}=\mathop{\bf Prob}(X_{t+1}=j|X_t=i), \quad i,j=1,\ldots,n
\]
This matrix satisfies $P=P^T$ and $P\mathbf 1=\mathbf 1$, which means that the equilibrium distribution of this Markov chain is uniform and the largest singular value of $P$ is equal to one. The asymptotic rate of convergence of this Markov chain to its equilibrium distribution is controlled by the second singular value of $P$, with smaller values of $\sigma_2(P)$ producing faster convergence. \cite{Boyd04} exploited this property to show that the fastest mixing Markov chain on the graph $({\cal V,E})$ could be computed by minimizing $\sigma_2(P)$ over all possible transition matrices on the graph, i.e. by solving
\begin{equation}\label{eq:mixing}
\begin{array}{ll}
\mbox{minimize} & \sigma_2(P)\\
\mbox{subject to} & P \geq 0, ~P\mathbf 1=\mathbf 1,~P=P^T,\\
& P_{ij}=0,\quad (i,j)\notin {\cal E}\\
\end{array}\end{equation}
in the variable $P\in{\mbox{\bf R}}^{n \times n}$. The optimal mixing rate is often significantly faster than the rate provided by classical chains such as maximum degree or Metropolis-Hastings. Because $\sigma_1(P)=1$ here, this is a particular instance of problem (\ref{eq:min-ksigv}) where $k=2$ and the matrices $A_j$ are sparse. Projections on $Q$ can be handled as in \cite{Boyd04}. Once again, the complexity of the main step in the algorithm (i.e. computing the gradient) is controlled by the sampling rate in Lemma~\ref{lem:col-k-approx}, which simplifies to
\[
s=\eta^2 \frac{1}{\epsilon^2 \sigma_2(P^*)^2}{\mathop{\bf NumRank}(P^*)^2}\mathop{\bf Rank}(P^*)
\]
where $P^*$ is the transition matrix of the fastest mixing Markov chain. Once again, simpler transition matrices mean faster convergence.
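For reference, the following NumPy sketch (ours, with placeholder names) builds the classical maximum degree chain on a given graph and evaluates the quantity $\sigma_2(P)$ minimized in (\ref{eq:mixing}); it can serve as a baseline against which the optimized chain is compared.
\begin{verbatim}
import numpy as np

def max_degree_chain(adjacency):
    # adjacency: symmetric 0/1 matrix of the graph (self-loops ignored here)
    A = np.array(adjacency, dtype=float)
    np.fill_diagonal(A, 0.0)
    dmax = A.sum(axis=1).max()
    P = A / dmax
    np.fill_diagonal(P, 1.0 - A.sum(axis=1) / dmax)
    return P                               # symmetric, doubly stochastic

def second_singular_value(P):
    return np.linalg.svd(P, compute_uv=False)[1]
\end{verbatim}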
\subsection{Numerical experiments}
\label{s:numres}
In this section, we test the quality of the subsampling approximations detailed in Section~\ref{s:random} on various matrices. We also evaluate the performance of the algorithms detailed above on large scale problem instances. Numerical code reproducing these experiments is available from the author's webpage.
\paragraph{Randomized low-rank approximations.}
Here, we first measure the quality of the randomized low-rank matrix approximation on both randomly generated matrices and on covariance matrices formed using gene expression data. Because the spectrum of naive large scale random matrices is very structured, these examples are too simple to appropriately benchmark numerical error in algorithm \ref{alg:low-rank}. Fortunately, as we will see below, generating random symmetric matrices with a given spectral measure is straightforward.
Suppose $X\in\symm_n$ is a matrix with normally distributed coefficients, $X_{ij}\sim\mathcal{N}(0,1)$, $i,j=1,\ldots,n$. If we write its QR decomposition, $X=QR$ with $Q,~R\in {\mbox{\bf R}}^{n \times n}$, then the orthogonal matrix $Q$ is Haar distributed on the orthogonal group $\mathcal{O}_n$ (see \cite{Diac03} for example). This means that to generate a random matrix with given spectrum $\mu\in{\mbox{\bf R}}^n$, we generate a normally distributed matrix $X$, compute its QR decomposition and the matrix $Q\mathop{\bf diag}(\mu)Q^T$ will be uniformly distributed on the set of symmetric matrices with spectrum $\mu$. Because the spectral measure of ``natural'' covariance matrices often follows a power law (Tracy-Widom in the Gaussian case, see \cite{John01} and \cite{El-K07} for a discussion), we sample the spectrum $\mu$ from a beta distribution with various exponents to get realistic random matrices with a broad range of numerical ranks. We also use a covariance matrix formed using the gene expression data set in \cite{Alon99}.
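A minimal NumPy sketch of this sampling procedure reads as follows; the sign correction on the diagonal of $R$ ensures that $Q$ is exactly Haar distributed, and the beta exponents in the usage example are chosen arbitrarily here.
\begin{verbatim}
import numpy as np

def random_symmetric_with_spectrum(mu, rng=np.random.default_rng()):
    # mu: prescribed spectrum (vector of length n)
    n = len(mu)
    X = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(X)
    Q = Q * np.sign(np.diag(R))    # make Q Haar distributed
    return Q @ np.diag(mu) @ Q.T

# usage: spectrum sampled from a beta distribution (arbitrary exponents)
mu = np.random.default_rng(0).beta(1.0, 5.0, size=500)
S = random_symmetric_with_spectrum(mu)
\end{verbatim}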
In Figure \ref{fig:err-vs-rank}, we plot relative error $\epsilon/\|X\|_2$ versus numerical rank $\mathop{\bf NumRank}(X)$ in loglog scale with 20\% subsampling and $n=500$ on random matrices generated as above and on the gene expression covariance from \cite{Alon99}. We notice that, on these experiments, the relative error grows at most linearly with the numerical rank of the matrix, as predicted by Lemma~\ref{lem:col-approx}. We then plot the histogram in semilog scale of relative error $\epsilon/\|X\|_2$ over theoretical bound $\eta\mathop{\bf NumRank}(X)/\sqrt{s}$ for random matrices with $n=500$. In Figure \ref{fig:err-vs-sample}, we plot relative error $\epsilon/\|X\|_2$ versus sampling rate $s$, in loglog scale, for a gene expression covariance with $n=500$. Once again, the error decreases as $1/\sqrt{s}$ as predicted by Lemma~\ref{lem:col-approx}. We also plot the median speedup factor (over ten runs) in computing largest magnitude eigenvalues using ARPACK with and without subsampling on a gene expression covariance matrix with $n=2000$, for various values of the sampling ratio $s/n$. Note that both exact and subsampled eigenvalues are computed using direct MEX calls to ARPACK by \cite{Leho98}, as \texttt{eigs} (MATLAB's interface to ARPACK) carries a massive overhead. In all the experiments above, the confidence level used in computing $\eta$ was set to 99\%.
\begin{figure}[hp]
\begin{center}
\begin{tabular}{cc}
\psfrag{numrank}[t][b]{$\mathop{\bf NumRank}(X)$}
\psfrag{relerr}[b][t]{$\epsilon/\|X\|_2$}
\includegraphics[width=0.49 \textwidth]{./figures/ErrVsNumRank.eps}&
\psfrag{erratio}[t][b]{Error / Theoretical error}
\psfrag{occur}[b][t]{\# occurrences}
\includegraphics[width=0.45\textwidth]{./figures/HistRatioErr.eps}
\end{tabular}
\caption{\textit{Left:} Loglog plot of relative error $\epsilon/\|X\|_2$ versus numerical rank $\mathop{\bf NumRank}(X)$ with 20\% subsampling and $n=500$ on random matrices (blue dots) and gene expression covariance (red square). The dashed line has slope one in loglog scale. \textit{Right:} Histogram plot in semilog scale of relative error $\epsilon/\|X\|_2$ over theoretical bound $\eta\mathop{\bf NumRank}(X)/\sqrt{s}$ for random matrices with $n=500$.
\label{fig:err-vs-rank}}
\end{center}
\end{figure}
\begin{figure}[hp]
\begin{center}
\begin{tabular}{cc}
\psfrag{srate}[t][b]{Sampling rate $s$}
\psfrag{sqrelerr}[b][t]{$\epsilon/\|X\|_2$}
\includegraphics[width=0.49 \textwidth]{./figures/ErrVsSamplingColon.eps}&
\psfrag{sratio}[t][b]{Sampling ratio $s/n$}
\psfrag{speedup}[b][t]{Speedup factor}
\includegraphics[width=0.49\textwidth]{./figures/CpuVsSampling.eps}
\end{tabular}
\caption{\textit{Left:} Loglog plot of relative error $\epsilon/\|X\|_2$ versus sampling rate $s$ for a gene expression covariance with $n=500$. The dashed line has slope -1/2 in loglog scale. \textit{Right:} Plot of median speedup factor in computing largest magnitude eigenvalue, using ARPACK with and without subsampling on a gene expression covariance matrix with $n=2000$, for various values of the sampling ratio $s/n$.
\label{fig:err-vs-sample}}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\psfrag{cpu}[t][b]{CPU time (secs.)}
\psfrag{oval}[b][t]{Objective value}
\includegraphics[width=0.49 \textwidth]{./figures/SampleMovalsVsCpu100.eps}&
\psfrag{cpu}[t][b]{CPU time (secs.)}
\psfrag{gap}[b][t]{Surrogate gap}
\includegraphics[width=0.49\textwidth]{./figures/SampleGapsVsCpu100.eps}
\end{tabular}
\caption{\textit{Left:} Objective value versus CPU for a sample matrix factorization problem in dimension 100, using a deterministic gradient (squares) or a subsampled gradient with subsampling rate set at 20\% (circles). \textit{Right:} Surrogate duality gap versus CPU time on the same example.
\label{fig:sample-trace}}
\end{center}
\end{figure}
\paragraph{Stochastic approximation with subsampling.} In Figure \ref{fig:sample-trace}, we generate a sample ratings matrix $X=VV^T$ for the collaborative filtering problem in \S\ref{ss:matrix-fact}, where $V$ is a discrete feature matrix $V\in\{0,1,2\}^{100 \times 3}$. We ``observe'' only 30\% of the ratings and solve problem (\ref{eq:min-ksigv}) with $k=4$ to approximately reconstruct the full ratings matrix. We plot objective value versus CPU time in seconds for this sample matrix factorization problem, using a stochastic approximation algorithm with a deterministic gradient or the subsampled gradient algorithm \ref{alg:stoch-grad} with subsampling ratio~$s_1/n$ set at 20\%. We also plot surrogate duality gap versus CPU time on the same example. We notice that while the subsampled algorithm converges much faster than the deterministic one, the quality of the surrogate dual points and duality gap produced using subsampled gradients as in \S\ref{ss:gap} is worse than in the deterministic case.
In Table \ref{tab:box-spectral}, using the same 20\% sampling rate, we compare CPU time versus problem dimension $n$ for subsampled and deterministic algorithms when solving the following instance of problem (\ref{eq:min-maxeig})
\[\begin{array}{ll}
\mbox{minimize} & \|C+X\|_2\\
\mbox{subject to} & \|X\|_\infty \leq \rho
\end{array}\]
in the variable $X\in\symm_n$ where $C$ is a covariance matrix constructed using a subset of size $n$ of the variables in \cite{Alon99}, for various values of $n$. Finally, we generate and solve sample collaborative filtering problems as in (\ref{eq:min-ksigv}) for ratings matrices of various dimensions $n$. We report median CPU time over ten sample problems in Table \ref{tab:collab-filter}. Here, subsampling speeds up the algorithm by an order of magnitude; however, the stochastic approximation algorithm is still not competitive with (non-convex) local minimization techniques over low rank matrices.
\begin{table}[H]
\begin{center}
\begin{tabular}{r|c|c|c}
$n$ & Deterministic & Subsampling & Speedup factor\\
\hline
500 & 5 & 5 & 0.92 \\
750 & 19 & 13 & 1.40\\
1000 & 32 & 24 & 1.31 \\
1500 & 107 & 58 & 1.84 \\
2000 & 281 & 120 & 2.34
\end{tabular}
\caption{CPU time (in seconds) versus problem dimension $n$ for deterministic and subsampled stochastic approximation algorithms on spectral norm minimization problems. \label{tab:box-spectral}}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{r|c|c|c}
$n$ & Deterministic & Subsampling & Speedup factor\\
\hline
100 & 154 & 23 & 6.67 \\
200 & 766 & 63 & 12.2 \\
500 & 4290 & 338 & 12.7
\end{tabular}
\caption{Median CPU time (in seconds) versus problem dimension $n$ for deterministic and subsampled stochastic approximation algorithms on collaborative filtering problems. \label{tab:collab-filter}}
\end{center}
\end{table}
\section{Appendix}
The complexity results detailed above heavily rely on the fact that extracting {\em one} leading eigenvector of a symmetric matrix $X\in\symm_n$ can be done by computing a few matrix vector products. While this simple fact is easy to prove using the power method when the eigenvalues of $X$ are well separated, the problem becomes significantly more delicate when the spectrum of $X$ is clustered. The section that follows briefly summarizes how modern numerical methods solve this problem in practice.
\subsection{Computing one leading eigenvector of a symmetric matrix}
We start by recalling how packages such as LAPACK \cite{Ande99} form a full eigenvalue (or Schur) decomposition of a symmetric matrix $X\in\symm_n$. The algorithm is strikingly stable and, despite its $O(n^3)$ complexity, often competitive with more advanced techniques when the matrix $X$ is small. We then discuss the problem of approximating one leading eigenpair of $X$ using Krylov subspace methods with complexity growing as $O(n^2\log n)$ with the dimension (or less when the matrix is structured).
\paragraph{Full eigenvalue decomposition.}
Full eigenvalue decompositions are computed by first reducing the matrix $X$ to symmetric tridiagonal form using Householder transformations, then diagonalizing the tridiagonal factor using iterative techniques such as the QR or divide and conquer methods for example (see \cite[Chap. 3]{Stew01} for an overview). The classical QR algorithm (see \cite[\S8.3]{Golu90}) implicitly relied on power iterations to compute the eigenvalues and eigenvectors of a symmetric tridiagonal matrix with complexity $O(n^3)$; however, more recent methods such as the MRRR algorithm by \cite{Dhil03} solve this problem with complexity $O(n^2)$. Starting with the third version of LAPACK, the MRRR method is part of the default routine for diagonalizing a symmetric matrix and is implemented in the \texttt{STEGR} driver (see \cite{Dhil06}).
Overall, the complexity of forming a {\em full} Schur decomposition of a symmetric matrix $X\in\symm_n$ is then $4n^3/3$ flops for the Householder tridiagonalization, followed by $O(n^2)$ flops for the Schur decomposition of the tridiagonal matrix using the MRRR algorithm.
\paragraph{Computing one leading eigenpair.} We now give a brief overview of the complexity of computing leading eigenpairs using Krylov subspace methods and we refer the reader to \cite[\S4.3]{Stew01}, \cite[\S8.3, \S9.1.1]{Golu90} or \cite{Saad92} for a more complete discussion. Given a vector $u\in{\mbox{\bf R}}^n$, we form the following {\em Krylov} sequence
\[
\left\{u,Xu,X^2u,\ldots,X^ku\right\}
\]
by computing $k$ matrix vector products. If we call ${\cal K}_k(X,u)$ the subspace generated by these vectors and write $X=\sum_{i=1}^n \lambda_i x_ix_i^T$ a spectral decomposition of $X$, assuming, for now, that
\[
\lambda_1 > \lambda_2 \geq \ldots \geq \lambda_n,
\]
one can show using Chebyshev polynomials (see e.g. \cite[\S4.3.2]{Stew01} for details) that
\[
\tan \angle\left(x_1,{\cal K}_k(X,u)\right) \lesssim \frac{\tan\angle(x_1,u)}{\left(1+2\sqrt{\eta+\eta^2}\right)^{k-1}}
\quad \mbox{where} \quad \eta=\frac{\lambda_1-\lambda_2}{\lambda_2-\lambda_n},
\]
in other words, Krylov subspaces contain excellent approximations of leading eigenpairs of $X$.
This result is exploited by the Lanczos procedure to extract approximate eigenpairs of $X$ called {\em Ritz} pairs (see \cite[Chap. 9]{Golu90} or \cite[\S5.1.2]{Stew01} for a complete discussion). In practice, the matrix formed by the Krylov sequence is very ill-conditioned (as $X^ku$ gets increasingly close to the leading eigenvector), so the Lanczos algorithm simultaneously updates an orthonormal basis for ${\cal K}_k(X,u)$ and a {\em partial} tridiagonalization of $X$. The Lanczos procedure is described in Algorithm~\ref{alg:lanczos} and requires $k$ matrix vector products and an additional $4nk$ flops. Note that the {\em only} way in which the data in $X$ is accessed is through the matrix vector products $Xu_j$.
\begin{algorithm}[h!]
\caption{Lanczos decomposition.}
\label{alg:lanczos}
\begin{algorithmic}[1]
\REQUIRE Matrices $X\in\symm_n$ and initial vector $u_1\in{\mbox{\bf R}}^n$.
\STATE Set $u_0=0$ and $\beta_0=0$.
\FOR{$j=1$ to $k$}
\STATE Compute $v=Xu_j$.
\STATE Set $\alpha_j=u_j^Tv$.
\STATE Update $v=v-\alpha_ju_j-\beta_{j-1}u_{j-1}$.
\STATE Set $\beta_j=\|v\|_2$.
\STATE Set $u_{j+1}=v/\beta_j$.
\ENDFOR
\ENSURE A Lanczos decomposition
\[
XU_k=U_kT_k+\beta_ku_{k+1}e^T_{k},
\]
where $U_k\in{\mbox{\bf R}}^{n\times k}$ is orthogonal and $T_k\in\symm_k$ is symmetric tridiagonal.
\end{algorithmic}
\end{algorithm}
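For concreteness, a bare-bones NumPy version of Algorithm~\ref{alg:lanczos} (ours, with no reorthogonalization and assuming no breakdown, i.e.~$\beta_j>0$ throughout) can be written as follows; the eigenpairs of the tridiagonal matrix $T_k$ built from the coefficients $\alpha,\beta$ then give the Ritz pairs discussed below.
\begin{verbatim}
import numpy as np

def lanczos(X, u1, k):
    n = len(u1)
    U = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    U[:, 0] = u1 / np.linalg.norm(u1)
    u_prev, beta_prev = np.zeros(n), 0.0
    for j in range(k):
        v = X @ U[:, j]
        alpha[j] = U[:, j] @ v
        v = v - alpha[j] * U[:, j] - beta_prev * u_prev
        beta[j] = np.linalg.norm(v)
        U[:, j + 1] = v / beta[j]
        u_prev, beta_prev = U[:, j], beta[j]
    # T_k is tridiagonal with diagonal alpha and off-diagonal beta[:-1]
    return U[:, :k], alpha, beta
\end{verbatim}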
In theory, one could then diagonalize the matrix $T_k$ (which costs $O(k^2)$ as we have seen above) to produce Ritz vectors. In practice, key numerical difficulties often arise. First, finite precision arithmetic causes a significant loss of orthogonality in $U_k$. This is remedied by various reorthogonalization strategies (cf. \cite[\S5.3.1]{Stew01}). A more serious problem is clustered or multiple eigenvalues in the spectrum periphery. In fact, it is easy to see that Krylov subspace methods cannot isolate multiple eigenvalues. Assume, for example, that the leading eigenvalue has multiplicity two; we then have
\[
X^ku=((x_1^Tu)x_1 + (x_2^Tu) x_2) \lambda_1^k + (x_3^Tu) x_3 \lambda_3^k +\ldots + (x_n^Tu) x_n \lambda_n^k
\]
and everything happens as if the eigenvalue $\lambda_1$ was simple and the matrix $X$ had a larger nullspace. This is not a problem in our case, since we need only {\em one} eigenvector in the leading invariant subspace, not the entire eigenspace.
Clustered eigenvalues (i.e. a small gap between the leading eigenvalue and the next one, not counting multiplicities) are much more problematic. The convergence of Ritz vectors cannot be established by the classical Chebyshev bounds described above, and various references provide a more refined analysis of this scenario (see \cite{Parl82}, \cite{Van-87}, \cite{Kucz92} among others). Successful termination of a {\em deterministic} Lanczos method can never be guaranteed anyway, since in the extreme case where the starting vector is orthogonal to the leading eigenspace, the Krylov subspace contains no information about leading eigenpairs. In practice, Lanczos solvers use {\em random} initial points. In particular, \cite[Th.4.2]{Kucz92} show that, for any matrix $X\in\symm_n$ (including matrices with clustered spectrum), starting the algorithm at a random $u_1$ picked uniformly over the sphere means the Lanczos decomposition will produce a leading Ritz pair with {\em relative} precision $\epsilon$ in
\[
k^\mathrm{Lan}\leq \frac{\log(n/\delta^2)}{4\sqrt{\epsilon}}
\]
iterations, with probability at least $1-\delta$. This is of course a highly conservative bound and in particular, the worst case matrices used to prove it vary with $k$.
This means that computing one leading eigenpair of the matrix $X$ requires computing at most $k^\mathrm{Lan}$ matrix vector products (we can always restart the code in case of failure) plus $4nk^\mathrm{Lan}$ flops. When the matrix is dense, each matrix vector product costs $n^2$ flops, hence the total cost of computing one leading eigenpair of $X$ is
\[
O\left(\frac{n^2\log(n/\delta^2)}{4\sqrt{\epsilon}}\right)
\]
flops. When the matrix is sparse, the cost of each matrix vector product is $O(s)$ instead of $O(n^2)$, where $s$ is the number of nonzero coefficients in $X$. The same holds when the matrix $X$ has rank $r<n$ and an explicit factorization is known (which is the case in the algorithms detailed in the previous section): each matrix vector product then costs $O(nr)$, i.e.~the cost of two $n$ by $r$ matrix vector products, and the complexity of the Lanczos procedure decreases accordingly.
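The factored case amounts to the following short sketch (ours), assuming the factorization $X=FF^T$ with $F\in{\mbox{\bf R}}^{n\times r}$ is stored explicitly.
\begin{verbatim}
def matvec_factored(F, u):
    # X u = F (F^T u): two n-by-r matrix vector products, O(nr) flops
    return F @ (F.T @ u)
\end{verbatim}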
The numerical package ARPACK by \cite{Leho98} implements the Lanczos procedure with a reverse communication interface allowing the user to efficiently compute the matrix vector product $Xu_j$. However, it uses the implicitly shifted QR method instead of the more efficient MRRR algorithm to compute the Ritz pairs of the matrix $T_k\in\symm_k$.
\subsection{Other sampling techniques}
For completeness, we recall below another subsampling procedure from \cite{Achl07}. More recent ``volume sampling'' techniques produce improved error bounds (some with multiplicative error bounds), but the corresponding optimal sampling probabilities are much harder to compute; we refer the reader to \cite{Vemp09} for more details. The key idea behind this result is that, as the matrix dimension $n$ grows and given a fixed, scale invariant precision target $\|X\|_F/\epsilon$, the norm $\|X\|_\infty$ of individual coefficients in $X$ typically becomes negligible and we can randomly discard the majority of them while keeping important spectral features of $X$ mostly intact.
\begin{lemma} \label{lem:rand-achl}
Given $X\in\symm_n$ and $\epsilon>0$, we define a subsampled matrix $S$ whose coefficients are independently distributed as:
\begin{equation}\label{eq:subsamp-achl}
S_{ij}=\left\{\begin{array}{cl}
X_{ij}/p & \mbox{with probability $p$,}\\
0 & \mbox{otherwise.}\\
\end{array}\right.
\end{equation}
when $i\geq j$, and $S_{ij}=S_{ji}$ otherwise. If $1 \geq p\geq (8\log n)^4/n$, then
\[
\|X-S\|_2 \leq 4 \|X\|_\infty \sqrt{n/p}
\]
with probability at least $1-\exp(-19(\log n)^4)$.
\end{lemma}
\begin{proof}
See \cite[Th.~1.4]{Achl07}.
\end{proof}
At first sight here, bounding the approximation error means letting the probability $p$ grow relatively fast as $n$ tends to infinity. However, because $\|X\|_\infty/\epsilon$ is typically much smaller than $\|X\|_F/\epsilon$, this subsampling ratio $p$ can often be controlled. Adaptive subsampling, i.e. letting $p$ vary with the magnitude of the coefficients in $X$, can further improve these results (see \cite[\S4]{Achl07} for details). The average number of nonzero coefficients in the subsampled matrix can be bounded using the structure of $X$. Note that the constants in this result are all very large (in particular, $1 \geq p\geq (8\log n)^4/n$ implies $n\geq 10^9$) so despite its good empirical performance in low dimensions, the result presented above has to be understood in an asymptotic sense.
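The sampling scheme in (\ref{eq:subsamp-achl}) itself is straightforward to implement; a NumPy sketch (ours, for illustration only) is given below.
\begin{verbatim}
import numpy as np

def subsample_coefficients(X, p, rng=np.random.default_rng()):
    # keep each coefficient with probability p, rescale by 1/p, symmetrize
    n = X.shape[0]
    mask = rng.random((n, n)) < p
    S = np.where(mask, X / p, 0.0)
    S = np.tril(S) + np.tril(S, -1).T    # enforce S = S^T
    return S
\end{verbatim}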
\iffalse
\begin{lemma} \label{lem:rand-eig}
Given $X\in\symm_n$ and $\epsilon>0$, we define a subsampled matrix $\tilde X$ whose coefficients are independently distributed as:
\begin{equation}\label{eq:subsamp}
\tilde X_{ij}=\left\{\begin{array}{cl}
X_{ij}/p & \mbox{with probability $p_\epsilon$,}\\
0 & \mbox{otherwise.}\\
\end{array}\right.
\end{equation}
having set:
\begin{equation}\label{eq:choice-p}
p_\epsilon=\min\left\{1,\frac{16 n \|X\|_\infty^2}{\epsilon^2}\right\}
\end{equation}
assuming that $p_\epsilon\geq (8\log n)^4/n$, then ${\lambda_{\rm max}}(X) \leq \textstyle\mathop{\bf E{}}[{\lambda_{\rm max}}(\tilde X)]$ and we have
\[
\|X-\tilde X\|_2 \leq \epsilon \quad\mbox{and} \quad{\lambda_{\rm max}}(\tilde X) - {\lambda_{\rm max}}(X) \leq \epsilon,
\]
with probability at least $1-\exp(-19(\log n)^4)$. Furthermore, the average number of nonzero coefficients in $\tilde X$ is bounded above by
\[
\frac{16n \|X\|_F^2}{\phi \epsilon^2} ~\mathrm{mean}\left(\left\{\frac{\|X\|_\infty^2}{X_{[i]}^2}\right\}_{i=1,\ldots,\lceil \phi n^2 \rceil}\right)
\]
for any $\phi \in [0,1]$.
\end{lemma}
\begin{proof}
By construction, we have $\textstyle\mathop{\bf E{}}[\tilde X]=X$, so we get ${\lambda_{\rm max}}(X) \leq \textstyle\mathop{\bf E{}}[{\lambda_{\rm max}}(\tilde X)]$ by convexity of ${\lambda_{\rm max}}(X)$. Following \cite[Th. 1.4]{Achl07}, if $n$ is such that $p_\epsilon\geq (8\log n)^4/n$, then, with probability at least
$1-\exp(-19(\log n)^4)$, we have
\[
\|X-\tilde X\|_2 \leq 4 \|X\|_\infty \sqrt{n/p_\epsilon}\leq\epsilon,
\]
given our choice of $p_\epsilon$ in (\ref{eq:choice-p}). Finally, the average number of nonzero coefficients in $\tilde X$ is given by $p_\epsilon n^2$, where $p_\epsilon$ can be bounded as follows:
\begin{eqnarray*}
p_\epsilon=\frac{16 n \|X\|_\infty^2}{\epsilon^2}&=& \frac{16 \|X\|_F^2}{\epsilon^2} \cdot\frac{n\|X\|_\infty^2}{ \|X\|_F^2}\\
&=& \frac{16 \|X\|_F^2}{n \phi \epsilon^2} \cdot \frac{\phi n^2}{\sum_{ij} X_{ij}^2/\|X\|_\infty^2}\\
&\leq& \frac{16 \|X\|_F^2}{n \phi \epsilon^2} \cdot \frac{\lceil \phi n^2 \rceil}{\sum_{i=1}^{\lceil \phi n^2 \rceil} X_{[i]}^2/\|X\|_\infty^2}\\
&\leq& \frac{16 \|X\|_F^2}{n \phi \epsilon^2} ~\mathrm{mean}\left(\left\{\frac{\|X\|_\infty^2}{X_{[i]}^2}\right\}_{i=1,\ldots,\lceil \phi n^2 \rceil}\right)
\end{eqnarray*}
for any constant $\phi\in[0,1]$, using an inequality between harmonic and arithmetic means.
\end{proof}
Roughly speaking, this means that computing leading eigenvectors of the subsampled matrix is faster than solving the original eigenvalue problem when
\[
n \geq \frac{16 \|X\|_F^2}{\phi \epsilon^2} ~\mathrm{mean}\left(\left\{\frac{\|X\|_\infty^2}{X_{[i]}^2}\right\}_{i=1,\ldots,\lceil \phi n^2 \rceil}\right)
\]
for some $\phi\in[0,1]$, where ${\|X\|_F}/{\epsilon}$ can be understood as a scale invariant precision target and the second term is a scale invariant uniformity measure on the $\lceil \alpha n^2 \rceil$ leading matrix coefficients $X_{[i]}$. In practice however, the overhead associated with sparse matrix vector products means that actual computational savings appear at somewhat higher dimensions.
\fi
\section*{Acknowledgements}
The author would like to acknowledge support from NSF DMS-0625352, SES-0835550 (CDI) and CAREER awards, a Peek junior faculty fellowship and a Howard B. Wentz Jr. award.
\bibliographystyle{alpha}
|
1,314,259,995,609 | arxiv |
\section{Comparison with Related Works} \label{sec:comparison}
In recent years, a variety of financial agreements and processes have been implemented using smart contracts. The first such contract was BitHalo \cite{bithalo}. It replaced middlemen in an escrow, and allowed distrusting parties to buy and sell goods over the internet with security and peace of mind. Unfortunately, it was commonly used in darknet markets such as the Silk Road \cite{silkroad}. Another notable example is the concept of decentralized autonomous organizations \cite{vigna2016age}. These are organizations that are entirely governed by rules written as smart contracts.
After the Equifax breach \cite{ftcwhattodo}, which led to a leak of sensitive data belonging to more than 140 million people, several authors suggested that credit reporting can potentially benefit from decentralization and Blockchain techniques \cite{floyd,huffpost}. However, no concrete approach was introduced to achieve this goal. We filled this gap in this paper by introducing a simple smart-contract-based approach for credit reporting.
At the same time that we were developing our approach, a startup, called Bloom, was created to perform credit scoring on the Blockchain \cite{bloom}. The full details of their protocol are not published and their code is under development. We are not aware of the exact extent of similarity between our approaches. However, based on the Bloom whitepaper \cite{bloom}, there seem to be several fundamental differences.
Our approach provides the exact same financial mechanisms as real-world credit reporting and our goal is to simply remove the CRAs from the process and migrate to the Blockchain, while keeping everything else intact. In contrast, Bloom modifies the financial principles of credit scoring with the goal of making credit accessible to a wider population. It defines its own credit score and argues for its adoption. This score depends not only on the credit history, but also on heuristics such as the graph of acquaintances of a borrower and whether they are willing to vouch for her creditworthiness.
Another main difference is the role of laws and regulations. We assume that all institutions are bound by regulations such as the FCRA and hence the fact that all their actions are provably recorded on the Blockchain is a guarantee that they will not provide false data, and even if they do, they will be subject to legal action and the data can be corrected. In contrast, in line with its goal of making credit more accessible, Bloom opts for a method whose goal is to allow even anonymous lenders and borrowers to take part. This is considerably different from the current status of credit reports where the banks and financial institutions only take reports from other comparable institutions into account.
Finally, Bloom is susceptible to Sybil attacks, where an attacker fakes many identities and keeps giving loans to herself, therefore increasing her creditworthiness. According to its current whitepaper, the solution for avoiding this attack is having several ``trusted participants'' who ``will be manually vetted by the Bloom team''. This gives undue advantage to the Bloom team and is effectively equivalent to having Bloom as a third-party instead of the CRAs. In contrast, a Sybil attack does not entail any benefit in our approach.
\section{Credit Accounts Protocol} \label{protocol:accounts}
We now turn to the core of our approach, which is a protocol for storing credit accounts' data. We introduce a smart contract for modeling credit accounts. Each account is realized by one instance of this contract. This is in contrast to Section~\ref{protocol:identity} where all identities were stored in a single instance of the identity management contract.
As mentioned earlier, we rely on asymmetric (public-key) cryptography. To achieve the desired level of security, we will introduce several new keys in this section. Therefore, to avoid confusion, we use the term ``true identity'' to refer to the key pair which is publicly known to belong to an institution. Similarly, an individual's true identity is the key pair with which she registers in the identity management protocol and for which she obtains certificates. Also, we use upper-case $K$ for public keys and lower-case $k$ for private keys.
We store a singly linked list of each individual's credit accounts, with each account providing a pointer to the next. Note that in Ethereum each deployed instance of a smart contract is uniquely addressable and therefore these pointers are well-defined. The identity management contract provides a pointer to the first credit account. Moreover, these pointers are encrypted, as explained below, and hence they can only be traversed if the individual owner allows it.
\begin{figure}[H]
\resizebox{\linewidth}{!}
{
\includegraphics{linked-list-accounts.pdf}
}
\caption{Each credit account is stored in its own instance of the credit account contract. The arrows denote encrypted pointers.}
\end{figure}
We now proceed to define the data stored in a credit account contract and the process for its creation, management and use as part of a credit report.
\subsubsection*{Key Generation}Let the institution's true identity be $(K_{i}, k_{i})$ and the customer's true identity be $(K_c, k_c)$. When, after verifying a customer's identity and credit record, an institution agrees to extend credit to her, it asks her to create a new key pair $(K'_c, k'_c)$, called the customer's account-specific keys. The institution in turn creates its own account-specific keys $(K'_i, k'_i)$. Then, each side provides the other with its account-specific public key. Finally, they create and fully exchange two other pairs of keys $(K'_{s,1}, k'_{s,1}), (K'_{s,2}, k'_{s,2})$, which we call account-specific shared keys. Hence, the keys are distributed as in Figure~\ref{fig:key-distro}.
\begin{figure}[H]
\begin{center}
\resizebox{!}{2.9cm}
{
\includegraphics{keys-for-each-contract.pdf}
}
\end{center}
\caption{Key distribution prior to deployment of a Credit Account Contract}
\label{fig:key-distro}
\end{figure}
\subsubsection*{Contract Creation} At this point, the institution creates a new instance of the credit account contract and publishes it on the Blockchain. Figure~\ref{fig:ct} shows the data stored in this contract and the conditions enforced by the contract for changing this data. The contract stores public keys of the customer and the institution, i.e.~$K'_c$ and $K'_i$. These are set at the beginning and are not changeable afterwards. Note that the contract does not store true identities, but uses contract-specific public keys instead. All function calls are also performed using contract-specific keys. The reason behind this is that anyone has access to the data stored on the Blockchain and one must not be able to read the true identities using publicly available data. The contract also has an expiration time which can be changed only if both parties agree on the new value.
\begin{figure}[H]
\begin{center}
\resizebox{\linewidth}{!}
{
\includegraphics{account-contract.pdf}
}
\end{center}
\caption{Data fields and constraints in a Credit Account Contract}
\label{fig:ct}
\end{figure}
\subsubsection*{Commitment} After the contract is deployed, both parties must commit to it by verifiably connecting it to their true identity. The institution does this by signing the contract address, $K_i$ and $K_c$ using its true identity and adding the signature to the contract. This signature cannot be changed after it is added. At this point, the customer can check the signature. If the check passes, she adds the contract to her record by letting her last account's \field{Next Account} field point to this contract, i.e.~by storing its address encrypted using $K'_{s, 2}$. Note that the \field{Next Account} field can be changed only once and hence the contract cannot be removed from the customer's report once added. The institution can now check that the contract is added to the linked list of the customer's report using $k'_{s, 2}$.
\subsubsection*{Credit Report Data} Finally, the institution can change the contents of the field \field{Data} as long as the expiration time has not passed. It can store all the relevant data about this account that should appear in a credit report. This data is always encrypted using $K'_{s, 1}$ and is hence accessible to both the institution and the customer, who know $k'_{s, 1}$, but not to anyone else.
Note that we are assuming this data fits in a single transaction of the underlying Blockchain and can hence be changed by a single function call. This is because, to the best of our knowledge, most credit account reports contain only a few lines of data. However, this assumption does not affect the generality of our approach. If the data happens to be too big, one can store it in an external service, such as IPFS, which is a peer-to-peer network for file storage and transfer that supports immutable version control using a structure very similar to the Blockchain \cite{ipfs}. Then one can fill the \field{Data} field with an address/identifier of the original data in IPFS.
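To make the constraints above concrete, the following Python mock-up (an illustration only, not the Solidity code of Section~\ref{sec:implementation}; signatures, encryption and time are abstracted away, and the access rules are our reading of the constraints described above and in Figure~\ref{fig:ct}) reproduces the state machine enforced by the contract.
\begin{verbatim}
class CreditAccountContract:
    def __init__(self, K_customer, K_institution, expiration):
        self.K_customer = K_customer        # K'_c, fixed at creation
        self.K_institution = K_institution  # K'_i, fixed at creation
        self.expiration = expiration
        self.signature = None               # institution's commitment, set once
        self.next_account = None            # encrypted pointer, set once
        self.data = None                    # data encrypted under K'_{s,1}

    def commit(self, caller, signature):
        assert caller == self.K_institution and self.signature is None
        self.signature = signature

    def set_next_account(self, caller, encrypted_address):
        assert caller == self.K_customer and self.next_account is None
        self.next_account = encrypted_address

    def change_expiration(self, new_expiration, customer_ok, institution_ok):
        assert customer_ok and institution_ok   # both parties must agree
        self.expiration = new_expiration

    def set_data(self, caller, encrypted_data, now):
        assert caller == self.K_institution and now < self.expiration
        self.data = encrypted_data
\end{verbatim}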
\subsubsection*{Reading a Credit Report} When another institution wants to read customer data, it would need the values of $k'_{s, 2}$ for each of the contracts to be able to decrypt the links and traverse the linked list.\footnote[1]{Alternatively, the customer can provide the decrypted contents of the \field{Next Account} fields and the public keys $K'_{s,2}$. The institution can then verify the correctness of this data by encrypting them using these keys and checking that they lead to the same encrypted values that are saved in the contracts.} These can only be provided by the customer. Hence, one cannot find out which accounts belong to an individual, unless that individual allows access. When access is granted, the institution can easily find out when it reaches the end of a report given that the \field{Next Account} field is only empty at the end of the linked list. The institution can also see the beginning time of a contract by looking up the number of the Blockchain block where the contract was first created. Expiration times of the credit accounts are publicly visible on the Blockchain, but not their data. Should the customer decide to allow the institution to read a contract, she can provide them with the contract-specific $k'_{s, 1}$ to access the \field{Data} field\footnote[7]{As in the previous case, an alternative is to provide the decrypted contents of \field{Data} and the public key $K'_{s,1}$.} and with the lender's true identity, $K_i$, to verify the signature.
Note that an individual can add as many credit accounts as she wishes to her linked list, acting as both the institution and the customer. This can be used to initialize the linked list with a first account when creating an identity, and also to resist any attempt by an institution to find out the true number of accounts belonging to an individual.
\section{Limitations} \label{sec:ext}
In this section we discuss some of the limitations of our approach and ideas for addressing them.
\noindent\textbf{Inherited Limitations.} The goal of our approach is to remove the CRAs from the credit reporting process, allowing the same financial mechanisms that are currently established to run without relying on a middleman or trusted third party. This means that our approach essentially inherits any limitation of the traditional centralized credit reporting that is not due to the CRAs. In particular, if an individual has two (or more) provable identities in the real world, e.g.~two distinct names and national identity numbers, then she can sign up in our identity management contract twice and obtain certificates for both identities. Note that this attack is not dependent on the existence or lack of CRAs and is also possible under the current credit reporting systems that have CRAs. Moreover, migrating to the blockchain cannot solve this problem given that the smart contracts can only access the data saved on the blockchain and have no way of realizing that the same person has fake or multiple identities in the real world.
\noindent\textbf{Cryptographic Primitives.} The security of our approach is dependent on the security of the cryptographic primitives that are used. Users must keep in mind that any data that is saved on the blockchain is permanent and cannot be deleted. In several protocols explained above, data encryption is used in order to hide the data and restrict public access to it. If/when the underlying cryptographic primitives are broken, this data can be recovered. Therefore, it is advisable to refrain from saving the actual credit data in smart contracts or IPFS, but instead rely on saving its hash. This way the data would be provable, but cannot be obtained even if the cipher breaks in the future. The downside to this method is that the individual has to keep safe copies of all the actual credit report data and can only use our approach for proving its correctness.
\noindent\textbf{Legal Problems.} Our approach does not intend to address legal aspects of credit reporting. We provide a solution that works under minimal legal assumptions, i.e.~prohibition of fraud. Wrong information provided by an institution can be traced back to its originator, who in turn has the ability to fix it. Problems arising due to inconsistencies in laws and regulations, especially in a multi-jurisdiction environment, are beyond the scope of this paper.
\section{Conclusion} \label{sec:future}
In this paper, we presented the first solution and a basic prototype for performing secure credit reporting with no third-parties. In Section~\ref{sec:intro} we identified five problems with current systems of credit reporting that can be avoided by migrating to the blockchain. We review how our approach solves these problems:
\begin{itemize}
\item \emph{Long Update Intervals.} Each update to the credit data is done via a single function call in one of the smart contracts. Hence, it takes a few seconds to be added to the Ethereum blockchain, and after a few minutes one can be sure that it will not be reverted.
\item \emph{Identification Problems.} The certification procedure presented in Section~\ref{protocol:identity} ensures that only valid real-world identities will be trusted by the institutions and that each real-world identity can only be represented by a single public key $K_c$.
\item \emph{Errors and Inconsistency.} Inconsistency can only be caused by forks in the blockchain and disappears as soon as the fork is resolved. In our contracts, erroneous data added by an institution can always be fixed by the same institution\footnote{Note that while the contents of the blockchain are immutable, the values of contract variables are not. The blockchain saves the sequence of changes to these values. Hence, once an error is fixed, its history remains in the blockchain.}. Moreover, the source of such data can be provably ascertained. Hence, the institutions are legally bound to fix it.
\item \emph{Endemism.} Using the Ethereum blockchain, the contracts and their data can be used in the same manner all over the world.
\item \emph{Data Breaches.} There is no central authority controlling all the credit report data. Moreover, each credit account is secured by its own dedicated keys. Hence, a large-scale breach is impossible unless the underlying cryptographic ciphers break.
\end{itemize}
There are several directions for future research and development. A first step is creating a more user-friendly interface, especially one that abstracts away the underlying cryptography. Another interesting problem is to run real-world large scale experiments to see if creditors and borrowers feel comfortable with this new approach to credit reporting. On the theoretical side, an interesting problem would be to incorporate multi-party computations in a way that a creditor does not need to read the credit report of an individual directly, but can instead rely on a process that, using data provided by the individual and the creditor and a secure connection to the Blockchain, decides whether the individual satisfies specific credit requirements set by the creditor and if so, produces an unforgeable certificate of her creditworthiness.
\section*{Acknowledgments}
We are thankful to the reviewers for raising points that significantly improved this article. The research was partially supported by Vienna Science and
Technology Fund (WWTF) Project ICT15-003, Austrian Science
Fund (FWF) NFN Grant No S11407-N23 (RiSE/SHiNE) and ERC
Starting grant (279307: Graph Games). The first author is supported by an IBM PhD Fellowship.
\newpage
\section{Identity Management Protocol} \label{protocol:identity} \label{protocol:name}
One of the main issues in credit reporting, as in many other distributed applications, is identity management. There are two important aspects to this issue: first, one should not be able to masquerade as another person, i.e.~commit identity theft, and second, one should not be able to use more than one identity. Note that in a cryptocurrency setting individuals having multiple identities do not pose a problem, given that this does not entail any benefit. However, in our setting, one person having multiple disjoint credit reports is certainly not acceptable.
A simple solution is to create one or several central authorities that check real-world identities and issue certificates of their validity. This is the solution used, for example, for checking valid HTTPS signatures \cite{durumeric2013analysis}. It is also commonly used for managing the identities of banks, institutions and public authorities. In this paper, we assume that such entities' identities can be verified in this manner. However, the same approach is not desirable for individual credit customers, given that it puts too much power in the hands of the certificate issuers and they can, at least in theory, bar one from getting access to credit by refusing to issue a certificate.
Our proposal is to let the lenders themselves act as certificate authorities. To be more precise, we allow anyone to issue a certificate verifying the identity of an individual, but we expect the lenders, who are typically banks and financial institutions, to only take into account certificates issued by other banks or institutions that they already trust. Given that the lenders trust data sent by other lenders to the CRAs, which includes identifying information about the owners of credit accounts, it is expected that they will also accept the same information directly, i.e.~without the CRAs as middlemen. While this approach might lead to a situation where a few banks perform most of the certifications, this is not considered to be a problem, since no group of institutions has a monopoly on certification and every lender who is willing to extend credit to an individual can also certify her identity.
\subsubsection*{Data Fields of the Identity Management Contract} We now describe our identity management protocol more formally. Our approach is realized by a single instance of a smart contract that keeps track of every individual by storing the following data:
\begin{itemize}
\item The \emph{public key} used by the individual.
\item \emph{Fingerprint.} A unique identifier that can be used in real world to check the individual's identity. This can be biometric data or any other data that is unique to the individual. Our approach is not dependent on the exact standard that is used for creating fingerprints, but they should be standardized. If this data is sensitive, one can store a hashed version of it. For example, we can use a hashed version of the individual's country of nationality, appended with her national identification number (or social security number in case of the US).
\item \emph{Two Pointers.} A pointer to the first public record of the individual and another one to her first credit account. These will be discussed in more detail in the next sections.
\item \emph{Certificates.} A list of public keys of individuals or institutions who have verified this identity in the real world.
\end{itemize}
\begin{figure}[H]
\resizebox{\linewidth}{!}
{
\includegraphics{identity_management.pdf}
}
\caption{Interactions between an individual, an institution and the identity management contract. Numbers denote the order in which the actions are taken.}
\label{fig:id}
\end{figure}
\subsubsection*{Functions of the Identity Management Contract}
We now describe how our identity management smart contract works. This is summarized in Figure~\ref{fig:id}. Anyone can register in this contract by calling the \texttt{register} function and providing her own desired public key and (possibly fake) fingerprint. The contract even allows several public keys to be registered as corresponding to the same fingerprint. After a public key and its corresponding fingerprint are added to the contract, anyone can call the function \texttt{certify} and announce that they have checked an identity in the real world and would like to certify it. In this case, the caller's public key is added to the list of certificates. There is also a \texttt{decertify} function that can be used to revoke the certification.
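A compact Python mock-up of this logic (for illustration only; the deployed version is the Solidity contract of Section~\ref{sec:implementation}) is sketched below.
\begin{verbatim}
class IdentityManagementContract:
    def __init__(self):
        self.identities = {}   # public key -> identity record

    def register(self, public_key, fingerprint):
        assert public_key not in self.identities
        self.identities[public_key] = {
            "fingerprint": fingerprint,   # possibly fake, not checked here
            "first_public_record": None,
            "first_credit_account": None,
            "certificates": set(),        # public keys of certifiers
        }

    def certify(self, caller_key, public_key):
        self.identities[public_key]["certificates"].add(caller_key)

    def decertify(self, caller_key, public_key):
        self.identities[public_key]["certificates"].discard(caller_key)
\end{verbatim}
Note that, exactly as described above, nothing in this logic prevents registering several keys with the same fingerprint or certifying with self-created keys; filtering out untrusted certificates is left to the lenders.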
\subsubsection*{Safety against Sybil Attacks}Effectively, one can create as many fake identities and certify them with as many self-created keys as she wishes. However, the lenders would only consider certificates from other trusted lenders or institutions. Such an institution would (i)~ask the individual to sign a random piece of data using the private key corresponding to the desired public key to ensure that she has access to it, (ii)~require real-world verification of the fingerprint, and (iii)~require that no other public key is already certified as corresponding to the same fingerprint by another trusted institution. Only when all of these conditions are met would the institution certify the identity.
\subsubsection*{Legal Guarantees} Note that the institutions, such as banks, have publicly announced public keys and will be subject to legal action, under FCRA or similar regulation in other countries, should they provide false certifications or decertifications. The process is also uniquely transparent, given that all changes to the contract are permanently recorded in the Blockchain. An individual can ask each lender she deals with to certify her identity so that the respective credit account is also trusted by future lenders.
\subsubsection*{Privacy} Our protocol preserves user privacy. The fingerprint is associated with a public key that does not appear in the credit accounts, ensuring that even having access to a person's fingerprint cannot be used to extract information about their non-public credit records. In the next sections, we will show that an attacker with access to the Blockchain cannot read data about the records, such as account details, and is even unable to infer the owner of a given record.
\section{Implementation} \label{sec:implementation}
We have implemented our approach in Solidity to demonstrate the feasibility of the ideas and structures that we suggest. A proof-of-concept implementation, together with instructions for its deployment and testing, is available at \texttt{pub.ist.ac.at/\texttildelow akafshda/credit-reporting}.
Our implementation is entirely loop-free and all of its function calls terminate after executing a small (constantly-bounded) number of instructions. Hence, our gas cost, i.e.~the cost one must pay for the execution of commands in Ethereum smart contracts~\cite{wood2014ethereum}, is very low.
\section{Introduction and Preliminaries} \label{sec:intro}
In this section, we first provide a high-level overview of both smart contracts and credit reporting services. Then, we discuss some of the problems that currently exist in real-world credit reporting and argue that these can be mitigated by decentralization and migrating to smart contracts.
\subsubsection*{Blockchain} Blockchain was initially used as a means to achieve global consensus about peer-to-peer cryptocurrency transactions in Bitcoin \cite{nakamoto2008bitcoin}. However, the technology itself is capable of much more than just verifying transactions. Specifically, one can include scripts in transactions, forcing a consensus about the outputs of these scripts. Bitcoin allows simple scripting in a Forth-like loop-free language \cite{bitcoinScript}. A script in a Bitcoin transaction is essentially a program that sets the conditions one must satisfy in order to use the currency units stored in that transaction. For example, a script might ask for a digital signature to gain access to the funds.
\subsubsection*{Ethereum and Smart Contracts} Ethereum is a cryptocurrency that allows stateful scripts of arbitrary, i.e.~Turing-complete, complexity \cite{wood2014ethereum}. It provides an ecosystem for the development of decentralized applications, called smart contracts, that are executed and verified by the whole Ethereum network. A smart contract can be created by anyone and is stored in a bytecode format on the Blockchain. After its creation, the contract can save data in its own dedicated storage and hold, receive and transfer funds (cryptocurrency units) from/to other people or contracts. It can also interact with other contracts and even create new ones. However, the state and actions of the contract are all controlled by its code and subject to consensus using the Blockchain protocol. After its deployment, one can only interact with a contract by calling its functions which perform actions as programmed by its creator.
These characteristics, and the inherent lack of a centralized authority in the Blockchain, make smart contracts ideal for implementing a variety of unbreakable financial agreements. For example, a smart contract called BitHalo replaces trusted third-parties and provides escrow services~\cite{bithalo}. We provide another simple example below.
\subsubsection*{Example}
Consider the contract in Figure~\ref{fig:decode}. This contract rewards anyone who can invert a SHA256 hash value. It is written in Solidity which is a widely-used language for programming Ethereum smart contracts and can in turn be compiled to Ethereum bytecode \cite{solidity}.
The contract creator should provide a value for the parameter \texttt{\_hashed} of the constructor function, which will be stored in the contract. She can also pay some (possibly zero) amount to the contract when creating it. This is signified by the keyword \kw{payable}. After the contract is deployed, anyone can call the function \texttt{claim} and provide an initial value. The contract checks whether this value has the required hash and if so, pays the person who called the function, i.e.~\kwp{msg}.\kwp{sender}, with all the money the contract holds, i.e.~\kw{this}.\kw{balance}.
Note that all changes to the state of the contract, along with the messages (function calls) that caused them are stored permanently on the Blockchain and can be read by anyone. Therefore, one can check the contract's balance before attempting to solve the puzzle. Also, after the puzzle is solved, anyone, including the creator of the contract, can read the function call and parameters that led to a solution. This means that while contracts enable us to reach a consensus about the state of a computation, they are not very good at hiding data.
\begin{figure}[H]
\begin{lstlisting}[language=Solidity]
contract HashInvert
{
    bytes32 hashed;
    function HashInvert(bytes32 _hashed) payable
    {
        hashed = _hashed;
    }
    function claim(bytes32 _initial)
    {
        if(sha256(_initial)==hashed)
            msg.sender.send(this.balance);
    }
}
\end{lstlisting}
\caption{A Solidity contract that rewards finding a value with a given hash}
\label{fig:decode}
\end{figure}
\subsubsection*{Credit Reporting}
A credit report is a document that includes data regarding a person's history of managing credit. This data is collected and maintained by a credit reporting agency and used to assess the creditworthiness of the individual when she applies for new credit. It usually contains the following information \cite{avery2003overview}:
\begin{itemize}
\item Identifying information, such as the name, address and social security number, of the individual.
\item Information reported to the credit reporting agency by creditors, such as banks and debt collection agencies, regarding details of current and past loans, leases, credit report requests, utility and medical bills, etc. We refer to each of these as a \emph{credit account}.
\item Data collected from public records, such as bankruptcy information.
\end{itemize}
\subsubsection*{Credit Reporting Industry} The companies that compile the credit report are known by various names in different countries. For example, they are called Credit Bureaus in the US and Credit Reference Agencies in the UK. We shall call them CRAs in the rest of this paper. These companies compile credit data and help future lenders decide about extending credit. There are three major CRAs in the US. In 2003, they issued 2 million credit reports each day and each of them was estimated to hold data about roughly 1.5 billion credit accounts belonging to 190 million individuals \cite{avery2003overview}. One of them, Equifax, reported a revenue of nearly \$850 million in the third quarter of 2017 \cite{equifaxrevenue}.
\subsubsection*{Problems with Credit Reporting} It should come as no surprise that the fact that CRAs are collecting and storing vast amounts of sensitive data about hundreds of millions of people is a source of concern for many. To address these concerns, laws and regulations are passed to ensure that the rights of individuals to privacy and fair treatment are not violated. One example is the US Fair Credit Reporting Act (FCRA) of 1970. However, there are still a variety of problems that cannot be addressed by regulation alone. These include:
\begin{itemize}
\item \emph{Long Update Intervals.} CRAs generally receive information from creditors and other sources once a month and it takes them up to seven days to update the records \cite{avery2003overview}.
\item \emph{Identification Problems.} The data received by the CRAs does not always include uniquely-identifying information and might be erroneously attributed to the wrong individual \cite{CRaccuracy}. Another related aspect of this problem is identity theft. It was estimated that 12 percent of Americans were victims of identity theft in the 5-year period ending in 2003 \cite{kahn2008credit}.
\item \emph{Errors and Inconsistency.} Reports stored by different CRAs can be inconsistent or contradictory \cite{CRaccuracy}. Moreover, it is estimated that as many as a third of all credit reports might contain errors that can lead to denial of access to credit \cite{avery2003overview,golinger1998mistakes}.
\item \emph{Endemism.} Credit data is usually tied to a single country or jurisdiction. The CRAs cannot access foreign credit information \cite{avery2003overview}. This means that when an individual relocates to a new place, her credit data is effectively erased and her record starts from scratch.
\item \emph{Data Breaches.} Finally, a major source of discomfort is the possibility of data breaches and unauthorized access to the sensitive credit report information. In a famous catastrophic case in 2017, hackers stole sensitive information about 143 million people in the US from Equifax \cite{ftcwhattodo}.
\end{itemize}
\subsubsection*{Our Contribution} In this paper, we propose an approach based on smart contracts that removes the CRAs from the lending process and fixes all the problems mentioned above. In our approach (i)~data updates take only a few seconds, (ii)~identification problems are entirely avoided, (iii)~there is no possibility of inconsistency, (iv)~credit reports can be used globally and (v)~all sensitive information is secured by cryptography. We also give individuals full control over their credit report, allowing them to disclose all or any part of it to others. From the creditors' point of view, we guarantee that the report is correct and not editable by its owner and that the creditor can easily check that it includes all the data in a requested time-frame.
We now provide a high-level overview of our approach that combines classic constructs in asymmetrical cryptography to achieve secure credit reporting. We first recall the main concepts of encryption, decryption and digital signatures and then proceed with an intuitive description of our method. A more formal treatment is provided in the next sections.
\subsubsection*{Asymmetrical Cryptography} We assume basic familiarity with asymmetrical and public-key cryptography, as introduced e.g.~in \cite{hoffstein2008introduction}. Formally, we use pairs of keys of the form $(K, k)$ for encryption, decryption and digital signatures. The public key is denoted as $K$ and its corresponding private key as $k$. One can encrypt data using $K$ and then the encrypted data can only be decrypted if one knows $k$. Similarly, one can sign a piece of data using $k$ and this signature is verifiable by anyone who has access to the data and $K$. In particular, a function call in a smart contract always includes the public key $K$ and is signed by the private key $k$. This means that anyone can see the function call data and its caller by reading the Blockchain but no one can make a fake function call on behalf of another person unless they have access to her private key.
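For later reference, these relations can be summarized as follows (the notation is ours and purely illustrative):
\[
\mathrm{Dec}_k\bigl(\mathrm{Enc}_K(m)\bigr)=m,
\qquad
\mathrm{Ver}_K\bigl(m,\,\mathrm{Sig}_k(m)\bigr)=\mathrm{true},
\]
for every message $m$ and key pair $(K,k)$.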
\subsubsection*{Underlying Principles of Our Approach} We achieve the guarantees mentioned above by employing a combination of the following techniques:
\begin{enumerate}[(i)]
\item \emph{Identity Management.} We use a decentralized identity management and certification system in which a borrower's identity can be certified by lenders and financial institutions.
\item \emph{Data Encryption.} We store the credit report data in an encrypted format, using asymmetrical encryption, in a series of smart contracts. The encryption is such that only the owner and creator of a record, or anyone who they authorize by providing the relevant private key, can decrypt it.
\item \emph{Links Encryption.} We chain the records belonging to each individual in a linked list whose pointers are also encrypted. Hence, not only is it impossible to read a record without authorization, but it is also impossible to find out to whom a given record belongs or which records belong to a given individual.
\item \emph{Fraud Prevention.} We use digital signatures and asymmetrical cryptography to avoid fraud. The simplified intuition is that a credit record can first be signed by the creditor and then encrypted using a key pair that is shared with the customer. Then, when another creditor wants to see the record, the customer can decrypt it and the creditor can check the previous creditor's signature to make sure the customer has not altered the record (see the schematic after this list).
\end{enumerate}
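Schematically, in the illustrative notation introduced above, a lender $L$ with key pair $(K_L,k_L)$ stores the signed and encrypted record
\[
\mathrm{Enc}_{K_s}\bigl(d,\ \mathrm{Sig}_{k_L}(d)\bigr),
\]
where $d$ is the record data and $(K_s,k_s)$ is the key pair shared with the customer. A future lender later receives $\bigl(d,\mathrm{Sig}_{k_L}(d)\bigr)$ from the customer and accepts $d$ only if $\mathrm{Ver}_{K_L}\bigl(d,\mathrm{Sig}_{k_L}(d)\bigr)=\mathrm{true}$. The precise key arrangement is described in the following sections.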
The main novelty of our approach is a combination of these ideas that makes it possible to achieve secure credit reporting on the Blockchain. To the best of our knowledge, this is the first method that can reliably perform all credit reporting tasks without a need for trusted third-parties and centralized authorities, or changing the financial mechanism of credit reporting.
\subsubsection*{Organization} Our approach consists of three distinct protocols, each realized by a different smart contract. In Section~\ref{protocol:identity}, we present our solution for identity management. Section~\ref{protocol:accounts} is the core of our approach and explains how we handle credit accounts. This is followed by our public records protocol in Section~\ref{protocol:public}. Section~\ref{sec:implementation} provides a short report on a proof-of-concept implementation of our method that is publicly available. We discuss some limitations of our approach in Section~\ref{sec:ext}. Section~\ref{sec:comparison} is a comparison with similar works and finally, Section~\ref{sec:future} concludes the paper with suggestions for future research and development.
\section{Public Records Protocol} \label{protocol:public}
Our protocol for storing public records is similar to the one we described for credit accounts. However, in this case the protocol becomes much simpler, because unlike credit accounts, public records can be made without the consent of their individual owners. For example, a court does not need permission from an individual to add a declaration of her bankruptcy to her credit report.
Similar to the previous section, we store public records in a singly linked list. Each record is an instance of the public record contract. As previously mentioned in Section~\ref{protocol:name}, there is a pointer from an individual's identity to her first public record, which she can create herself.
Unlike those of credit accounts, the pointers used to connect public records are not encrypted. This allows anyone to follow the list of public records corresponding to an identity. Moreover, anyone can add a new public record to the end of any of these lists. This is not problematic, given that lenders will only take the records issued by real public institutions into account. Simply, each record is either added using an unknown identity, in which case it can be considered spam and ignored\footnote{Producing spam is not free, given that one has to pay its gas fees. This is the native Ethereum solution to combat spam, and it naturally extends to our contracts. On the other hand, when reading the records, one can identify spam entries quickly, simply by checking the identities behind their signatures. Note that reading the Blockchain is free but writing to it is not.}, or by an official identity, in which case it is either correct or can be corrected by the same authority. Again, note that all changes to the contracts are permanently saved on the Blockchain and that official authorities are bound by legal responsibilities and cannot simply issue false records.
We now define the structure of our public record contract more formally. Figure~\ref{fig:pr} shows the data fields in a public record contract together with the constraints enforced by the contract.
\begin{figure}[H]
\begin{center}
\resizebox{\linewidth}{!}
{
\includegraphics{public-records-contract.pdf}
}
\end{center}
\caption{Data fields and constraints in a Public Record Contract}
\label{fig:pr}
\end{figure}
\subsubsection*{Contract Creation}
The public authority creates an instance of this contract and publishes it on the Blockchain. The authority has access to the individual's fingerprint and can hence add the record to the linked lists corresponding to all identities that have that fingerprint. To do so, the authority follows the \field{Next Record} pointers until it reaches the end of the linked list, and then sets the final \field{Next Record} to point to the new instance of the contract. Note that anyone can set the value for \field{Next Record}; the only limitations are that (i)~it can be filled only once and (ii)~it must keep the linked list valid and extensible. We refer to the latter condition as ``validity''.
\subsubsection*{Credit Report Data}
The other two data fields in this contract, \field{Data} and \field{Signature}, are under complete control of its issuer. \field{Data} is meant to contain any relevant information that should be considered part of the credit report. The authority can decide whether to fill this data without encryption, hence allowing public access to it, or encrypt it using $K_c$, so that it is only accessible by the individual owner herself. In the latter case, the authority signs the original unencrypted \field{Data} and stores this signature in the contract. This ensures that the individual owner can both read and prove what is saved in \field{Data} and is the only person, other than the public authority, who can perform these actions.
\subsubsection*{Reading a Credit Report}When an institution decides to read the public records of an individual, it simply follows the linked list, ignoring any entries created by unknown identities. If it encounters an encrypted entry created by a trusted public authority, it asks the individual owner to decrypt the \field{Data} field and provide the decrypted text. It then checks the signature to make sure that the text was not changed by the owner.
\subsubsection*{Importance of Validity} Except for the validity condition, all other aspects of our public records protocol are simplifications of those used for credit accounts. When dealing with credit accounts, the pointers used for our linked list were filled by the individual who owned them, and there was no concern that she might try to destroy the whole linked list. Also, the signatures provided by the institutions guaranteed that one cannot add another person's record to her linked list without getting caught. However, in the case of public records, anyone can add a new element to the linked list and fill the \field{Next Record} fields. These fields remain immutable after they are first filled. So, a natural attack would be to fill them with invalid pointers, i.e.~pointers that do not hold the address of a valid contract of the same type. This would make it impossible for others to keep adding records. Another malicious behavior is adding the same instance of a record to the linked lists belonging to two different individuals. This would effectively merge the two lists. It is therefore of utmost importance to keep the linked lists valid.
\subsubsection*{Enforcing Validity} To avoid the attacks described above, we do not allow the individuals to create instances of our public record contract directly. Instead, we develop a so-called ``factory'' contract that can be called by anyone to create valid instances of the public record contract. The factory contract also keeps track of the addresses of all valid public record contracts instantiated using it and whether they have been added to a linked list. On the other hand, each such instantiated contract includes an immutable pointer to the parent factory contract. When a new contract is being added to the linked list, it is first checked against the factory contract to ensure it respects validity.
\subsubsection*{Checks for Adding a New Entry} Formally, when an individual attempts to set a value for the \field{Next Record} pointer, the public record contract performs the following actions:
\begin{enumerate}
\item It first checks the value with its own parent factory contract. If the provided value is not a valid address or if it does not point to a contract created by the same factory, the operation is rejected.
\item It checks that the person (public key) trying to add the new record is the same person who created this new record. One cannot add records authored by others to the linked list.
\item It queries the parent factory contract to make sure this is the first time the given record is being added to a linked list. The parent factory contract remembers this query and will answer negatively to any following queries about the same record.
\item It checks that the new record has an empty \field{Next Record} field. This is equivalent to checking that only a single record is being added.
\end{enumerate}
These checks ensure that the linked lists remain valid and accessible to everyone for adding new entries.
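To make these checks concrete, the following listing is a minimal, illustrative Solidity sketch of a factory and of the \field{Next Record} setter. All names are ours, the \field{Data} and \field{Signature} fields are omitted, and the actual implementation (Section~\ref{sec:implementation}) may differ in detail.
\begin{lstlisting}[language=Solidity]
contract RecordFactory
{
    mapping(address => bool) valid;   // instances created by this factory
    mapping(address => bool) linked;  // instances already put into a list

    // Anyone may create a valid public record through the factory.
    function createRecord() returns (address)
    {
        PublicRecord r = new PublicRecord(msg.sender);
        valid[r] = true;
        return r;
    }
    function isValid(address r) constant returns (bool)
    {
        return valid[r];
    }
    // Marks r as linked; returns false if it was linked before (check 3).
    function markLinked(address r) returns (bool)
    {
        if(linked[r]) return false;
        linked[r] = true;
        return true;
    }
}

contract PublicRecord
{
    RecordFactory factory;     // immutable pointer to the parent factory
    address public creator;    // key that created this record
    address public nextRecord; // linked-list pointer, initially zero
    // Data and Signature fields omitted in this sketch.

    function PublicRecord(address _creator)
    {
        factory = RecordFactory(msg.sender);
        creator = _creator;
    }

    function setNextRecord(address _next)
    {
        if(nextRecord != 0) throw;              // can be filled only once
        if(!factory.isValid(_next)) throw;      // check 1: same factory
        PublicRecord next = PublicRecord(_next);
        if(next.creator() != msg.sender) throw; // check 2: own record only
        if(!factory.markLinked(_next)) throw;   // check 3: not linked before
        if(next.nextRecord() != 0) throw;       // check 4: a single record
        nextRecord = _next;
    }
}
\end{lstlisting}
Note that a \texttt{throw} reverts all state changes of the call, so a record is never marked as linked unless every check succeeds.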
\subsubsection*{Deanonymization} The fact that public records are not encrypted means that they can be used to deanonymize users. For example, public records of bankruptcy often include names of individuals and their national identity numbers, which might be the same as fingerprints. However, the only additional data that can be inferred by such deanonymization is the individual's public key $K_c$. As mentioned before, this key is not saved in any of the credit account contracts and cannot be used to infer any non-public information about the individual. Note that the public records themselves are, and should be, accessible to everyone.
|
1,314,259,995,610 | arxiv | \section{Results at half filling}
\label{sectionIII}
The half-filled Hubbard model is known to be an antiferromagnet
for any strength of the coupling $U$. We assume a staggered
magnetization on the bipartite square lattice
\begin{equation}
\vec \phi_0 (\vec r)=\pm \varphi \vec n
\end{equation}
where $\vec n$ is any constant unimodular versor,
$\vert \vec n\vert=1$, and the sign is positive on one sub-lattice
and negative on the other. The effective potential becomes
a function of the scalar $\varphi$ which gives the strength of the
local magnetization. The versor $\vec n$ breaks the rotational
symmetry of the model, and the opposite signs of $\vec \phi_0$ on
the sub-lattices break the translational symmetry of the square
lattice. The unit cell doubles its size as it is replaced by the
unit cell of the sub-lattice. The first Brillouin zone reduces
to one half and its boundary is at the Fermi surface of the
unperturbed electron gas at half filling (perfect nesting).
Let us define the two-component Green function $G^{\pm}(\varphi)$
\begin{equation}
i {G^\pm }_{\alpha\beta} (\vec k, \omega;\varphi)=
\frac{\left[\omega\pm\epsilon(\vec k)\right]\delta_{\alpha\beta}
+\sqrt{U}\varphi\>\vec n\cdot\vec \tau_{\alpha\beta}}
{\omega^2-E\>^2(\vec k)+i\eta}
\label{Green_f}
\end{equation}
where the free-electron band energy $\epsilon(\vec k)$
is
\begin{equation}
\epsilon (\vec k)=-2t\sum_a\cos(k_a)
\end{equation}
and
the mean field band reads
\begin{equation}
E(\vec k)=\sqrt{\epsilon^2(\vec k)+U\varphi^2}.
\end{equation}
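Note, for orientation, a consequence that is implicit in these expressions: the poles of the propagator (\ref{Green_f}) lie at $\omega=\pm E(\vec k)$, so on the boundary of the reduced Brillouin zone, where $\epsilon(\vec k)=0$, the two mean-field bands are separated by the gap
\[
2\,E(\vec k)\Big|_{\epsilon(\vec k)=0}=2\sqrt{U}\,\varphi
\qquad (\varphi>0).
\]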
\begin{figure}[htb]
\begin{center}
\subfigure[]{\label{fig:subfig:a}\includegraphics[height=7cm,width=7cm]{Figure1a.eps}}
\hspace{1cm}
\subfigure[]{\label{fig:subfig:b}\includegraphics[height=7cm,width=7cm]{Figure1b.eps}}
\caption{Effective potentials in units of the bandwidth $t$ for $\frac{U}{t}$=10 (a)
and $\frac{U}{t}$=30 (b). The minima of the effective potentials, whose positions give the ground-state
magnetization, occur at $m$=0.55 in (a) and $m$=0.6 in (b) (see further details in the text).
}
\label{fig:1}
\end{center}
\end{figure}
At half filling ($\mu=0$) the electron
Green function $G_0(\varphi)$ reads\cite{fradkin}
\begin{equation}
G_0(\vec k,\omega; \varphi)=G^+(\vec k,\omega; \varphi)
\end{equation}
for any $\vec k$ belonging to the reduced first zone
(below the unperturbed Fermi energy), while outside the
reduced first zone (above the unperturbed Fermi energy)
the Green function is given by the second component
\begin{equation}
G_0(\vec k,\omega; \varphi)=G^-(\vec k,\omega; \varphi).
\end{equation}
Insertion into (\ref{K}) and (\ref{gap}) yields the
spin wave correlation matrix $g$.
The $\omega$ integration in \eqref{K} can be carried out
analytically. In this way we are
left with a two-dimensional k-integration for the real
part of $g$ and a unidimensional one for its imaginary part, both
to be performed by numerical methods.
The real part has been obtained by integration with Simpson's rule
on a $70\times70$
grid in the full first Brillouin zone, while for the imaginary part we
used a $3000$ point sampling of the interval $[-\pi,\pi]$.
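For reference, the one-dimensional composite Simpson rule on an even number $N$ of subintervals of width $h$ reads
\[
\int_{x_0}^{x_N} g(x)\,dx \simeq \frac{h}{3}
\left[g_0+4g_1+2g_2+\cdots+2g_{N-2}+4g_{N-1}+g_N\right],
\]
and on a two-dimensional grid the corresponding product rule is applied in each direction; we quote the formula only for orientation, since the precise handling of the grid boundaries does not affect the discussion below.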
The effective potential $V_{GEP}$ can now be evaluated
through \eqref{GEP}. Exploiting the parity
properties of $g(\vec{q},\Omega)$ the first integral
in \eqref{GEP} can be restricted to positive values
of $\Omega$ and to the upper right quarter of the
Brillouin zone. The employed grid contains
$200\times20\times20$ points in the
$(\Omega\times\vec{q})$ space.
Figure \ref{fig:1} presents two plots of the effective potential
in units of the bandwidth $t$ for $\frac{U}{t}$=10 (figure \ref{fig:subfig:a})
and $\frac{U}{t}$=30 (figure \ref{fig:subfig:b}). As expected, the effective potential
shows a minimum at a non-vanishing value of $m$, which corresponds to
the magnetization of the broken symmetry ground state;
a more accurate estimate of $m$ is then obtained by a least-squares fit of
the region around the minimum with a parabolic curve.
\begin{figure}[htb]
\begin{center}
\includegraphics[height=8cm,width=8cm]{Figure2.eps}
\caption{Sublattice magnetization $m$ vs $\frac{U}{t}$ within
the GEP (blank squares), RPA (stars), Monte Carlo (errorbars)
and mean-field (crosses) approximations. The arrow indicates the spin-wave
strong coupling limit $m$=0.606.
}
\label{fig:2}
\end{center}
\end{figure}
Another important feature of our approximation can be observed
in figure \ref{fig:1}: the potential broadens as $\frac{U}{t}$ decreases.
This behaviour implies that the smaller the coupling, the larger the
fluctuations of the field, so we expect the simple second-order
expansion of \eqref{expansion} to fail in the weak coupling regime.
\begin{figure}[htb]
\begin{center}
\includegraphics[height=8cm,width=8cm]{Figure3.eps}
\caption{Magnetization limiting behaviour for the GEP (full circles) as a
function of the inverse coupling constant $(U/t)^{-1}$. The straight
horizontal line shows the Heisenberg model magnetization as obtained
with a Monte Carlo (MC) simulation by Sandvik \cite{heisenberg_mc}, while the triangle
represents the spin-wave result for the same model \cite{heisenberg_sw};
they are shown for comparison, since in the limit $(U/t)^{-1}\to 0$ the
Hubbard and Heisenberg models should coincide.
As is evident from the figure, the GEP magnetization
trend confirms this expectation, supporting our claim that the GEP
is very reliable in the strong coupling limit.
}
\label{fig:3}
\end{center}
\end{figure}
This trend is evident when we compare our results for the
magnetization with other calculations available in the literature.
As shown in Fig.~\ref{fig:2},
for $\frac{U}{t}<4$, the RPA \cite{RPA} and
Monte Carlo \cite{MC} predictions converge reasonably well towards
the mean field approximation, while the GEP gives wrong results
because the averaging Gaussian functional
becomes too wide, as discussed in the previous section.
However, the approximation improves in the strong coupling region
($\frac{U}{t}\geqslant$8). The data show a tendency to saturate to a limiting value,
and this saturation appears to be slower than in the RPA, in agreement with Monte Carlo
results. In order to obtain the saturation value we performed a calculation at the very
strong coupling $\frac{U}{t}$=30, obtaining $m$=0.6 (see also
figure \ref{fig:subfig:b}). This limit represents a very important
consistency test for the GEP. It is in fact straightforward to show that
for large $\frac{U}{t}$ the Hubbard Hamiltonian \eqref{Hubb_H} reduces to
the anti-ferromagnetic Heisenberg model with exchange coupling $\frac{2t^2}{U}$.
A comparison between our magnetization and Monte Carlo simulations for
the Heisenberg model \cite{heisenberg_mc}
points out that the GEP has the correct limiting behaviour (figure \ref{fig:3}).
This result is also in agreement with spin-wave theory predictions~\cite{heisenberg_sw}.
Moreover, in the intermediate region our
results nicely interpolate between the weak and the strong
coupling limits and are closer to the Monte Carlo calculation than to the
RPA.
In conclusion, we have shown that the GEP can be regarded as
a useful non-perturbative tool for studying magnetic systems
with a broken symmetry ground state, and that
it complements other well established approaches
such as functional RG, FLEX or RPA.
In particular, we have been able to evaluate the effective
potential for the Hubbard model and to write down an analytic
expression \eqref{GEP}, where the ``classical''
electron and spin energies and the quantum spin fluctuations
are clearly recognizable.
For the half-filled Hubbard model we compared the magnetization
with the results of other approximations, showing
that the GEP provides an improvement over the RPA approach in
the intermediate and strong coupling regimes, while it seems to be
inaccurate for small values of $\frac{U}{t}$. However, in the
weak coupling limit the failure seems to be a consequence of
the simple second-order expansion, and we expect that the exact
GEP should predict the correct magnetization. In this limit an
improvement could be achieved by including higher-order
terms in the expansion \eqref{expansion}.
Further details about the GEP approximation can surely be obtained
by investigating its dependence on the band filling. The method of
section \ref{sectionIII} still holds for any filling of the band,
while the half-filling electron Green's function (\ref{Green_f})
should eventually be replaced by the general one which depends on
the filling of the band.
That would allow us to construct a phase diagram for the 2D
Hubbard Model and discuss
the competition between antiferromagnetic order and
d-wave superconductivity\cite{RG,Flex}.
|
1,314,259,995,611 | arxiv | \section{Introduction}
We call the potentials that are exactly zero beyond a certain distance
strictly finite-range (SFR) potentials. The conventional nuclear potentials
are in principle not SFR potentials, but in practice, if the radial
Schr\"odinger equation is solved numerically as is usual, a cutoff at a finite
range is implied. Indeed, beyond this range $R_{\rm max}$ the numerical
solution is to be matched at a finite distance
$r=R_{\rm match}(\ge R_{\rm max})$ with the exact solution of the
free-particle (or of the Coulomb) problem.
For instance, the most often quoted Woods--Saxon (WS) potential goes to zero
in infinity, but, in numerical calculations, cut-off WS (CWS) potentials
are used invariably. A disadvantage of the CWS potential is that
the positions of the resonance poles do depend on the cutoff distance
\cite{[Sa08]}, which is an unphysical parameter of the calculation.
To avoid this, a new form was introduced by Salamon and Vertse (SV)
\cite{[Sa08]}, which contains two terms, with one range parameter for each,
and a relative strength of the two terms. The SV potential goes to zero
smoothly. Its parameters can be adjusted so as to get a good fit to the WS
shape except in the tail region, where they are necessarily different.
There is another motivation for using SFR potentials.
It has been observed recently by Sahu and Sahu
\cite{[SS12]} that a faster approach of the nuclear potential to zero
improves the barrier behavior of the interaction potential between heavy ions.
They modified the form of the SV potential by introducing a diffuseness
parameter $a_s$ to one of its terms.
Here we shall refer to this potential as SS potential.
The SS potential was found to describe the elastic scattering and
the fusion below the Coulomb barrier with the same parameters, while a WS
form requires two different sets for these two processes \cite{[SS12]}.
However, the asymptotic density of the matter of nuclei is exponential,
and the nucleon-nucleon interaction has a Yukawa tail. This physically
substantiates the numerically intractable exponential falloff of the
WS potential, and casts some doubt on the use of the convenient
tails of the SV and SS potentials. In this paper we will examine
the effect of the unphysical tail behavior of the SV potential, and further
study the trajectories of the $S$-matrix poles.
The SV potential is a special case of the SS potential with $a_s=1$,
and we extend the studies to $a_s\not=1$. In fact, for very light
nuclei the derivative term in the SV potential can be omitted,
and the SS form becomes identical to an SV form, which has a single
parameter, the range $\rho_0$.
In this work we consider nucleon potential problems. Since we disregard
the Coulomb interaction, we can say that we deal with neutrons.
We perform bound-state and resonance calculations, with an eye to scattering
problems, but we need no absorptive terms. We shall study the cases
of light nuclei with mass number $A_T<20$ as well as
nuclei with much larger $A_T$ values. Light nuclei are important
in fusion reactions taking place in the Sun.
The nucleon optical potential of light nuclei is an ingredient of the
description of the reactions producing the nuclides
used in positron emission tomography
(PET) \footnote{The standard reactions producing the most important positron
emitters are $^{14}$N$(p,\alpha)^{11}$C, $^{13}$C$(p,n)^{13}$N,
$^{15}$N$(p,n)^{15}$O and $^{18}$O$(p,n)^{18}$F.}.
\section{Functional forms of the potentials considered}
The real term of the optical potential is almost exclusively of CWS
form, and the spin-orbit part contains the derivative of
a CWS form.
The CWS potential can be written as
\begin{equation}
\label{WSpot}
V^{\rm CWS}(r,R,a,R_{\rm max})=-V_0f^{\rm CWS}(r,R,a,R_{\rm max})~,
\end{equation}
with
\begin{equation}
\label{vagottWS}
f^{\rm CWS}(r,R,a,R_{\rm max})=
\left(1+e^{\frac{r-R}{a}}\right)^{-1}~\theta(R_{\rm max}-r)~,
\end{equation}
where the Heaviside step function $\theta(x)$ is unity for positive $x$
and zero otherwise. The CWS form factor $f^{\rm CWS}(r,R,a,R_{\rm max})$
has two physical parameters, the radius $R$ and the diffuseness $a$.
The third parameter, the cutoff radius $R_{\rm max}$, should have
no physical significance, but, due to the jump at the finite $R_{\rm max}$,
its derivative does not exist there, and that has implications.
It was shown earlier~\cite{[Sa08]} that the positions of broad resonances in a
CWS potential do depend on the value of the cutoff radius $R_{\rm max}$.
Certain sections of the pole trajectories (mainly the starting regions)
have been found to be sensitive to the value of
$R_{\rm max}$~\cite{[Ra11],[Da12]}. Thus the cutoff radius
$R_{\rm max}$ is an important, though non-physical, parameter of
the CWS form.
The SV potential~\cite{[Sa08]} recommended by two of us
instead of the CWS potential has the form \cite{[Ra11]}
\begin{equation}
\label{SVpot}
V^{\rm SV}(r)=-V_0 f^{\rm SV}(r,c,\rho_0,\rho_1)~,
\end{equation}
in which $V_0\ge 0$ and $f^{\rm SV}(r,c,\rho_0,\rho_1)$ is
a linear combination of the function
\begin{equation}
\label{distrib}
f(r,\rho)=
e^{\frac{r^2}{r^2-\rho^2}} ~ \theta(\rho-r)~,
\end{equation}
and a term containing the derivative, with respect to $r$, of the first factor,
\begin{equation}
\label{SVder}
f^\prime(r,\rho)=-\frac{2 r \rho ^2}{(r^2-\rho^2)^2}
e^{\frac{{r^2}}{r^2-\rho^2}}~\theta(\rho-r)~.
\end{equation}
Note that the function in Eq.~(\ref{distrib}) is a variant of the well-known
functions of compact support, $C^\infty$, defined in the book by Bremmermann
\cite{[Br65]} and sometimes called {\it bump functions}.
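Explicitly, $f(0,\rho)=1$, while for $r\to\rho^-$ the exponent $r^2/(r^2-\rho^2)$ tends to $-\infty$, so that
\[
\lim_{r\to\rho^-} f(r,\rho)=0~,
\]
and the same holds for all derivatives of $f$, because the essential singularity of the exponential dominates any inverse power of $(\rho-r)$.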
The radial factor thus contains three adjustable parameters,
\begin{equation}
\label{newcent4}
f^{\rm SV}(r,c,\rho_0,\rho_1)=f(r,\rho_0) - c f^\prime (r,\rho_1)~,
\end{equation}
in which $\rho_0$ and $\rho_1$ need not be the same, and, for the second term
to be attractive, the coefficient $c$ is non-negative.
The potential $V^{\rm SV}(r)$ goes to zero smoothly, and, if $\rho_0>\rho_1$,
it vanishes at $\rho_0$; furthermore, for $r\ge\rho_0$, it is zero,
together with all its derivatives. Thus the SV potential has the attractive
mathematical property that its derivative exists in the whole
$r\in (0,\infty)$ region. A drawback is, however, that it is not analytic
because at $\rho_0$ the Taylor series is not equal to the function.
Nevertheless, it has turned out to be useful in quantum electrodynamics, too,
as a compactly supported smooth regulator function \cite{[Na13]}.
The formula of the SS potential \cite{[SS12]} is analogous to
Eq.~(\ref{newcent4}):
\begin{equation}
\label{SSform}
f^{\rm SS}(r,c,\rho_0,\rho_1,a_s)=f(r,\rho_0) - c f^\prime (r,\rho_1,a_s)~,
\end{equation}
where
\begin{equation}
\label{SSder}
f^\prime(r,\rho_1,a_s)=-\frac{2 r \rho_1 ^2}{(r^2-\rho_1^2)^2}
e^{\frac{a_s{r^2}}{r^2-\rho_1^2}}~~\theta(\rho_1-r)~,
\end{equation}
with $a_s$ being the extra diffuseness parameter.
When $a_s=1$, the SS form coincides with the SV potential~(\ref{SVpot}).
By using $a_s \ne 1$, one naturally has more freedom in choosing the shape
of the potential. With the usual choice $\rho_0>\rho_1$, the range of the
SS potential is also $\rho_0$. The SS form has the same attractive
mathematical features as the SV potential.
Let us return for a while to the original SV form.
If we want the shape of the SV form to be similar to the WS shape as much as
possible, we should fit its parameters to the CWS shape $f^{\rm CWS}$.
To this end, we can minimize
\begin{equation}
\label{Delta}
\int_{0}^{\rho_{0}}\left[f^{\rm SV}(r,c,\rho_0,\rho_1)
-f^{\rm CWS}(r,R,a,R_{\rm max})\right]^2dr~.
\end{equation}
The integration in Eq.~(\ref{Delta}) can be performed by a quadrature of
$m$ equidistant mesh-points $r_i=i h$ over the range of the integration,
so that what is minimized is
\begin{equation}
\label{Deltas}
\Delta(\rho_0,\rho_1,c)=\sum_{i=1}^{m}
\big[f^{\rm SV}(r_i,c,\rho_0,\rho_1)-f^{\rm CWS}(r_i,R,a,R_{\rm max})\big]^2~.
\end{equation}
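A convenient observation, which we add here for completeness (the fit itself need not be organized this way), is that $\Delta$ is quadratic in the mixing coefficient $c$, so that for any trial pair $(\rho_0,\rho_1)$ the optimal value is available in closed form,
\[
c^{*}(\rho_0,\rho_1)=
\frac{\sum_{i=1}^{m} f^{\prime}(r_i,\rho_1)
\left[f(r_i,\rho_0)-f^{\rm CWS}(r_i,R,a,R_{\rm max})\right]}
{\sum_{i=1}^{m}\left[f^{\prime}(r_i,\rho_1)\right]^{2}}~,
\]
which reduces the minimization of Eq.~(\ref{Deltas}) to a search in the two remaining parameters.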
\section{Global parameter sets for optical potentials}
In this section we construct SV potentials that approximate the real parts
of some well-known global nucleon optical model potentials, and test
their performance. The real parts of all global potentials are
of CWS shape. Their geometrical shapes are generally fixed, and their
energy dependence is restricted to the strength parameters.
The spin-orbit part for a particle with spin $s=\frac{1}{2}\hbar$ is:
\begin{equation}
\label{spinorb}
V_{\rm so}^{\rm CWS}(r,R_{\rm so},a_{\rm so},R_{\rm max})
=V_{\rm so}^{\rm CWS}h_{\rm CWS}(r,R_{\rm so},a_{\rm so},R_{\rm max})~2(
{\bf l}\cdot {\bf s})~,
\end{equation}
with a radial form
\begin{equation}
\label{spinorbr}
h_{\rm CWS}(r,R,a,R_{\rm max})=-\frac{1}{r} f^\prime_{\rm CWS}
(r,R,a,R_{\rm max})~,
\end{equation}
in which the derivative of the central potential,
\begin{equation}
\label{derspinorb}
f^\prime_{\rm CWS}(r,R,a,R_{\rm max})=-\frac{e^{\frac{r-R}{a}}}{a
\left[1+e^{\frac{r-R}{a}}\right]^2}~\theta(R_{\rm max}-r)~,
\end{equation}
appears.
The spin-orbit term of the SV potential may be defined analogously:
\begin{equation}
\label{spinorbsv}
V_{\rm so}^{\rm SV}(r,c,\rho_0,\rho_1)=V_{\rm so}^{\rm SV}
h_{\rm SV}(r,c,\rho_0,\rho_1)~2({\bf l}\cdot{\bf s})~,
\end{equation}
with
\begin{equation}
\label{spinorbrsv}
h_{\rm SV}(r,c,\rho_0,\rho_1)=-\frac{1}{r}f^\prime_{\rm SV}(r,c,\rho_0,\rho_1)~.
\end{equation}
The mass-number dependence of the global potentials is borne generally
by the radii such that
$R_{\alpha}=r_{\alpha,0}A_T^{1/3}$, where $\alpha$ labels any of the
potential terms.
Classical nucleon potential sets were given by Perey~\cite{[Pe63]} and by
Becchetti and Greenlees~\cite{[BeGe]} a long time ago, and they are relied on
in recent studies~\cite{[Li12]} as well. A recent attempt for the
derivation of a new $\alpha$-nucleus potential was made by Mohr and
coworkers \cite{[Mo13]}. In this work, however, we restrict
ourselves to the Perey and Becchetti--Greenlees parameters for simplicity.
To construct global SV potentials, we search for the minimum of the squared
deviations in Eq.~(\ref{Delta}) as a function of the mass number $A_T$ and
calculate the best-fit SV parameters as a function of $A_T$. For medium-heavy
and heavy nuclei, the SV potential reproduces the CWS shape quite well,
and its $A_T$ dependence is regular. The mixing coefficient $c$ decreases
with decreasing $A_T$ as seen in Fig.~\ref{c1atdep}.
\begin{figure}[ht]
\includegraphics*[scale=0.4, bb=0 40 750 550]{c1pbr.eps}
\caption{Dependence of the mixing coefficient $c$ on the target mass-number
$A_T$ for two global parameter sets.}
\label{c1atdep}
\end{figure}
In the region of light nuclei, however, the best-fit SV form has a strange,
irregular shape. We can avoid this by requiring that the derivative of
the SV form be similar to the derivative of the WS shape:
\begin{equation}
\label{Deltad}
\Delta(\rho_0,\rho_1,c)=\sum_{i=1}^m\left\{\left[f^{\rm SV}(r_i,c,\rho_0,\rho_1)
-f^{\rm CWS}(r_i)\right]^2+\lambda\left[{f^{{\rm SV}}}'(r_i,c,\rho_0,\rho_1)
-{f^{{\rm CWS}}}'(r_i)\right]^2\right\}~.
\end{equation}
The Lagrange multiplier $\lambda$ was determined empirically. (Here we
suppressed the parameters of the CWS potential, which were kept fixed.)
With a value of $\lambda=25$ fm$^2$, the fitted SV potential became reasonably
smooth and similar to the CWS shape we want to approximate.
The range $\rho_0$ of the SV potential scales with $A_T^{1/3}$, while
the difference $\rho_0-\rho_1$ is proportional to the diffuseness $a$
of the CWS potential. The parameters of the
Perey potential~\cite{[Pe63]} are $r_0=1.25$ fm, and $a=0.65$ fm, and
the best-fit SV parameters are
$\rho_0=1.85A_T^{1/3}$ fm, $\rho_0-\rho_1=3.2a$, $c=-0.051+0.0051A_T
-3.9\times 10^{-6}A_T^2$,
thus for small $A_T$, $c$ becomes very small.
For the Becchetti--Greenlees~\cite{[BeGe]} geometry ($r_0=1.17$ fm and
$a=0.75$ fm), the best-fit SV parameters relate to the CWS parameters
very similarly, namely their values are
$\rho_0=1.86A_T^{1/3}$ fm, $\rho_0-\rho_1=2.8a$, $c=-0.055+0.003A_T-7.0
\times 10^{-7}A_T^2$.
As a light system, let us consider $^{18}$F+$n$. For the
Perey geometry, the best-fit SV parameters are $\rho_0=5.084$ fm,
$\rho_1=3.244$ fm, and $c=0.040$, while for the Becchetti--Greenlees
geometry, we get $\rho_0=4.957$ fm, $\rho_1=2.728$ fm, and
$c=0.011$. This again shows that for light nuclei $c$ is practically zero,
and it is reasonable to take $c=0$.
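As a simple consistency check (our arithmetic), inserting $A_T=18$ into the Perey-geometry interpolation formula above gives
\[
c\simeq -0.051+0.0051\times 18-3.9\times 10^{-6}\times 18^{2}\simeq 0.040~,
\]
in agreement with the directly fitted value.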
\begin{figure}[bht]
\includegraphics*[scale=0.4, bb=0 40 750 550]{a18.eps}
\caption{Radial shapes of Perey's WS and the SV ($c=0$) potentials and
their derivatives for $^{18}$F+$n$. Derivatives appear in the
spin-orbit terms in Eqs.~(\ref{spinorbr}) and (\ref{spinorbrsv}).}
\label{a18}
\end{figure}
In Fig.~\ref{a18} we compare the shape of Perey's WS potential and its
derivative with the SV potential (with $c=0$) and its derivative for the
$^{18}$F+$n$ system. The WS parameters are listed in Table~\ref{compare}. The
ratio $\rho_0/A_T^{1/3}$ is almost constant with a value of $\sim 1.6r_0$.
One can see that the radial shape of the WS potential is approximated
reasonably well by the first term of the SV form with a single adjustable
parameter, $\rho_0$. Now $\rho_0$ must play the role of both the radius and
the diffuseness of the WS potential.
Of course, the SV curves deviate most from the WS curves at large distances.
\begin{table}[htb]
\begin{center}
\caption{Geometrical parameters of the WS and the SV potentials for $^{13}$N,
$^{15}$O and $^{18}$F. All distances are in units of fm.}
\begin{tabular}{rcccccccc}
\hline\hline
Target && $r_0\!=\!R/A_T^{1/3}$ & $R$ && $a$ && $\rho_0/A_T^{1/3}$ &
$\rho_0$\\
\hline
$^{13}$N && 1.25 & 2.94 && 0.65 && 2.037 & 4.79 \\
$^{15}$O && 1.25 & 3.08 && 0.65 && 2.031 & 5.01 \\
$^{18}$F && 1.25 & 3.28 && 0.65 && 2.022 & 5.30 \\
\hline\hline
\end{tabular}
\label{compare}
\end{center}
\end{table}
\section{Single-particle energies for light nuclei}
It is interesting to see how the differences between the potentials influence
the single-particle energies. In Table~\ref{spf18} we show the neutron
single-particle energies $\epsilon_{nlj}$ calculated for the core nucleus
$^{18}$F, with Perey's WS geometry ($V_0^{\rm CWS}=60$ MeV, $r_0=1.25$ fm,
$a=0.65$ fm, $R_{\rm max}=15$ fm, and $V_{\rm so}^{\rm CWS}=28$ MeV). For the
fitted SV potential we used two values for the spin-orbit strength.
In the first case
the spin-orbit term (\ref{spinorbsv}) was used with
$V_{\rm so}^{\rm SV}=V_{\rm so}^{\rm CWS}=28$ MeV. But, as is seen in
Fig.~\ref{a18}, the shape of the derivative differs somewhat from that
of the standard form. Therefore, to achieve similar spin-orbit splitting,
in the second case we used a bit stronger ($V_{\rm so}^{\rm SV}=30$ MeV) value
for the spin-orbit strength.
\begin{table}[bh]
\begin{center}
\caption{$^{18}$F+$n$ single-particle energies (in MeV) in the CWS potential
and in the fitted SV potential with one central term.}
\label{spf18}
\begin{tabular}{ccccc}
\hline\hline
$i=\{n,l,j\}$&$\epsilon_i$(CWS)&\multicolumn{3}{c}{$\epsilon_i$(SV)}\\
\cline{3-5}
& & $V^{\rm SV}_{\rm so}=28$ MeV
&& $V^{\rm SV}_{\rm so}=30$ MeV\\
\hline\hline
$0s_{1/2}$&$-38.926$& $-38.119$ && $-38.119$\\
$0p_{3/2}$&$-23.998$& $-23.568$ && $-23.611$\\
$0p_{1/2}$&$-22.067$& $-21.729$ && $-21.640$\\
$0d_{5/2}$&$-8.985$& $-8.962$ && $-9.049$\\
$1s_{1/2}$&$-7.697$& $-7.699$ && $-7.699$\\
$0d_{3/2}$&$-5.779$& $-5.901$ && $-5.770$\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
One can see that, with the larger spin-orbit strength, the SV energies
are pretty close to the CWS energies. The differences are largest for the
deepest orbits. Similar behaviors were found for the other two residual nuclei.
In Table~\ref{core13} we present the calculated single-particle energies
for $^{13}$N+$n$, in which the d$_{3/2}$ orbit is very close to the threshold.
We can conclude that for light nuclei the one-term SV potential is
a good phenomenological form, which reproduces the spectra obtained
with the conventional WS potentials, although the shape of its derivative is
somewhat different from that of the CWS potential.
\begin{table}
\begin{center}
\caption{$^{13}$N+$n$ single-particle energies (in MeV) in the CWS potential
and in the corresponding SV potential with one central term and
$V^{\rm SV}_{\rm so}=30$ MeV.}
\label{core13}
\begin{tabular}{ccc}
\hline\hline
$i=\{n,l,j\}$&$\epsilon_i$ (CWS)&$\epsilon_i$ (SV)\\
\hline\hline
$0s_{1/2}$&$-35.045$&$-36.746$ \\
$0p_{3/2}$&$-18.620$&$-20.368$ \\
$0p_{1/2}$&$-16.318$&$-17.958$ \\
$0d_{5/2}$&$-3.067$&$-4.247$ \\
$1s_{1/2}$&$-3.400$&$-3.400$ \\
$0d_{3/2}$&$-0.003$&$-0.548$ \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
The wave functions produced by the two potentials are most conveniently
compared through the neutron densities
\begin{equation}
\label{density}
\rho(r)=\sum_i v_i^2\left[\frac{u_i(r)}{r}\right]^2~,
\end{equation}
where $i=\{n_{i},l_{i},j_{i}\}$ runs over the occupied orbits, $u_i(r)$ denotes
the single-particle radial wave functions, and $v_i^2$ is the occupation
number. It is assumed that the lowest-lying orbits are fully occupied, i.e.,
$v_i^2=2j_i+1$. In Fig. \ref{denshape} we compare the neutron
densities calculated for the nucleus $^{18}$F in CWS and in SV potentials.
The difference between the two densities is largest at the peak of
the densities produced by the two deeply bound orbits, where the energies
are deeper in the CWS potential. In the surface region, where the CWS and
SV potentials do differ appreciably, the two densities do not differ
significantly. For $r>4$ fm, the two curves can hardly be distinguished
because the tail of the density is mostly determined by the
single-particle energies being close to the Fermi level, which are very similar
in the two potentials.
\begin{figure}[h]
\includegraphics*[scale=0.4, bb=0 40 750 550]{density.eps}
\caption{Radial shapes of the neutron densities for the nucleus $^{18}$F
in CWS and in SV potentials.}
\label{denshape}
\end{figure}
\section{The CWS potential imitated by the SS form}
The SS modification only matters for heavier systems, and we consider
$^{208}$Pb+$n$. First we show the effect of $a_s\not=1$ on a potential
whose SV parameters $\rho_0$, $\rho_1$ and $c$ were adjusted to the CWS
shape~\cite{[Ra11],[Da12]}. In Fig.~\ref{sswspb} we can see that $a_s>1$
smooths the SV potential in the region around $\rho_1$, where the SV
curve shows a bend, while $a_s<1$ sharpens the bend, and even
an extra minimum shows up. Such an extra minimum (a pocket) was needed
for the description of $\alpha$ decay from Ra isotopes in Ref.~\cite{[De13]}.
\begin{figure}[hbt]
\includegraphics*[scale=0.4, bb=0 40 750 550]{ssws.eps}
\caption{Radial shapes of the CWS and SS potentials with different $a_s$
values for $^{208}$Pb+$n$.}
\label{sswspb}
\end{figure}
To determine the SS form that approximates the CWS potential best, we should
fit all four parameters of the SS potential simultaneously.
We minimized the function
\begin{equation}
\label{Deltad1}
\Delta(\rho_0,\rho_1,a_s,c)
=\sum_{i=1}^m\left\{\left[f^{\rm SS}(r_i,c,\rho_0,\rho_1,a_s)
-f^{\rm CWS}(r_i)\right]^2
+\lambda\left[{f^{\rm SS}}'(r_i,c,\rho_0,\rho_1,a_s)
-{f^{\rm CWS}}'(r_i)\right]^2\right\}~,
\end{equation}
with $\lambda=25$ fm$^2$. The two potentials are shown in Fig.~\ref{fittedss}, and
the parameters are given in the caption.
The agreement is remarkable in spite of the SS potential having a minimum.
\begin{figure}[htb]
\includegraphics*[scale=0.4, bb=0 40 750 550]{ourpotfit.eps}
\caption{Best-fit SS shape to the CWS shape for $^{208}$Pb+$n$.
WS parameters: $r_0=1.27$ fm, $a=0.7$ fm. SS parameters:
$\rho_0=10.75$ fm, $\rho_1=8.94$ fm, $c=1.528$, $a_s=1.4$.}
\label{fittedss}
\end{figure}
\section{Pole trajectories in SFR potentials}
Having indicated some practical aspects of using SFR potentials in
nuclear problems, we now discuss the problem of pole trajectories.
We remind the reader that it is the pole trajectories, especially
in the region of broad resonances, that make the use of truncated
potentials dangerous. Pole trajectories can be labeled conveniently
by $n$, the number of nodes of the wave function, counted at the point of the
trajectory where the pole corresponds to a bound (or anti-bound) state. However, the trajectories can be
found more easily at the other extreme, where the potential strength is nearly
zero (at the ``starting point''). Here the states are resonances
with complex radial wave functions, whose real as well as imaginary
parts have infinite numbers of zeros. Orbits with low $n$ values are
important in nuclear structure calculations and in low-energy nucleon
scattering. In heavy-ion reactions larger $n$ values occur.
In the present work we restrict ourselves to the s-wave case. Analytical
results are available for the square-well potential in the work of
Nussenzweig \cite{[Nu59]} as was discussed by some of us recently
\cite{[Da12]}. Since, however, we are concerned with less special potentials,
which cannot be treated analytically, we re-consider approximate analytical
formulae for the starting points of the trajectories given in the literature.
We are interested in where these are valid and how they can be treated
numerically.
\subsection{Formulae for the starting points}
The $l=0$ states in the SFR potential
\begin{equation}
\label{newpot}
V(r)=V_0~\theta({\cal R}-r)[({\cal R}-r)^\sigma +\ldots]~
\end{equation}
are discussed by R. G. Newton in his book~\cite{[Ne82]} [see Eq.~(12.98)
on p.~361 there]. Here $\sigma>0$, $\theta(x)$ denotes the
Heaviside step function, and the square bracket contains a truncated
expansion in terms of ${\cal R}-r$. In Eq.~(12.102) on p. 362 Newton gives
the real and imaginary parts of the starting point $k_n=k_n^R -{\rm i} k_n^I$
of the trajectory of the $n$th pole of the $S$-matrix as follows:
\begin{equation}
\label{rek}
k_n^R= \frac{n\pi}{\cal R}+O(1)~,
\end{equation}
and
\begin{equation}
\label{imk}
k_n^I= \frac{\sigma+2}{2{\cal R}}\ln (n) +O(1)~.
\end{equation}
The starting point of the pole trajectory is in the fourth quadrant of the
$k$-plane, and, by definition, it belongs to $V_0=0$. Equations~(\ref{rek}),
(\ref{imk}) are especially useful for large $n$ values, where the $O(1)$
terms in the equations can be neglected, but it is interesting to see
how they are fulfilled for lower $n$.
Eq.~(\ref{rek}) depends linearly on $n$ with a slope
\begin{equation}
\label{exslope}
A_1=\frac{\pi}{\cal R}~.
\end{equation}
Regge pointed out \cite{[Re58]} that a relation similar to Eq.~(\ref{rek}) is
valid for the moduli of the starting wave number values:
\begin{equation}
\label{absk}
|k_n|= \frac{n\pi}{\cal R}+O(1)=A_1 n+O(1)~.
\end{equation}
\subsection{Test with Newton's potential}
For a potential of the form of (\ref{newpot}), the asymptotic expressions
(\ref{rek}),(\ref{imk}) and (\ref{absk}) offer convenient tests of our
numerical procedure for very large $n$ values.
Inaccuracies may come from approximating $V_0=0$ by a small finite value,
from truncation errors in the numerical integration of the differential
equation, and from rounding errors throughout the numerical calculations.
We reduced the rounding errors by using extended precision floating-point
arithmetics. We used Ixaru's method \cite{[Ix84]} for the numerical integration
of the radial equation, and we calculated the position of the pole of the
$S$-matrix using the computer code ANTI \cite{[anti]}.
We chose a potential of the form of Eq.~(\ref{newpot}) with $\sigma=1$:
\begin{equation}
\label{ournewpot}
V(r)=-V_0~\theta({\cal R}-r)({\cal R}-r)~,
\end{equation}
which is attractive if $V_0>0$, and chose $V_0=0.005$ MeV
and ${\cal R}=10$ fm.
We calculated the starting values $k_n$ for the $n=1,\ldots,98$ trajectories,
and fitted the $k_n^R$ values by a first order polynomial of $n$, i.e.,
\begin{equation}
\label{firstpol}
y(n)=a_0+a_1 n~.
\end{equation}
Since in Eq.~(\ref{rek}) we have an unknown $O(1)$ term (the actual value of
this is reflected by $a_0$),
we applied a lower cut value $n_s$ in our data and performed the fitting for a
number of $n\in \{n_s,n_s+1,\ldots,n_u\}$ with $n_u=98$ fixed and $n_s$ varied.
We can thus estimate the value of $a_1$ for each $n_s$ and compare it with
$A_1=\pi/{\cal R}=0.31416$ fm$^{-1}$ obtained from Eq.~(\ref{exslope}). In
Fig.~\ref{c1} the ordinate shows the deduced slope, with the horizontal line
$A_1=\pi/10$ fm$^{-1}$, to which the fitted values of $a_1$ should converge for
large $n_s$.
The dashed line connects the $a_1$ values resulting from the fit to
$k_{n_s}^R$.
It is seen that the estimate for the
range has 3 accurate digits even for $n_s=1$.
\begin{figure}[h]
\includegraphics*[scale=0.4, bb=0 40 750 550]{c1newton.eps}
\caption{Dependence of the slope of the fitted line on the lower cut value
of the node number $n_s$ for a potential in Eq.~(\ref{ournewpot})
with a range of ${\cal R}=10$ fm.}
\label{c1}
\end{figure}
To check the validity of Eq.~(\ref{absk}), we fitted a linear function to
the moduli of the starting wave number values calculated, and
followed a procedure similar to that for $k^R_n$.
The dotted line in Fig.~\ref{c1} shows the slopes obtained as a function of
$n_s$. Now the fitted slope $a_1$ approaches the horizontal line from below
and yields an estimate of similar accuracy.
The results of these tests show that the small final value of $V_0$
we use provides a reasonable estimate for the starting value of the
pole trajectory.
To check Eq.~(\ref{imk}) for the imaginary part of $k_n$, we introduce
the variable $x=\ln (n)$ and fit $k_n^I=a_1x+a_0$
for the same sets of $n=n_s,\ldots,98$ points, with $n_s=1,\ldots,97$.
From the slope $a_1$ obtained, we can calculate $\sigma=2a_1{\cal R}-2$
as a function of $n_s$ using the actual value of ${\cal R}$.
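In other words, Eq.~(\ref{imk}) predicts the slope $a_1=(\sigma+2)/(2{\cal R})$, so that for the potential (\ref{ournewpot}), with $\sigma=1$ and ${\cal R}=10$ fm, the fitted slope should approach
\[
a_1=\frac{\sigma+2}{2{\cal R}}=0.15~{\rm fm}^{-1}.
\]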
Figure~\ref{sigmans} shows that this $\sigma$ converges
to 1 as it should, but rather slowly.
\begin{figure}[h]
\includegraphics*[scale=0.4, bb=0 40 750 550]{newtonsigma.eps}
\caption{Convergence of the fitted $\sigma$ to the exact value (dotted line)
obtained by using the lower cut value of the node number $n_s$ for a potential
of Eq.~(\ref{ournewpot}).}
\label{sigmans}
\end{figure}
\subsection{Cut--off Woods-Saxon form}
The trajectories of the $S$-matrix poles were calculated for two SFR potentials
for a heavy nucleus $^{208}$Pb in Refs.~\cite{[Ra11],[Da12]}.
Certain features found in Ref.~\cite{[Da12]} indicate that the
relationship~(\ref{rek}) might hold for the CWS and even for the
SV potentials.
The asymptotic behavior of the CWS potential for $r<R_{\rm max}$ may be
approximated by a Taylor series around $r={\cal R}=R_{\rm max}$ cut after the
first term:
\begin{equation}
\label{taylorWS}
-V_0f^{\rm CWS}(r,R,a,R_{\rm max})\approx {D}+(R_{\rm max}-r)\frac{D}{a}~,
\end{equation}
where $D=-V_0e^{(R-R_{\rm max})/a}$.
The second term corresponds to a $\sigma=1$ version of Newton's potential
studied before, but now we have an additional first term, which does not depend
on $r$. Thus even this approximation to the CWS potential does not have exactly the
form of Eq.~(\ref{newpot}). However, with the usual choice of
$R_{\rm max}\ge R+6 a$, the constant satisfies $|D|\le 0.0025\times V_0$,
so the first term is not very large.
Since for a heavy nucleus, a crucial difference has been observed between
the pole trajectories of the continuous SV potential and the discontinuous
CWS potential~\cite{[Ra11],[Da12]}, here we extend these calculations
to light nuclei and to the SS potential.
For $^{208}$Pb, it has been found~\cite{[Da12]} that the starting points
of the $l=0$ resonant trajectories follow Newton's rule in Eq.~(\ref{rek})
approximately if the $n$ value is not very small even though the asymptotic
behavior of the potential~(\ref{taylorWS}) differs slightly from
Eq.~(\ref{newpot}). Figure~\ref{wstraj} shows the trajectories of a few poles
of the $^{18}$F+$n$ system in the CWS well with parameters $r_0=1.25$ fm,
$a=0.65$ fm, and $R_{\rm max}=15$ fm. The results are similar to those for
$^{208}$Pb, even in the detail that there is a loop in the $n=1$ trajectory but nowhere
else. Figure~\ref{f18cws} shows the straight line fitted to $k_n^R$ for
node numbers $n=1,\ldots,8$. From its slope Eq.~(\ref{exslope}) predicts
${\cal R}=14.67$ fm, which agrees reasonably well with the cutoff radius
used, $R_{\rm max}=15$ fm ($|D|=1.4 \times 10^{-8}V_0$).
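The quoted magnitude of the constant term can be checked directly from $D=-V_0e^{(R-R_{\rm max})/a}$;
the short snippet below assumes the usual radius convention $R=r_0A_T^{1/3}$ with $A_T=18$ for the
$^{18}$F core, which is an assumption on our part.
\begin{verbatim}
import numpy as np

# Rough check of |D|/V0 = exp((R - R_max)/a) for the quoted CWS parameters,
# assuming R = r0 * A**(1/3) with A = 18 (assumed radius convention).
r0, a, A = 1.25, 0.65, 18
R = r0 * A ** (1.0 / 3.0)                    # about 3.28 fm
for R_max in (15.0, 10.0):                   # 10 fm is used further below
    print(R_max, np.exp((R - R_max) / a))    # ~1.5e-8 and ~3.2e-5
\end{verbatim}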
\begin{figure}[h]
\includegraphics*[scale=0.4, bb=0 30 750 550]{n1-8l0wsf18ktraj.eps}
\caption{Pole trajectories for a CWS potential with $R_{\rm max}=15$ fm for
$l=0$ and $n=1,\ldots,8$ for $^{18}$F. The full circles denote the starting
points of the trajectories with $V_0=0.005$ MeV.}
\label{wstraj}
\end{figure}
\begin{figure}[h]
\includegraphics*[scale=0.4, bb=0 40 750 550]{dkdnf18ws.eps}
\caption{The line is the linear function fitted to the $k_n^R$ values (dots)
of the pole trajectories with node numbers $n=1,\ldots,8$ for a CWS potential
for $^{18}$F. These values correspond to the abscissae of the full circles in
Fig. \ref{wstraj}. The fit results in a range ${\cal R}=14.67$ fm.}
\label{f18cws}
\end{figure}
\begin{figure}[h]
\includegraphics*[scale=0.4, bb=0 40 750 550]{c1wsr10a18.eps}
\caption{Dependence of the slope of the fitted line on the lower cut value
of the node number $n_s$ ($n_s=1,\dots,n_u-1$, and $n_u=48$) for a CWS potential with $R_{\rm max}=10$ fm.}
\label{c1ws}
\end{figure}
We studied the behavior of the trajectories further by setting the cutoff
radius shorter, $R_{\rm max}=10$ fm ($|D|=3.2\times 10^{-5}V_0$).
In Fig.~\ref{c1ws} we examine the validity of Eqs.~(\ref{rek}) and (\ref{absk})
for the CWS potential by a test similar to that shown in Fig.~\ref{c1}.
Now the two curves do not converge smoothly into a constant.
The agreement of the slope $a_1$ with the exact value
is reduced to two decimal digits, and, as a function of $n_s$, it oscillates around
$\pi/R_{\rm max}$. Thus we can still state that Eqs.~(\ref{rek}) and
(\ref{absk}) are approximately satisfied by a CWS potential as well.
The relationship for the imaginary part, Eq.~(\ref{imk}), however, is not
satisfied at all. There is no region where the deduced
$\sigma$ would be more or less constant.
It appears that Eq.~(\ref{taylorWS}) is too crude an approximation for Eq.~(\ref{imk})
to be fulfilled.
\begin{figure}[b]
\includegraphics*[scale=0.4, bb=0 30 750 550]{n1-5l0f18svktraj.eps}
\caption{Pole trajectories for a SV potential with $\rho_0=5.3$ fm for $l=0$
and $n=1,\ldots,5$ for $^{18}$F. The full circles denote the $k_n$ points
calculated with $V_0=0.005$ MeV.}
\label{svtraj}
\end{figure}
\subsection{Pole trajectories in SV and in SS potentials}
The pole trajectories for the SV potential behave completely regularly, with
no loops or ripples (Fig.~\ref{svtraj}), in contrast to those of the CWS potential.
The starting values $k_n^R$ can be fitted very well by a straight line as
seen in Fig.~\ref{sv18slope}. From its slope and Eq.~(\ref{rek}) one
can derive ${\cal R}=5.17$ fm, which is just a bit less than the value of the
range parameter $\rho_0=5.3$ fm. Similar behavior was found before for
$^{208}$Pb in Ref.~\cite{[Da12]}. We conclude that the relation in
Eq.~(\ref{rek}) is fulfilled approximately for SV and SS potentials in spite
of their asymptotic behavior being different from Eq.~(\ref{newpot}). Thus
Eq.~(\ref{rek}) is still useful for estimating the pole positions.
Recall that for the SV and SS potentials the Taylor expansion at
$\rho_0$ is not equal to the function itself, because all derivatives vanish at that point.
\begin{figure}[h]
\includegraphics*[scale=0.4, bb=0 30 750 550]{18fsvslope.eps}
\caption{Fit to the $k_n^R$ values of the full circles in Fig.~\ref{svtraj},
with node numbers $n=1,\ldots,5$ for a single-term SV potential for $^{18}$F.
The range deduced from the slope $a_1$ is ${\cal R}=5.17$ fm.}
\label{sv18slope}
\end{figure}
For two-term SV potentials ($c\ne0$), the starting values of the pole
trajectories were studied in Ref.~\cite{[Ra11]} for $^{16}$O and
for $^{208}$Pb \footnote{In Ref.~\cite{[Ra11]} it was conjectured that, for
low node numbers, the difference $k_n^R-k_{n-1}^R$ is determined
by ${1\over 2}(\rho_0+\rho_1)$. Later it turned out that this result was
just an accident. The starting point depends only on
$\rho_0$, where the potential vanishes.}.
Now these studies may be extended to the SS potentials of various $a_s$.
If the SS potential obeyed Newton's relation (\ref{newpot}),
the starting regions should be independent of $a_s$ and should coincide
with the SV trajectory.
Since, however, Eq.~(\ref{newpot}) does not hold even for the SV potential,
we expect a dependence.
We consider a heavy core, where the derivative term is important: the case of
$^{208}$Pb+$n$. We choose $l=0$, analyze the SV potential that
approximates the CWS potential of parameters $R=7.525$ fm and $a=0.7$ fm
($\rho_0=10.963$ fm, $\rho_1=8.328$ fm, and $c=0.997$),
and repeat the calculation for SS potentials of $a_s=0.6$ and 1.6
(Fig.~\ref{sstraj}). One can see that the three curves belonging to the same
$n$ do not coincide, nor do their starting points, which depend slightly
on $a_s$. This weak dependence may be
attributed to departures from Eqs.~(\ref{rek}) and (\ref{imk}) for
low $n$.
\begin{figure}[h]
\includegraphics*[scale=0.4, bb=0 30 750 550]{n1-2as06-16ktraj.eps}
\caption{Pole trajectories in SS potentials with different $a_s$ values. The
value $a_s=1.0$ corresponds to the SV potential. The full circles denote
the $k_n$ values calculated with $V_0=0.005$ MeV.}
\label{sstraj}
\end{figure}
We calculated the starting $k_n$ values for the best-fit SS shape to the same
CWS shape for $^{208}$Pb+$n$.
(WS parameters: $r_0=1.27$ fm, $a=0.7$ fm. SS parameters:
$\rho_0=10.75$ fm, $\rho_1=8.94$ fm, $c=1.528$, $a_s=1.4$).
Although we know that the SS potential does not follow Newton's
form [Eq.~(\ref{newpot})], we can still fit our $k_n$ values by first-order
polynomials of the variable $n$ and $\ln(n)$, respectively, to check
the validity of Eqs.~(\ref{rek}), (\ref{absk}) and (\ref{imk}).
Equations~(\ref{rek}) and (\ref{absk}) seem to be valid approximately in the
$n$-range shown in Fig.~\ref{ssa1} for $a_s\ge 1$. For $a_s=0.6$, which
produced a pocket in Fig.~\ref{sswspb}, the relation breaks down beyond
$n_s\approx 12$.
\begin{figure}[h]
\includegraphics*[scale=0.4, bb=0 30 750 550]{ssa1pb.eps}
\caption{Slope $a_1$ of the straight line fitted to the starting $k_n$
values ($n_s=1,\dots,n_u-1$, and $~n_u=20$) for SS potentials of different $a_s$, with $a_s=1.0$ belonging
to the SV potential; $A_1=\pi/11$ fm$^{-1}$. Slopes belonging to $a_s=1.0$ and $a_s=1.6$ are hardly distinguishable in the given scale.}
\label{ssa1}
\end{figure}
Test calculations show that the $k_n^I$ values weakly depend on $a_s$, and the
$\sigma$, defined by Eq.~(\ref{imk}), does not seem to converge. That is
not surprising as neither the SV nor the SS potential satisfies
Eq.~(\ref{newpot}). Just as for the SV potential, the $k_n^I$ values show
an almost linear slow increase with $n$. This offers practical recipes
for finding suitable starting values in searches for $S$-matrix poles.
\section{Conclusion}
The conventional nuclear potentials
do not tend to zero at finite distances, but are set to zero artificially.
Consequently, they have unpleasant mathematical and numerical properties,
which cause appreciable errors in broad resonances. Their SFR substitutes
have pleasant mathematical and
numerical properties, but their tails are unphysical. Here we
examined the properties of a family of SFR potentials related to
the WS potential, with an emphasis on the effect of the tail and on
the pole trajectories belonging to broad resonances.
We concentrated on the SV potential, which consists of a term
$\exp[r^2/(r^2-\rho_0^2)]$
($r<\rho_0$) and a term like the derivative
of that but with a different parameter $\rho_1$ ($\le\rho_0$).
We constructed parameters that fit
the real parts of the global Perey--Perey and Becchetti--Greenlees
optical potentials best.
The best-fit range $\rho_0$ of the SV potential is found to scale by
$A_T^{1/3}$ for both geometries, and
the difference of the two ranges, $\rho_0-\rho_1$, is positive and
amounts to three to four times the diffuseness of the WS potential.
The admixture of the derivative term tends to zero with decreasing mass number.
In fact, it was found that, for light nuclei, the phenomenological neutron
potential can be approximated reasonably well by a single-term SV potential,
and the single-particle energies and densities calculated in the cut-off
WS potential are also reproduced. In this case the form factor of the potential
has a single parameter, its range $\rho_0$. The tail of the density is pretty
reasonable since it is determined primarily by the energies, and those are
reproduced well by the SV potential.
The new potential form (SS) introduced by Sahu and Sahu
\cite{[SS12]} can be considered as a generalization of the SV form.
The extra diffuseness parameter may smooth or roughen the potential
in the region around $\rho_1$ depending on whether $a_s>1$ or $a_s<1$.
The range of the SFR potentials determines approximately the starting points
of the pole trajectories belonging to potential strength zero.
The problem of the $S$-matrix poles becomes ill-defined in a potential with
strength $V_0\approx 0$, thus it is important to see whether the computer code
is able to solve the problem for small $V_0$. A check is provided
by potentials of the form of
$-V_0(R-r)$ ($r\le R$),
for which these starting points are approximately determined apart from
an additive constant. This check has shown that our calculations are
remarkably accurate.
It is more surprising that even though the CWS and the SV potentials are very
different in the neighborhood of the cutoff, the pole trajectories of
the SV potentials bear out some of the properties of those of
the
$-V_0(R-r)$
potentials, especially for large node numbers.
For some low values of the node number, the CWS trajectory shows strange
shapes, while those of the SS and SV potentials remain completely regular.
The pole trajectories of the SS potential depend weakly on the extra
diffuseness parameter.
In conclusion, the present results are reassuring concerning the use of the
SFR potentials. The starting points of the pole trajectories seem to have
some approximate universality properties, which can be used to estimate
the values of these starting points.
\section*{Acknowledgment}
This work was partially supported by the ENIAC CSI No. 120209 project and by the T\'AMOP-4.2.2.C-11/1/KONV-2012-0001 project.
The latter project has been supported by the European Union, co-financed by
the European Social Fund.
\section{Introduction}
\begin{table*}
\begin{center}
\begin{minipage}{9cm}
\caption{Orbital and Physical parameters of the Typhon-Echidna binary system.}
\end{minipage}
\begin{tabular}{c c c c c c c}
\hline
Body &Orbits &$a_0$ &$e_0$ &$i_0$ &Mass$^{3}$ (kg) &Radius$^{2}$ (km) \\
& & & & & & \\
\hline
Typhon $^{1}$&Sun &38.19520 au &0.54115 &2.42776$^{\circ}$ &$8.1 \times 10^{17}$ &76 \\
Echidna $^{2}$&Typhon &1628 km &0.526 &37.9$^{\circ}$ &$1.4 \times 10^{17}$ &42 \\
\hline
\multicolumn{7}{l}{$^{(1)}$Orbital elements of Typhon obtained through HORIZONS Web-Interface at 2453724.5 JDTBT}\\
\multicolumn{7}{l}{$^{(2)}$ Obtained from \cite{grundy}}\\
\multicolumn{7}{l}{$^{(3)}$ Calculated assuming spherical bodies and a density of $0.44g/cm^3$. This density was estimated by}\\
\multicolumn{7}{l}{\cite{grundy}, considering that the total mass of the system was $9.49\times 10^{17}$ kg as well as the radii of the bodies.}\\
\label{tab_data}
\end{tabular}
\end{center}
\end{table*}
\begin{table}
\begin{center}
\caption{Tidal disruption radius $(r_{td})$ and the registered close encounters of Typhon and its clones along the numerical integration ($<200$ Myr).}
\begin{tabular}{c c c c c}
\hline
Planet &1 $r_{td}$ &1 $r_{td}$ &Significant &Extreme \\
&$(\times 10^5$ km$)$ &(planetary radii) &Encounters &Encounters \\
& & &$d \leq 10\hspace{0.1cm}r_{td}$ &$d \leq 3\hspace{0.1cm}r_{td}$ \\
\hline
Venus & $4.04$ &$67$ &36 &5 \\
Earth & $4.34$ &$68$ &46 &3 \\
Mars & $2.09$ &$62$ &4 &0 \\
Jupiter & $29.6$ &$41$ &1889 &317 \\
Saturn & $19.73$ &$33$ &640 &101 \\
Uranus & $10.47$ &$40$ &912 &120 \\
Neptune & $11.21$ &$44$ &589 &81 \\
\hline
\multicolumn{5}{l}{d - minimum distance of the close encounter}\\
\end{tabular}
\label{tab_enc}
\end{center}
\end{table}
There is no consensus on the definition of Centaurs (as discussed in \citealt{araujo2016}), but it is commonly accepted that they have a perihelion between the orbits of the giant planets. According to Johnston, W.R.\footnote{http://www.johnstonsarchive.net/astro/tnos.html}, Centaurs are bodies that cross the orbit of Neptune (with a perihelion distance of less than $30$ au), while TNOs are bodies with semimajor axes greater than the semimajor axis of Neptune ($a>30$ au). Based on these definitions, we consider Centaurs as bodies in orbits with perihelion distances of less than $30$ au and with $a\leqslant30$ au, and the TNOs as bodies in orbits with perihelion distances greater than $30$ au and with $a>30$ au. The intersection between TNOs and Centaurs, i.e., bodies with $a>30$ au and with perihelion distances less than $30$ au, are referred to herein as TNO-Centaurs. The resonant bodies with $a>30$ au and $q<30$ au are not included in these definitions since they generally present stable orbits and do not currently suffer close encounters with Neptune or another planet.
There are studies showing that trans-Neptunian objects (TNOs) can evolve to become
Centaurs \citep{Lev1997,tiscareno,lykawka,disisto2009,brasser2012}. TNOs are also among the sources of near-Earth
objects (NEOs) \citep{Mor1997,Lev1997,tiscareno,disisto2007,stell2014}. According to \cite{morbidelli2002},
$6\pm4~\%$ of NEOs come from the trans-Neptunian region. \citet{Gal2016} found that TNOs, at least those whose orbits have perihelia
smaller than $34$ au, can become Centaurs and may evolve within the inner Solar System. According to \cite{napier2015},
approximately one in ten Centaurs in (2060) Chiron-like orbits enter the Earth-crossing region.
The evolution of the trajectories of small bodies suffering close encounters with the giant planets generates orbital instabilities, which characterize chaotic motion \citep{araujo2016, Lev1997, Mor1997, Hor2004a, Hor2004b, Gal2016}. Recently, a Centaur, (10199) Chariklo, was found to have a well-defined ring system \citep{braga}. Despite the numerous close encounters with giant planets, the ring system has proven to be highly stable \citep{araujo2016} along Chariklo's lifetime as a Centaur, which is approximately 10 Myr \citep{Hor2004a}.
According to \cite{fraser}, planetesimals formed near the Kuiper Belt are expected to form as binaries. Trans-Neptunian binaries (TNBs) are estimated to be $\sim20$\% of all TNOs \citep{Nol2008}. Today, 81 TNBs are known\footnote{http://www.johnstonsarchive.net/astro/astmoontable.html}, with cases of multiple systems, e.g., Pluto, Haumea and 1999 TC36. Knowing that TNOs evolve to become Centaurs or even NEOs, the question that arises, and that we aim to answer, is how the evolution of a binary TNO would proceed when it enters the outer or even the inner planetary region.
Currently there are only two known binary TNO-Centaurs: (42355) Typhon-Echidna and (65489) Ceto-Phorcys. The current orbits of the other 79 TNBs behave as typical TNOs, i.e., with no crossing of planetary orbits. Typhon was the first binary TNO-Centaur discovered \citep{Nol2006a}; its secondary is named Echidna. Typhon has heliocentric orbital elements $a = 37.9$ au, $e = 0.538$, and $i = 2.43^\circ$ \citep{grundy} and a perihelion distance of $q=17.5$ au. Ceto's binarity was discovered in the same year as Typhon's \citep{Nol2006b}; its secondary is named Phorcys. Ceto has heliocentric orbital elements $a = 102$ au, $e = 0.82$, and $i = 22^\circ$ \citep{grundy_ceto} and a perihelion distance of $q=18.4$ au. Note that Ceto has a small portion of its orbit inside the orbit of Neptune, since its semi-major axis is more than three times larger than that of Neptune. Therefore, due to the size of its semi-major axis and due to its high orbital inclination, Ceto is expected to experience a lower frequency of close encounters with the giant planets \citep{araujo2017} than is Typhon.
The Typhon-Echidna system was then chosen since, in the current work, we are interested in studying a binary TNO-Centaur that evolves mostly within the planetary region. First, the heliocentric orbital evolution of Typhon was explored. Through numerical integrations of the equations of the N-body gravitational problem, close encounters of clones of Typhon with the planets were recorded in the model integrations. We then analysed the effects of those registered encounters on the binary system.
In the next section, the study of the orbital evolution of Typhon is presented, followed by the exploration of the effects of close encounters on the binary Typhon-Echidna. In Section \ref{sec_terrestre}, the possibility of Typhon entering the terrestrial planetary region as a binary system is discussed. A brief study of the past evolution of Typhon-Echidna is shown in Section \ref{sec_past}. Our final comments are presented in the last section.
\begin{figure}
\begin{center}
\includegraphics[scale=0.52]{histo.png}
\caption{Loss of bodies as a function of time for Typhon and its clones.}
\label{fig_lifetime}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\subfigure[]{\includegraphics[scale=0.65]{fig_rtd1.png}}
\subfigure[]{\includegraphics[scale=0.65]{fig_rtd2.png}}
\caption{Number of close encounters within each range of the tidal disruption radius of each planet. a) Giant planets. b) Terrestrial planets.
The encounters were computed along the numerical integration ($<200$ Myr).}
\label{fig_rtd}
\end{center}
\end{figure*}
\section{Numerical Simulations}
\label{sec_num_simula}
The numerical simulations were performed in two steps. First, we simulated the orbits of a set of clones of Typhon.
In this step, we aimed to analyse the orbital evolution of the clones as they evolve from their initial orbit.
We also registered the close encounters of the clones with the giant or with the terrestrial planets.
The next step consisted of simulating a selection of the most significant close encounters
(as defined in Section \ref{sec_orb_evol}) considering that, at this time, a binary system is experiencing the encounter.
We then analysed the effects of these encounters in terms of their disruptions of the binary.
\subsection{Orbital Evolution}
\label{sec_orb_evol}
The orbital and physical data of the Typhon-Echidna binary are presented in Table \ref{tab_data}.
Assuming those parameters, we simulated a set of $500$ clones of Typhon. The clones are massless bodies with small deviations in their orbits. They were generated as in \cite{Hor2004a}, i.e., by assuming an interval of variation of $a=a_0\pm 0.005$ au for the semi-major axis, $e=e_0\pm 0.003$ for the eccentricity and $i=i_0\pm 0.01^{\circ}$ for the orbital inclination, following a Lorentzian distribution centred at the initial osculating elements ($a_0$, $e_0$ and $i_0$). The angular elements $\omega=158.98^{\circ}$, $\Omega=351.99^{\circ}$ and $f=358.08^{\circ}$ (argument of perihelion, longitude of the ascending node and true-anomaly, respectively) are the same for all the clones and were obtained for Typhon through the HORIZONS Web-Interface at 2453724.5 JDTBT (in Julian Date, Barycentric Dynamical Time).
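As an illustration of the clone generation, the deviates can be drawn as in the minimal sketch below. It is only a sketch: the quoted half-widths are used here as the scale parameters of the Lorentzian, and no truncation to the quoted intervals is applied, both of which are assumptions on our part.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def make_clones(n_clones=500, a0=38.19520, e0=0.54115, i0=2.42776):
    # Lorentzian (Cauchy) deviates centred on the osculating elements;
    # the half-widths below are the quoted variation intervals.
    a = a0 + 0.005 * rng.standard_cauchy(n_clones)     # au
    e = e0 + 0.003 * rng.standard_cauchy(n_clones)
    inc = i0 + 0.01 * rng.standard_cauchy(n_clones)    # degrees
    return a, e, inc
\end{verbatim}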
A simulation of a system comprising the Sun, the $500$ small bodies and all the planets of the Solar System, excluding Mercury and including the dwarf planet Pluto, was considered. The orbits of the planets were also taken from the HORIZONS Web-Interface for the same Epoch.
The numerical integration was performed using the adaptive time-step hybrid sympletic/Bulirsch-Stoer algorithm from \textsc{Mercury} \citep{chambers} for a time span of $200$ Myr, time step of 2 days and with the output interval of the data being 2000 years. Throughout the integration, the clones did not interact with each other, but they could collide with the massive bodies or be ejected.
A clone was considered ejected if its orbital radius reached $110$ au.
This value was adopted taking into account that if a clone reaches the distance of $110$ au and its orbital eccentricity is lower than
$0.6$, then it is already a TNO ($q>30$ au and $a>30$ au), while clones reaching $110$ au with higher eccentricity $(e>0.6)$ spend only a small fraction of their orbital period within the planetary region.
The collisions were defined by the physical radii of the planets. The collisions with the Sun were defined by the distance of $0.009$ au. This value corresponds to approximately two times the radius of the Sun.
Occasionally there could be some clones approaching the Sun at perihelion distances smaller than $\approx 0.1$ au. The temporal evolution of those clones may gradually suffer from orbital inaccuracies when using the hybrid algorithm with the assumed time step. However, as will be seen in Section \ref{sec_terrestre}, only a few clones may have come so close to the Sun, and thus such presumed inaccuracies in their orbital evolution do not compromise our statistical analyses.
As a result of this step, we found that $100\%$ of the clones were lost along the timespan of the integration. Only $4$ clones were lost via collisions, including $2$ collisions with Uranus, $1$ collision with Jupiter and $1$ collision with the Sun. The other clones were ejected. The clone that survived the longest was ejected after $163$ Myr. The histogram in Fig. \ref{fig_lifetime} shows the loss of clones as a function of time. Approximately $80\%$ of the clones do not survive more than $20$ Myr. In fact, we found that $50\%$ of the clones survived only slightly longer than $5$ Myr. The calculated median gives the estimated lifetime of Typhon as $5.2$ Myr. This value is shorter than the mean value of approximately $10$ Myr found by \cite{tiscareno} for the lifetime of Centaurs, which was determined considering a sample of 53 objects with perihelion distances within the orbit of Neptune, including Typhon. However, the authors emphasize that they found a wide variety of lifetimes, ranging from less than $1$ Myr to more than $100$ Myr.
All close encounters experienced by the clones with any of the planets within the
distance of $10$ $r_{td}$ and along the numerical integration ($<200$ Myr) were registered. The tidal disruption radius $(r_{td})$ provides an approximate distance for which a given binary is expected to be disrupted due to the effects of a nearby gravitational encounter with a more massive body \citep{philpott}. The expression for this is given as follows:
\begin{equation}
r_{td} \approx a_B\left(\frac{3M_p}{M_1+M_2}\right)^{1/3}
\label{eq_rtd}
\end{equation}
where $M_p$ is the mass of the encountered planet, $M_1+M_2$ is the total mass of the binary and $a_B$ is the separation of the binary.
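As a quick consistency check of Eq.~(\ref{eq_rtd}), the value quoted below in Table \ref{tab_enc} for Jupiter can be reproduced with the parameters of Table \ref{tab_data}; the Jupiter mass of $1.898\times10^{27}$ kg is assumed here.
\begin{verbatim}
# Tidal disruption radius of the Typhon-Echidna binary for an encounter
# with Jupiter (masses in kg, separation in km).
a_B = 1628.0                       # binary separation
M_binary = 8.1e17 + 1.4e17         # Typhon + Echidna
M_jupiter = 1.898e27               # assumed Jupiter mass
r_td = a_B * (3.0 * M_jupiter / M_binary) ** (1.0 / 3.0)
print(r_td / 1e5)                  # ~29.6, i.e. 29.6 x 10^5 km as in Table 2
\end{verbatim}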
In Table \ref{tab_enc}, we present the $r_{td}$ calculated for the Typhon-Echidna system and the number of registered close encounters within $10~r_{td}$ and $3~r_{td}$ observed by this system when each of the planets is considered. Hereafter, the close encounters within $10~r_{td}$ are called significant encounters. The closest encounters experienced within $3~r_{td}$ are the so-called extreme encounters. Fig. \ref{fig_rtd} shows the distribution of the registered close encounters by planets and for a range of values of $r_{td}$.
From Fig. \ref{fig_rtd} and Table \ref{tab_enc} we see that, among the giant planets, the most encountered planet is Jupiter, followed by Uranus, Saturn and Neptune. We see that there were also registered significant and extreme encounters of Typhon with the terrestrial planets. This fact will be further discussed in Section \ref{sec_terrestre}. At this point, we seek to analyse how the encounters with the giant planets perturb the binary system. The discussion and results of this analysis are presented in the following section.
\subsection{Binary Evolution}
\label{sec_bin_evol}
\begin{figure}
\begin{center}
\subfigure{\includegraphics[scale=0.51]{1rtd.png}}
\subfigure{\includegraphics[scale=0.51]{2rtd.png}}
\subfigure{\includegraphics[scale=0.51]{3rtd.png}}
\caption{Percentage of binaries lost (relative to the total of $12960$ Typhon-Echidna-like binary systems),
as a function of the percentage of
the extreme encounters (relative to the total of 619 extreme encounters with the giant planets) performed
within: a) $1~r_{td}$, b) $2~r_{td}$ and c) $3~r_{td}$.}
\label{fig_binaries_lost}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\subfigure{\includegraphics[scale=0.5]{entry.png}}
\subfigure{\includegraphics[scale=0.5]{stay.png}}
\subfigure{\includegraphics[scale=0.5]{time_surv.png}}
\caption{a) Time needed for the 42 clones to reach the terrestrial region from the current orbit of Typhon.
b) Time spent between the first and the last times that the distance between the Sun and each one of the clones was within the limit of $2$ au.
c) Survival time of the clones after leaving the terrestrial region, i.e., crossing orbits.}
\label{fig_clones_terrest}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\subfigure[]{\includegraphics[scale=0.4]{489.png}}
\subfigure[]{\includegraphics[scale=0.4]{489zoom.png}}
\caption{Orbital evolution and the extreme encounters of a Typhon-Echidna system that reached the region of the terrestrial planets.
The colour-coded label indicates the percentage of binary systems lost due to close encounters relative to the total set of $12960$ binaries.
Triangles show encounters at $2 < r_{td} \leq 3$. Squares show encounters within $1< r_{td} \leq 2$, and circles show encounters
with $r_{td} \leq 1$.
The grey line indicates the orbital radius of the system on its heliocentric orbit.
a) For the entire interval of time during which the encounters within $3\,r_{td}$ were registered.
b) Zoom showing mainly the extreme encounters with the terrestrial planets.}
\label{fig_evolution}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.4]{353zoom.png}
\caption{Orbital evolution and the extreme encounters of a Typhon-Echidna system that reached the region of the terrestrial planets
for the entire interval of time in which the encounters within $3\,r_{td}$ were registered.
The colour-coded label indicates the percentage of binary systems lost due to the close encounters relative to the total set of $12960$ binaries.
Triangles show encounters at $2< r_{td} \leq 3$. Squares show encounters within $1< r_{td} \leq 2$, and circles show encounters
with $r_{td} \leq 1$.
The grey line indicates the orbital radius of the system on its heliocentric orbit.}
\label{fig_evolution2}
\end{center}
\end{figure*}
\cite{araujo2016} studied the problem of the rings of Chariklo perturbed by close gravitational encounters with the giant planets. They found that the encounters capable of producing any significant effects on the rings occurred at distances smaller than $2~r_{td}$. Based on their results, we selected those encounters that occurred within $3\,r_{td}$ among all the significant close encounters registered throughout the numerical integrations, i.e., we selected the extreme close encounters. The number of extreme encounters of the clones with the planets is presented in Table \ref{tab_enc}, column $4$, from which we see that we have a total of $627$ extreme close encounters to analyse. Each one of these $627$ extreme encounters was individually simulated considering a system comprising the Sun, the planet, the clone of Typhon involved in the encounter and the secondary component Echidna.
The initial conditions of the planet and clone were obtained from the previous numerical integrations (Section \ref{sec_orb_evol}). In these simulations, every time that a significant close encounter was registered, we recorded the components of the heliocentric orbital positions and of the orbital velocities of the clone and planet involved in the encounter. These data were recorded at the moment prior to the nearest crossing of the encounter at $10~r_{td}$. Thus, here, we analysed the effects on the binary system due to the encounters with the minimum distance of the encounter being within $3~r_{td}$ (extreme encounters). However, all the simulations of the encounters started at relative distances on the edge of $10~r_{td}$ of each planet.
Echidna was distributed around Typhon, which was treated as a point mass, with the following initial conditions: the semi-major axis was $a=1628$ km, eccentricity was $e=0.526$, orbital inclination was $0^{\circ} \leq i \leq 180^{\circ}$ with steps of $45^{\circ}$, longitude of the ascending node was at $\Omega=0^{\circ}$ and $\Omega=90^{\circ}$, argument of perihelion was $0^{\circ} \leq \omega \leq 360^{\circ}$ with steps of $10^{\circ}$ and true anomaly was $0^{\circ} \leq f \leq 360^{\circ}$ with steps of $10^{\circ}$. The orbital inclination of Echidna was varied to simulate different positions of the equator of Typhon at the moment of the close encounter.
By creating a 3D-cloud of Echidnas around Typhon, we considered a large range of possibilities for the geometry of the encounter (via the position of Typhon-Echidna binary system relative to the encountered planet). The combination of these values resulted in a cloud with $12960$ Typhon-Echidna-like systems performing each of the registered extreme close encounters. This approach allows us to statistically analyse the effects of the close encounters while taking into account different geometries of the binary system during the encounter.
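The size of this cloud follows directly from the grid described above ($\omega$ and $f$ each take 36 values, $0^{\circ}$--$350^{\circ}$); a short check:
\begin{verbatim}
import itertools

inclinations = range(0, 181, 45)      # 0, 45, 90, 135, 180 deg
nodes = (0, 90)                       # Omega in deg
peri = range(0, 360, 10)              # omega in deg, 36 values
anom = range(0, 360, 10)              # f in deg, 36 values
grid = list(itertools.product(inclinations, nodes, peri, anom))
print(len(grid))                      # 12960 binary configurations
\end{verbatim}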
Here, Typhon and Echidna were considered massive bodies such that the binarity of the system was preserved. However, the Echidnas do not interact with each other. The cumulative effect of the extreme encounters on a single binary was also not taken into account. Our study consisted of a statistical analysis based on the number of extreme encounters and how they are expected to affect a Typhon-Echidna like system.
At this step, numerical integrations were performed using the adaptive time-step Gauss-Radau numerical integrator \citep{everhart} for a time span of $1$ year. Throughout the simulation of the extreme encounters, the Echidnas could collide with Typhon, or the binary could be disrupted. A collision was defined by the physical radius of Typhon and the disruption was defined by the two-body energy of the system comprising Typhon and Echidna. When this energy was initially negative but became positive, we computed a disruption of the Typhon-Echidna system.
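The disruption criterion used here is simply the sign of the two-body energy of the Typhon-Echidna pair; a minimal sketch of this test (relative state vector assumed in km and km/s) is:
\begin{verbatim}
import numpy as np

G = 6.674e-20     # gravitational constant in km^3 kg^-1 s^-2

def is_disrupted(r_rel, v_rel, m_total=8.1e17 + 1.4e17):
    """Binary counts as disrupted once the two-body orbital energy of the
    Typhon-Echidna pair becomes positive (r_rel in km, v_rel in km/s)."""
    r = np.linalg.norm(r_rel)
    energy = 0.5 * np.dot(v_rel, v_rel) - G * m_total / r
    return energy > 0.0
\end{verbatim}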
After the numerical integrations, we analysed, for each extreme close encounter, the percentage loss (via collisions or disruptions) of the binaries relative to the initial total number of $12960$ binaries per encounter. The results of the encounters with the giant planets are presented in Fig. \ref{fig_binaries_lost}. The analysis for the terrestrial planets is presented in the next section. The graphs of this figure show the percentage of binaries lost as a function of the percentage of extreme encounters (relative to the total number of 619 extreme encounters with the giant planets). For a better visualization, these results were presented separately according to the distances of the encounters. Fig. \ref{fig_binaries_lost}a shows the results of the encounters at distances $d\le1~r_{td}$, Fig. \ref{fig_binaries_lost}b shows the encounters at distances $1< d\le2~r_{td}$ and Fig. \ref{fig_binaries_lost}c shows the encounters at distances $2< d\le3~r_{td}$.
We assume that an extreme close encounter is capable of disrupting the Typhon-Echidna binary system if more than $50\%$ of the binaries from the cloud of $12960$ binaries are lost. We refer to these encounters as disruptive encounters. Under this assumption, Fig. \ref{fig_binaries_lost} confirms that the disruptive encounters mainly occurred within $1~r_{td}$. From the total of 619 extreme close encounters with the giant planets simulated, $133$ (i.e., $133/619\approx21.5\%$) occurred within $1\,r_{td}$, and 129 of these (i.e., $129/133\approx97\%$) were disruptive encounters. We also computed 11 disruptive encounters (i.e., $11/619\approx1.8\%$ of the extreme close encounters) that occurred within the limit of $1< d\le2~r_{td}$. None of the encounters within $2< d\le3~r_{td}$ were disruptive (Fig.~\ref{fig_binaries_lost}c). These results lead to the estimate that, from the total of 619 extreme close encounters of Typhon with any of the giant planets, only 140 (i.e., $140/619\approx23\%$ of the extreme close encounters) were disruptive, i.e., capable of disrupting the Typhon-Echidna system. Thus, our simulations of the binary system of Typhon-Echidna experiencing extreme close encounters with the giant planets showed that most of these encounters are harmless to the system.
Another way to analyse these data is to compute the number of disruptive encounters relative to the number of clones that experienced them. This was done by computing the close encounters of each clone within $1r_{td}$, since these encounters were shown to be almost exclusively responsible for the disruptive encounters, as discussed above. We found that the 133 encounters that occurred within $1r_{td}$ were experienced by $84$ clones. Eighteen clones suffered a total of $67$ multiple encounters within $1r_{td}$, while $66$ clones suffered only one encounter within $1r_{td}$. This number leads us to estimate that approximately $17\%~(\approx84/500)$ of the clones would lose their second component due to close gravitational
encounters with the giant planets. This analysis confirms our previous conclusion. The binary system of Typhon-Echidna is more likely to survive close gravitational encounters with these planets.
However, as previously stated, the small bodies of the outer Solar System called Centaurs or TNOs are known to be able to reach the region of the terrestrial planets, and as presented in Table \ref{tab_enc}, we have registered some significant and extreme encounters of Typhon with Mars, Earth and Venus. Thus, if Typhon-Echidna is more likely to safely pass through the region of influence of the giant planets and if the orbit of this system is such that it can reach the inner Solar System, is it possible to have a binary system as large as this one transiting near us? Note that the sizes of the bodies of Typhon and Echidna are an order of magnitude larger than the largest NEOs. This is one of the questions addressed in the following section.
\section{Typhon-Echidna into the terrestrial planetary region}
\label{sec_terrestre}
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[scale=0.5]{teste-ecc.png}}
\subfigure[]{\includegraphics[scale=0.5]{teste-inc.png}}
\caption{Example of the backward temporal evolution of a clone of Typhon in a) $(a\times e)$ space, b) $(a\times i)$ space.
The clone in this example started the integration with $a=38.19460$ au, $e=0.54090$ and $i=2.42670^{\circ}$ (black stars) and survived (no ejection or collision) for the
entire integration time of $100$ Myr to the past.
The colour-coding indicates the time spent by the clone in a given region of the spaces from shorter (light blue)
to longer (dark red). The graphs show that the clone spent the most time with $a=42.5$ au, $e=0.265$ and $i=27.8^{\circ}$.
}
\label{fig_example}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[scale=0.45]{ecc.png}}
\subfigure[]{\includegraphics[scale=0.44]{inc.png}}
\caption{Backward temporal evolution of the semi-major axis $a$, eccentricity $e$ and inclination $i$ for $100$ clones of Typhon.
The locations in a) $(a\times e)$ space and b) $(a\times i)$ space are shown, which is where the clones spent most of the time during their integration.
In the first graph, the Centaur region (yellow region) is defined by the semimajor axis $a<30$ au and $q<30$ au, and
the TNO region (blue region) is defined by a perihelion distance greater than $30$ au (with no crossing of Neptune's orbit).
The TNO-Centaur region is the intersection between the Centaurs and the TNOs (green region) with $a>30$ au and a perihelion distance less than $30$ au.
The current orbits of Typhon and Ceto are indicated in the graphs by the black square and by the black circle, respectively.}
\label{fig_past}
\end{center}
\end{figure}
During the numerical simulations of the $500$ clones of Typhon (Section \ref{sec_orb_evol}), we followed all clones that reached the terrestrial planetary region. We monitored the clones whose orbital radii were smaller than $2$ au by computing the relative distances of the clones to the Sun for each time step throughout the integration.
These are candidates that might experience significant and extreme encounters with the terrestrial planets since, at the heliocentric distance of $2$ au, the clones have already left the inner border of the Main Asteroid Belt \citep{demeo} and are about to become Mars-crossers \citep{michel}.
From the total of $500$ clones, we found that $42$ reached this region, implying a probability of $8.4\%$ of Typhon moving into the region of the terrestrial planets along the numerical integration over $<200$ Myr. Those clones suffered a total of $86$ significant encounters with the planets Mars, Earth and Venus (see Table \ref{tab_enc} and Fig. \ref{fig_rtd}b).
By examining the dynamical evolutions of those $42$ clones before their entries, we found that $23$ suffered close encounters with a giant planet within $1\,rtd$. Those are the encounters capable of disrupting the binary system, as discussed in Section \ref{sec_bin_evol}. Thus, 19 clones ($\approx45\%$ from the total of 42 clones that reach the terrestrial region) are likely to be accompanied by the secondary component Echidna. Considering these 19 clones relative to the total sample of $500$ clones, we estimate that the Typhon-Echidna system has a $3.8\%$ probability of reaching the terrestrial region as a binary.
We then analysed how long it took for each of the $42$ clones to reach the terrestrial region. The entry time is the time computed since the beginning of the integrations until the clone crosses the limit of $2$ au. The first clone to reach the terrestrial region took $\approx4.27\times10^{5}$ years. The last occurred at $\sim6.6\times10^{7}$ years. The distribution of the entry time for the $42$ clones is presented in
Fig. \ref{fig_clones_terrest}a.
The calculated median shows that the clones are more probable to reach the terrestrial region in $\approx 5.4$ Myr.
On average, this region is achieved by them in $\approx 12.2$ Myr. This value is consistent with the average time for a wider sample of Centaurs and TNOs entering the main asteroid belt, as obtained by \citet{Gal2016}, which is estimated to be $10.0$ Myr for the Centaurs and $16.8$ Myr for the TNOs.
In this work, the authors considered the orbits of known Centaurs and TNOs with $5.5\leq a \leq 80$ au and perihelion distances of less than 40 au. This range includes Typhon.
The histogram in Fig. \ref{fig_clones_terrest}b shows the time spent between the first and the last times that the distance between the Sun and each one of the clones was within the limit of $2$ au. The shortest period was $6.3$ years. The calculated median was $\approx13,300$ years, but we registered one extreme case in which this time interval was $\approx 3.7\times 10^5$ years. The lifetime of this clone was $2.729\times10^6$ years. Therefore, it spent $13.4\%$ of its lifetime evolving in such way that the crossing of its orbit into the terrestrial region occurred. However, $31$ clones $(73.8\%)$ spent less than $1\%$ of their lifetimes in this type of orbit.
In the histogram of Fig. \ref{fig_clones_terrest}c, we present the time that each clone remained in the planetary region after leaving the terrestrial region. More than $50\%$ of the clones did not stay more than $60,000$ years. The longest time was $1.3\times10^7$ years. Except for this case, the clones that reach the terrestrial region are about to be ejected from the planetary region.
Only two clones had extreme encounters (distances $\leqslant 3\,r_{td}$) with Earth and/or Venus (Figs. \ref{fig_evolution} and \ref{fig_evolution2}). They did not have any previous encounters within $1\,r_{td}$ with the giant planets. Therefore, these are the cases in which the Typhon-Echidna system suffered close encounters with Earth and/or Venus as a binary. Figs. \ref{fig_evolution} and \ref{fig_evolution2} show the orbital evolutions and the histories of the encounters of these clones. The percentage loss of the binaries suffered by each one of these clones was obtained through the simulations performed in Section \ref{sec_bin_evol}, where it was considered a cloud of 12960 Typhon-Echidna-like systems performing each one of the registered encounters.
In one case, the binary would probably be disrupted by extreme encounters with both Earth and Venus (Fig. \ref{fig_evolution}). This body survived $5.73$ Myr and completed its evolution after a collision with the Sun. In the other case (Fig. \ref{fig_evolution2}), the binary survived extreme encounters with the giant and terrestrial planets. This Venus-crossing clone was considered ejected after $1.93$ Myr of evolution. Therefore, of the whole set of 500 clones, only one $(0.2\%)$ reached the terrestrial region as a binary and left as such. Similarly, only one $(0.2\%)$ of the clones lost its second component via close encounters with the terrestrial planets.
\section{Typhon-Echidna - past evolution}
\label{sec_past}
Backward numerical integrations of the clones of Typhon were performed in order to study the past evolution of the Typhon-Echidna system. Among the $500$ clones previously considered, we randomly selected a sample of $100$. The orbits of these clones were numerically integrated for $100$ Myr in the past. The planets and the other parameters of this simulation are the same as in Section \ref{sec_orb_evol}, except for the ejection distance. Here, a clone was considered ejected if it reached a relative distance to the Sun of $1000$ au, since it is assumed that at this distance it is about to enter the Oort cloud \citep{Levison2006}. The significant close encounters with any of the planets were also recorded throughout the backward integrations.
We then analysed the temporal evolutions of the semi-major axis $a$, eccentricity $e$ and inclination $i$. For each clone, we computed the locations in the $(a\times e)$ and in the $(a\times i)$ spaces
where the clones spent most of the time along the integration. The graphs presented in Fig. \ref{fig_example} exemplify how this was done. For each clone, we have the temporal orbital evolution in the $(a\times e)$ and in the $(a\times i)$ spaces, as shown in Figs. \ref{fig_example}a and \ref{fig_example}b. Those spaces were then divided assuming a regular grid, and how long the clone remained in each one of these partitions was computed. From this example, we found that the clone mostly remained within $a=42.5$ au, $e=0.265$ and $i=27.8^{\circ}$.
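A minimal sketch of this occupancy analysis, using the 2000-yr output interval of the integrations and a hypothetical pair of time series \verb|a| and \verb|e|, is:
\begin{verbatim}
import numpy as np

def occupancy_peak(a, e, dt=2000.0, bins=200):
    """Locate the (a, e) cell in which a clone spends most of its time;
    a and e are the output time series sampled every dt years."""
    H, a_edges, e_edges = np.histogram2d(a, e, bins=bins)
    ia, ie = np.unravel_index(np.argmax(H), H.shape)
    return (0.5 * (a_edges[ia] + a_edges[ia + 1]),
            0.5 * (e_edges[ie] + e_edges[ie + 1]),
            H[ia, ie] * dt)            # time spent in the peak cell [yr]
\end{verbatim}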
The results for the whole set of clones are presented in Figs. \ref{fig_past}(a) and \ref{fig_past}(b). Each point on these graphs was obtained as described above. These plots show that Typhon must have spent most of its time in the past $100$ Myr as a TNO-Centaur. From the total sample of $100$ clones, $81$ spent the past mainly as a TNO-Centaur, while $16$ spent most of their lifetime as a Centaur and only $3$ spent the majority of time as a TNO. It is possible to see a large dispersion over the whole TNO-Centaur region without any preferable locus. From the wide range of possibilities, we found that the present Typhon may have kept its current orbit for the last $100$ Myr or even may have had a Ceto-like orbit (black circles in Figs. \ref{fig_past}(a) and \ref{fig_past}(b)) in the past. On the other hand, Ceto can evolve until it becomes the current Typhon. The results show that $71\%$ of the clones were ejected towards the Oort cloud.
Once we found that Typhon may have been a TNO-Centaur for the last hundred million years, we can expect that the close encounters of this system with the giant planets may have been as frequent in the past as we found for the forward evolution. In fact, the backward integrations show that, of the total of $100$ clones, $25$ have suffered past disruptive encounters with the giant planets (encounters within $1\,r_{td}$). In total, $10$ disruptive encounters with Neptune, $7$ with Jupiter, $4$ with Saturn and $4$ with Uranus were registered. These results show that the binary system of Typhon-Echidna could be as old as $100$ Myr, since $75\%$ of the clones would survive as binaries over this past time span.
\cite{Nol2006a} explored whether a Typhon-Echidna-like system would survive a transition from the scattered disk (TNOs with $a>50$ au and $q>30$ au.) to its current orbit. They found that there is a maximum chance of $95\%$ of this happening, considering the separation of the components of the binary $a_B=2700~km$ (this value was undetermined at the time of their study). Although their result also shows a favourable survival rate of the binary system, their values are not comparable to ours. They considered bodies that were initially placed in the scattered disk, and through forward numerical integrations, they monitored those that reached a region near the current orbit of Typhon-Echidna. In our simulations, our clones are in a very small region near the current orbit of Typhon, and the separation of Typhon-Echidna is now known, so we used much smaller values than those adopted previously.
\section{Final Comments}
\label{sec_final}
In this paper, we presented a scenario of the possible evolutions of the binary TNO-Centaur Typhon-Echidna. A system of $500$ clones of Typhon, the Sun and the planets of the Solar System (excluding Mercury) was considered. The orbits of the clones were numerically integrated for a time span of $200$ Myr. Throughout the integrations, significant close encounters of these clones with any of the planets were registered.
It was found that this system frequently crosses the orbits of the giant planets, leading to numerous
significant and extreme close encounters with these planets. The most encountered planet was Jupiter, followed by Uranus, Saturn and Neptune. Nevertheless, we found that only $17\%$ of the clones suffered extreme encounters along the numerical integration over $<200$ Myr. The encounters suffered by those clones were then simulated considering the binary systems with a wide range of possible initial configurations. This approach allowed us to statistically analyse the probability of the disruption of the Typhon-Echidna binary system within the planetary region.
The simulations of these encounters showed that the majority of extreme encounters between Typhon-Echidna and the giant planets were not strong enough to lead to the disruption of the binary. From the total of 619 extreme encounters registered for these planets, only approximately $22\%$ led to the disruption of the system. Thus, it was shown that it is highly probable that the binary system of Typhon-Echidna survives the close encounters with the giant planets while a Centaur.
A peculiar result that was obtained is the probability of having a binary system with components as large as the components of Typhon-Echidna that enters the terrestrial planetary region. Among the $500$ clones of the sample, $42$ reached the terrestrial region, leading to a probability of $8.4\%$ of such an event occurring during the range of the numerical integration ($<200$ Myr). The probability of Typhon-Echidna reaching the terrestrial planet region as a binary was estimated to be $3.6\%$. It is more probable that the Typhon-Echidna system would spend just a small fraction of its lifetime (less than $1\%$) in this region, and this period usually occurs at its final stage within the planetary region.
\cite{napier2015} and \cite{napieretaal} discuss that the entry of a Centaur into the terrestrial region, taking into account its fragmentation due to sublimation or to tidal disruption by the Sun or Jupiter, significantly increases the amount of Earth-crossing debris. It was estimated that a variation of two orders of magnitude in the mass of the near-Earth asteroid population is observed over a timescale of $30,000 - 100,000$ years. A major consequence of such an event is that the increase in the number of debris objects also increases the risk of a catastrophic event caused by a collision of this debris with our planet. We highlight that the presence of a binary Centaur in the terrestrial region, as described in the present paper, may potentiate this effect.
We also report the curious case of a binary TNO-Centaur being disrupted by close encounters with terrestrial planets. We found eight extreme encounters of Typhon-Echidna with Earth and Venus, and in two cases, those encounters were strong enough to disrupt the binary system. It is interesting to note that the presence of water ice on Typhon has been confirmed \citep{candal2010}. Therefore, the entry of Typhon-Echidna into the inner Solar System increases the possibility of this binary system presenting cometary features.
The past evolution of the Typhon-Echidna system was investigated, and it was found that Typhon must have spent most of its past as a TNO-Centaur. It was possible to see a large dispersion in the whole TNO-Centaur region without any preferable locus. By looking for the disruptive encounters in the past of the binary system, especially encounters with the giant planets, it was found that Typhon-Echidna is more likely to survive those encounters, and thus, this binary system could be as old as $100$ Myr.
Therefore, here, we presented Typhon-Echidna as an unprecedented case of a binary system comprising large cometary bodies originating from the outer Solar System that might enter the terrestrial planetary region while preserving its binarity throughout the journey.
\section{Acknowledgements}
MAG would like to thank the FWF: Project J-3588 "NEOS, impacts, origins and stable orbits''.
This work was also funded by CNPq Procs. 305737/2015-5 and 312813/2013-9 and by FAPESP Proc. 2016/24561-0.
This support is gratefully acknowledged.
\renewcommand{\refname}{REFERENCES}
\section{Introduction}
In the last few decades much effort has been put into the modeling of
radiative transfer in relativistically moving atmospheres such as
novae and supernovae. One of the state of the art techniques to solve
this problem is the operator splitting (OS) method. This method has
been successfully used to solve radiative transfer problems with
scattering and complete treatment of non-local thermodynamic
equilibrium (NLTE) effects.
However, these sophisticated methods for treating radiative transfer
have not been used in a general relativistic environment, although the
form of the GR equation of transfer does not fundamentally differ from
the special relativistic version, and the OS method can be applied to
such problems.
There have been several successful attempts to model the emergent
spectra of general relativistic systems such as neutron stars. These
models also account for magnetic fields or different surface
temperatures. However, the radiative transfer in these cases generally
solves classical plane parallel problems. See
\citet{2002nsps.conf..263Z} for a review.
Here we present an OS method of solving GR radiative transfer problems
in spherical (Schwarzschild) geometry. Other authors have solved the
radiative transfer problem in GR. For instance
\citet{1989ApJ...346..350S} solve the moment equations of radiative
transfer with a variable Eddington factor method. The advantage of OS
is not only that it does not depend on closure conditions, but also that it can
handle magneto-optical transfer, which could prove to be important as
far as neutron stars are concerned.
\citet{1996ApJ...466..871Z} developed a characteristics method to solve
general relativistic radiative transport problems. They utilize the
constants of motion for the description of photon orbits that arise
due to the Killing vectors of the spherically symmetric spacetime and
use the analytical connection of an affine parameter with the radial
coordinate as well as choosing the momentum variables along the characteristics
to be constant to formulate the radiative transport equation.
Although this simplifies the equation in the case of flows such as
accretion onto black holes and neutron stars, the lack of a comoving
wavelength description forces the use of a large number of
characteristics to resolve the (in this case) angular dependent
absorption coefficients and a huge number of wavelength points to
resolve the shift of spectral lines through the atmosphere. The latter
is necessary even in the case of static general relativistic
atmospheres, e.g., in neutron stars.
We avoid these problems by using
a comoving wavelength coordinate which explicitly accounts for the
coupling of different wavelengths. This ansatz is the central element
of this work: in order to calculate detailed spectra one needs
to resolve spectral lines throughout the atmosphere, and one cannot afford
a wavelength description that depends on the layer of the atmosphere
under consideration. Otherwise, in order to perform NLTE calculations one would have to
add a significant number of wavelength points to the computation to resolve the
spectral line at hand in every layer with the desired quality.
In the following we present calculations of general relativistic
radiative line and continuum transfer with a complete treatment of
scattering. In order to demonstrate the functionality of the method,
we chose a simple test case similar to a neutron star atmosphere.
\section{Radiative Transfer}
Lindquist found the equation for radiative transfer for a comoving metric \citep{lindquist66}.
He used the photon distribution function as the variable describing the radiation field. The works
of \citet{1989ApJ...346..350S} and \citet{1996ApJ...466..871Z} follow this ansatz.
However, we want to utilize a description of the radiation via the specific intensity that
is suitable for our method of solution.
Rather than using the general metric used by Lindquist,
we neglect the effects of the atmosphere on the metric and use
the Schwarzschild solution:
\begin{equation}
g_{\alpha \beta} = \left(
\begin{array}{c c c c}
1-\frac{2 M G}{c^2 r} & 0 & 0 & 0 \\
0 & -\frac{1}{1- \frac{2 M G}{c^2 r}} & 0 & 0 \\
0 & 0 &- r^2 & 0 \\
0 & 0 & 0 & -r^2 \sin^2{\Theta}
\end{array}
\right).
\label{schwarzmetric}
\end{equation}
Since the atmospheres that are influenced by GR effects
typically have a small mass compared to the parent object, this
simplification is well justified.
Furthermore, it is possible to calculate
the metric coefficients for spherical symmetry by integrating the
Tolman-Oppenheimer-Volkoff equations, thus avoiding this approximation if
desired.
The equation of radiative transfer in the Schwarzschild metric is then found in its characteristic form:
\begin{equation}
\frac{\partial I_\lambda}{\partial s} + a_\lambda \frac{\partial \left(\lambda I_\lambda\right)}{\partial \lambda} + 4 a_\lambda I_\lambda = \eta_\lambda - \chi_\lambda I_\lambda
\label{EQRTschwarz}
\end{equation}
with
\begin{eqnarray}
\frac{\partial}{\partial s} & = & \frac{\partial r}{\partial s} \frac{\partial}{\partial r} + \frac{\partial \mu}{\partial s} \frac{\partial}{\partial \mu} \\
\frac{\partial r}{\partial s} & = & \sqrt{1-\frac{2 M G}{c^2 r}} \mu\\
\frac{\partial \mu}{\partial s} & = & \frac{1-\mu^2}{r} \left( 1 - \frac{M G}{c^2 r - 2 M G} \right) \sqrt{1-\frac{2 M G}{c^2 r}} \\
a_\lambda & = & \sqrt{\frac{r}{r - \frac{2 M G}{c^2}}} \frac{M G}{c^2 r^2} \mu
\end{eqnarray}
This development is equivalent to the work of Lindquist and delivers no new physical
insight besides being in the same form as the spherically symmetric special relativistic
equation of transfer \citep{mihalas80}. Hence modern operator splitting techniques to
solve radiative transfer are applicable to the problem.
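For illustration, the coefficients entering equation (\ref{EQRTschwarz}) are straightforward to evaluate along a characteristic. The following Python sketch (our own illustration, not part of any production code; the cgs constants and the function name are chosen freely for this example) implements the expressions above:
\begin{verbatim}
import numpy as np

G = 6.674e-8   # gravitational constant [cgs]
c = 2.998e10   # speed of light [cgs]

def characteristic_coefficients(r, mu, M):
    """Evaluate dr/ds, dmu/ds and the wavelength coupling term a_lambda
    of the transfer equation in the Schwarzschild metric at radius r
    and direction cosine mu for a gravitational mass M."""
    w = np.sqrt(1.0 - 2.0 * M * G / (c**2 * r))
    dr_ds = w * mu
    dmu_ds = (1.0 - mu**2) / r * (1.0 - M * G / (c**2 * r - 2.0 * M * G)) * w
    a_lam = np.sqrt(r / (r - 2.0 * M * G / c**2)) * M * G / (c**2 * r**2) * mu
    return dr_ds, dmu_ds, a_lam
\end{verbatim}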
However,
there is a fundamental difference from the special relativistic equation
of transfer. The coefficient $a_\lambda$ does not change sign in
monotonic flows that describe, e.g., supernova and nova
atmospheres. In the GR case, this coefficient is linear in $\mu$ and
hence the sign of $a_\lambda$ will be different for ingoing ($\mu<0$)
and outgoing ($\mu>0$) photons. $a_\lambda$ couples different
wavelengths and determines how the spectral features shift due to the
flow or the gravitational field. In the case of a supernova atmosphere
the different parts of the atmosphere all move away from each other so
that the direction of the wavelength shift along a ray is always the
same, hence $a_\lambda$ has the same sign along the ray. In a
gravitational field, ingoing photons will be blueshifted and outgoing
photons will be redshifted and hence, the sign of $a_\lambda$
changes. This presents a difficulty as the direction of flow of
information in the wavelength space is reversed along a ray and the
transfer equation is no longer an initial value problem in wavelength
space, but a 2-point boundary value problem.
Mihalas
already realized this in \citet{mihalas80} and
outlined a simple ray-by-ray formal solution
to this problem. However, this solution is of little use for the construction
of an approximate $\Lambda$-operator for an ALI-iteration.
To solve this problem we use the OS method
described in detail in \citet{petereddienms3}.
The treatment of arbitrary flows of information in wavelength space means
that every spatial point at every wavelength can influence the intensity
at a given spatial point at a given wavelength. This forces the formal solution
to be written in matrix notation, with proper boundary conditions, governed by
$a_\lambda$, implemented at every spatial point for all wavelengths to ensure a
locally stable upwind scheme for the wavelength discretization.
The GR case is a rather simple application of this method, since the flow of
information in wavelength space changes only once along a ray, namely at the
point of tangency or, in the case of core intersecting rays, at the innermost layer.
To apply the method to the GR case, $a_\lambda$ has to be calculated for every
wavelength at all points of a given characteristic, and the appropriate
discretization of the derivative has to be determined at each point. Furthermore,
the path length between two neighboring points on a ray must be known in order to
calculate the change of optical depth along the ray for all wavelengths. In
addition, the angles of intersection with a given layer must be determined for all
characteristics in order to perform angular integrations.
A photon in a gravitational field not only experiences a
wavelength shift but also is deflected since it moves on a
null-geodesic for the given spacetime. Since we are employing a
characteristics based solution we can fully account for this effect.
The spatial derivatives in equation (\ref{EQRTschwarz}) describe
the geometry of our characteristics. They are still geodesics and can
be described by an affine parameter.
In order to obtain the rays, the $\frac{\partial}{\partial s}$ part of equation
(\ref{EQRTschwarz}) has to be integrated. For the Schwarzschild metric it is
possible to analytically describe the photon orbits in terms of constants of
the motion. In addition, we have to relate the affine parameter to the distance
along the characteristics. In the one dimensional case, the integration is
simple and fast. Since the spacetime just outside the atmosphere will, in
general, not be flat, we extend our calculation of the characteristics into a
regime of spacetime that can be considered flat -- in our test calculations
discussed below, we chose a boundary of ten Schwarzschild-radii -- to make sure
that we calculate the spectrum from a correct set of angles that represent the
imaging of the source in curved spacetime.
We assume vacuum conditions outside
the atmosphere and, therefore, the intensities will not change along the
characteristic outside the atmosphere, except, of course, for the gravitational
redshift, which is trivial to account for.
\section{The Testing Setup}
We solve the test radiative transfer problems in a spherical model
configuration with 50 radial points (layers). We assume an exponential density
structure
\begin{equation}
\varrho(r) = \varrho_0 \exp {\frac{r - r_{\rm{out}}}{r_{\rm{scale}}}}
\label{rholaw}
\end{equation}
within the atmosphere and that the gas consists only of a simple
two-level-atom with a wavelength independent background continuum.
For a given optical depth ($\tau$) grid we integrate the radial grid via
\begin{equation}
\label{drdtau}
\frac{\mathrm{d} r}{\mathrm{d} \tau} = - \frac{1}{\chi_\kappa}
\end{equation}
where $\chi_\kappa = \chi_0 \varrho(r)$ with $\varrho$ given by (\ref{rholaw}).
It should be noted that $\chi_\kappa$ represents only the continuum extinction
coefficient. The resulting structure is not intended to be an accurate model of
a neutron star atmosphere. However, it has the correct spatial dimensions and
we thus use it to make predictions of how GR will affect radiative transfer in a
realistic neutron star atmosphere.
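For reference, the radial grid can be obtained by integrating equation (\ref{drdtau}) numerically on the prescribed optical depth grid. The following Python sketch (our own illustration with freely chosen argument names; a simple explicit Euler step is used) combines (\ref{drdtau}) with the exponential density law (\ref{rholaw}):
\begin{verbatim}
import numpy as np

def radial_grid(tau, r_out, r_scale, chi_0, rho_0):
    """Integrate dr/dtau = -1/chi_kappa on a given optical depth grid,
    with chi_kappa = chi_0 * rho(r) and rho(r) the exponential density law.
    tau is assumed to increase inwards, starting near zero at r_out."""
    r = np.empty_like(tau, dtype=float)
    r[0] = r_out
    for i in range(len(tau) - 1):
        rho = rho_0 * np.exp((r[i] - r_out) / r_scale)
        chi = chi_0 * rho
        r[i + 1] = r[i] - (tau[i + 1] - tau[i]) / chi   # dr/dtau = -1/chi_kappa
    return r
\end{verbatim}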
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{01.eps}
\caption{Radius is plotted over optical depth. The optical depth was
calculated from the wavelength independent
continuous opacity.
The atmosphere is about 60 meters thick, but the layers with an
optical depth around one lie just
centimeters below the outermost layer.}
\label{fig1}
\end{center}
\end{figure}
In Fig. \ref{fig1} we plot the radial structure of the test atmosphere versus
the optical depth grid that we used for the calculations in section
\ref{secresults}. The method of \citet{petereddienms3} makes it necessary to
provide a spatial boundary condition for the characteristics which is done via
the diffusion approximation. We generate a simple grey temperature structure
with the Hopf-function \citep{chandraradtransfer}. To describe the single
spectral line of the two-level-atom, we define a wavelength
$\lambda_{\mathrm{line}}$ as the center of the line and use a Gaussian
profile centered on this wavelength:
\begin{equation}
\Phi(\lambda) = \frac{\omega_{\mathrm{line}}}{\sqrt{\pi}} \exp\left[-\left(\frac{\lambda - \lambda_{\mathrm{line}}}{\omega_{\mathrm{line}}}\right)^2\right],
\end{equation}
with $\omega_{\mathrm{line}}$ being the width of the Gaussian.
We describe the opacity associated with the line via:
\begin{equation}
\chi_{\mathrm{line}}(\tau,\lambda) = \chi_\kappa(\tau) R_{\mathrm{line}} \frac{\Phi(\lambda)}{\int \Phi(\lambda) \mathrm{d} \lambda},
\end{equation}
whereby the $R_{\mathrm{line}}$ factor determines the strength of the line
relative to the continuum.
It should be noted that the Gaussian width of the line is only
$0.01$\,{\AA}. This is very small and does not represent a line width one
would expect in a neutron star atmosphere due to the large temperatures and
pressures present in such an atmosphere. The small width was chosen
to highlight the effects of GR on
radiative transfer. Since the atmosphere of a neutron star is only a
few centimeters
thick, the intrinsic wavelength shifts within the atmosphere are very small and
the GR effects can be tested best with very narrow lines.
A detailed treatment of radiative transfer especially in the general
relativistic environment of a neutron star atmosphere is desirable, since
constraints on the mass-radius relation are needed for the understanding of
neutron star interiors and their equation of state. The constraints should be as
strict as possible and, therefore, the radiative transfer should be as
sophisticated as possible.
Realistic models will need a multidimensional
description and have characteristics in the atmosphere that extend over larger
portions of spacetime and hence have a larger intrinsic wavelength shift.
Furthermore there may be configurations of blended lines that create a rapidly
changing opacity as seen, e.g., in the UV spectra of classical novae
\citep{novaphys}.
Moreover, in realistic models of accretion columns on neutron stars the
atmosphere extends over larger regions of spacetime and the intrinsic line
shifts will be much larger. Therefore, there are physical systems for which
detailed general relativistic radiative transfer is expected to be important.
Besides, the method is not limited to static atmospheres but can also be
applied to gamma-ray bursts, accretion scenarios, or neutrino transport in
early phases of core-collapse supernovae.
Since we treat the radiative transfer problem with scattering, we need
to specify
the quantities $\epsilon_\kappa$ and $\epsilon_{\mathrm{line}}$ to
define the scattering albedo.
The true absorption and the scattering part of the continuum opacity can now
be expressed through:
\begin{eqnarray}
\kappa_\kappa(\tau) & = & \epsilon_\kappa \chi_\kappa(\tau) \\
\sigma_\kappa(\tau) & = & (1 - \epsilon_\kappa) \chi_\kappa(\tau) \quad,
\end{eqnarray}
whereas for the line opacity we have:
\begin{eqnarray}
\kappa_{\mathrm{line}}(\tau,\lambda) & = & \epsilon_{\mathrm{line}}\chi_{\mathrm{line}}(\tau,\lambda)\\
\sigma_{\mathrm{line}}(\tau,\lambda) & = & (1 - \epsilon_{\mathrm{line}}) \chi_{\mathrm{line}}(\tau,\lambda) \quad.
\end{eqnarray}
The total opacity can now be given as:
\begin{eqnarray}
\chi_{\mathrm{total}}(\tau,\lambda) & = & \kappa_\kappa(\tau) + \sigma_\kappa(\tau) \nonumber \\
& & \quad + \kappa_{\mathrm{line}}(\tau,\lambda) + \sigma_{\mathrm{line}}(\tau,\lambda)
\end{eqnarray}
while the emissivity is:
\begin{eqnarray}
\eta_{\mathrm{total}}(\tau,\lambda) & = & (\kappa_\kappa(\tau)+ \kappa_{\mathrm{line}}(\tau,\lambda) ) \; B(T(\tau)) \nonumber \\
& & +(\sigma_\kappa(\tau) + \sigma_{\mathrm{line}}(\tau,\lambda) ) \; J(\tau,\lambda). \nonumber \\
\end{eqnarray}
Note that the continuous opacity is constant over the wavelength range
of interest and that the scattering was assumed to be coherent
for simplicity.
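The construction of the test opacities and emissivities can be summarized by the following Python sketch (our own illustration; the variable names follow the definitions above, and the scattering part of the source function uses the coherent mean intensity $J$):
\begin{verbatim}
import numpy as np

def opacity_emissivity(chi_c, eps_c, eps_line, R_line,
                       lam, lam_line, w_line, B, J):
    """Total opacity and emissivity of the two-level-atom test model.
    chi_c: continuum extinction at this layer, B: thermal source,
    J: mean intensity, lam: wavelength grid."""
    phi = (w_line / np.sqrt(np.pi)) * np.exp(-((lam - lam_line) / w_line)**2)
    chi_line = chi_c * R_line * phi / np.trapz(phi, lam)
    kappa = eps_c * chi_c + eps_line * chi_line                  # true absorption
    sigma = (1.0 - eps_c) * chi_c + (1.0 - eps_line) * chi_line  # scattering
    chi_total = kappa + sigma
    eta_total = kappa * B + sigma * J
    return chi_total, eta_total
\end{verbatim}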
\section{Results}
\label{secresults}
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{02.eps}
\caption{Results for non-scattering model line and continuum.}
\label{fig2}
\end{center}
\end{figure}
We calculate the emerging spectra of the model line for various
combinations of scattering parameters for the continuum and for the line
-- hence providing a NLTE treatment of the line transfer.
In Figs. \ref{fig2} to \ref{fig5} we always show
two cases with different gravitational masses -- one with $\mathrm{M}
= 0$ and the other with
$\mathrm{M} = \mathrm{M}_{\sun}$. The wavelength scale at the bottom corresponds
to the massless case and the upper scale to the one solar mass
case. The emerging
line profiles are plotted over each other in order to be easily compared.
All calculated spectra include the full treatment of scattering unless it is
indicated that a scattering parameter was set to one.
We include the massless case in order to verify the code by comparing
the results to the thoroughly tested special relativistic code; we obtained
the same results with both methods.
In addition, all other tests such as omitting the line and
recovering a flat continuum
for constant thermal sources, sudden changes
in the wavelength resolution, and wavelength dependent boundary
conditions produced correct results.
In Fig. \ref{fig2} the continuum as well as the line are purely
thermal. In the massless case this results in a symmetric absorption
line while in the general relativistic case for one solar mass the
line profile deforms and becomes asymmetric with an extended wing to
the red, although the effect is very small. The line profiles are
normalized to the continuum and the equivalent widths of the two lines
are different. Although the radial structure and the run of the
opacities are exactly the same in both cases, the effective opacity is
different due to the factor $4 a_\lambda$ in equation
(\ref{EQRTschwarz}).
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{03.eps}\\
\caption{The model line and the continuum are scattering with
$\epsilon_\mathrm{line} = 0.01$ and $\epsilon_\kappa = 0.01$ respectively. }
\label{fig3}
\end{center}
\end{figure}
In Fig. \ref{fig3} the line and the continuum are scattering with
$\epsilon_\mathrm{line} = 0.01$ and $\epsilon_\kappa = 0.01$.
In the massless case this results in a symmetric absorption profile
with line wings
slightly in emission, while in the general relativistic case the
line profile is very asymmetric with an emission feature on the blue
side and an
extended red absorption wing.
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{04.eps}\\
\caption{The model line scatters with $\epsilon_\mathrm{line} = 0.01$ while the continuum
scattering factor is $\epsilon_\kappa= 1.0 \times 10^{-6}$.}
\label{fig4}
\end{center}
\end{figure}
The model parameters used to generate Fig. \ref{fig4} resemble those
of Fig.
\ref{fig3}. The only difference is a stronger scattering in the continuum
($\epsilon_\kappa = 1.0 \times 10^{-6}$). The strength and shape of the blue
emission depends on the scattering in the continuum. For a strong scattering
continuum even in the massless case an emission feature appears in both wings
of the line. It can be explained with the Schuster-mechanism \citep{mihalas2}.
In the relativistic case this mechanism appears to be amplified and strongly
distorted in the red wing. To rule out any effect of line scattering, we have
calculated a model line with pure scattering ($\epsilon_\kappa=0$) in the
continuum and no scattering in the line. The result is shown in Fig.
\ref{fig6}. The massless case shows the expected absorption profile
with the wings
in emission. In the relativistic case, the shape of the blue emission feature
does not change compared to the cases with less scattering. However, its slope
extends much farther into the blue, while on the red side of the line a very
broad emission feature extends much farther to the red, although the
peak is not as
high as the blue peak. In addition, we have calculated a purely scattering
($\epsilon_\mathrm{line} = 0$) model line
combined with a purely absorptive continuum. This decouples the model line
from the thermal pool. The
results are shown in Fig.
\ref{fig5}. Due to the scattering, the line is very strong, but the
deformation of the lines wings is essentially the same as in Fig. \ref{fig2},
where the line did not scatter.
The effects of GR for the given atmosphere are most obvious for a
scattering continuum. The resulting lineshape is strongly
asymmetric.
Scattering in the line
has no discernible effect on the shape of the line. The actual deformation due
to GR effects is also very small for pure line scattering.
The emergent lineshapes are related in principle to P-Cygni profiles, since
there is also a constant wavelength shift -- the Doppler shift -- throughout the
atmosphere. However, there
is a fundamental difference between the two cases: in an expanding atmosphere,
where P-Cygni profiles are observed, there is a point of last contact of a photon with
the atmosphere, hence the information about the Doppler shift at the point
of emission with respect to the observer is conserved. In a gravitational field the
wavelength of the photon is shifted even without any interaction with the atmosphere.
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{05.eps}\\
\caption{The model line is nonscattering with a completely scattering -- $\epsilon_\kappa = 0 $ -- continuum. }
\label{fig6}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{06.eps}\\
\caption{The model line is completely scattering -- $\epsilon_\mathrm{line} = 0 $ -- while the continuum is
not scattering at all.}
\label{fig5}
\end{center}
\end{figure}
\section{Grey continuum models}
As another application of general relativistic transfer we calculated grey
continuum models. To do so we changed our wavelength resolution and omitted the
model line from our calculations ($R_{\mathrm{line}}=0$). In the following we
present the emerging continuous spectra for varying values of
$\epsilon_\kappa$.
The temperature structure of the model is grey with an
effective temperature of $10^4\,$K (the absolute value of the effective
temperature is not important for the testing). The emergent spectra are nearly
blackbody spectra with only a slightly distorted overall shape as they are a
little bit broader than the blackbody.
To determine the effective temperature
from an observation one would have to correct for the gravitational redshift
and then try to fit a blackbody to the spectrum. This procedure will only
deliver physically relevant results if the scattering conditions in the
atmosphere are known. As long as all photons are absorbed and none scattered
-- $\epsilon_\kappa = 1$ -- (see Fig. \ref{fig9a}) the emergent spectrum
(corrected for gravitational redshift) is fitted with a blackbody with a
temperature that is the same as the effective temperature of the model. For
models with non-zero scattering -- $\epsilon_\kappa = 0.1$ in Fig. \ref{fig9}
and $\epsilon_\kappa = 0.001$ in Fig. \ref{fig11} -- the emerging spectra are
much bluer than the blackbody fit with the model effective temperature. In
Fig. \ref{fig10} and Fig. \ref{fig12} we corrected the temperatures of the
blackbody fits to match the emerging spectra. The apparent temperatures are
much higher than the effective temperature of the model.
To prove the
consistency of our
results, we plot the thermalisation depth $\tau_{\mathrm{th}}$ -- the optical
depth where $J_\lambda = B_\lambda$ -- over the scattering parameter
$\epsilon_\kappa$. Since the emergent spectrum is Planckian,
$\tau_{\mathrm{th}}$ is the optical depth where the temperature of the
atmosphere equals the temperature of the blackbody fit. The results are plotted
in Fig. \ref{fig8}.
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{07.eps}\\
\caption{The spectrum was corrected for the gravitational redshift and fitted
with a blackbody with the known effective temperature of the model --
$T_{\mathrm{eff}} = 10^4\,$K. The continuum has $\epsilon_\kappa = 1$. The
blackbody fits the model
reasonably well.}
\label{fig9a}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{08.eps}\\
\caption{The spectrum was corrected for the gravitational redshift and fitted
with a blackbody with the known effective temperature of the model --
$T_{\mathrm{eff}} = 10^4\,$K. The continuum has $\epsilon_\kappa = 0.1$. The
blackbody does not fit the model spectrum. The apparent temperature is higher
than the model temperature.}
\label{fig9}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{09.eps}\\
\caption{The model spectrum is the same as in Figure \ref{fig9}, but this time
the temperature of the blackbody was chosen to fit the spectrum.}
\label{fig10}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{10.eps}\\
\caption{The spectrum was corrected for the gravitational redshift and fitted
with a blackbody with the known effective temperature of the model --
$T_{\mathrm{eff}} = 10^4\,$K. The continuum has $\epsilon_\kappa = 0.001$. The
blackbody does not fit the model spectrum. The apparent temperature is higher
than the model temperature.}
\label{fig11}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{11.eps}\\
\caption{The model spectrum is the same as in Figure \ref{fig11}, but this time
the temperature of the blackbody was chosen to fit the spectrum.}
\label{fig12}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{12.eps}\\
\caption{Thermalisation depth is plotted over the scattering parameter $\epsilon_\kappa$.
Since the radial structure is only known on a discrete grid, the value
of the thermalisation depth for a given temperature was determined via linear interpolation.}
\label{fig8}
\end{center}
\end{figure}
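The determination of the thermalisation depth from the discrete grid, as described in the caption of Fig. \ref{fig8}, amounts to a simple interpolation. A minimal Python sketch (our own illustration, assuming a temperature structure that increases monotonically with optical depth) is:
\begin{verbatim}
import numpy as np

def thermalisation_depth(tau, T, T_fit):
    """Optical depth at which the temperature structure T(tau) reaches
    the temperature of the blackbody fit, via linear interpolation."""
    return np.interp(T_fit, T, tau)   # assumes T increasing with tau

# values of the simple estimate tau_th = 1/sqrt(eps) for the models shown:
for eps in (1.0, 0.1, 0.001):
    print(eps, 1.0 / np.sqrt(eps))    # 1.0, ~3.2, ~31.6
\end{verbatim}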
The results are consistent with the simple model $\tau_{\mathrm{th}}
=\frac{1}{\sqrt{\epsilon_\kappa}}$. Hence, in the GR case the determination of
effective temperatures via blackbody fits or simple radiative transfer models
that neglect scattering will be inconsistent and result in systematic errors of
the effective temperatures.
Therefore, it is desirable to solve the scattering
problem in a GR environment self-consistently in a full-blown atmospheric code with the method of solution that we
present in this paper.
\section{Conclusion}
We have developed a method to solve continuum and line radiative
transfer problems in spherically symmetric spacetimes that fully
accounts for general relativistic effects and can account
for scattering in the continuum and the lines. It uses a comoving
frame wavelength formalism that allows resolution of spectral lines
throughout the atmosphere without significantly increasing the number
of wavelength
points. The method was developed and tested in static neutron
star-like atmospheres, but is generally applicable to general
relativistic
systems. Only the photon orbits and the wavelength coupling term
$a_\lambda$ would be different in other GR systems. The test models
provide an illustration of possible results for realistic model
atmospheres for neutron stars. The results show that the emergent
line profiles in general relativistic atmospheres cannot be described
in detail with non-relativistic radiative transfer. The influence of
the continuous scattering opacity on the shape of the lines is large.
The apparent effective temperature of continuous spectra also depends strongly
on the strength of the scattering. Therefore, it is necessary to include the
treatment of scattering in the radiative transfer solution in order to obtain a
consistent physical model of a neutron star atmosphere and similar cases. The
method that we have presented here is a first step to develop a thorough
treatment of general relativistic atmosphere models in 3D. It can be directly
applied to multi-level NLTE calculations of relativistic neutron star
atmospheres, which we will present in a subsequent paper.
\bibliographystyle{aa}
|
1,314,259,995,614 | arxiv |
\section{Introduction}
Speech synthesis is the task of generating speech waveforms with desired characteristics, including but not limited to textual content~\citep{hunt1996unit,zen2009statistical,shen2018natural,ping2017deep,li2019neural}, speaker identity~\citep{jia2018transfer,cooper2020zero}, and speaking styles~\citep{wang2018style,skerry2018towards,akuzawa2018expressive,hsu2018hierarchical}. It is also more often referred to as \ac{TTS} when text is used as input to the system. Along with automatic speech recognition (ASR) and machine translation (MT), these language technologies have advanced rapidly over the past few years~\citep{tan2021survay}.
Traditionally, these tasks may be used in conjunction to form a system (e.g., combining the three for speech-to-speech translation), but they rarely leverage each other during training. As a result, each application used to have its own dedicated open-source toolkit, for example, Kaldi~\citep{povey2011kaldi} and HTK~\citep{young2002htk} for ASR, HTS~\citep{zen2007hmm}, Merlin~\citep{wu2016merlin}, STRAIGHT~\citep{kawahara1999restructuring}, and WORLD~\citep{morise2016world} for speech synthesis, and Moses~\citep{koehn2007moses} for MT.
Recently, there are growing interactions among these systems in the learning process. For example, \citet{hayashi2018back} and \citet{rosenberg2019speech} propose to leverage speech synthesis systems to generate paired text and speech data for ASR training; \citet{tjandra2017listening}, \citet{hori2019cycle}, and \citet{baskar2019semi} chain ASR and TTS together to form a loop for semi-supervised learning with cycle-consistency loss; \citet{weiss2017sequence}, \citet{li2020multilingual}, and \citet{jia2019direct} demonstrate that it is possible to build an end-to-end system translating speech into text or speech in a target language.
Beyond text-based systems, there is also an emerging research topic that explores the use of units discovered from self-supervised speech representation learning~\citep{oord2017neural,baevski2019vq,harwath2019learning,hsu2021hubert} to replace text for representing the lexical content in numerous applications, such as language modeling~\citep{lakhotia2021generative}, speech resynthesis~\citep{polyak2021speech}, image captioning~\citep{hsu2020text}, and translation~\citep{tjandra2020speech,hayashi2020discretalk}. This line of research bypasses the need for text and makes technologies applicable even to unwritten languages. However, to interpret the output of such systems - a sequence of learned units, a unit-to-speech model is required. This brings up the need of a framework for broader speech synthesis systems that can alternatively take learned units as input.
These research directions can benefit from having a single toolkit with different state-of-the-art language technologies.
In this paper, we introduce \textsc{fairseq S$^2$}, a \textsc{fairseq} \ \citep{ott2019fairseq} extension for speech synthesis. \textsc{fairseq} \ is a popular open-source sequence modeling toolkit based on PyTorch~\citep{paszke2019pytorch} that allows researchers and developers to train custom models.
It offers great support for training large models on large scale data, and
provides a number of state-of-the-art models for language technologies.
We extend \textsc{fairseq} \ to support speech synthesis in this work. In particular, we implement a number of popular text-to-spectrogram models, with interface to both signal processing-based and neural vocoders.
Multi-speaker variants of those models are also implemented.
While speech synthesis often relies on subjective metrics such as mean opinion scores for benchmarking, we implemented a suite of widely used automatic evaluation metrics to facilitate faster iteration on model development.
Last but not least, we support a number of text and audio preprocessing modules,
which allow developers to quickly build a new dataset from less curated in-the-wild data for speech synthesis.
The main contribution of this work is threefold. First, we implement a number of state-of-the-art models and provide pre-trained checkpoints and recipes, which can be used by researchers as baselines or as building blocks in applications such as text-to-speech translation. Second, we create pre-processing tools that enable developers to use customized data to build a TTS model, and demonstrate the effectiveness of these tools empirically. Lastly, as part of the \textsc{fairseq} \ codebase, this speech synthesis extension allows easy integration with numerous state-of-the-art MT, ASR, ST, LM, and self-supervised systems already built on \textsc{fairseq}. We provide an example by building a unit-to-speech system that can be used for text-free research.
The rest of the paper is organized as follows: Section 2 describes the features of \textsc{fairseq S$^2$}. Experiments are presented in Section 3. Related work is discussed in Section 4, and we conclude this work in Section 5.
\section{Features}
\paragraph{Fairseq Models} \textsc{fairseq}~provides a collection of MT~\citep{ng-etal-2019-facebook}, ST~\citep{wang2020fairseq}, unsupervised speech pre-training and ASR~\citep{NEURIPS2020_92d1e1eb,hsu2021hubert} models that demonstrate state-of-the-art performance on standard benchmarks. They are open-sourced with pre-trained checkpoints and can be integrated or extended easily for other tasks.
\paragraph{Speech Synthesis Extension} \textsc{fairseq S$^2$}~adds state-of-the-art text-to-spectrogram prediction models, Tacotron 2~\citep{shen2018natural} and Transformer~\citep{li2019neural}, which are AR with encoder-decoder model architecture. For the latest advancements on fast non-AR modeling, we provide FastSpeech 2~\citep{ren2019fastspeech,ren2020fastspeech} as an example.
All our models support the multi-speaker setting via pre-trained~\citep{jia2018transfer} or jointly trained speaker embeddings~\citep{arik2017deep,chen2020multispeech}. Note that the former enables synthesizing speech for speakers unseen during training. For FastSpeech 2, pitch and speed are controllable during inference.
For spectrogram-to-waveform conversion (vocoding), \textsc{fairseq S$^2$}~has a built-in Griffin-Lim~\citep{griffin1984signal} vocoder for fast model-free generation. It also provides examples for using external model-based vocoders, such as WaveGlow~\citep{prenger2019waveglow} and HiFiGAN~\citep{kong2020hifigan}.
\paragraph{Speech Preprocessing.} Recent advances in neural generative models have demonstrated that neural-based \ac{TTS} models, can synthesize high-quality, natural and intelligible speech. However, such models usually require high-quality, and clean speech data~\cite{zhang2021denoispeech}. In order to enable leveraging noisy data for \ac{TTS} training, we propose a speech preprocessing pipeline to enhance and filter data. The proposed pipeline is comprised of three main components: i) Background noise removal, ii) \ac{VAD}, and iii) Outlier filtering using both \ac{SNR} and \ac{CER}.
First, a speech enhancement model is applied over input recordings to remove background noise. We used the speech enhancement model proposed by~\cite{defossez2020real} where the $i_{th}$ convolutional layer has $2^{i-1}*64$ output channels. As suggested by the authors, we additionally used a dry/wet knob, i.e. the final output is $dry \cdot {\mathbf x} + (1-dry) \cdot \vyh$, where ${\mathbf x}$ is the noisy input signal and $\vyh$ is the output of the enhancement model. We experiment with $dry \in \{0.0, 0.01, 0.05, 0.1\}$ and find 0.01 to perform the best.
Next, we apply \ac{VAD} to remove silence from the denoised utterances, as silence can vary in length significantly which causes increasing uncertainty and therefore degrades \ac{TTS} performance. Silence regions at the beginning and end of the utterances are completely removed. In case we encounter a silence segment in the middle of the signal in where its length is greater than 300ms we replace it with a 300ms artificially generated silence (since completely removing silence regions produces unnatural speech). Silence regions of less than 300ms are left unchanged. We use the open-source implementation of the Google WebRTC \ac{VAD}~\citep{vad}, of which four aggressiveness levels \{0, 1, 2, 3\} can be set. A higher aggressiveness level removes more silences but comes at the risk of removing partial speech. The aggressiveness level corresponds to the size of the processing window (a larger processing window will make the \ac{VAD} work at a coarser level and remove silence frames more aggressively).
Lastly, we notice that in extremely noisy recordings (\ac{SNR} close to zero), the generated denoised samples are often not intelligible enough to train a \ac{TTS} or contain distortion artifacts. In addition, when setting the VAD aggressiveness level high, speech may be truncated along with silence. To remedy this, we proposed two outliers filtering methods. The first approach is based on \ac{SNR} estimation. We approximate the noise by subtracting the output of the enhancement model from the input-noisy speech, then we compute the \ac{SNR} between the two. The second approach is based on applying an \ac{ASR} over the denoised speech and compute the CER against the target transcription.
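The filtering part of this pipeline can be sketched as follows (a minimal Python illustration; \texttt{enhance} and \texttt{transcribe} are placeholders for the enhancement model and the ASR system, the VAD step is omitted for brevity, and the thresholds shown are examples only):
\begin{verbatim}
import numpy as np

def cer(hyp, ref):
    """Character error rate via the Levenshtein distance."""
    d = np.zeros((len(hyp) + 1, len(ref) + 1), dtype=int)
    d[:, 0] = np.arange(len(hyp) + 1)
    d[0, :] = np.arange(len(ref) + 1)
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            sub = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + sub)
    return d[-1, -1] / max(len(ref), 1)

def preprocess_utterance(x, text, enhance, transcribe,
                         dry=0.01, snr_min=15.0, cer_max=0.10):
    """Denoise with a dry/wet knob, estimate the SNR from the residual
    noise, and keep the utterance only if the SNR and CER thresholds pass."""
    y_hat = enhance(x)                     # denoised waveform
    out = dry * x + (1.0 - dry) * y_hat    # dry/wet mixing
    noise = x - y_hat                      # approximate noise component
    snr = 10.0 * np.log10(np.sum(y_hat**2) / (np.sum(noise**2) + 1e-12))
    keep = snr > snr_min and cer(transcribe(out), text) < cer_max
    return out, keep
\end{verbatim}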
\paragraph{Computation} \textsc{fairseq}~is implemented in PyTorch~\citep{paszke2019pytorch} and provides efficient batching, gradient accumulation, mixed precision training \citep{micikevicius2017mixed}, model parallelism, multi-GPU as well as multi-machine training for computational efficiency on large-scale experiments and enabling training gigantic models.
\paragraph{Quantitative Metrics} We provide automatic metrics for fast evaluation in model development. Similarly to~\cite{polyak2020unsupervised}, we report \ac{GPE}~\cite{nakatani2008method}, \ac{VDE}~\cite{nakatani2008method}, and \ac{FFE}~\cite{chu2009reducing} to evaluate F0 reconstructions of the generated speech. We additionally, report \ac{MCD}, \ac{MSD}, and \ac{CER} to evaluate both the overall similarity to the target speech and content intelligibility~\cite{weiss2021wave}.
\paragraph{(i) \ac{GPE}} GPE is an objective metric which measures the portion of voiced audio frames with a pitch error of more than 20\%.
\begin{equation}
\begin{split}
\text{GPE}&(\bm{p}, \hat{\bm{p}}, \bm{v}, \hat{\bm{v}}) = \\ &\frac{\sum_t \mathbbm{1}[|\bm{p}_t - \hat{\bm{p}}_t| > 0.2 \cdot \bm{p}_t] \mathbbm{1}[\bm{v}_t] \mathbbm{1}[\hat{\bm{v}}_t] }{\sum_t \mathbbm{1}[\bm{v}_t] \mathbbm{1}[\hat{\bm{v}}_t]}
\end{split}
\end{equation}
where $\bm{p}_t, \hat{\bm{p}}_t$ are the pitch frames from the target and generated signals, $\bm{v}_t, \hat{\bm{v}}_t$ are the voicing decisions from the target and generated signals, and $\mathbbm{1}$ is the indicator function.
\paragraph{(ii) \ac{VDE}} VDE measures the portion of frames with voicing decision error,
\begin{equation}
\text{VDE}(\bm{v}, \hat{\bm{v}}) = \frac{\sum_{t=1}^{T-1} \mathbbm{1}[\bm{v}_t \ne \hat{\bm{v}}_t]}{T},
\end{equation}
where $T$ is the total number of frames.
\paragraph{(iii) FFE} Combining GPE and VDE, FFE measures the percentage of frames that contain a deviation of more than 20\% in pitch value or have a voicing decision error.
\begin{equation}
\begin{split}
\text{FFE}&(\bm{p}, \hat{\bm{p}}, \bm{v}, \hat{\bm{v}}) = \text{VDE}(\bm{v}, \hat{\bm{v}}) \\ + &\frac{\sum_{t=1}^{T-1} \mathbbm{1}[|\bm{p}_t - \hat{\bm{p}}_t| > 0.2\cdot\bm{p}_t] \mathbbm{1}[\bm{v}_t] \mathbbm{1}[\hat{\bm{v}}_t]}{T}.
\end{split}
\end{equation}
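For concreteness, these three frame-level F0 metrics can be computed directly from the definitions above (a minimal NumPy sketch; the array names are ours and the pitch and voicing tracks are assumed to be frame-aligned):
\begin{verbatim}
import numpy as np

def f0_metrics(p_ref, p_gen, v_ref, v_gen):
    """GPE, VDE and FFE from frame-aligned pitch (p_*) and voicing (v_*)."""
    p_ref, p_gen = np.asarray(p_ref, float), np.asarray(p_gen, float)
    v_ref, v_gen = np.asarray(v_ref, bool), np.asarray(v_gen, bool)
    T = len(v_ref)
    both_voiced = v_ref & v_gen
    pitch_err = np.abs(p_ref - p_gen) > 0.2 * p_ref
    gpe = np.sum(pitch_err & both_voiced) / max(np.sum(both_voiced), 1)
    vde = np.sum(v_ref != v_gen) / T
    ffe = vde + np.sum(pitch_err & both_voiced) / T
    return gpe, vde, ffe
\end{verbatim}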
\paragraph{(iv) \ac{MCD}/\ac{MSD}} These are defined as the root mean squared error of the synthesized speech against the reference speech computed on the 13-dimensional MFCC features for \ac{MCD} and log-mel spectral features for MSD. Since the reference and the synthesized speech may not be aligned frame-by-frame, instead of zero-padding the shorter one and assuming they are frame-wise aligned as done in \citet{skerry2018towards}, we follow \citet{weiss2021wave} and use dynamic time warping~\citep{berndt1994using} to align the frames from the two sequences. The main difference between these two metrics lies in the features they compute distortion on: MFCC features aim to capture phonetic information while removing speaker information, while log-mel spectral features encode both, and hence \ac{MCD} addresses phonetic similarity more.
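A minimal sketch of the DTW-aligned distortion is given below (our own illustration; passing MFCC features yields \ac{MCD} and log-mel features yields \ac{MSD}, and the per-frame Euclidean distance is one plausible reading of the definition above):
\begin{verbatim}
import numpy as np

def dtw_aligned_rmse(ref, gen):
    """Align two feature sequences (frames x dims) with dynamic time warping
    and return the RMSE over the aligned frame pairs."""
    n, m = len(ref), len(gen)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - gen[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    i, j, pairs = n, m, []                 # backtrack the warping path
    while i > 0 and j > 0:
        pairs.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)], key=lambda s: cost[s])
    diffs = np.array([ref[a] - gen[b] for a, b in pairs])
    return float(np.sqrt(np.mean(diffs**2)))
\end{verbatim}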
\paragraph{(v) \ac{CER}} CER is computed between the transcription of the generated audio against the input text using an \ac{ASR} system publicly available in \textsc{fairseq}.
\paragraph{Visualization} \textsc{fairseq}~integrates Tensorboard\footnote{\url{https://github.com/tensorflow/tensorboard}} for monitoring holistic metrics during model training. It also has VizSeq~\citep{wang-etal-2019-vizseq} integration for offline sequence-level error analysis, where transcript and target/predicted speech are visualized in Jupyter Notebook interface. \textsc{fairseq S$^2$}~further adds generated spectrogram and waveform samples to Tensorboard for model debugging.
\input{table_ljspeech}
\section{Experiments}
We evaluate our models in three settings: single-speaker synthesis, multi-speaker synthesis and multi-speaker synthesis using noisy data.
\subsection{Experimental Setup}
We use either characters, phonemes or discovered units as input representations.
To convert texts into phonemes, we employ g2pE~\citep{g2pE2019} or Phonemizer~\citep{phonemizer2015} with espeak-ng\footnote{\url{https://github.com/espeak-ng/espeak-ng}} backend. We use the Montreal Forced Aligner~\citep{mcauliffe2017montreal} to obtain phonemes with frame durations for FastSpeech 2 training, which is based on the same pronunciation dictionary (CMUdict) as g2pE.
For discovered units, we extract frame-level units using a Base HuBERT model trained on LibriSpeech\footnote{\url{https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960.pt}} and collapse consecutive units of the same kind. We use the run length of identical units before collapsing as target duration for FastSpeech 2 training. We use a reduction factor (number of frames each decoder step predicts) of 4 for Transformer and 1 for FastSpeech 2 by default.
We resample audios to 22,050Hz and extract log-Mel spectrogram with FFT size 1024, window length 1024 and hop length 256. We optionally pre-process audios to improve model training: denoising (``DN"), level-2 or level-3 VAD (``VAD-2" or "VAD-3"), filtering by SNR$>15$ and CER$<10\%$ (``FLT") and volume normalization (``VN").
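With librosa, for instance, the feature extraction described above can be written as follows (a short sketch; the number of mel bands is our own choice here and not taken from the recipes):
\begin{verbatim}
import numpy as np
import librosa

def log_mel(path, sr=22050, n_fft=1024, win_length=1024,
            hop_length=256, n_mels=80):
    """Load audio at 22,050 Hz and compute a log-mel spectrogram with the
    STFT settings used in our experiments."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         win_length=win_length,
                                         hop_length=hop_length,
                                         n_mels=n_mels)
    return np.log(np.maximum(mel, 1e-10))
\end{verbatim}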
We use MCD and CER for automatic evaluation. MCD is computed on Griffin-Lim vocoded reference and model output spectrograms. We use vocoded references as opposed to the original ones to eliminate the error introduced by the vocoder and focus the evaluation on spectrogram prediction.
HiFiGAN vocoders trained on each dataset are used to generate waveforms for CER evaluation.
The large wav2vec 2.0~\cite{baevski2020wav2vec} ASR model, which achieves WERs of 1.8\% and 3.3\% on Librispeech test-clean and test-other, respectively and is provided in \textsc{fairseq}\footnote{\url{https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_960h_pl.pt}}, is used both for CER filtering and evaluation.
GPE, VDE, and FFE are not reported here, because these metrics are more meaningful when prosody modeling is taken into account~\citep{polyak2020unsupervised, skerry2018towards,wang2018style}.
For subjective evaluation, we conduct a Mean Opinion Score (MOS) test using the CrowdMOS package~\cite{ribeiro2011crowdmos} using the recommended recipes for detecting and discarding inaccurate scores. We randomly sample 100 speech utterances from the test set and collect manual scores using a crowd sourcing framework. The same samples are used across all tested methods. Each sample is rated by at least 10 raters on a scale from 1 to 5 with 1.0 point increments. Overall, scores for each tested method are averaged across more than 1000 manual annotations. We report both average MOS scores together with a 95\% confidence interval (CI95).
\subsection{Single-Speaker Synthesis on LJSpeech}
\input{table_vctk}
\input{table_common_voice}
\input{table_counterparts}
LJSpeech~\citep{ljspeech17} is a single-speaker TTS corpus with 13,100 English speech samples (around 24 hours) from audiobooks. We follow the setting in~\citet{ren2020fastspeech} to use 349 samples (with document title LJ003) for validation, 523 samples (with document title LJ001 and LJ002) for testing and the rest for training.
On this de-facto standard benchmark, we compare autoregressive model (Transformer, ``TFM") with non-autoregressive model (FastSpeech 2, ``FS2"), as well as 3 different types of inputs: characters, phonemes (from g2pE or espeak-ng) and HuBERT units. We see from Table~\ref{tab:ljspeech} that FastSpeech 2 performs comparably well to Transformer with phoneme inputs (g2pE), both achieving 4.2 MOS. However, the latter does not require input-output alignments for model training and supports more types of inputs---it achieves 4.1 MOS with characters (no need for phonemization), and 4.2 MOS with simpler phonemes (espeaker-ng). The task falls into the re-synthesis setting with unit inputs. We notice that FastSpeech 2 performs worse (4.0 vs. 4.2 on MOS) in this setting, likely due to the finer-grained inputs and its simplified attention mechanism.
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{vctk_vad.png}
\caption{\textbf{A VCTK example.} With VAD level 3, the first word ``But'' is detected as silence and cut off.}
\label{fig:vctk}
\end{figure*}
\subsection{Multi-Speaker Synthesis on VCTK}
VCTK~\citep{veaux2017superseded} is a multi-speaker English TTS dataset that contains 44 hours of read speech from 109 speakers with various English accents\footnote{\url{https://datashare.ed.ac.uk/handle/10283/3443}}. We randomly sample 50 utterances for validation and 100 utterances for testing, and use the rest for training.
Speech recordings from VCTK include considerable amount of silence as shown in \autoref{fig:vctk} (raw); therefore, silence removal is considered a standard preprocessing step for VCTK~\citep{jia2018transfer, cooper2020zero}. \autoref{fig:vctk} shows silence-removed spectrograms with three VAD aggressiveness levels. We see that a higher aggressiveness level removes more silence, but may also truncate the speech. The dataset durations after silence removal and filtering with CER $<10\%$ are listed in \autoref{tab:vctk_preproc}, along with the validation CER.
We use this dataset to study how audio-preprocessing and speaker representation affect the performance of TTS. We train a transformer TTS model with a reduction factor (i.e.\ how many frames each decoding step predicts) of 2 or 4 on three sets of audio: raw data (Raw), DN+VAD-3, and DN+VAD-3+FLT. A speaker embedding lookup table (LUT) is used by default. In addition, we train models on DN+VAD-3+FLT with a fixed embedding (Emb) for each speaker inferred from a pre-trained speaker verification model~\citep{heigold2016end}, which would enable synthesizing the voice of an unseen speaker.
Results in Table~\ref{tab:vctk} show that increasing the reduction factor from 2 to 4 improves the performance consistently.
Specifically, we found that without VAD, the model fails to train when using a reduction factor of 2. Finally, we found that using a pre-trained speaker embedder achieves similar performance to using a learnable lookup table, while enabling synthesizing speech for unseen speakers.
\subsection{Multi-Speaker Synthesis using Noisy Data from Common Voice}
Common Voice~\citep{ardila-etal-2020-common} is a multi-speaker speech corpus with around 4.2K hours of read speech in 40 languages (version 4). It is crowd-sourced from around 78K voice contributors in various accents, age groups and genders. We use its English portion and select data from the top 200 speakers by duration (total 226 hours).
The audio data in this corpus is expectedly noisy given the lack of curated recording environments. We explore if speech processing can counteract the negative factors (background noise, long silence, variable volume across clips, etc.) during recordings and improve model training. Specifically, we examine 3 preprocessing settings with Transformer model and phoneme (g2pE) inputs: VN, DN+VAD-2+VN and DN+VAD-2+FLT+VN. As shown in Table~\ref{tab:common_voice}, the original audio has 0.3/0.5 lower MOS than the LJSpeech/VCTK one, confirming its relatively low recording quality. Noise and silence removal improve synthesis quality significantly by 0.2 MOS (DN+VAD-2+* vs. VN). Filtering by SNR and CER improves both model fitting (-0.1 MCD) and intelligibility (-1.5 CER) given the removal of difficult training examples.
\section{Related Work}
There are many existing open-source repositories for speech synthesis. The most prominent toolkits for conventional statistical parametric speech synthesis (SPSS) include HMM/DNN-based Speech Synthesis System (HTS)~\citep{zen2007hmm} and Merlin~\citep{wu2016merlin}. These rely heavily on feature engineering and
use signal processing-based vocoders like STRAIGHT~\cite{kawahara1999restructuring} and WORLD~\cite{morise2016world} to synthesize waveforms from acoustic features (e.g., fundamental frequency, spectral envelope, and aperiodic information).
Recently, end-to-end models that take minimally pre-processed features (characters and mel-spectrograms) have achieved superior performance compared to conventional systems~\citep{shen2018natural}, especially when paired with neural vocoders~\citep{prenger2019waveglow,kong2020hifigan}. There are a number of open-source implementations available on Github~\footnote{coqui-ai/TTS, Kyubyoung/tacotron, NVIDIA/tacotron2, Rayhane-mamah/Tacotron2, r9y9/deepvoice3\_pytorch},
however, these repositories are solely for text-to-speech synthesis, and mostly support one model only.
ESPnet~\citep{watanabe2018espnet,hayashi2020espnet}, NeMo, and OpenSeq2Seq~\cite{kuchaiev-etal-2018-openseq2seq} are the most similar toolkits that also support multiple tasks. As listed in Table~\ref{tab:counterparts}, \textsc{fairseq S$^2$}~provides more audio preprocessing tools and automatic metrics for building and evaluating speech synthesis models on custom datasets. As part of \textsc{fairseq}, it can also be easily integrated with numerous state-of-the-art models already provided in \textsc{fairseq} \ for exploring novel ideas. For example, we demonstrate that units discovered from a self-supervised speech pre-training model can be used to build a unit-to-speech system that converts output from systems like unit LM~\citep{lakhotia2021generative} or image-to-unit~\citep{hsu2020text} to speech.
\section{Conclusion}
This paper introduces \textsc{fairseq S$^2$}, a \textsc{fairseq} \ extension for speech synthesis. We believe this extension will allow researchers and developers to more easily test novel ideas for language technologies by providing great support for scalability, integrability, and a wealth of tools for curating data as well as automatically evaluating trained systems.
|
1,314,259,995,615 | arxiv | \section{Introduction}
\input{sections/introduction}
\paragraph{Related Work}
\input{sections/related_work}
\section{Setting}
\input{sections/setting}
\section{Choice of the class of algorithms} \label{sec:choice_strategies}
\input{sections/choice_class_strategies}
\section{Learning in a stationary environment}
\input{sections/track_fixed_env}
\section{Learning in a non-stationary environment}
\input{sections/track_changing_env}
\section{Application to mortal bandits}
\input{sections/mortal_bandits}
\section{Conclusion}
\input{sections/conclusion}
\begin{ack}
The research presented was supported by the French National Research Agency, under the project BOLD (ANR19-CE23-0026-04) and it was also supported in part by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH, in a joint call with Gaspard Monge Program for optimization, operations research and their interactions with data sciences.
\end{ack}
\printbibliography
\newpage
\subsection{Choice of the class of algorithms} \label{appendix:choice_class_strategies}
In this subsection, we repeat the simulations of Section \ref{sec:choice_strategies}. We recall that we evaluate the Bayesian regret of several \algo{UCB}-like algorithms as a function of their parameters $\gamma$. We consider two scenarios: the first scenario is a Gaussian bandit problem where mean rewards are drawn i.i.d.\ from a uniform distribution over $[0, 1]$; while the second scenario is a Bernoulli bandit problem where mean rewards are drawn i.i.d.\ from a Beta(1, 3) distribution. In all experiments, the horizon is fixed at $T= 1000$ and we vary the number of arms $K$. Results are averaged over $5000$ iterations and displayed on Figure \ref{fig:choice_sub_policy_app}.
\begin{figure}[hbt]
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/sub_policy/scenario_1_K_5.pdf}
\caption{Scenario 1 / $K=5$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/sub_policy/scenario_1_K_63.pdf}
\caption{Scenario 1 / $K=63$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/sub_policy/scenario_1_K_250.pdf}
\caption{Scenario 1 / $K=250$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/sub_policy/scenario_2_K_5.pdf}
\caption{Scenario 2 / $K=5$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/sub_policy/scenario_2_K_63.pdf}
\caption{Scenario 2 / $K=63$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/sub_policy/scenario_2_K_250.pdf}
\caption{Scenario 2 / $K=250$}
\end{subfigure}
\caption{Bayesian regret of various algorithms as a function of $\gamma$ for diverse environments and numbers of arms $K$. Rows correspond respectively to Gaussian bandits with a uniform prior and Bernoulli bandits with a Beta(1, 3) prior.}
\label{fig:choice_sub_policy_app}
\end{figure}
In the first scenario, we observe a similar behavior compared to the Bernoulli case: for small numbers of arms $K$, \algo{AdaUCB} performs better than \algo{UCB} for all $\gamma$; for moderate values of $K$, the \algo{Greedy} algorithm is roughly the best; and for large values $K$, \algo{SubUCB} performs the best. Similarly, \algo{SubUCB}($m$) performs better as $m$ grows larger and becomes more sensitive to $\gamma$ at the same time.
In the second scenario, the same behavior can be noticed; however, what we called moderate and large values of $K$ are, in this case, much higher than previously.
\subsection{Influence of the initialization} \label{appendix:init}
In this subsection, we repeat the simulations of Section \ref{sec:fixed_init}.
We recall that we study the impact of several initializations on the lifelong regret. To do so, we set $\gamma = 0.22$ in the \algo{UCB} algorithm. We consider three scenarios: the first scenario is a Bernoulli bandit problem where mean rewards are drawn i.i.d.\ from a uniform distribution over $[0, 1]$, the second scenario is a Gaussian bandit problem where mean rewards are drawn i.i.d.\ from a uniform distribution over $[0, 1]$ and the third scenario is a Bernoulli bandit problem where mean rewards are drawn i.i.d.\ from a Beta(1, 3) distribution.
In all experiments, the horizon is fixed at $T= 1000$ and we vary the number of arms $K$. For $K=5$ and $K=63$, the numbers of episodes is $J=100$, whereas for $K=250$ it is $J=10$.
Results are averaged over $500$ iterations for $K=5$, $100$ for the rest, and displayed on Figure \ref{fig:fixed_env_init_app}.
\begin{figure}[hbt]
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/init/scenario_0_K_5.pdf}
\caption{Scenario 1 / $K = 5$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/init/scenario_0_K_63.pdf}
\caption{Scenario 1 / $K = 63$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/init/scenario_0_K_250.pdf}
\caption{Scenario 1 / $K = 250$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/init/scenario_1_K_5.pdf}
\caption{Scenario 2 / $K = 5$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/init/scenario_1_K_63.pdf}
\caption{Scenario 2 / $K = 63$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/init/scenario_1_K_250.pdf}
\caption{Scenario 2 / K = 250}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/init/scenario_2_K_5.pdf}
\caption{Scenario 3 / $K = 5$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/init/scenario_2_K_63.pdf}
\caption{Scenario 3 / $K = 63$}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[width=\linewidth]{figures/init/scenario_2_K_250.pdf}
\caption{Scenario 3 / $K = 250$}
\end{subfigure}
\caption{Lifelong regret of a deterministic meta-algorithm with various initializations in stationary environments. Rows correspond respectively to Bernoulli bandits with a uniform prior, Gaussian bandits with a uniform prior and Bernoulli bandits with a Beta(1, 3) prior. Shaded areas show standard errors.}
\label{fig:fixed_env_init_app}
\end{figure}
It is difficult to observe a clear trend across the different simulations.
For small values of $K$, the choice of initialization is insignificant; except for the initialization at 0 in the first scenario, they all have roughly the same performance.
For intermediate values of $K$, the impact of the choice of initialization becomes apparent. Although there is no optimal choice, initializing arms with the median of previous arms seems more robust.
For large values of $K$, a clear trend emerges: pulling each arm once is always the worst thing to do. This was expected since we spend most of the time initializing arms. There is still no optimal choice in this case, yet the initialization at 0 seems more robust.
\subsection{Learning in a stationary environment} \label{appendix:fixed_env}
In this subsection, we repeat the simulations of Section \ref{sec:fixed_learning}. We recall that we study the impact of several meta-algorithms on the lifelong regret. We consider two scenarios: the first scenario is a Gaussian bandit problem where mean rewards are drawn i.i.d.\ from a uniform distribution over $[0, 1]$; while the second scenario is a Bernoulli bandit problem where mean rewards are drawn i.i.d.\ from a Beta(1, 3) distribution. In all experiments and for each episode, the horizon is fixed at $T= 1000$ and the number of arms at $K=5$; and for the meta-algorithm, the number of episodes is set at $J=10000$. Results are averaged over $100$ iterations and illustrated on Figure \ref{fig:fixed_env_learning_app}.
\begin{figure}[hbt]
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{figures/fixed_env/scenario_1.pdf}
\caption{Gaussian bandits with uniform prior}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{figures/fixed_env/scenario_2.pdf}
\caption{Bernoulli bandits with Beta(1, 3) prior}
\end{subfigure}
\caption{Lifelong regret of various meta-algorithms in stationary environments. Shaded areas show standard errors.}
\label{fig:fixed_env_learning_app}
\end{figure}
The results are similar to Section \ref{sec:fixed_learning}: \algo{TS} fails to learn in that time frame, while \algo{AdaUCB} manages to do so but is outperformed for a long period of time by a naive \algo{Greedy}. As for \algo{Greedy}(100), it still performs extremely well; its regret is close to the optimal one and is again sublinear.
\subsection{Influence of the initialization} \label{sec:fixed_init}
We start with a study of the effect of the initialization choice on empirical performance.
By default, most bandit algorithms initialize arms by pulling them at least one time. Knowing that we solve similar bandit problems again and again, we may want to find a more clever initialization. For example, consider a challenging bandit problem with a large number of arms with respect to the time horizon and assume we have found a reasonably good arm; exploiting this arm may then be more rewarding than exploring new arms in the hope of finding a better one. This becomes especially critical the more greedy we get.
In this section, we fix the value of the hyperparameter $\gamma$ and we evaluate three different initializations. In all cases, we set the empirical means of arms to a specific value and their upper confidence bounds are built as if they have been played once. In the first case we set this value at 0 (Init 1), in the second (Init 2) and third (Init 3) cases, it is fixed at the mean and median, respectively, of previous empirical means. We denote by ``Init 0'' the default initialization
We analyze here a particular problem class and we refer the reader to Appendix \ref{appendix:init} for a more complete overview of the impact of the different initializations for different prior distributions and numbers of arms. In this experiment, each episode consists of a Gaussian bandit problem with $K=5$ arms, a time horizon $T=1000$, and mean rewards of arms drawn i.i.d.\ from a uniform distribution over $[0, 1]$. We fix $\gamma = 0.2$ and we repeat this tuned UCB for $J=100$ episodes with the different initializations previously mentioned. On Figure \ref{fig:fixed_env_init}, we report the lifelong regret averaged over 100 iterations.
In this experiment, all three initializations improve over the default choice; it turns out that initializing arms with the median of previous empirical means performs better than using the mean of previous empirical means, which in turn performs better than an initialization at 0. However, this is not always the case, and it heavily depends on the number of arms $K$ and on the prior distribution. Fortunately, this choice matters little on instances with a small number of arms, which are the ones we study in this paper; therefore, in what follows we assume that \algo{UCB} uses the default initialization in each episode.
\begin{figure}[t]
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{figures/init/scenario_1_K_63.pdf}
\caption{Comparison of various initializations}
\label{fig:fixed_env_init}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{figures/fixed_env/scenario_0.pdf}
\caption{Comparison of various meta-algorithms}
\label{fig:fixed_env_learning}
\end{subfigure}
\caption{Lifelong regret of various meta-algorithms in stationary environments. Shaded areas show standard errors.}
\end{figure}
\subsection{Bandit algorithms as meta-algorithm} \label{sec:fixed_learning}
Now that the initialization rule is set, we can focus on the meta-algorithm, i.e.\ the algorithm that is responsible for picking the parameter $\gamma$ of \algo{UCB} for each episode, and ultimately, is the keystone in the minimization of the lifelong regret. We consider bandit algorithms for the choice of the meta-algorithm, since they are efficient online optimization algorithms.
This may seem like a vicious circle, as we are talking about the optimization of bandit algorithms and we want to avoid having to optimize the optimizer. Fortunately, the two algorithms, the meta-algorithm and the sub-algorithm, face different problems. Indeed, the sub-algorithm aims at maximizing the average reward over bandit instances, while the meta-algorithm aims at maximizing a function of the parameter $\gamma$, namely the expected cumulative reward of the sub-algorithm.
The dilemma encountered by the meta-algorithm is actually a continuous-armed bandit problem \cite{kleinberg2005nearly, auer2007improved} where the set of arms lies in some bounded interval, in our case the different $\gamma \in [0, 1]$. \textcite{kleinberg2005nearly} proposed a simple, yet nearly optimal, algorithm which consists in discretizing the $[0, 1]$ interval into a finite set of $n$ equally spaced points and running a standard bandit algorithm over those points. Unfortunately, their theoretical result holds only when the function to be optimized satisfies some Hölder conditions, which may not hold for the Bayesian regret of \algo{UCB}($\gamma$). Still, that does not prevent us from using this strategy. The chosen number of arms $n$ is critical in practice: set too low, we may be far from the optimal solution; set too high, we may end up exploring all the time. \textcite{auer2007improved}, with a similar algorithm, claimed that a value $n = (J / \log J)^{1/3}$ is optimal without knowing the exact Hölder condition. We thus choose this specific discretization in our simulations. It has also been noted by \textcite{bayati2020optimal} that the \algo{Greedy} algorithm, known to have a linear regret, may benefit from ``free'' exploration when run with a sufficiently large number of arms. This change point in its behavior happens around $\sqrt{J}$ arms; we also evaluate \algo{Greedy} with a discretization which contains that many points.
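As an illustration, the two discretizations of $[0, 1]$ discussed above can be computed as follows (a minimal sketch; for $J=10000$ the first rule gives roughly 10 points and the second exactly 100, matching \algo{Greedy}(100)):
\begin{verbatim}
import numpy as np

def gamma_grid(J, rule="ucb"):
    """Candidate values of gamma obtained by discretizing [0, 1]."""
    if rule == "ucb":                      # n = (J / log J)^(1/3)
        n = int(round((J / np.log(J)) ** (1.0 / 3.0)))
    else:                                  # n = sqrt(J), enough arms for Greedy's free exploration
        n = int(round(np.sqrt(J)))
    return np.linspace(0.0, 1.0, n)

print(len(gamma_grid(10000, "ucb")))       # 10 points
print(len(gamma_grid(10000, "greedy")))    # 100 points
\end{verbatim}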
We again study a particular bandit instance; see Appendix \ref{appendix:fixed_env} for the same experiment made with different choices of prior distribution. In this experiment, each episode consists of a Bernoulli bandit problem with $K=5$ arms, a time horizon $T=1000$ and the mean rewards are drawn i.i.d.\ from a uniform distribution over $[0, 1]$. We set the number of episodes $J=10000$ and we compare different meta-algorithms, namely \algo{Thompson Sampling} (\algo{TS}) with a uniform prior \cite{agrawal2013further}, \algo{AdaUCB} \cite{lattimore2018refining} and the \algo{Greedy} algorithm with the two discussed discretizations, denoted \algo{Greedy}(100) for the discretization with 100 points. We also report an oracle meta-algorithm, which knows the optimal $\gamma$. Results are averaged over 100 iterations and are displayed on Figure \ref{fig:fixed_env_learning}.
We see that \algo{TS} has a ``linear'' regret, indicating that it fails to learn a good parameter $\gamma$ within that time frame. The regret of \algo{AdaUCB}, in contrast, is sublinear, so the algorithm is learning; however, it is outperformed for a relatively long period of time by a naive \algo{Greedy}, which is stuck on a suboptimal, yet good, arm. The most interesting part is that \algo{Greedy}(100) performs extremely well; its regret is remarkably close to the one of \algo{Oracle} and is even sublinear. This supports the notion of free exploration of the \algo{Greedy} algorithm when the number of arms is large enough.
I believe most physicists would consider that the postulates (or
at least the properties they embody) concerning the superposition,
evolution and measurement of quantum states cover the essence of
Quantum Mechanics, the theory that is at the basis of current
fundamental Physics and gives us such an accurate description of
Nature at the atomic scale. Yet, if the theory was only based on
these postulates (or properties), its descriptive power would be
almost zero and its interest, if any, would be mainly
mathematical. As soon as one wants to describe matter, one has to
include an extra postulate: \emph{Pauli's Exclusion Principle}.
One of its usual formulations, equivalent to the one proposed
originally by Wolfgang Pauli in 1925 \cite{pauli-exclusion}, is the
following:\\
\textbf{Pauli's Exclusion Principle} --- \emph{No two electrons
can share the same quantum numbers.}\\
This principle refers to electrons, which constitute a significant
(but not the whole) part of matter, and is crucial in helping us
explain a wide range of phenomena, including:
\begin{itemize}
\item The electronic structure of atoms and, as a consequence, the
whole Periodic Table;
\item The electronic structure of solids and their electrical and
thermal pro\-perties;
\item The formation of white dwarfs, where the gravitational
collapse of the star is halted by the pressure resulting from its
electrons being unable to occupy the same states;
\item The repulsive force that is part of the \emph{ionic bond} of
molecules and puts a limit to how close the ions can get (e.g.,
$0.28\!$ nm between $Na^+$ and $Cl^-$ for solid sodium chloride),
given the restrictions to the states the overlapping electrons can
share.
\end{itemize}
We thus see how Pauli's insight when proposing the Exclusion
Principle was fundamental for the success of Quantum Mechanics.
Although he made many other important contributions to Physics, it
was for this one that he was awarded the Nobel prize in 1945.
Pauli's Exclusion Principle remains a postulate, to Pauli's
own dissatisfaction, as he expressed in his Nobel prize acceptance
lecture in 1946 \cite{pauli-lecture}:
\begin{quote}
\emph{``Already in my original paper I stressed the circumstance
that I was unable to give a logical reason for the exclusion
principle or to deduce it from more general assumptions. I had
always the feeling, and I still have it today, that this is a
deficiency." }
\end{quote}
In any case, as inexplicable as it may be, Pauli's Exclusion
Principle seems to beg for a generalization. In fact, it was soon
realized that other particles apart from electrons suffer from the
same inability to share a common quantum state (e.g., protons).
More surprising was the indication that some particles seem to
be subject to exactly the opposite effect, being
--- under certain circumstances --- forced to share a common state, as
for instance photons in the stimulated emission phenomenon, thus
calling for a much more drastic generalization of Pauli's
Principle.
\section{Identity and Indistinguishability}
Pauli's Exclusion Principle intervenes in a wide range of
phenomena, from the chemical bond in the salt on our table to the
formation of stars in distant galaxies. This is because it applies
to electrons and we consider all electrons in the universe to be
\emph{identical}, as well as any other kind of quantum
particles:\\
\textbf{Identical particles} --- \emph{Two particles are said to
be \emph{identical} if all their intrinsic properties (e.g., mass,
electrical charge, spin, colour, ...) are exactly the same.}\\
Thus, not only all electrons are identical, but also all
positrons, photons, protons, neutrons, up quarks, muon neutrinos,
hydrogen atoms, etc. They each have the same defining properties
and behave the same way under the interactions associated with
those properties. This brings us to yet another purely quantum
effect, that of \emph{indistinguishable} particles.
How can we distinguish identical particles? Their possibly
different internal states are not a good criterion, as the
dynamics can in general affect the internal degrees of freedom of
the particles. The same is valid for their momentum or other
dynamical variables. But their spatial location can actually be
used to distinguish them. Let us imagine we have two identical
particles, one in Alice's possession and the other with Bob. If
these two parties are kept distant enough so that the wave
functions of the particles practically never overlap (during the
time we consider this system), then it is possible to keep track
of the particles just by keeping track of the classical parties.
This situation is not uncommon in quantum mechanics. If, on the
other hand, the wave functions do overlap at some point, then we
no longer know which particle is with which party. And if we just
do not or cannot involve these classical parties at all, then it
is in general also impossible to keep track of identical
particles. In both these cases, the particles become completely
indistinguishable: they are identified by completely arbitrary
labels, with no physical meaning (as opposed to \emph{Alice} and
\emph{Bob}). In these situations the description of our system
becomes ambiguous and the so-called \textit{exchange degeneracy}
appears.
The problem of finding the correct and unambiguous description for
such systems is very general and requires the introduction of a
new
postulate for quantum mechanics: the Symmetrization Postulate.\\
\textbf{Symmetrization Postulate} \label{Post. Symmetrization} ---
\emph{In a system containing indistinguishable particles, the only
possible states of the system are the ones described by vectors
that are, with respect to permutations of the labels of those
particles:}
\begin{itemize}
\item \textit{either \emph{completely symmetrical} --- in which
case the particles are called \emph{bosons};}
\item \textit{or \emph{completely antisymmetrical} --- in
which case the particles are called \emph{fermions}.}
\end{itemize}
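As a standard two-particle illustration, if $|\varphi\rangle$ and
$|\chi\rangle$ are two single-particle states, the only admissible
vectors for the pair are
\[
|\Psi_{\pm}\rangle = \frac{1}{\sqrt{2}}\,\big(\,|\varphi\rangle_{1}\,
|\chi\rangle_{2} \pm |\chi\rangle_{1}\,|\varphi\rangle_{2}\,\big),
\]
with the $+$ sign for bosons and the $-$ sign for fermions. Note that
for $|\varphi\rangle = |\chi\rangle$ the antisymmetrical combination
vanishes: two fermions cannot occupy the same state.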
This is in fact a generalization of Pauli's Exclusion Principle,
in two ways. First, it extends it to a whole class of particles
which suffer the same restrictions: fermions. But it goes even
further and introduces a new class of particles, bosons, which
have a very different behaviour, almost the opposite, as they are
forced to share the same quantum numbers. To decide which
particles should be associated to a particular symmetry is
something that must ultimately be determined by observation. The
Symmetrization Postulate matches the study of such symmetries with
our empirical knowledge: as far as we know today, there are two
classes of particles in Nature according to their collective
behaviour in indistinguishable situations. These are, of course,
bosons and fermions: no particles have been found so far that
under the same circumstances could be described by vectors that
are neither symmetrical nor antisymmetrical. It is important to
note that none of this could have been deduced from the other
standard postulates of Quantum Mechanics. Yet, the Symmetrization
Postulate is rarely evoked.
\subsection{The Spin-Statistics Connection}
To determine whether a given particle is a fermion or a boson, we
need to investigate its statistical behaviour in the presence of
(at least one) other identical particles, when they are all
indistinguishable, and this behaviour will be very different for
the two types of particles. Indirect methods could also help us
reach a conclusion, but before any of that a simple and intriguing
property can actually come to our rescue: the
\emph{spin-statistics connection}.\\
\textbf{Spin-Statistics Theorem} --- \textit{Particles with
integer spin are bosons. Particles with half-integer spin are
fermions.}\\
This is not only a widely known empirical rule in Physics, but in
fact a theorem (originally proved by Pauli \cite{pauli-theorem}),
even if its proofs are not all completely clear and free from
controversy. Thanks to it, it is very easy to determine whether
some particle is either a fermion or a boson. In particular, this
criterion works also for composite particles. It is quite
surprising to find such a connection between the spin of a
particle and its statistical nature, a connection whose origins I
believe are still not well understood.
\section{Quantum Information}
The use of quantum systems and their unique properties to encode,
transmit, process and store information offers a completely new
way to deal with information, representing a revolution for
Information Sciences, and possibly for our Information Society as
well. It is conceivable that one day we will have a more
fundamental description of Nature than Quantum Physics and this
may well represent yet again a revolution in the way we deal with
information. But before trying to reach that far, we should ask
ourselves if we have already explored all the properties of the
quantum world in terms of their relevance for information
processing. I think not. There is still at least one other
property, as fundamental as the ones already mentioned, that
should be considered: particle statistics \footnote{Also referred
to as \emph{quantum statistics}; I shall use both expressions
interchangeably.}, or the apparent fact that every particle is
either a fermion or a boson and that their collective behaviour
obeys precise rules. Now, can the effects of particle statistics
play any role in quantum information processing? Can they be used
to perform useful quantum information tasks? And in an efficient
way?
For the last couple of years we have been exploring the role of
indistinguishable particles and quantum statistics in quantum
information processing, both for fermions and bosons \footnote{
Note also some recent attempts to use the statistical properties
of particles in the context of quantum information using electrons
\cite{divincenzo-loss, antonio}, photons \cite{dik}, parahydrogen
\cite{glaser}, fermions \cite{lloyd}, bosons \cite{sougato}, and
anyons \cite{kitaev}, but never presenting a systematic comparison
between the fermionic and bosonic statistics (This note and the
respective references had to be removed from the published version
of this article due to length restrictions).}. We have proved
that, using \emph{only} the effects of particle statistics, it is
possible to perform a quantum information task
--- such as transfer of entanglement \cite{Omar}, to do useful quantum
information processing --- such as entanglement concentration
\cite{Paunkovic}, and do it in an optimal way --- in particular, in
a state discrimination protocol \cite{Bose}. All these results
make use of the antibunching of indistinguishable electrons impinging
on a beam splitter, as well as of the bunching of photons in a
similar situation, both a clear signature of their statistics
\cite{yamamoto}.
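The essence of these signatures can be seen in a textbook calculation:
a balanced beam splitter maps the input modes $a,b$ to the output modes
$c,d$ via $a^{\dagger} \rightarrow (c^{\dagger}+d^{\dagger})/\sqrt{2}$
and $b^{\dagger} \rightarrow (c^{\dagger}-d^{\dagger})/\sqrt{2}$, so that
two identical particles, one in each input port, evolve as
\[
a^{\dagger} b^{\dagger}\,|0\rangle \;\rightarrow\;
\frac{1}{2}\,(c^{\dagger}+d^{\dagger})(c^{\dagger}-d^{\dagger})\,|0\rangle =
\left\{
\begin{array}{ll}
\frac{1}{2}\,\big(c^{\dagger 2}-d^{\dagger 2}\big)\,|0\rangle &
\mbox{for bosons (bunching),}\\[4pt]
-\,c^{\dagger} d^{\dagger}\,|0\rangle &
\mbox{for fermions (antibunching),}
\end{array}
\right.
\]
using $[c^{\dagger},d^{\dagger}]=0$ for bosons and
$c^{\dagger 2}=d^{\dagger 2}=0$, $\{c^{\dagger},d^{\dagger}\}=0$ for
fermions: bosons always leave through the same port, while fermions
always leave through different ports.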
Using two pairs of entangled particles, it was shown for both
fermions (electrons) and bosons (photons) that
indistinguishability enforces a transfer of entanglement from the
internal to the spatial degrees of freedom without any interaction
between these degrees of freedom \cite{Omar}. Furthermore,
sub-ensembles selected by local measurements of the path will in
general have different amounts of entanglement in the internal
degrees of freedom depending on the statistics of the particles
involved. Then, an entanglement concentration scheme was proposed
which uses only the effects of particle statistics
\cite{Paunkovic}. Although its efficiency is the same for both
fermions and bosons, the protocol itself is slightly different
depending on the nature of the particles. Moreover, no explicit
controlled operation is required at any stage. Finally, particle
statistics is applied to the problem of optimal ambiguous
discrimination of quantum states \cite{Bose}. It was shown that
the Helstrom optimal single-shot discrimination probability to
distinguish non-orthogonal states of two qubits (encoded in the
internal degree of freedom of two electrons or two photons) can be
achieved using only the properties of fermions and bosons.
Furthermore, this method offers interesting applications to the
detection of entanglement and the purification of mixed states.
Two main features emerge from the above results: particle
statistics appears as a resource that can replace controlled
operations (conditional interactions) in a \emph{natural} way, and
information processing using indistinguishable particles is
different for fermions and bosons. The obtained results can also
be tested with current technology. Moreover, they establish that
indistinguishable particles and quantum statistics can play a new
and important role in quantum information and that this connection
should be further explored.
\section*{Acknowledgements}
This article is based on a talk delivered at the
\emph{International Meeting on Quantum Information Science:
Foundations of Quantum Information}, held in Camerino, Italy, in
April 2004. I would like to thank the support from
Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (Portugal) and
the 3rd Community Support Framework of the European Social Fund,
as well as acknowledge the QuantLog initiative.
The earth system is exceedingly complex and often chaotic in nature, making prediction incredibly challenging: we cannot expect to make perfect predictions all of the time. Instead, we can look for specific states of the system that lead to more predictable behavior than others, often termed ``forecasts of opportunity''. When these opportunities are not present, scientists need prediction systems that are capable of saying ``I don't know.'' We present a method for teaching neural networks, a type of machine learning tool, to say ``I don't know'' for regression problems. By doing so, the neural network focuses less on the predictions it identifies as problematic and focuses more on the predictions where its confidence is high. In the end, this leads to better predictions.
\clearpage
\section{Introduction}
The earth system is exceedingly complex and often chaotic in nature, making prediction incredibly challenging: we cannot expect to make perfect predictions all of the time. Instead, we look for specific states of the system that lead to more predictable behavior than others, often termed ``forecasts of opportunity'' \cite{Mariotti2020,Albers2019,Mayer2020,Barnes2020}. When skillful forecast opportunities are not present, scientists need prediction systems that are capable of saying ``I don't know.'' While this concept of forecasts of opportunity stems from weather and climate predictions, the general idea is far broader than this. For example, a forecast of opportunity framework may be beneficial when certain predictors are only helpful under certain circumstances. Additionally, if the predictor data has unknown errors or corrupted values (e.g. corrupted pixels in satellite imagery), a system that can say ``I don't know'' can act as an effective data cleaner: identifying the more skillful predictions, when they occur.
Many approaches to identify skillful forecasts of opportunity already exist. For example, retrospective analysis of the forecast can provide a sense of the physical circumstances that can lead to forecast successes or busts \cite<e.g.>{Rodwell2013-qz}. The ensemble spread can also give a sense of uncertainty in numerical weather prediction systems \cite<e.g.>{Van_Schaeybroeck2016-lo}. \citeA{Albers2019} used a linear inverse modeling approach to identify confident subseasonal predictions and showed that these more confident predictions were indeed more skillful. Recently, \citeA{Mayer2020} and \citeA{Barnes2020} suggested that machine learning, specifically neural networks, may be a useful tool to identify forecasts of opportunity for subseasonal-to-seasonal climate predictions. Specifically, a classification network is first trained, then the predicted probabilities are ordered from largest to smallest. A selection of predictions with the highest probabilities is identified as possible forecasts of opportunity. While \citeA{Mayer2020} and \citeA{Barnes2020} show that this approach works well for classification tasks (i.e., predicting a specific category) where the network is already tasked with predicting a probability, it is less clear how one might apply this methodology to regression tasks (i.e., predicting a continuous quantity).
Most of the current machine learning approaches used to identify forecasts of opportunity, including those described above, are applied post-training. The network is first trained, and then the model confidence is assessed. Instead, here we build on the work by \citeA{Thulasidasan2019} and \citeA{Thulasidasan2020} to develop a deep learning abstention loss function for regression tasks that teaches the network to say ``I don't know'' (abstain) on certain samples \textit{during training}.
The resulting controlled abstention network (CAN) preferentially learns from the samples in which it has more confidence and abstains on samples in which it has less confidence. The CAN is designed to identify the optimal abstention fraction, or abstain on a user-defined fraction via a PID controller; both approaches ultimately lead to more accurate predictions than our baseline approach. While alternative methods have recently been suggested for abstention (rejection) during training \cite{Geifman2019-paper,Geifman2019-thesis}, the CAN approach can be easily implemented in most any network architecture designed for regression, as it only requires modification of the output layer and loss function.
We demonstrate the behavior of the CAN on a simple 1D example, and then on synthetic climate data where the correct answer is known. We present two use cases with the climate data. The first use case explores the utility of the CAN to identify climate forecasts of opportunity and is modeled loosely after global teleconnections associated with the El Ni\~no Southern Oscillation \cite<e.g.>{McPhaden2006-pi,Yeh2018-tf}. The second use case explores the utility of the CAN to act as a data-cleaner by identifying input samples with corrupted pixels and preferentially learning on the uncorrupted samples.
Section 2 introduces the synthetic climate data and general neural network architecture. Section 3 discusses the baseline loss function and the CAN in detail, and Section 4 presents the results. Additional discussion on the approach is provided in Section 5 and conclusions in Section 6.
\section{Data and Experiments}
\subsection{Synthetic climate data}
To demonstrate the utility of the controlled abstention network (CAN), we use the synthetic benchmark data set introduced by \citeA{Mamalakis2021}. While \citeA{Mamalakis2021} provides an extensive description of this data, we give a brief overview here. The data set consists of input fields $x_i$ and output series $y_i$ (where $i$ denotes the $i^{th}$ sample), which is a function of the input. The input fields represent monthly anomalous global sea surface temperatures (SSTs) generated from a multivariate normal distribution with a correlation matrix estimated from observed SST fields\footnote{https://psl.noaa.gov/data/gridded/data.cobe2.html}. The $i^{th}$ input sample consists of one map of SST anomalies, denoted as $x_i$. \citeA{Mamalakis2021} then defines the global response $y_i$ to sample $x_i$ as the sum of local, nonlinear responses. Specifically,
\begin{linenomath*}
\begin{equation}
y_i = \sum_g F_g(x_i)
\end{equation}
\end{linenomath*}
where $g$ represents the grid point and $F_g$ is defined locally (at each grid point $g$) by a piecewise linear function. The slopes $\beta_n$ (where $n$ is an integer that runs from 1 to the number of piecewise linear segments, set here to 5) of each local function are chosen randomly from a multivariate normal distribution with correlation matrix, once again, estimated from observed SST fields.
In the end, this data set consists of input maps of SSTs with spatial correlations indicative of observed SSTs, but where each input map is independent of the others. $y_i$ then represents the sum of contributions from each grid point across the globe, where that contribution is a nonlinear function (specifically, a piecewise linear function) of the SST value at that grid point. To speed up training time, we reduce the number of grid points (pixels) from that used by \citeA{Mamalakis2021} to 60 longitudes and 15 latitudes for a total of 900 grid points per input map. An example input map is shown in Fig. \ref{fig_arch}; its corresponding $y$ given in the title.
\subsection{Network architecture and training}
For regression problems, it is typical to have a single output unit that provides the prediction by the network. Here, we add uncertainty estimates to our regression network by simply adding an additional output unit. We give these two output units the names $\mu$ and $\sigma$ as shown in Fig. \ref{fig_arch}. $\mu$ denotes the predicted value while $\sigma$ denotes the uncertainty related to that prediction. As we will show, we can take our interpretation even further and say that the network outputs a probabilistic prediction (conditional probability distribution) for the $i^{th}$ sample in the form of a normal distribution with mean $\mu_i$ and standard deviation $\sigma_i$.
We train a fully connected feed-forward network with two hidden layers with 50 and 25 units, respectively. As described above, the output layer consists of two units. We train with a ReLU (rectified linear unit) activation function, learning rate of 0.0005, and batch size of 32. Since the second output unit (denoted by $\sigma$ in Fig. \ref{fig_arch}) cannot be negative, we constrain it to be positive through the network setup. We train on 8,000 samples, validate on 5,000 samples, and test on 5,000 samples. While we could train on a much larger data set, we have intentionally kept the sample size relatively small to demonstrate the utility of the CAN when the sample size is relatively low --- as is the case for many geoscience applications. All quantities and figures are computed from the testing data unless otherwise specified.
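For reference, a minimal TensorFlow/Keras sketch of this architecture is given below (the softplus activation used here to keep $\sigma$ positive is one possible choice; the names are illustrative):
\begin{verbatim}
import tensorflow as tf

def build_network(n_inputs, hidden=(50, 25)):
    """Fully connected network predicting mu and a positive sigma per sample."""
    inputs = tf.keras.Input(shape=(n_inputs,))
    x = inputs
    for units in hidden:
        x = tf.keras.layers.Dense(units, activation="relu")(x)
    mu = tf.keras.layers.Dense(1)(x)
    # softplus keeps sigma strictly positive
    sigma = tf.keras.layers.Dense(1, activation="softplus")(x)
    return tf.keras.Model(inputs, tf.keras.layers.Concatenate()([mu, sigma]))
\end{verbatim}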
We employ early stopping to automatically determine the optimal number of epochs to train. Specifically, the network stops training when the validation loss stops decreasing, with a {\tt patience} of 60 epochs. The network with the best performance on the validation loss is saved. Specifically for the CAN, we select the best performing network from epochs after the spin-up period, but we only consider epochs where the validation abstention fraction is within 0.1 of the abstention setpoint. For all examples shown here, 20 different networks are trained for each configuration (i.e., baseline ANN and CAN) by varying the randomly initialized weights.
\begin{figure}
\begin{center}
\noindent\includegraphics[width=375px]{figures/architecture_regress.png}
\end{center}
\caption{General CAN architecture used for the experiments. A map of synthetic sea-surface temperature anomalies is fed into a fully connected network tasked with predicting $\mu$ and $\sigma$ for that sample.}
\label{fig_arch}
\end{figure}
The network was trained using Python 3.7.9 and TensorFlow 2.4.
\section{Methods}
\subsection{Baseline network with log-likelihood loss}
The baseline deep neural network (ANN) has the architecture shown in Fig. \ref{fig_arch} and trains using the negative log-likelihood loss defined for sample $x_i$ as
\begin{linenomath*}
\begin{equation}
\mathcal{L}(x_i) = -\log p_i. \label{loss_base}
\end{equation}
\end{linenomath*}
where $p_i$ is the value of the probability density function of a normal distribution ($\mathcal{N}$) with mean $\mu_i$ and standard deviation $\sigma_i$:
\begin{linenomath*}
\begin{equation}
p_i = \mathcal{N}(y_i,\mu_i,\sigma_i).
\end{equation}
\end{linenomath*}
This baseline model predicts $\mu_i$ and $\sigma_i$ for each sample, where $\mu_i$ is the model's best guess of $y_i$ and $\sigma_i$ is the associated uncertainty \cite<e.g.,>[Section 5.3.2]{Duerr2020}.
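In TensorFlow, this baseline loss can be written directly in terms of the two output units; the sketch below assumes the network concatenates its outputs as $[\mu, \sigma]$ and that the labels arrive as a column vector:
\begin{verbatim}
import numpy as np
import tensorflow as tf

def negative_log_likelihood(y_true, y_pred):
    """-log N(y; mu, sigma), with y_pred = [mu, sigma] for each sample."""
    y_true = tf.reshape(y_true, (-1, 1))
    mu, sigma = y_pred[:, 0:1], y_pred[:, 1:2]
    nll = (tf.math.log(sigma * np.sqrt(2.0 * np.pi))
           + 0.5 * tf.square((y_true - mu) / sigma))
    return tf.reduce_mean(nll)
\end{verbatim}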
Once the network is trained, we can invoke abstention on the less certain predictions by thresholding on $\sigma$ \cite<e.g.>{Mayer2020}. For example, the 20\% most confident predictions are those with the 20\% smallest $\sigma$ values (i.e., $\sigma$ below the $20^{th}$ percentile of the predicted $\sigma$). As we will show, this thresholding approach for abstention is itself very powerful and can be used as a simple way to add uncertainty to regression networks. In addition, this baseline approach will serve as a comparison for the CAN.
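Post-training abstention then amounts to a simple percentile threshold on the predicted $\sigma$ (a sketch):
\begin{verbatim}
import numpy as np

def keep_most_confident(mu, sigma, coverage=0.2):
    """Keep the fraction `coverage` of predictions with the smallest sigma."""
    threshold = np.percentile(sigma, 100.0 * coverage)
    keep = sigma <= threshold
    return mu[keep], sigma[keep], keep
\end{verbatim}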
As an additional baseline, we will also compare our results with those obtained by training a standard feed-forward network containing a single output unit and a loss function defined by the mean absolute error (MAE). In this case, the network does not quantify uncertainty (i.e., $\sigma$). Consequently, only summary statistics over all testing predictions are provided.
Throughout this paper, we use ``coverage'' to denote the fraction of samples for which the network makes a prediction, and ``abstention'' to refer to the fraction of samples for which the network does not make a prediction. Thus, the percent coverage is always 100\% minus the percent abstention. For the baseline approach, abstention and coverage is computed post-training based on the predicted uncertainties $\sigma$ while for the CAN, these quantities are determined during the training itself (see next section).
\subsection{Controlled Abstention Network (CAN)}
\subsubsection{Abstention loss}
Unlike the baseline ANN, the CAN loss is designed to identify the less confident predictions so as to preferentially learn from the more confident predictions. The CAN loss for sample $x_i$ is defined as
\begin{linenomath*}
\begin{equation}
\mathcal{L}(x_i) = -q_i\log p_i - \alpha \log q_i . \label{loss}
\end{equation}
\end{linenomath*}
where $\alpha$ controls the amount of abstention (see next subsection) and $q_i$ represents the prediction weight defined as
\begin{linenomath*}
\begin{equation}
q_i = \min\left (1.0, \left[\frac{\kappa}{\sigma_i} \right]^2 \right).
\end{equation}
\end{linenomath*}
$\kappa$ is a data-specific scale (see below). The prediction weight $q_i$ tells the CAN how much it should consider sample $i$ when it reduces the total loss during backpropagation. Note that Eq. \ref{loss} is very similar to the abstention loss of \citeA<>[Chapter 4]{Thulasidasan2020} and \citeA{BarnesBarnes2021Class} for classification networks.
The loss above works by increasing $\sigma_i$ values on samples that the CAN identifies as less certain. In this way, one can define abstention based on a threshold $\sigma$. Specifically, we define abstention by the CAN when the predicted $\sigma_i > \tau$. To define $\tau$, let $\mathcal{P}_m$ denote the $m^{th}$ percentile of the predicted validation $\sigma$ at the end of the spin-up period. Then $\tau = \mathcal{P}_m$ where $m$ is the percent coverage setpoint. For example, for a coverage setpoint of 80\% (abstention setpoint of 20\%), $\tau$ is set to the $80^{th}$ percentile of predicted validation $\sigma$ at the end of the spin-up period: $\tau = \mathcal{P}_{80\%}$. Note that since $\tau$ is defined by the validation data at the end of the spin-up period, it remains fixed during training and evaluation of the testing data.
We define $\kappa = \mathcal{P}_{90\%}$. This definition of $\kappa$ is something that the user can modify. For example, setting $\kappa=\tau$ is an obvious choice. However, we found that setting $\kappa = \mathcal{P}_{90\%}$ outperformed $\kappa=\tau$ and worked for all experimental setups here; consequently, we did not explore further tuning of this parameter.
To summarize this section, the abstention loss looks a lot like the baseline loss (Eq. \ref{loss_base}). The main difference is the use of an additional scaling factor $q$ and an additional term that penalizes the network for large $\sigma$ predictions. This penalty is modulated by $\alpha$. $\kappa$ and $\tau$ are parameters set by the network during the spin-up period. $\kappa$ acts as a scaling parameter on $\sigma$ within the loss function. Samples with $\sigma$ larger than $\kappa$ contribute less to the loss function, while samples with $\sigma$ smaller than $\kappa$ contribute their full amount. $\tau$, on the other hand, sets the threshold used to define abstention and is used by the PID controller (see next section) when the user wishes to set a target coverage fraction.
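A sketch of the abstention loss in TensorFlow follows; $\alpha$ is passed in as a mutable TensorFlow variable so that it can either be held constant or be updated by the controller between batches (names are illustrative):
\begin{verbatim}
import numpy as np
import tensorflow as tf

def make_abstention_loss(kappa, alpha):
    """CAN loss: q * (-log p) - alpha * log(q), with q = min(1, (kappa/sigma)^2)."""
    def loss(y_true, y_pred):
        y_true = tf.reshape(y_true, (-1, 1))
        mu, sigma = y_pred[:, 0:1], y_pred[:, 1:2]
        log_p = -(tf.math.log(sigma * np.sqrt(2.0 * np.pi))
                  + 0.5 * tf.square((y_true - mu) / sigma))
        q = tf.minimum(1.0, tf.square(kappa / sigma))
        return tf.reduce_mean(-q * log_p - alpha * tf.math.log(q))
    return loss
\end{verbatim}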
\subsubsection{Setting the abstention setpoint}
The abstention loss, as defined in Eq. \ref{loss}, can be used in two distinct ways, depending on how $\alpha$ is determined. The first way is to set $\alpha$ to a predetermined constant. By doing this, the network is penalized equally throughout training for assigning high $\sigma$ values. If $\alpha$ is chosen correctly, the network can learn the optimal coverage percent from the data set. When $\alpha$ is held constant, the coverage setpoint is not set by the user and so we set $\tau = \kappa$. Physically, this represents the fact that the definition of abstention is set by the $90^{th}$ percentile of the predicted validation $\sigma$ values at the end of spin-up (i.e., $\mathcal{P}_{90\%}$). This works well because this same value is also used to define $\kappa$, the normalization factor used to set the confidence $q$ in Eq. \ref{loss}.
Alternatively, $\alpha$ can be adaptively modified throughout training so that the network abstains on a specified fraction of the training samples. Inspired by the success reported in \citeA<>[Chapter 4]{Thulasidasan2020}, we implement a discrete-time PID controller (velocity algorithm) to modulate $\alpha$ throughout training \cite<e.g,>[Eq. (1.38)]{Visioli2006}.
\citeA{Thulasidasan2020} solely explores low abstention setpoints (e.g. 10\%), and evaluates the PID terms batch by batch. For our applications, however, we need the algorithm to work well for a broad range of abstention setpoints (e.g. from 10\% to 90\%). With a high abstention setpoint, say 90\%, and a batch size of 32, only 3 samples on average would be covered per batch --- this leads to unstable behavior. Because of this, we evaluate the PID terms on 6 consecutive batches ($32 \times 6 = 192$ samples), which leads to more stable behavior of the abstention fraction while not being so large as to impede training. Fig. \ref{fig_epochs} shows examples of the PID controller modulating $\alpha$ to control the abstention fraction during training.
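A minimal sketch of such a velocity-form controller is given below; the gains are purely illustrative, and the sign convention is chosen so that $\alpha$ increases when the observed abstention fraction exceeds the setpoint (a larger $\alpha$ penalizes abstention more strongly):
\begin{verbatim}
class PIDAlpha:
    """Discrete-time PID controller (velocity form) for the abstention penalty alpha."""
    def __init__(self, setpoint, kp=1.0, ki=0.5, kd=0.0, alpha0=0.0):
        self.setpoint = setpoint
        self.kp, self.ki, self.kd = kp, ki, kd
        self.alpha = alpha0
        self.e1 = 0.0   # error at the previous update
        self.e2 = 0.0   # error two updates ago

    def update(self, observed_abstention):
        """Call every 6 batches with the abstention fraction observed over those batches."""
        e = observed_abstention - self.setpoint
        self.alpha += (self.kp * (e - self.e1)
                       + self.ki * e
                       + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.alpha = max(0.0, self.alpha)   # keep the penalty non-negative
        self.e2, self.e1 = self.e1, e
        return self.alpha
\end{verbatim}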
The training of the CAN occurs in two stages:
\begin{itemize}
\item \textbf{Spin-up:} For the first $N_{spin}$ epochs, the CAN is trained using the baseline loss function given in Eq. \ref{loss_base}. At the end of spin-up, $\mathcal{P}_m$ is computed on the validation samples for $m$ between 10 and 90 in increments of 10.
\item \textbf{Abstention training:} The CAN continues from where it stopped during the spin-up stage, but now trains using the abstention loss of Eq. \ref{loss}, with $\kappa$ and $\tau$ defined from $\mathcal{P}_m$. During this stage, $\alpha$ is either updated by the PID controller, or held constant at a user-defined value.
\end{itemize}
Based on these stages of training, there are only one to two \textit{new} free parameters to be determined by the user, depending on whether the PID controller is used to update $\alpha$ or whether $\alpha$ is held constant. Specifically, the user must choose the number of spin-up epochs, $N_{spin}$, for both methods and must also choose $\alpha$ if it is held fixed. While other parameters can certainly be tuned, we did not find it necessary for the range of experiments included in this paper.
\section{Results}
\subsection{A simple 1D example}
Before we discuss results with the synthetic climate data, it is informative to explore the behavior of the baseline ANN and CAN for a simple example with a 1-dimensional input. Specifically, we define an $(x,y)$ data set as (Fig. \ref{fig_ols_summary}a):
\begin{linenomath*}
\begin{eqnarray}
x_c &=& \epsilon(4.0,.25) \\
y_c &=& 1.0x_c - 2.0 + \epsilon(0.0,.5) \nonumber \\
x_l &=& \epsilon(0.0,.5) \nonumber \\
y_l &=& 0.7x_l + 0.6 + \epsilon(0.0,.05) \nonumber \\
(x,y) &=& (\{x_c,x_l\},\{y_c,y_l\}) \nonumber
\end{eqnarray}
\end{linenomath*}
where $\epsilon(a,b)$ denotes a random variable drawn from a normal distribution with mean $a$ and standard deviation $b$. The data is created such that 30\% of the samples exist along the line (i.e., $(x_l,y_l)$), and 70\% of the points exist within the cloud (i.e., $(x_c,y_c)$). Fig. \ref{fig_ols_summary}a shows the data with $x$ on the x-axis and $y$ on the y-axis. The data is designed such that for $x$ less than about 2.5, the data largely follows a straight line with little noise. For larger $x$, the data shows a cloud of points with no clear linear relationship. Naively fitting a straight line through all of this data would result in a fit that performs poorly on most samples. Instead, we would like a network to predict the samples along the line with accuracy while also identifying the samples within the cloud as being highly unpredictable (``I don't know.'').
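A sketch of how this data set can be generated with NumPy (30\% line, 70\% cloud):
\begin{verbatim}
import numpy as np

def make_1d_data(n, rng):
    """Generate the simple 1D data set: ~30% 'line' samples, ~70% 'cloud' samples."""
    n_line = int(0.3 * n)
    n_cloud = n - n_line
    x_c = rng.normal(4.0, 0.25, n_cloud)
    y_c = 1.0 * x_c - 2.0 + rng.normal(0.0, 0.5, n_cloud)
    x_l = rng.normal(0.0, 0.5, n_line)
    y_l = 0.7 * x_l + 0.6 + rng.normal(0.0, 0.05, n_line)
    x = np.concatenate([x_c, x_l])
    y = np.concatenate([y_c, y_l])
    return x.reshape(-1, 1), y

x_train, y_train = make_1d_data(3000, np.random.default_rng(0))
\end{verbatim}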
\begin{figure}
\begin{center}
\noindent\includegraphics[width=400px]{figures/scatter_truthpredict_olsr1_AbstentionLogLoss_npSeed99_networkSeed0.png}
\noindent\includegraphics[width=400px]{figures/mean_absoluteError_line_dots_olsr1_AbstentionLogLoss_npSeed99.png}
\end{center}
\caption{\textbf{1D Example w/ Constant $\alpha$.} (a) Data used for the simple 1D example. (b) Predicted $y$ versus the true $y$ for the baseline ANN predictions. The dashed line denotes the one-to-one line --- a perfect prediction. (c) As in (b) but for the CAN predictions. Scatter plots only show covered predictions (i.e., non-abstained). Colors denote the predicted $\sigma$, and insets in (b,c) display histograms of the predicted $\sigma$ for both covered and abstained predictions. (d) Mean absolute error versus coverage for different neural network loss functions over a range of initialization seeds for constant $\alpha=0.1$. Purple shading denotes the full range of errors over 20 baseline ANN models; the solid purple line denotes the median.}
\label{fig_ols_summary}
\end{figure}
The network is trained to take the input value $x_i$ and predict $y_i$. For this simple 1D example only, we train a fully connected network with 2 hidden layers of 5 units each. We found that this architecture is complex enough to learn the linear fit but not so complex as to learn a separate fit for the cloud. The network is trained with a constant $\alpha$ to evaluate whether the CAN is able to identify the correct coverage fraction of 30\% and abstain on the remaining 70\%. We found that $\alpha=0.1$ works well. We set the number of spin-up epochs to $N_{spin} = 225$ and use a learning rate of 0.0001. Finally, we train on 3,000 samples, validate on 1,000 samples, and test on 1,000 samples.
\begin{figure}
\begin{center}
\noindent\includegraphics[width=200px]{figures/history_olsr1_AbstentionLogLoss_setpoint0.1_networkSeed0_npSeed99.png}
\end{center}
\caption{\textbf{1D Example w/ Constant $\alpha$.} Example training and validation metrics for a constant $\alpha=0.1$.}
\label{fig_ols_epochs}
\end{figure}
Fig. \ref{fig_ols_epochs} shows $\alpha$ (fixed to 0.1 after spin-up), the abstention fraction, and the loss as a function of epoch during training for one particular model. The loss of both the training and validation data drops steadily during the spin-up stage of 0-225 epochs. At the start of the abstention stage, $\alpha$ is fixed to 0.1, while the abstention fraction is allowed to vary. However, it is clear that the network identifies an optimal abstention fraction by the second epoch of the abstention stage, and this fraction does not vary for the rest of the training. Training is halted by early stopping and the best weights are taken from the best model at epoch 559.
\begin{figure}
\begin{center}
\noindent\includegraphics[width=200px]{figures/histograms_olsr1_AbstentionLogLoss_npSeed99_networkSeed0.png}
\end{center}
\caption{\textbf{1D Example w/ Constant $\alpha$.} Histograms of the standardized errors (z-scores) of predictions by the baseline ANN for all samples. Means and standard deviations of these standardized errors are shown in colored text.}
\label{fig_ols_hist}
\end{figure}
Results from the baseline ANN and the CAN are shown in Fig. \ref{fig_ols_summary}b,c,d. As shown in Fig. \ref{fig_ols_summary}b,d, the baseline ANN outperforms the MAE model for all coverage percentages. Unlike the MAE model, the baseline ANN learns which samples are more certain and scales its predicted $\sigma$ accordingly, as shown by the inset histogram in Fig. \ref{fig_ols_summary}b. Fig. \ref{fig_ols_hist} shows the histograms of the standardized errors $z_i$ for the baseline ANN, which are defined as
\begin{linenomath*}
\begin{equation} \label{zj}
z_i = \frac{y_i - \mu_i}{\sigma_i}.
\end{equation}
\end{linenomath*}
The mean and standard deviation of the $z_i$ are approximately $0$ and $1$ for both training and validation. This reveals that the $\sigma$ are more than just unscaled measures of relative confidence. Rather, we may usefully interpret $\mu_i$ and $\sigma_i$ as the mean and standard deviation of an approximate conditional probability distribution for prediction $i$.
Focusing more closely on the baseline ANN results in Fig. \ref{fig_ols_summary}d (solid purple line), the error decreases as the coverage percent decreases. This indicates that the more confident predictions are also more accurate. As mentioned in the introduction, this is the idea behind \textit{forecasts of opportunity}, and the baseline ANN alone is able to identify the most skillful forecasts without abstention. Even so, the CAN (orange dots) slightly outperforms the baseline ANN: its error is slightly below even the best baseline ANN model, and it does a slightly better job learning the best-fit line (Fig. \ref{fig_ols_summary}c). The CAN obtains its edge over the baseline ANN because the abstention loss design allows it to put even more energy into learning the relationships of the confident samples. Furthermore, recall that 20\% of the data falls along the well-defined line in Fig. \ref{fig_ols_summary}a, and the CAN is able to identify the optimal coverage percent as 19\%.
\subsection{Forecasts of Opportunity}
For our first use case with the synthetic climate data, we modify the data to loosely reflect forecasts of opportunity related to teleconnections associated with the El Ni\~no Southern Oscillation (ENSO). Warm ENSO events (El Ni\~no events) have long been known to impact global temperatures and precipitation \cite<e.g.>{McPhaden2006-pi,Yeh2018-tf}. At times these events have led to skillful forecasts on subseasonal-to-seasonal time scales \cite<e.g.>{Johnson2014-fh}. To mimic this behavior with our synthetic data set, we average the anomalous SSTs in the ENSO region within the equatorial eastern Pacific (dashed white box in the map in Fig. \ref{fig_arch}). When the average value in this box is larger than 0.5 (29\% of the samples), we leave the sample as is. This reflects an opportunity where a strong El Ni\~no may lead to more predictable behavior of the global climate system. Samples where the average value is less than 0.5 represent ``noisy'' samples; consequently, we shuffle the $y$ values across these samples so that there is no relationship between the input maps $x$ and their labels $y$. With such a setup, we anticipate that the network can identify strong synthetic El Ni\~no samples (i.e., large values within the ENSO box, Fig. \ref{fig_arch}) as samples with high confidence and low error.
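In code, this modification amounts to the following sketch (here X holds flattened input maps of shape (samples, grid points); the mask enso_box selects the grid points inside the white box, and the names are illustrative):
\begin{verbatim}
import numpy as np

def apply_forecasts_of_opportunity(X, y, enso_box, rng, threshold=0.5):
    """Shuffle the labels of 'noisy' samples (ENSO-box mean below threshold)."""
    enso_index = X[:, enso_box].mean(axis=1)   # mean SST anomaly in the ENSO box
    noisy = enso_index < threshold
    y = y.copy()
    y[noisy] = rng.permutation(y[noisy])       # destroy any input-output relationship
    return y, ~noisy                           # ~noisy flags the forecasts of opportunity
\end{verbatim}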
\begin{figure}
\begin{center}
\noindent\includegraphics[width=185px]{figures/history_tranquilFOOr16_AbstentionLogLoss_setpoint0.3_networkSeed0_npSeed99.png}
\noindent\includegraphics[width=185px]{figures/history_tranquilFOOr16_AbstentionLogLoss_setpoint0.7_networkSeed0_npSeed99.png}
\end{center}
\caption{\textbf{Forecasts Of Opportunity Experiment w/ PID-controlled $\alpha$.} Example training and validation metrics for abstention setpoints of (a) 0.3 and (b) 0.7.}
\label{fig_epochs}
\end{figure}
We train separate models for abstention setpoints ranging from .1 to .9 in increments of 0.1. Fig. \ref{fig_epochs} shows $\alpha$, the abstention fraction, and the loss as a function of epoch during training for two different abstention setpoints. Following the spin-up period of 15 epochs, the PID-controller adjusts $\alpha$ to maintain an abstention fraction within 0.1 of the abstention setpoint.
\begin{figure}
\begin{center}
\noindent\includegraphics[width=400px]{figures/scatter_truthpredict_tranquilFOOr22_AbstentionLogLoss_npSeed99_networkSeed0.png}
\noindent\includegraphics[width=400px]{figures/mean_absoluteError_line_dots_tranquilFOOr22_AbstentionLogLoss_npSeed99.png}
\end{center}
\caption{\textbf{Forecasts Of Opportunity Experiment w/ PID-controlled $\alpha$.} (a) Predicted $y$ versus the true $y$ for the baseline ANN predictions. The dashed line denotes the one-to-one line --- a perfect prediction. (b,c) are the same as (a), but for CAN predictions at two different coverage rates. Scatter plots only show covered predictions (i.e., non-abstained). Colors in (a)-(c) denote the predicted $\sigma$, and insets in (a)-(c) display histograms of the predicted $\sigma$ for both covered and abstained predictions. (d) Mean absolute error versus coverage for different neural network loss functions over a range of initialization seeds and abstention setpoints (shown in colors). Purple shading denotes the full range of errors over 20 baseline ANN models; the solid purple line denotes the median.}
\label{fig_summary}
\end{figure}
Results for the baseline ANN and PID-controlled CAN are shown in Fig. \ref{fig_summary}. As shown in Fig. \ref{fig_summary}d, the baseline ANN error (purple shading) decreases for decreasing coverage. This documents the ability of the baseline ANN to identify the forecasts of opportunity while it assigns higher $\sigma$ values to samples with higher uncertainty (Fig. \ref{fig_summary}a). The colored dots in Fig. \ref{fig_summary}d show results from the PID-controlled CAN for a range of abstention setpoints. Like the baseline ANN, the CAN error decreases with decreasing coverage; however, the best CAN models are always better (lower error) than the best baseline ANN models. This is especially evident for coverage fractions below 30\%, which corresponds to the 29\% of samples that are forecasts of opportunity (i.e., unshuffled). Fig. \ref{fig_summary}b,c display the predictions by the CAN, including histograms of $\sigma$, for two coverage fractions. For lower coverage fractions (higher abstention fractions), the CAN pushes the abstained $\sigma$ values to larger values and likewise improves its confidence on the covered samples by reducing $\sigma$ (compare predicted $\sigma$ histograms inset in Fig. \ref{fig_summary}b,c). That is, the CAN with 24\% coverage learns the forecasts of opportunity samples \textit{better} than the baseline ANN, and better than it does for higher coverage fractions.
\begin{figure}
\begin{center}
\noindent\includegraphics[width=200px]{figures/histograms_tranquilFOOr22_AbstentionLogLoss_npSeed99_networkSeed0.png}
\end{center}
\caption{\textbf{Forecasts Of Opportunity Experiment w/ PID-controlled $\alpha$.} Histograms of the standardized errors (z-scores) of the predictions by the baseline ANN for all samples. Means and standard deviations of these standardized errors are shown in colored text.}
\label{fig_hist}
\end{figure}
Fig. \ref{fig_hist} shows the histograms of the standardized errors $z_i$ from the baseline ANN (see Eq.~\ref{zj}). As in the 1D example, the mean and standard deviation of the $z_i$ are approximately $0$ and $1$ for both training and testing data (validation data looks similar). This reveals that the $\sigma$ are more than just unscaled measures of relative confidence. Moreover, we may usefully interpret $\mu_i$ and $\sigma_i$ as the mean and standard deviation of an approximate conditional probability distribution for prediction $i$. One can also create histograms for the CAN of the predicted samples (not shown); however, in this case the histograms are much narrower since the covered (non-abstained) samples tend to be highly confident and exhibit small $\sigma$, as expected.
\begin{figure}
\begin{center}
\noindent\includegraphics[width=266px]{figures/scatter_truthpredict_tranquilFOOr45_AbstentionLogLoss_npSeed99_networkSeed0.png}
\noindent\includegraphics[width=400px]{figures/mean_absoluteError_line_dots_tranquilFOOr45_AbstentionLogLoss_npSeed99.png}
\end{center}
\caption{\textbf{Forecasts Of Opportunity Experiment w/ Constant $\alpha$.} (a) Predicted $y$ versus the true $y$ for the baseline ANN predictions. The dashed line denotes the one-to-one line --- a perfect prediction. (b) As in (a) but for CAN predictions at a coverage rate of 24\%. Scatter plots only show covered predictions (i.e., non-abstained). Colors in (a,b) denote the predicted $\sigma$, and insets in (a,b) display histograms of the predicted $\sigma$ for both covered and abstained predictions. (c) Mean absolute error versus coverage for different neural network loss functions over a range of initialization seeds for constant $\alpha=0.1$. Purple shading denotes the full range of errors over 20 baseline ANN models; the solid purple line denotes the median.}
\label{fig_const_summary}
\end{figure}
Thus far, we have trained the CAN to identify synthetic El Ni\~no forecasts of opportunity with the PID-controller, which drives the abstention fraction toward a user-specified setpoint during training. We can instead use the constant $\alpha$ approach to see if the CAN identifies the correct abstention fraction on its own. We set $\alpha=0.1$; the results are shown in Fig. \ref{fig_const_summary}. The CAN outperforms the baseline ANN with constant $\alpha$, as it did with the PID-controller. In addition, it identifies a coverage of $\sim 24$\%, which is very close to the 29\% of samples that are forecasts of opportunity.
Interestingly, we find that the PID-controller method tends to slightly outperform the constant $\alpha$ approach (compare the 25\% coverage errors between Fig. \ref{fig_summary}d and \ref{fig_const_summary}c). It is unclear to the authors why this is the case; it could be a function of this synthetic data set. Future work will explore this behavior further.
\subsection{Corrupt Inputs}
\begin{figure}
\begin{center}
\noindent\includegraphics[width=400px]{figures/corruptInputsr22_corruptmap.png}
\end{center}
\caption{\textbf{Corrupt Inputs Experiment.} Examples of (a), an unmodified input map, and (b), a corrupted input map where 66\% of the pixels have been set to $-4.0$.}
\label{fig_corrupt_map}
\end{figure}
For the second synthetic use case, we modify the climate input maps by ``corrupting" some of the grid points by setting them equal to $-4.0$. This exercise is meant to mimic a data set where some of the inputs have bad pixels in some areas. An example of this is shown in Fig. \ref{fig_corrupt_map}. We corrupt 30\% of the samples and leave the remaining 70\% unmodified. We use the CAN with constant $\alpha=0.05$ to assess whether the network is able to successfully identify the correct abstention fraction of 30\%. Results are shown in Fig. \ref{fig_corrupt_summary}. Once again, the baseline ANN outperforms the standard MAE model for coverages less than 100\%. Furthermore, the CAN outperforms the baseline ANN and correctly identifies 70\% coverage (30\% abstention) as the optimal fraction.
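A sketch of this corruption step follows (random pixels are set to $-4.0$; the 66\% pixel fraction matches the example in Fig. \ref{fig_corrupt_map} and is illustrative):
\begin{verbatim}
import numpy as np

def corrupt_inputs(X, rng, frac_samples=0.3, frac_pixels=0.66, value=-4.0):
    """Set a random subset of pixels to `value` in a random subset of samples."""
    X = X.copy()
    bad = rng.random(X.shape[0]) < frac_samples                # samples to corrupt
    pixel_mask = rng.random((bad.sum(), X.shape[1])) < frac_pixels
    X[bad] = np.where(pixel_mask, value, X[bad])
    return X, bad                                              # bad flags corrupted samples
\end{verbatim}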
\begin{figure}
\begin{center}
\noindent\includegraphics[width=266px]{figures/scatter_truthpredict_corruptInputsr22_AbstentionLogLoss_npSeed99_networkSeed0.png}
\noindent\includegraphics[width=400px]{figures/mean_absoluteError_line_dots_corruptInputsr22_AbstentionLogLoss_npSeed99.png}
\end{center}
\caption{\textbf{Corrupt Inputs Experiment w/ Constant $\alpha$.} (a) Predicted $y$ versus the true $y$ for the baseline ANN predictions. The dashed line denotes the one-to-one line --- a perfect prediction. (b) is the same as (a), but for CAN predictions at a coverage rate of 24\%. Scatter plots only show covered predictions (i.e., non-abstained). Colors in (a,b) denote the predicted $\sigma$, and insets in (a,b) display histograms of the predicted $\sigma$ for both covered and abstained predictions. (c) Mean absolute error versus coverage for different neural network loss functions over a range of initialization seeds for constant $\alpha=0.05$. Purple shading denotes the full range of errors over 20 baseline ANN models; the solid purple line denotes the median.}
\label{fig_corrupt_summary}
\end{figure}
This use case demonstrates the ability of the CAN to act as a ``data cleaner'' for regression problems \cite{Thulasidasan2019}; the CAN preferentially learns on the uncorrupted samples and abstains on the corrupted ones. Note that if we had corrupted samples in the training set only (not in the testing set), we could remove these corrupted samples prior to training to obtain a model that performs well on the clean data set. This is different than what we have done here. We have trained the network to not only learn the uncorrupted samples, but to also learn to \textit{identify the corrupted samples}. This means that in the future, when new, unseen samples are pushed through the network, the network will be able to handle them accordingly whether they are corrupted or not.
\section{Discussion}
In many ways, abstention loss is yet another approach to combat overfitting, if we think broadly of overfitting as incorrectly learning ``noise'' within the training samples that is not present in the validation or testing samples. Common approaches for dealing with overfitting include dropout \cite{Srivastava2014-bs} and regularization. To explore this a bit further, we reran our forecast of opportunity experiment shown in Fig. \ref{fig_summary} but applied ridge regression with an L$_2$ parameter of 0.1 \cite{Marquardt1975-ac} to the first layer of the network. Ridge regression reduces the magnitude of individual weights, and thus spreads the importance across multiple units \cite<see>[Fig. 3]{Barnes2020-toe}. Results, shown in Supp. Fig. 1, can be directly compared with those in Fig. \ref{fig_summary}. Regularization slightly reduces both the baseline ANN and CAN errors, and allows the baseline ANN to perform more similarly to the CAN. Even so, the CAN outperforms the baseline ANN for the lowest coverage fractions, consistent with the fraction of noisy samples within the synthetic data set. Overall, we see that for this specific use case, regularization can be paired with the abstention loss to produce an even better prediction.
Results presented here were based on the synthetic climate data of \citeA{Mamalakis2021}, where each sample is independent and the input and output values are largely symmetric about zero. However, real data seldom behave so well. It is likely that real data may require a transformation (e.g. standardization or a power transformation) prior to training if we are to interpret $\mu_i$ and $\sigma_i$ as the mean and standard deviation of an approximate conditional probability distribution for prediction $i$. Furthermore, a potential concern is that we only present use cases based on synthetic climate data. Our aim in this paper is to demonstrate the basic concept and implementation of the abstention loss in a setting where the correct answer is known. This leaves exploration of CAN's utility in specific scientific contexts to future research. With that said, previous work exploring forecasts of opportunity in observations with an approach similar to the baseline ANN \cite<e.g.>[]{Mayer2020,Barnes2020} provides confidence that the abstention loss will be beneficial.
While we have shown that the abstention loss outperforms the baseline ANN approach, we wish to stress that this baseline approach is itself a simple yet powerful method for incorporating uncertainty into neural network regression problems. This is especially true because the output offers approximate conditional probability distributions for the predictions. Although this baseline approach is a standard in the computer science literature \cite<e.g.,>[Chapters 4 and 5]{Duerr2020}, it is much less known in the geoscience community. The authors believe it will be a powerful tool as we move forward.
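As an illustration of this baseline, a network can output two values per sample that are interpreted as $\mu$ and (after an exponential transform to keep it positive) $\sigma$, and be trained with the Gaussian negative log-likelihood. The sketch below shows one common way to implement such a loss; it is a generic example rather than the exact code used in this study.
\begin{verbatim}
import tensorflow as tf

def gaussian_nll(y_true, y_pred):
    # y_pred[:, 0] is mu and y_pred[:, 1] is log(sigma);
    # exponentiation keeps sigma strictly positive.
    mu = y_pred[:, 0]
    sigma = tf.exp(y_pred[:, 1])
    y = tf.squeeze(y_true)
    return tf.reduce_mean(
        tf.math.log(sigma)
        + 0.5 * tf.square((y - mu) / sigma))
\end{verbatim}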
\section{Conclusions}
The ability to say ``I don't know'' is an important skill for any scientist.
In the context of prediction with deep learning, the identification of uncertain (unpredictable) samples is often approached post-training. In this paper we propose an alternative: a deep learning loss function that can abstain \textit{during training} for regression problems. We first present a baseline regression approach and then introduce a new abstention loss for regression. The controlled abstention network (CAN), trained with the abstention loss, allows the network to preferentially learn more from confident samples, and ultimately outperform the baseline ANN approach.
An additional benefit of both the baseline ANN and abstention loss CAN is their simplicity -- they are straightforward to implement in almost any network architecture as they only require modification of the output layer and training loss. The abstention loss framework has the potential to help deep learning algorithms identify skillful forecasts, as well as corrupt samples, ultimately improving performance on the samples with predictability.
\clearpage
\acknowledgments
This work was funded, in part, by the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) under NSF grant ICER-2019758. Once published, the code and data will be made available to the community via the Mountain Scholar permanent data repository with a permanent DOI and via Zenodo.
\section{Introduction}
As people across the world become increasingly aware of how their privacy is compromised in this digital era, the field of Privacy Enhancing Technologies, or PETs, has boomed. The first workshop on Privacy Enhancing Technology was in 2000, then called the "Workshop on Design Issues in Anonymity and Unobservability" \cite{petsym}. By 2007, the workshop had ballooned into a full symposium. In 2015, the first issue of the Proceedings on Privacy Enhancing Technologies journal was published \cite{privenhanctech}. This year in 2019, there were 4 volumes of Proceedings on Privacy Enhancing Technologies published, containing a total of 66 papers. Of these papers, we identified 14 which specifically describe PETs that have been newly developed or utilized in a new way. We focused on 3 papers that seemed to have widespread use cases. Some of these use cases, however, include criminal activity. While heavily focused on adversarial actions and capability, much of security research does not focus on or consider the social science and motivations behind bad actors. We believe that this is a critical factor to consider when working on the problem of cybercrime. In this paper, we will analyze some of the cutting edge PETs and the potential risks associated with them from the lenses of computer science and criminal justice. We argue that the continued development of privacy enhancing technology contributes to the global rise of cybercrime.
\section{Crime}
The field of Criminal Justice is built on theories. Researchers will examine crime data and try to find patterns based on criminology, sociology, psychology, and even economics. These theories focus on different elements of crime, such as the different types of crime, the different types of offenders, how to deal with crime after it happens, and how to deal with crime before it happens.
The idea of deterrence is a continuously researched topic in the field of Criminal Justice that focuses on trying to deal with crime before it happens. Researchers hope to understand how we can de-incentivise and prevent criminal behaviors by making the risk greater than the reward. The theory of deterrence was first proposed by Italian philosopher and economist Cesare Beccaria in the 1700s \cite{beccaria}. According to Beccaria, there are three factors of a punishment that determine if the punishment will deter criminal action. These factors are severity, celerity, and certainty.
Since then, the different weights of these factors, and their impact on deterrence at all, have been questioned \cite{deterrence}. Research has shown that the severity of punishment might have the opposite effect; offenders who have faced strict punishment are often more likely to reoffend. The swiftness of punishment has been observed to have little effect at all \cite{uscourts}. Certainty, however, does have a measurable deterrent effect. This has been proven in multiple studies across the world, both in macro- and micro-level contexts \cite{deterrencejustice}.
A more recent theory that has grown out of the idea of deterrence is crime prevention. There are three main forms of crime prevention that have been agreed upon within the field: situational crime prevention, developmental crime prevention, and community crime prevention. Situational crime prevention, or SCP, focuses on proximate causes of crime. As the name suggests, SCP focuses on how to decrease a specific crime within a specific context. Its goal is to prevent crime by eliminating or reducing conditions that make a criminal offense more likely to occur. It draws heavily on economic theory about decision making \cite{criminology}.
In 2003, two Criminal Justice scholars Cornish and Clarke identified 25 specific techniques of crime prevention. Many of these techniques directly conflict with privacy, specifically those within the "Increase the Risks" category. This category is rather self-explanatory and highlights techniques that increase the risks of performing criminal activities. The two techniques most at odds with privacy are "reduce anonymity" and "strengthen formal surveillance." \cite{crimeprevention}
\section{Privacy Technology}
Privacy technology increases anonymity, and in doing so shrinks the certainty factor of deterrence from cybercrime to almost nothing. In this way, the development of PETs is in direct conflict with crime prevention. One of the most famous PETs is Tor. An acronym for "The Onion Router," Tor is a service that sends messages with layers of encryption between multiple endpoints to obfuscate network traffic and provide anonymity for users. According to RSA, the group that utilizes Tor the most is cybercriminals. Tor’s criminal use cases include: the trade of stolen financial data, financial fraud, illegal sexual content, bypassing censorship, drug trafficking, weapons trading, gambling, and the sale of stolen goods \cite{cybercrime}.
Tor is also a popular area of security research. One of the 2019 PoPETs papers was, "DPSelect: A Differential Privacy Based Guard Relay Selection Algorithm for Tor" \cite{dpselect}. Researchers have recently identified a system-wide vulnerability of Tor against network-level adversaries, which includes governments and other large policing bodies. These adversaries can analyze traffic to glean information about users. In addition to passive attacks, researchers have found that Tor users can be de-anonymized using active attacks that target BGP. There has been a proposed counter to this particular attack, called Counter-RAPTOR. Counter-RAPTOR uses a "guard relay selection algorithm" to select guard relays with a high resilience to BGP hijacking attacks. In Tor, guard relays are the first "messengers" in the relay system, taking a message from the user’s IP, packaging it, and sending it to another layer. As a result of having access to IPs, guard relays have to be more trusted and resilient than other nodes in the Tor ecosystem.
Hanley et al. propose a new guard relay selection algorithm called DPSelect that uses differential privacy to improve upon Counter-RAPTOR, which can leak information through the decrease in randomness of guard relay selection because it is location based. Over time, an adversary could start seeing patterns in which guard relays are selected, and learn more about the Tor users. DPSelect mitigates this by including a "Max-Divergence" metric to adjust the likelihood of a particular potential guard relay being chosen. Max-Divergence is equal to the natural log of the highest probability of choosing a specific guard relay over the lowest probability of picking that relay. In a truly random scenario, each relay has an equal chance of being picked, so you get ln(1), which is equal to 0. By setting the Max-Divergence, the algorithm roughly determines how far from truly random you are willing to go, which determines how much information could be leaked. A higher Max-Divergence means a higher deviation from random. The Max-Divergence of DPSelect was 0.67 in the average case and 1.05 in the worst case, compared to Counter-RAPTOR with 1.3 in the average case and 7 in the worst case. Ultimately, this means that the new DPSelect algorithm reduces the ability of network-level adversaries to de-anonymize Tor users.
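To make the metric concrete, the following sketch (our own illustration, not code from the DPSelect paper) computes a Max-Divergence value from the highest and lowest probabilities with which each relay can be selected; the probability values are hypothetical.
\begin{verbatim}
import math

def max_divergence(p_high, p_low):
    # ln of the largest ratio between the highest and lowest
    # selection probability of each relay; for a truly random
    # (uniform) choice every ratio is 1 and ln(1) = 0.
    return max(math.log(h / l) for h, l in zip(p_high, p_low))

# Hypothetical example with three relays:
print(max_divergence([0.5, 0.3, 0.2], [0.5, 0.3, 0.2]))  # 0.0
print(max_divergence([0.6, 0.3, 0.2], [0.3, 0.2, 0.1]))  # ~0.69
\end{verbatim}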
Another paper looks at the flaws of Tor and proposes an entire alternative service in "ConsenSGX: Scaling Anonymous Communications Networks with Trusted Execution Environments" \cite{consensgx}. In their design, ORAM protocols are utilized to fetch smaller portions of the full network without revealing the address space they learned about so as to minimize an adversary’s ability to attack the network. ORAM stands for Oblivious RAM; it’s a cryptographic tool to prevent the exposure of information to anyone observing memory access patterns. This team set out to design a network that is scalable, efficient, and requires minimal changes to the underlying Tor architecture.
On the server side, they built their network protocols on top of Tor’s. The main difference is how ConsenSGX does directory authorities. Each directory authority distributes its own parameters. These parameters contain all things that are not for an individual relay like protocol version, network features, and the number of relays in the different pools (as well as the bandwidth for these pools). The directory authorities also need to verify the caches that serve the protocol. Each of the DAs has a long-TTL signature verification key that is then used to sign an ephemeral asymmetric keypair. This key is then used to verify what the protocol calls an attestation. Once verified, the server responds with descriptions of available relays. This process can only work if a client knows whether a node supports ConsenSGX. On the client side, as in the Tor protocol, the client connects to a DA and authenticates it using the DA’s public key. Once authenticated to the DA, the client selects a relay to connect to.
In the evaluation of ConsenSGX, the team found that the protocol is faster than its PIR-Tor counterpart. This speed increase is because the time complexity of client queries in ConsenSGX is log3(N), whereas Tor is O(N). Beyond the time complexity benefits, the amount of bandwidth required is also lower. These benefits make it easy to deploy a ConsenSGX node. The ConsenSGX scheme is built on top of Intel SGX as its Trusted Execution Environment. This scheme was shown to be an effective implementation for scalable anonymous networks.
Anonymous communication is another tool that can help enable cybercrime. A new covert channel called Tithonus was developed by Recabarren and Carbunar and described in their paper, "Tithonus: A Bitcoin Based Censorship Resilient System"\cite{tithonus}. It is specifically designed to not be able to be brought down by state level actors, which are typically the ones trying to catch and prevent illegal activities. Tithonus uses Bitcoin’s gossip protocol rather than relying on full consensus. The authors try to provide 6 properties. First is unobservability, which guarantees that a censor would be unable to detect communications even if they can inspect packets and corresponding metadata. This is measured in both unobservable access to received messages as well as sent messages that are indistinguishable from normal Bitcoin usage. Next, unblockability says that the censor is unable or unwilling to block communications even if the unobservability property was unable to be achieved because that would mean disrupting normal function of the blockchain. Availability promises that the system is resilient to DOS attacks. Tithonus hampers DOS attacks by requiring that requests be paid upfront, so that all interaction with the system forces the users to invest resources. Communication integrity says that communications between the user and the destination should not be able to be modified. The property of ease of deployment is fairly self explanatory. Along with being easy to bootstrap and deploy, Tithonus does not require altruistic participation (when clients download blockchain content and persist it with no remuneration). Finally, performance promises that the system minimizes cost while maximizing the amount of useful information sent over time.
Tithonus utilizes a complex communication stack to achieve its goals. The lowest layer embeds data into transactions, the next optimizes transaction fees, the one after that sends messages of random size, the next establishes trust, the layer on top of that allows the client to securely communicate with the system, and the final layer is the application layer, providing the interface. Based on the ease of deployment property, users don’t need to understand or be aware of the intricacies of the implementation. Tithonus even limits the amount of content a client can request per day to be consistent with regular Bitcoin users, which provides further obscurity for users.
\section{Conclusion}
According to Criminal Justice theory that has been developed and tested over centuries, the more certain someone is that they can commit a crime and get away with it, the more likely they are to do so. The developments in Privacy Enhancing Technologies have only made it easier to commit a crime undetected. PETs prevent governments and other policing bodies from being able to detect, monitor, track, and even prove criminal activity. Improvements in Tor further prevent users from being identified. Alternatives to Tor allow the technology to take root at a much larger scale. Breakthroughs in covert communication channels facilitate the planning and conducting of criminal activity. Disabling the ability to combat cybercrime is an important ethical concern. While preserving privacy is a noble goal, researchers must consider how their developments might be used.
\section{Introduction}
For theories of the early universe, the right amount of perturbations must be generated so as to conform with our observations such as the cosmic microwave background (CMB) \cite{Larson:2010gs} and large scale structures (LSS) \cite{Bernardeau:2001qr}. One of the well-known observed features is that the power spectrum of these perturbations, which comes from the 2-point correlation function, has to be (nearly) scale-invariant \cite{Larson:2010gs}, which puts non-trivial constraints on theoretical model building. Although it is well-known that a single scalar field, which drives the universe into de-Sitter like expansion (inflation \cite{Starobinsky:1980te,Guth:1980zm,Albrecht:1982wi,Linde:1983gd,Starobinsky:1985ww}, while the scalar is called inflaton), or nonrelativistic matter-like contraction \cite{Finelli:2001sr,Cai:2007qw} could easily generate perturbations to meet the requirement, the nature of the scalar is still unclear.
A scale-invariant power spectrum may also arise when one modifies Einstein's gravity at early times. In some cases, the modified gravity theories could be connected with unmodified general relativity (GR) plus a scalar through conformal transformations \cite{Faraoni:1998qx}, with the latter being viewed as the counterpart in Einstein frame of the former. Due to the equivalence between the two frames (Jordan and Einstein), the perturbations generated by the two counterparts are exactly the same. Thanks to the connection, one can thus reconstruct models of modified gravity from the known evolution of GR-plus-scalar models, which can lead to inflation or matter-contraction scenarios. Recently we proposed a way of reconstructing the models with a scalar nonminimally coupled to gravity which could give rise to a scale-invariant power spectrum \cite{Qiu:2012ia}. In this paper, we will consider another case of modified gravity, namely $f(R)$ theories. Actually, as we will see later, $f(R)$ theories could be one specific but nontrivial form of nonminimal coupling. In $f(R)$ theories, there is no need to introduce the unknown scalar, and the universe is driven totally by its gravitational structure. $f(R)$ theories have been used widely as alternatives for inflation, dark matter, dark energy and so on. See \cite{Faraoni:2000gx} for comprehensive reviews.
The reconstruction of $f(R)$ gravity has been pursued by many authors, see \cite{Nojiri:2006be}. In their approaches, most of them reconstruct $f(R)$ theory in Jordan frame itself, provided that the cosmic evolution in Jordan frame is given. Here we will reconstruct in a different way, namely from their counterpart in Einstein frame, which looks like a single scalar field in GR, via conformal transformation. This kind of reconstruction aims at connecting different evolutions of the universe driven by modified gravity in its Jordan and Einstein frames. As is shown in \cite{Qiu:2012ia}, in Einstein frame there are only two cases which could give rise to (nearly) scale-invariant power spectrum, namely inflation and matter-contraction. Taking the Einstein frame lagrangian as: \be\label{einstein} {\cal L}_E\sim \frac{1}{2}R_E-\frac{1}{2}(\partial\varphi_E)^2-V(\varphi_E)~,\ee where here and after we set the unit such that $8\pi G=M_{Pl}^{-2}=1$, and use the metric signature $(-,+,+,+)$. A simple and representative solution is the exact solution which is obtained assuming that its equation of state $w_E$ is a constant, namely: \bea\label{parametrize} a_E(t_E)&\sim&(\pm t_E)^{\frac{2}{3(1+w_E)}}~,~H_E(t_E)=\frac{2}{3(1+w_E)t_E}~,\nonumber\\ \varphi_E(t_E)&=&\frac{2\ln(\pm M t_E)}{\sqrt{3(1+w_E)}}~,~V(\varphi_E)=V_0 e^{-\sqrt{3(1+w_E)}\varphi_E}~\eea where $M$ is some energy scale. In this parametrization, we have set $``+"$ for positive $t_E$ meaning an expanding phase, while $``-"$ for negative $t_E$ denoting a contracting phase, and $V_0$ is some constant factor. In the inflation case, we have $w_E=-1+2\epsilon_E/3$ with the slow-roll parameter $|\epsilon_E|\equiv|-(dH_E/dt_E)/H_E^2|\ll1$, then Eq. (\ref{parametrize}) can be written as: \bea\label{parametrizeinf} a_E(t_E)&\sim&{t_E}^{\frac{1}{\epsilon_E}}~,~H_E(t_E)=\frac{1}{\epsilon_E t_E}~,\nonumber\\ \varphi_E(t_E)&=&\sqrt{\frac{2}{\epsilon_E}}\ln(Mt_E)~,~V(\varphi_E)=V_0 e^{-\sqrt{2\epsilon_E}\varphi_E}~,\eea while in the matter-contraction case, one has $w_E=0$, and thus Eq. (\ref{parametrize}) becomes: \bea\label{parametrizeMB} a_E(t_E)&\sim& (-t_E)^{\frac{2}{3}}~,~H_E(t_E)=\frac{2}{3t_E}~,\nonumber\\ \varphi_E(t_E)&=&\frac{2}{\sqrt{3}}\ln(-Mt_E)~,~V(\varphi_E)=V_0 e^{-\sqrt{3}\varphi_E}~.\eea
In this short paper, we will mainly focus on the Jordan frame of the modified gravity theories in order to find which form can be conformally connected to the above two cases, while more complete study for the case of varying $w_E$ (or $\epsilon_E$) will be left for the future.
The remaining sections are organized as follows: in Sec. II, we review the main results for the general nonminimal coupling theories that were obtained in our previous paper. In Sec. III, we focus on $f(R)$ theories. Numerical plots of the functional form of $f(R)$ as well as the evolution of $R$ in terms of cosmic time $t$ are presented. Furthermore, we also discuss the relation of the evolutions of various cosmological variables between the two frames for an arbitrary constant $\epsilon_E$. In Sec. IV we conclude our paper.
\section{Review of reconstruction of nonminimal coupling theory}
\subsection{Background}
First of all, we will briefly review the main results obtained in \cite{Qiu:2012ia}. The action of the nonminimal coupling theory we are considering is: \be\label{actionNMC} {\cal S}_{NMC}=\int d^4x\sqrt{-g}\Bigl[F(\phi)R-\frac{1}{2}Z(\phi)\partial_\mu\phi\partial^\mu\phi-U(\phi)\Bigr]~,\ee where $F(\phi)$ and $Z(\phi)$ can be arbitrary functions of the field $\phi$ in the Jordan frame, and $U(\phi)$ is the potential. The equation of motion of $\phi$ is: \be\label{eomNMC} \ddot\phi+3H_J\dot\phi+\frac{Z_\phi}{2Z}\dot\phi^2-\frac{6F_\phi}{Z}(\dot H+2H^2)+\frac{U_\phi}{Z}=0~,\ee where subscript ``$\phi$" indicates $\partial/\partial\phi$ and dot denotes derivative with respect to cosmic time $t_J$ in the Jordan frame, and the Friedmann Equation is: \be\label{friedmannNMC} 6H_J\dot F+6H_J^2F=\frac{1}{2}Z\dot\phi^2+U~.\ee Following the conformal transformation of metrics in Jordan and Einstein frame, $g_{\mu\nu}^{(E)}=\Omega^2 g_{\mu\nu}^{(J)}$, where $\Omega^2\equiv 2F$, the relations of some basic variables between the two frames are summarized as follows: \bea\label{relationNMC} dt_E&=&\Omega dt_J~,~a_E=\Omega a_J~,~H_E=\frac{H_J}{\Omega}(1+\frac{\dot\Omega}{2H_J\Omega})~,\nonumber\\ \varphi_E&=&\int\sqrt{\frac{6M_{Pl}^2\Omega_\phi^2+Z}{\Omega^2}}d\phi~,~V(\varphi_E)=\frac{U(\phi)}{\Omega^4}~.\eea
\subsection{Perturbations}
The equation of motion of the perturbation generated by the action (\ref{actionNMC}) can be written down as: \be\label{perteomNMC} u^{\prime\prime}_{\cal R}+(k^2-\frac{(a_J\sqrt{2Q_{\cal R}})^{\prime\prime}}{a_J\sqrt{2Q_{\cal R}}})u_{\cal R}=0~,\ee where $u_{\cal R}=a_J\sqrt{2Q_{\cal R}}{\cal R}$, and ${\cal R}$ is the conformal-invariant curvature perturbation. The variable $Q_{\cal R}$ is defined as: \be Q_{\cal R}\equiv\frac{2F}{(2+\delta_F)^2}[3\delta_{F}^{2}+\frac{\dot\phi^2Z}{H_J^{2}F}]~,\ee where $\delta_F\equiv\dot F/(H_J F)$. The prime denotes derivative with respect to the conformal time $\eta=\int a_J^{-1}(t_J)dt_J$. With the parametrization that $a_J\sqrt{2Q_{\cal R}}\sim |\eta_\ast-\eta|^\lambda$, the superhorizon solution of Eq. (\ref{perteomNMC}) can be expressed in the following: \bea\label{resultNMC}
u_{\cal R}&\sim&\sqrt{|\eta_\ast-\eta|}\Big[c_1J_{\lambda-\frac{1}{2}}(k|\eta_\ast-\eta|)+c_2J_{\frac{1}{2}-\lambda}(k|\eta_\ast-\eta|)\Big]\nonumber\\ &\sim& c_1k^{\lambda-\frac{1}{2}}|\eta_\ast-\eta|^{\lambda-\frac{1}{2}}+c_2k^{\frac{1}{2}-\lambda}|\eta_\ast-\eta|^{1-\lambda}~,\nonumber\\
{\cal R}&=&\frac{u_{\cal R}}{a_J\sqrt{2Q_{\cal R}}}\sim c_1k^{\lambda-\frac{1}{2}}+c_2k^{\frac{1}{2}-\lambda}|\eta_\ast-\eta|^{1-2\lambda}~,\eea where $J_i$ is the Bessel function and $c_1$, $c_2$ are constants. The power spectrum is defined as \be {\cal P}_{\cal R}(k)\equiv\frac{k^3}{2\pi^2}\big|{\cal R}\big|^2~.\ee
From the above solution, it is straightforward to see that scale-invariant spectrum $({\cal P}_{\cal R}(k)\sim k^0)$ can be obtained in two ways: one is $\lambda=-1$, where the time-varying mode becomes decaying while the constant mode dominates the perturbation, which is inflation, and the other is $\lambda=2$, where the time-varying mode is the growing mode and thus dominates over the constant one, which is matter-contraction. In fact, from the relation (\ref{relationNMC}) one can express $Q_{\cal R}$ as: \be Q_{\cal R}\sim F\epsilon_E~,\ee and since we have assumed constant $w_E$ and $\epsilon_E$, the condition of getting scale-invariant power-spectrum can be written as $a_J\sqrt{F}\sim|\eta_\ast-\eta|^{-1}$ or $a_J\sqrt{F}\sim|\eta_\ast-\eta|^2$.
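Explicitly, the constant mode ${\cal R}\sim c_1k^{\lambda-\frac{1}{2}}$ gives ${\cal P}_{\cal R}\sim k^3\,k^{2\lambda-1}\sim k^{2\lambda+2}$, which is scale-invariant for $\lambda=-1$, while the growing mode ${\cal R}\sim c_2k^{\frac{1}{2}-\lambda}$ gives ${\cal P}_{\cal R}\sim k^3\,k^{1-2\lambda}\sim k^{4-2\lambda}$, which is scale-invariant for $\lambda=2$.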
\subsection{Reconstruction of nonminimal coupling theory in Jordan Frame}
We can reconstruct the universe evolution once we assume the evolution of $\Omega$ in terms of $t_J$. In our previous paper \cite{Qiu:2012ia}, we assumed that $\Omega(t_J)=\Omega_0 [(\pm t_J)/(\pm t_J^\ast)]^{\omega}$, then from the relation (\ref{relationNMC}) we have: \bea\label{tE2tJI1} t_E=\left\{ \begin{array}{l} \frac{\Omega_0t^\ast_J}{\omega+1}\Big(\frac{\pm t_J}{\pm t^\ast_J}\Big)^{\omega+1}~~~~{\rm for}~~\omega\neq-1~,\\\\ \Omega_0t^\ast_J\ln(\pm\bar{t}_J)~~~~{\rm for}~~\omega=-1~,\\ \end{array}\right. \eea where the $``+"$ sign in $``\pm"$ means $t_J>0$, and in the Jordan frame the universe is expanding, while the $``-"$ sign means $t_J<0$, and in the Jordan frame the universe is contracting. Here we define $\bar{t}_J=t_J/t_{Pl}$ where $t_{Pl}$ is the Planck time. Substituting it into Eqs. (\ref{parametrizeinf}) and (\ref{parametrizeMB}) respectively, one can get the evolution of variables such as $a_J$, $H_J$ and $w_J$ in terms of $t_J$ as (for $\omega\neq-1$ only): \bea a_J(t_J)&\sim&(\pm t_J)^{\frac{1+(1-\epsilon_E)\omega}{\epsilon_E}}~,~H_J=\frac{1+(1-\epsilon_E)\omega}{\epsilon_Et_J}~,\nonumber\\ w_J&=&-1+\frac{2}{3}\frac{\epsilon_E}{1+(1-\epsilon_E)\omega}~,\eea where $|\epsilon_E|\ll1$ for the case corresponding to inflation, while $\epsilon_E=3(1+w_E)/2=3/2$ for the case corresponding to matter-contraction, respectively. Moreover, from relation (\ref{relationNMC}) one can also find the evolution of field variables, and thus determine the form of functions $F(\phi)$, $Z(\phi)$ and $U(\phi)$ in the lagrangian. In fact, taking the ansatz of $Z(\phi)=Z_0\phi^{2z}$ and $U(\phi)=U_0\phi^{q}$, and with the help of Eqs. (\ref{eomNMC}) and (\ref{friedmannNMC}), we found the relation: \be\label{relationfieldNMC} F(\phi)=F_0\phi^{2z+2}~, q=2(z+1)(1-\frac{1}{\omega})~,\ee and the equation of state $w_J$ can be given by: \be w_J=\frac{2(z+1)(5\epsilon_E-6)-q(2\epsilon_E-3)}{3[2(z+1)(2-\epsilon_E)-q]}~.\ee
From above we can see that, once the functional form of $F(\phi)$, $Z(\phi)$ and $U(\phi)$ in action (\ref{actionNMC}) is given by (\ref{relationfieldNMC}), one could obtain a scale-invariant power spectrum. Rather than being fixed to be inflation or matter-contraction only, the evolution of the universe in the Jordan frame has more freedom. This is because in the Jordan frame, the nonminimal coupling action (\ref{actionNMC}) has more degrees of freedom than that in the Einstein frame and is more dependent on the form of the action. However, as we will see below, this is not the case in $f(R)$ theory. In $f(R)$ theory, there are fewer degrees of freedom than in nonminimal coupling theory and the form of $f(R)$ will be more fixed. Following similar steps, we will find the appropriate $f(R)$ theory, which can correspond to inflation or matter-contraction scenarios in its Einstein frame and thus give rise to a scale-invariant power spectrum.
\section{Reconstruction of $f(R)$ modified gravity theory}
\subsection{Background}
Now we turn on to study the reconstruction of $f(R)$ modified gravity theories. The action of $f(R)$ modified gravity theory is:
\be\label{actionfr} {\cal S}_{f(R)}=\int d^{4}x\sqrt{-g}f(R)~,\ee where $f(R)$ can be arbitrary function of the Ricci scalar $R$. Varying the action (\ref{actionfr}) with respect to the metric $g_{\mu\nu}$ we can get the equation of motion: \be\label{eomfr} -F_{,\mu;\nu}+g_{\mu\nu}\Box F+FR_{\mu\nu}-\frac{1}{2}g_{\mu\nu}f=0~,\ee where we defined the function $F(R)\equiv \partial f/\partial R$. The left part of the above equation can also be viewed as the ``effective" stress energy tensor $\Sigma_{\mu\nu}$ of $f(R)$ modified gravity, which satisfies the continuity equation, $\nabla^\mu\Sigma_{\mu\nu}=0$. Moreover, the ``$0-0$" and ``$0-i$" components of Eq. (\ref{eomfr}) are just Friedmann equations, which are \be 3H^2F=\frac{1}{2}(f+3\ddot F+3H\dot F)~,~-2\dot H F=\ddot F-H\dot F~,\ee respectively.
Same as nonminimal coupling theory, $f(R)$ theories with action (\ref{actionfr}) can also be connected with (\ref{einstein}) as its counterpart in the Einstein frame, via the conformal transformation $g_{\mu\nu}^{(E)}=\Omega^2 g_{\mu\nu}^{(J)}$ with $\Omega^2=2F$. To see this, one can rewrite the action (\ref{actionfr}) in the form of scalar-tensor theory, namely as: \be\label{actionst} {\cal S}_{ST}=\int d^{4}x\sqrt{-g}\Big[F(R)R-U(R)\Big]~\ee where the potential $U(R)$ can be identified as $F(R)R-f(R)$. The relations of the basic variables between the two frames are summarized as follows: \bea\label{relationfr} dt_E&=&\Omega dt_J~,~a_E=\Omega a_J~,~H_E=\frac{H_J}{\Omega}(1+\frac{\dot\Omega}{2H_J\Omega})~,\nonumber\\ \varphi_E&=&\sqrt{6}\ln\Omega~,~V(\varphi_E)=\frac{U(R)}{\Omega^4}~.\eea
From the transformed action (\ref{actionst}) we can see that the $f(R)$ action is actually a specific form of the general nonminimal coupling action (\ref{actionNMC}) with $Z(\phi)=0$, as long as we identify $F(\phi)$ with $F(R)$, and $U(\phi)$ with $U(R)$, which is easy provided that the inverse function of $F(\phi)$ exists. Moreover, since $Z(\phi)$ as well as the kinetic term of (\ref{actionNMC}) vanishes, there are fewer degrees of freedom in $f(R)$ than in nonminimal coupling theories, and the conformal factor $\Omega$, which determines the cosmic evolution in Jordan frame, can be totally fixed by the field $\varphi_E$. Therefore, when there is one kind of evolution in Einstein frame, there is only one kind of evolution in Jordan frame. This gives fewer possibilities for $f(R)$ theories to get scale-invariant power spectrum than those for nonminimal coupling theories.
\subsection{Perturbations}
One can also check from the perturbation theory of $f(R)$ what conditions should be met when one requires a scale-invariant power spectrum. Working in the Arnowitt-Deser-Misner (ADM) formalism \cite{Arnowitt:1962hi}, one can obtain the perturbed action of $f(R)$ up to the second order as: \be\label{pertactionfr} {\cal
S}^{(2)}=\int d\eta d^3xa_J^2 Q_{\cal R}\Bigl[{\cal R}^{\prime 2}-(\partial{\cal R})^2\Bigr]~,\ee where ${\cal R}$ is the conformal-invariant curvature perturbation, and \be Q_{\cal R}\equiv \frac{6F\delta_{F}^{2}}{(2+\delta_F)^2}~\ee with $\delta_F=\dot F/(H_J F)$ and the prime denotes derivative with respect to the conformal time $\eta$. Varying (\ref{pertactionfr}) with respect to ${\cal R}$, one can straightforwardly write down the equation of motion for the perturbation as: \be\label{perteomfr} u^{\prime\prime}_{\cal R}+(k^2-\frac{(a_J\sqrt{2Q_{\cal R}})^{\prime\prime}}{a_J\sqrt{2Q_{\cal R}}})u_{\cal R}=0~,\ee through the redefined variables $u_{\cal R}=a_J\sqrt{2Q_{\cal R}}{\cal R}$.
From the above analysis, we can directly conclude that scale-invariant spectrum can be obtained in two ways, namely $a_J\sqrt{2Q_{\cal R}}\sim|\eta_\ast-\eta|^{-1}$ which corresponds to inflation, or $a_J\sqrt{2Q_{\cal R}}\sim|\eta_\ast-\eta|^2$ which corresponds to matter-contraction. Moreover, from the relation (\ref{relationfr}) one can express $Q_{\cal R}$ as $Q_{\cal R}\sim F\epsilon_E$, the same as that in nonminimal coupling theories. Here we can see again that $f(R)$ theories are nothing but specific case of nonminimal coupling theories. In our case where constant $w_E$ and $\epsilon_E$ have been assumed, the condition of getting scale-invariant power-spectrum can be written as $a_J\sqrt{F}\sim|\eta_\ast-\eta|^{-1}$ or $a_J\sqrt{F}\sim|\eta_\ast-\eta|^2$.
\subsection{Reconstruction of $f(R)$ modified gravity theory in Jordan Frame}
First of all, from relations (\ref{relationfr}) as well as the evolution of $\varphi_E(t_E)$ in the Einstein frame (\ref{parametrize}), we can obtain the evolution of the conformal factor $\Omega$ in terms of $t_E$, which is \be \Omega=\Big(\frac{t_E}{t_E^\ast}\Big)^{\frac{1}{\sqrt{3\epsilon_E}}}~,~|\epsilon_E|\ll1~,\ee where $t_E^\ast=M^{-1}$. Since the universe in Einstein frame is expanding, we set $t_E$ and $t_E^\ast$ to be positive \footnote{Here and after, we assume that the same as $t_E$, $t_J$ monotonically increases, although its value can be either positive or negative. This is an arbitrary choice, only indicating the arrow of time, and one can surely assume that time goes in an opposite direction, which is only trivially dual to the current case by the transformation $t_J^\prime\rightarrow-t_J$.}. Since $dt_J=\Omega^{-1}(t_E)dt_E$, one could easily get $t_J$ as: \be t_J=\frac{\sqrt{3\epsilon_E}t_E^\ast}{\sqrt{3\epsilon_E}-1}\Big(\frac{t_E}{t_E^\ast}\Big)^{\frac{\sqrt{3\epsilon_E}-1}{\sqrt{3\epsilon_E}}}~,\ee or equivalently, \be \frac{t_E}{t_E^\ast}=\Big(\frac{-t_J}{-t_J^\ast}\Big)^{\frac{\sqrt{3\epsilon_E}}{\sqrt{3\epsilon_E}-1}}~,~t_J^\ast\equiv\frac{\sqrt{3\epsilon_E}t_E^\ast}{\sqrt{3\epsilon_E}-1}~.\ee Note that since $|\epsilon_E|\ll1$, $t_J$ and $t_J^\ast<0$. Then we have: \be\label{omegatj} \Omega(t_J)=\Big(\frac{-t_J}{-t_J^\ast}\Big)^{\frac{1}{\sqrt{3\epsilon_E}-1}}~.\ee
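For clarity, the expression for $t_J$ quoted above follows from the elementary integration \be t_J=\int\Omega^{-1}(t_E)\,dt_E=\int\Big(\frac{t_E}{t_E^\ast}\Big)^{-\frac{1}{\sqrt{3\epsilon_E}}}dt_E=\frac{t_E^\ast}{1-1/\sqrt{3\epsilon_E}}\Big(\frac{t_E}{t_E^\ast}\Big)^{1-\frac{1}{\sqrt{3\epsilon_E}}}~,\ee up to an irrelevant integration constant; multiplying numerator and denominator of the prefactor by $\sqrt{3\epsilon_E}$ gives the form quoted above.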
With Eqs. (\ref{parametrizeinf}), (\ref{relationfr}) and (\ref{omegatj}) in hand, we can obtain the evolution of $a_J$, $H_J$ and $w_J$ in the Jordan frame, in terms of $t_J$. The results are: \bea \label{evolutioninf} a_J(t_J)&\sim&\Big(\frac{-t_J}{-t_J^\ast}\Big)^{\frac{\sqrt{3}-\sqrt{\epsilon_E}}{\sqrt{\epsilon_E}(\sqrt{3\epsilon_E}-1)}}~,\nonumber\\ H_J(t_J)&=&\frac{\sqrt{3}-\sqrt{\epsilon_E}}{\sqrt{\epsilon_E}(\sqrt{3\epsilon_E}-1)t_J}~,\nonumber\\ w_J&=&\frac{\sqrt{\epsilon_E}+2\sqrt{3}\epsilon_E-3\sqrt{3}}{3(\sqrt{3}-\sqrt{\epsilon_E})}~. \eea
From this result we can see that, since $|\epsilon_E|\ll1$ as we considered, the index of $a_J$ in terms of $t_J$ (namely $1/\epsilon_J$, if we define $\epsilon_J$ to be the slow-roll parameter in the Jordan frame) is less than zero, and $a_J(t_J)$ will be increasing as $t_J$ increases. This indicates that it is an expanding universe, driven by $f(R)$ modified gravity theory, which is equivalent to the so-called ``Super-inflation" \cite{Gunzig:2000kk} (or phantom-inflation \cite{Piao:2003ty}) scenario in GR when transformed to the Einstein frame. One can also look into the equation of state $w_J$ of the universe, which is very close to $-1$ up to the order of the slow-roll parameter, which means that the universe in the Jordan frame is also near de Sitter. Thus, different from the general nonminimal coupling theory, inflation in the Einstein frame can only refer to inflation in the Jordan frame in $f(R)$ modified gravity theory.
The Ricci scalar $R$, which is defined as $R=6(\dot H+2H^2)$, can be expressed as: \be\label{ricciinf} R(t_J)=6\frac{(2-\sqrt{3\epsilon_E})(3-\epsilon_E)}{\epsilon_E(1-\sqrt{3\epsilon_E})^2t_J^2}~.\ee Finally, with Eqs. (\ref{omegatj}), (\ref{ricciinf}), as well as the relation $\Omega^2=2F$, we can obtain the form of $F(R)$ as: \be F(R)=\frac{1}{2}\Big(\frac{R}{R_0^{inf}}\Big)^{\frac{1}{1-\sqrt{3\epsilon_E}}}~,~R_0^{inf}\equiv6\frac{(2-\sqrt{3\epsilon_E})(3-\epsilon_E)}{\epsilon_E(1-\sqrt{3\epsilon_E})^2{t_J^\ast}^2}~\ee and \bea\label{frinf} f(R)&=&\int F(R)dR~\nonumber\\ &=&\frac{1-\sqrt{3\epsilon_E}}{4-2\sqrt{3\epsilon_E}}R_0^{inf}\Big(\frac{R}{R_0^{inf}}\Big)^{\frac{2-\sqrt{3\epsilon_E}}{1-\sqrt{3\epsilon_E}}}~.\eea We can see that when $\epsilon_E$ is small during inflation, the function $f(R)$ is almost proportional to $R^2$ up to slow-roll corrections. Therefore, this model coincides with the well-known Starobinsky model \cite{Starobinsky:1980te}, for which $f(R)\sim R+\alpha R^2$, at very early times when $R$ is very large. At late times, when $\epsilon_E$ becomes large, it approaches standard GR.
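Indeed, expanding the exponent in Eq. (\ref{frinf}) for small $\epsilon_E$ gives \be \frac{2-\sqrt{3\epsilon_E}}{1-\sqrt{3\epsilon_E}}=2+\sqrt{3\epsilon_E}+{\cal O}(\epsilon_E)~,\ee so that $f(R)\propto R^{2+\sqrt{3\epsilon_E}+\dots}$, which makes the $R^2$ behavior in the $\epsilon_E\rightarrow0$ limit explicit.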
The plot of $R(t_J)$ and $f(R)$, which are reconstructed from inflation in its Einstein frame, are presented in Figs. \ref{Ricci1plot} and \ref{f1}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{Ricci1.eps}
\caption{The behavior of $R(t_J)$ w.r.t. $t_J$, where we choose $M=0.1$ and hence $t_E^\ast=10$. In this case, $R>0$, and is increasing w.r.t. $t_J$, showing a ``super/phantom-inflation" behavior.}\label{Ricci1plot}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{f1.eps}
\caption{The behavior of $f(R)$ w.r.t. $R$, where we choose $M=0.1$ and hence $t_E^\ast=10$. We can see that $f(R)$ monotonically increases with $R$, and in the limit of large $R$, it approaches to the squared power-law $f(R)\sim R^2$.}\label{f1}
\end{figure}
Following the same procedure, we can do the reconstruction of $f(R)$ from matter contraction, just replacing $t_E$ by $-t_E$, and $\epsilon_E$ by the value $3/2$. Note that here $t_E$ and $t_E^\ast$ are negative. $t_J$ and $\Omega(t_J)$ will become \be t_J=t_J^\ast\Big(\frac{-t_E}{-t_E^\ast}\Big)^{1-\frac{\sqrt{2}}{3}}~,t_J^\ast=\frac{3}{7}(3+\sqrt{2})t_E^\ast~,\ee and \be \Omega(t_J)=\Big(\frac{-t_J}{-t_J^\ast}\Big)^{\frac{2+3\sqrt{2}}{7}}~,\ee where $t_J$ and $t_J^\ast$ are still smaller than 0. The scale factor $a_J$, the Hubble parameter $H_J$ and the equation of state $w_J$ will be given by: \be \label{evolutionmb} a_J(t_J)\sim\Big(\frac{-t_J}{-t_J^\ast}\Big)^{\frac{4-\sqrt{2}}{7}}~,~H_J(t_J)=\frac{4-\sqrt{2}}{7t_J}~,~w_J=\frac{1+\sqrt{2}}{3}~. \ee
From this result we can see that, since the index of $a_J$ in terms of $t_J$ is larger than zero, $a_J(t_J)$ will be decreasing as $t_J$ increases, indicating that there is also a contracting universe driven by $f(R)$ modified gravity theory when we require it to be equivalent to the matter-contraction scenario in GR when transformed to the Einstein frame. The Hubble parameter $H_J(t_J)$ is smaller than zero because of the negative $t_J$, and the equation of state $w_J$ of the universe is about the value of $0.8$, which is even larger than the matter value $w_E=0$ in the Einstein frame.
The Ricci scalar $R$ in this case is: \be\label{riccimb} R(t_J)=\frac{6(8-9\sqrt{2})}{49t_J^2}~,\ee which gives the form of $F(R)$ as: \be F(R)=\frac{1}{2}\Big(\frac{R}{R_0^{MC}}\Big)^{-\frac{2+3\sqrt{2}}{7}}~,~R_0^{MC}\equiv\frac{6(8-9\sqrt{2})}{49{t_J^\ast}^2}~,\ee and \bea\label{frmb} f(R)&=&\int F(R)dR~\nonumber\\ &=&\frac{5+3\sqrt{2}}{2}R_0^{MC}\Big(\frac{R}{R_0^{MC}}\Big)^{\frac{1}{(5+3\sqrt{2})}}~.\eea The plot of $R(t_J)$ and $f(R)$, which are reconstructed from matter-contraction in its Einstein frame, are presented in Figs. \ref{Ricci2plot} and \ref{f2}.
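Note that the exponent in Eq. (\ref{frmb}) can be rationalized as $1/(5+3\sqrt{2})=(5-3\sqrt{2})/7\simeq0.11$, so in this branch $f(R)$ depends only rather weakly on $R$.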
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{Ricci2.eps}
\caption{The behavior of $R(t_J)$ w.r.t. $t_J$, where we choose $M=0.1$ and hence $t_E^\ast=10$. In this case, $R<0$, and is decreasing w.r.t. $t_J$.}\label{Ricci2plot}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{f2.eps}
\caption{The behavior of $f(R)$ w.r.t. $R$, where we choose $M=0.1$ and hence $t_E^\ast=10$. We can see that $f(R)$ is also less than 0, and increases with $R$ since both $R$ and $f(R)$ are decreasing w.r.t. $t_J$.}\label{f2}
\end{figure}
One can check our results against the conditions for generating a scale-invariant power spectrum for consistency. For the case of inflation, from Eq. (\ref{evolutioninf}) we can write down the relation of conformal time $\eta$ and $t_J$ as: \bea \eta&=&\int a_J^{-1}(t_J)dt_J~\nonumber\\ &\sim&(-t_J)^{\frac{\sqrt{3}(1-\epsilon_E)}{\sqrt{\epsilon_E}(1-\sqrt{3\epsilon_E})}}~,\eea while \be a_J\sqrt{Q_{\cal R}}\sim a_J\sqrt{F}\sim (-t_J)^{\frac{\sqrt{3}}{\sqrt{\epsilon_E}(\sqrt{3\epsilon_E}-1)}}~,\ee where we note that $\delta_F$ is a constant. Thus we could easily find that \be a_J\sqrt{Q_{\cal R}}\sim\eta^{1/(\epsilon_E-1)}\sim\eta^{-1}~\ee when $|\epsilon_E|\ll1$. The case of matter-contraction is similar. From Eq. (\ref{evolutionmb}) one has: \be \eta\sim(-t_J)^{\frac{3+\sqrt{2}}{7}}~,\ee and \be a_J\sqrt{Q_{\cal R}}\sim a_J\sqrt{F}\sim (-t_J)^{\frac{2(3+\sqrt{2})}{7}}~,\ee which gives \be a_J\sqrt{Q_{\cal R}}\sim\eta^2~.\ee Moreover, one can also check the conditions for ghost-free and stable fluctuations for our constructed $f(R)$ models using the criterion for $f(R)$ models mentioned in e.g. Ref. \cite{Faraoni:2005ie}. From our expressions (\ref{frinf}) and (\ref{frmb}) one can easily check that the fluctuations in our models have neither ghosts nor instabilities.
Before ending this section, let us also remark on the relation between the general evolutions of the universe driven by $f(R)$ modified gravity theory in the two frames with an arbitrary constant $\epsilon_E$, though without showing the detailed calculations. The relation between variables in the two frames is summarized in TABLE \ref{table}. Here we write $\Omega(t_E)$ in a general form of $\Omega=(\pm t_E/\pm t_E^\ast)^\omega$.
\begin{widetext}
\begin{table*}
\centering
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{$t_E$} & \multirow{2}{*}{$\epsilon_E$} & $a_E$ & $\omega$ & $t_J$ & $\epsilon_J$ & $a_J$ & $\rm{horizon}$ \\
& & $(\sim t_E^{1/\epsilon_E})$ & $(=1/\sqrt{3\epsilon_E})$ & $(\sim [t_E^\ast/(1-\omega)]t_E^{1-\omega})$ & $(=(\omega-1)/(\omega-1/\epsilon_E))$ & $(\sim t_J^{1/\epsilon_J})$ & $\rm{problem}$ \\ \hhline{========}
\multirow{4}{*}{$t_E>0$} & $\epsilon_E>3$ & \multirow{4}{*}{\rm{expanding}} & $1/\epsilon_E<\omega<1$ & \multirow{3}{*}{$t_J>0$} & $\epsilon_J<0$ & \rm{contracting} & \multirow{2}{*}{y} \\
\cline{2-2} \cline{4-4} \cline{6-7}
& $1<\epsilon_E<3$ & & $\omega<1/\epsilon_E$ & & $\epsilon_J>1$ & \multirow{3}{*}{\rm{expanding}} & \\
\cline{2-2} \cline{4-4} \cline{6-6} \cline{8-8}
& $1/3<\epsilon_E<1$ & & $\omega<1$ & & $0<\epsilon_J<1$ & & \multirow{2}{*}{n} \\
\cline{2-2} \cline{4-6}
&$0<\epsilon_E<1/3$ & &$1<\omega<1/\epsilon_E$ & $t_J<0$ & $\epsilon_J<0$ & & \\
\hline
\multirow{4}{*}{$t_E<0$} & $\epsilon_E>3$ & \multirow{4}{*}{\rm{contracting}} & $1/\epsilon_E<\omega<1$ & \multirow{3}{*}{$t_J<0$} & $\epsilon_J<0$ & \rm{expanding} & \multirow{2}{*}{n} \\
\cline{2-2} \cline{4-4} \cline{6-7}
& $1<\epsilon_E<3$ & & $\omega<1/\epsilon_E$ & & $\epsilon_J>1$ & \multirow{3}{*}{\rm{contracting}} & \\
\cline{2-2} \cline{4-4} \cline{6-6} \cline{8-8}
& $1/3<\epsilon_E<1$ & & $\omega<1$ & & $0<\epsilon_J<1$ & & \multirow{2}{*}{y} \\
\cline{2-2} \cline{4-6}
& $0<\epsilon_E<1/3$ & & $1<\omega<1/\epsilon_E$ & $t_J>0$ & $\epsilon_J<0$ & & \\
\hline
\end{tabular}
\caption{
The relations between variables in the Jordan and Einstein frames where we generalize $\epsilon_E$ to be an arbitrary positive constant value. $t_E$ can be chosen as either positive or negative, representing the parametrization of an expanding or a contracting universe. For $\epsilon_E>1$ in the expanding phase or $\epsilon_E<1$ in the contracting phase, we have the horizon problem, while in the other two cases we do not. Due to the fact that $\omega=1/\sqrt{3\epsilon_E}$, the region of $t_J>0/<0$ can be divided by the line of $\omega=1(\epsilon_E=1/3)$, the region of $\epsilon_J>0/<0$ is divided by both $\omega=1$ and $\omega=1/\epsilon_E(\epsilon_E=1)$. In the $\epsilon_J>0$ region, the region of $\epsilon_J>1/<1$ is divided by the line of $\omega=1/\epsilon_E(\epsilon_E=3)$. Whether the universe contracts or expands in the Jordan frame is decided by whether $\epsilon_E>3$ or not. Finally, when there is no horizon problem in the Einstein frame, there will be no horizon problem in the Jordan frame, and vice versa. A similar summary, but only for GR, can be found in, e.g., \cite{Piao:2004jg}.}\label{table}
\end{table*}
\end{widetext}
Note that since $f(R)$ theory can only be equivalent to a canonical field via conformal transformation, we do not have the $\epsilon_E<0$ case.
\section{Discussions and Conclusion}
In this paper we studied the reconstruction and cosmic evolutions of $f(R)$ modified gravity models, which can be transformed into inflation or matter-contraction scenarios in their Einstein frame. The equivalence of the Jordan and Einstein frames guarantees that the perturbations generated by $f(R)$ models follow the same evolution, namely they can give rise to the scale-invariant power spectrum required by observations; however, their background evolutions might be different. In our previous work \cite{Qiu:2012ia} we have shown that there can be more than one kind of evolution in the case of general nonminimal coupling theories, but for the $f(R)$ case, there's no such degeneracy and the correspondence between the two frames must be one to one. We find that in $f(R)$ modified gravity theory, inflation in the Einstein frame can only refer to (phantom-like) inflation in the Jordan frame, while matter-contraction in the Einstein frame can only refer to contraction with a larger equation of state in the Jordan frame. We analysed the general conditions for $f(R)$ theory to yield a scale-invariant power spectrum, and obtained the evolution of the universe in the Jordan frame as well as the functional form of $f(R)$. Numerical plots of $R$ w.r.t. $t_J$ and $f(R)$ w.r.t. $R$ are also presented.
In the current paper, we only focus on $f(R)$ models corresponding to models in the Einstein frame with constant $\epsilon_E$. The case where $\epsilon_E$ is time-varying will also be interesting, and has been investigated in many places. Varying $\epsilon_E$ can also be one of the mechanisms of generating a scale-invariant power spectrum, especially in scenarios alternative to inflation, see e.g. \cite{Khoury:2009my}. Moreover, to describe the whole evolution of the universe, including reheating after inflation or the transition to late-time acceleration, a more complicated functional form of $f(R)$ is needed. For example, for the reheating process, other fields will be introduced to interact with gravity in order to produce particles effectively. This requires new conformal relations for multiple degrees of freedom other than Eq. (\ref{relationfr}). All these interesting topics are under investigation now.
Before ending, we would like to mention that due to the equivalence of the two frames, the Big-Bang cosmological problems (horizon, flatness, etc.) will also do no harm to the reconstructed $f(R)$ models. To see this, one can look into the efolding number $\cal N$ defined as \cite{Khoury:2003vb} \be {\cal N}\equiv\ln\Big(\frac{a_fH_f}{a_iH_i}\Big)~,\ee where the subscripts $i$ and $f$ denote the initial and final times, which can be directly related to these problems. Usually these problems can be avoided as long as we require that ${\cal N}\gtrsim 70$ during inflation. From the relation (\ref{relationfr}) we can see that the conformal Hubble parameter, ${\cal H}\equiv aH$, is not conformal invariant, but since in our case $\delta_F$ is a constant, ${\cal N}$ is a conformal invariant variable. Therefore, provided that inflation lasts for a sufficient number of efoldings in one frame, one need not worry about whether it does in the other frame. We'd also like to refer the readers to \cite{Qiu:2012ia} for more detailed arguments.
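To see the invariance explicitly: since $\Omega^2=2F$ implies $\dot\Omega/\Omega=\dot F/(2F)$, the relation (\ref{relationfr}) gives $a_EH_E=a_JH_J(1+\delta_F/4)$; for constant $\delta_F$ this factor cancels between the initial and final times in the definition of $\cal N$, so that ${\cal N}_E={\cal N}_J$.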
\section*{Acknowledgments}
The author thanks Antonio de Felice, Je-An Gu and Yun-Song Piao for useful discussions. This work is funded in part by the National Science Council of R.O.C. under Grant No. NSC99-2112-M-033-005-MY3 and No. NSC99-2811-M-033-008 and by the National Center for Theoretical Sciences.
\subsubsection*{\bibname}}
\usepackage{booktabs}
\usepackage{xcolor}
\usepackage{subcaption}
\usepackage{graphicx}
\usepackage{calc}
\newlength{\imagew}
\newlength{\imageh}
\newlength{\legendw}
\newlength{\legendh}
\newlength{\legendx}
\newlength{\legendy}
\newcommand{\graphicswithlegend}[6]{
\setlength{\imagew}{#1}
\settoheight{\imageh}{\includegraphics[width=\imagew]{#2}}
\setlength{\legendw}{#3\imagew}
\settoheight{\legendh}{\includegraphics[width=\legendw]{#4}}
\setlength{\legendx}{\imagew}
\addtolength{\legendx}{-\legendw}
\addtolength{\legendx}{-#5\imagew}
\setlength{\legendy}{\imageh}
\addtolength{\legendy}{-\legendh}
\addtolength{\legendy}{-#6\imageh}
\includegraphics[width=\imagew]{#2}%
\llap{
\hspace{-\the\legendx}
\raisebox{\legendy}{\includegraphics[width=\legendw]{#4}}
\hspace{\the\legendx}
}
}
\usepackage{mathtools}
\usepackage{amsthm}
\usepackage{amsfonts}
\DeclarePairedDelimiterX{\norm}[1]{\lVert}{\rVert}{#1}
\DeclarePairedDelimiter{\abs}{\lvert}{\rvert}
\newcommand{\argmin}[1]{\underset{#1}{\operatorname{arg}\,\operatorname{min}}\;}
\newcommand{\argmax}[1]{\underset{#1}{\operatorname{arg}\,\operatorname{max}}\;}
\usepackage{adjustbox}
\usepackage{array}
\newcolumntype{R}[2]{%
>{\adjustbox{angle=#1,lap=\width-(#2)}\bgroup}%
l%
<{\egroup}%
}
\newcommand*\rot{\multicolumn{1}{R{45}{1em}}}
\newcommand{0.32}{0.32}
\newcommand{0.3}{0.3}
\newcommand{0.48}{0.48}
\usepackage{hyperref}
\usepackage{url}
\begin{document}
\twocolumn[
\aistatstitle{Identifying Layers Susceptible to Adversarial Attacks}
\aistatsauthor{ Shoaib Ahmed Siddiqui \And Thomas Breuel }
\aistatsaddress{ German Research Center for Artificial Intelligence (DFKI)\\
TU Kaiserslautern\\
\texttt{shoaib\_ahmed.siddiqui@dfki.de} \\ \And NVIDIA Research \\
\texttt{tbreuel@nvidia.com} } ]
\begin{abstract}
In this paper, we investigate the use of pretraining with adversarial networks, with the objective of discovering the relationship between network depth and robustness.
For this purpose, we selectively retrain different portions of VGG and ResNet architectures on CIFAR-10, Imagenette, and ImageNet using non-adversarial and adversarial data. Experimental results show that susceptibility to adversarial samples is associated with low-level feature extraction layers. Therefore, retraining of high-level layers is insufficient for achieving robustness.
Furthermore, adversarial attacks yield outputs from early layers that differ statistically from features for non-adversarial samples and do not permit consistent classification by subsequent layers.
This supports common hypotheses regarding the association of robustness with the feature extractor, insufficiency of deeper layers in providing robustness, and large differences in adversarial and non-adversarial feature vectors.
\end{abstract}
\section{Introduction}
Deep neural networks often yield performance on test sets comparable to human performance~\cite{resnet}. However, at the same time, they have been found to be susceptible to imperceptible perturbations of inputs~\cite{szegedy2013intriguing,goodfellow2014explaining,madry2017towards,xie2019featuredenoising}.
These new samples crafted by an adversary with the aim of fooling the classifier are termed adversarial examples~\cite{szegedy2013intriguing}.
There has been a plethora of research in developing stronger defenses as well as stronger adversarial attacks to circumvent these defenses~\cite{goodfellow2014explaining,madry2017towards,xie2019featuredenoising,zhang2019trades,wong2020fast,akhtar2018defense,naseer2020selfsup,folz2020robustness_s2s,li2020enhancing}.
However, the reasons for their existence are still poorly understood~\cite{goodfellow2014explaining,ilyas2019adversarialexamplesarenotbugs,wang2019highfreq}.
Understanding these differences between deep neural networks and human perception is important both in order to understand the mathematical and statistical structure of such networks, as well as to protect systems against attacks.
Deep neural networks automate the task of feature extraction, obviating the need for hand-engineering features.
Such networks are thought of as consisting of initial feature extraction layers and high-level layers responsible for learning decision boundaries.
In fact, in many cases, initial feature extraction layers are often reused in practice between different datasets and tasks to speed up convergence (commonly known as transfer learning~\cite{zhuang2020transfer}).
In the context of adversarial samples, if we could reuse feature extraction layers, it would greatly speed up research in adversarial samples, since adversarial samples could be studied on pre-extracted data.
If susceptibility to adversarial samples is associated with high-level layers, it would also give us insights into the nature of adversarial phenomena and suggest that adversarial samples might be related primarily to the formation of decision boundaries by the high-level layers.
This idea has been leveraged in techniques such as large-margin classification to achieve adversarial robustness~\cite{elsayed2018large}.
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{Figures/blockwise_robust_training_horizontal_new.pdf}
\caption{Overview of the training methodology where a set of blocks is reinitialized and retrained (using either clean or adversarial samples) while others are kept frozen after loading the pretrained model. One special case is that of a single cut-off point (right) where the network from the beginning to the cut-off point or from the cut-off point to the end is trained while keeping the other part fixed.}
\label{fig:overview}
\end{figure*}
To test these ideas, we use a novel block-wise retraining protocol.
In particular, considering a pretrained model, either trained non-adversarially or adversarially, we reinitialize and retrain a particular set of blocks either conventionally for an adversarially pretrained model or adversarially for a conventionally pretrained model.
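As a rough sketch of this protocol (not the exact training code used for our experiments), the reinitialize-and-freeze step can be implemented in PyTorch as follows; the choice of ResNet-18 and of the retrained blocks is purely illustrative.
\begin{verbatim}
import torch
import torchvision

# Illustrative setup: reinitialize and retrain only the earliest
# blocks of a pretrained ResNet-18, freezing all later blocks.
model = torchvision.models.resnet18(pretrained=True)
retrain = [model.conv1, model.bn1, model.layer1]

for block in retrain:
    for m in block.modules():
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()     # reinitialize selected blocks

for p in model.parameters():
    p.requires_grad = False          # freeze everything ...
for block in retrain:
    for p in block.parameters():
        p.requires_grad = True       # ... except the retrained blocks

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.01, momentum=0.9)
# Training then proceeds as usual, on clean or adversarial batches.
\end{verbatim}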
The key findings of this paper can be summarized as follows:
\begin{itemize}
\item Adversarial retraining of just low-level/early layers is associated with strong reductions in susceptibility to adversarial samples.
\item Adversarial retraining of just high-level/late layers fails to result in robustness to adversarial samples.
\item The distributions of feature vectors from non-adversarial and adversarial inputs differ substantially at all levels; therefore, susceptibility to adversarial attacks is associated with the early generation of feature vectors that do not occur in non-adversarial images.
\item Adversarial training results in weights for early layers that bring the distribution of feature vectors for adversarial samples back to the distribution of feature vectors for non-adversarial samples.
\end{itemize}
Overall, these results show that susceptibility to adversarial samples is primarily a phenomenon associated with early layers and low-level feature extraction.
Adversarial samples do not merely transform the appearance of one image into that of another, but instead generate novel feature vectors that result in novel activation patterns in late layers and high-level, class-specific feature vectors.
\section{Related Work}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.9\textwidth}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\graphicswithlegend{\columnwidth}{Figures/condition_split_new/cifar10/cifar10_adversarial_training_upto_cutoff_label_attack_clean_prob_0.8.pdf}
{0.1}{Figures/illustrations/training_type_adv_upto.png}{0.88}{0.7}
\vspace{-4mm}
\caption{Adv. retraining up to cut-off}
\end{subfigure}
~
\begin{subfigure}[b]{0.32\textwidth}
\graphicswithlegend{\columnwidth}{Figures/condition_split_new/cifar10/cifar10_adversarial_training_after_cutoff_label_attack_clean_prob_0.5.pdf}
{0.1}{Figures/illustrations/training_type_adv_after.png}{0.88}{0.7}
\vspace{-4mm}
\caption{Adv. retraining after cut-off}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\graphicswithlegend{\columnwidth}{Figures/condition_split_new/cifar10/cifar10_conventional_training_after_cutoff_label_attack.pdf}
{0.1}{Figures/illustrations/training_type_clean_after.png}{0.88}{0.7}
\vspace{-4mm}
\caption{Con. retraining after cut-off}
\end{subfigure}
~
\begin{subfigure}[b]{0.32\textwidth}
\graphicswithlegend{\columnwidth}{Figures/condition_split_new/cifar10/cifar10_conventional_training_upto_cutoff_label_attack.pdf}
{0.1}{Figures/illustrations/training_type_clean_upto.png}{0.88}{0.7}
\vspace{-4mm}
\caption{Con. retraining up to cut-off}
\end{subfigure}
\end{subfigure}
\hspace{-25mm}
\begin{subfigure}[b]{0.09\textwidth}
\raisebox{0.5\height}{\includegraphics[width=\textwidth]{Figures/illustrations/training_type_legend_vertical.pdf}}
\end{subfigure}
\caption{Partial retraining of VGG and ResNet architectures on CIFAR-10 shows that robustness to adversarial samples is achieved if and only if the weights for early layers were either pretrained or retrained with adversarial samples.
Retraining up to \textit{m\_fc} refers to full retraining of the pretrained model. Similarly, retraining after \textit{m\_fc} refers to the pretrained model without any partial retraining. In cases where the performance of the pretrained model is not naturally incorporated in the plot, \textit{only pretrained} refers to the performance of the original pretrained network without any partial retraining.}
\label{fig:cifar10_results}
\end{figure*}
Susceptibility to adversarial samples~\cite{goodfellow2014explaining,madry2017towards,xie2019featuredenoising,zhang2019trades,wong2020fast,akhtar2018defense,naseer2020selfsup,folz2020robustness_s2s,li2020enhancing} has been explained
in terms of overreliance on texture for classification~\cite{geirhos2018imagenettrainedshapebias}, excessive invariance~\cite{jacobsen2018excessive}, over-reliance on high-frequencies~\cite{wang2019highfreq}, piece-wise linear nature of deep networks~\cite{goodfellow2014explaining}, or even a bias present in the dataset itself~\cite{ilyas2019adversarialexamplesarenotbugs}.
One of the most effective defenses against adversarial samples has been {\bf robust optimization}~\cite{goodfellow2014explaining,madry2017towards,xie2019featuredenoising,zhang2019trades,wong2020fast}, where the model is trained on adversarial samples rather than clean samples.
Adversarial training has been combined with transfer learning~\cite{shafahi2019transferablerobustness,jeddi2020simple}, using the typical approach of reusing the low-level (``feature extraction'') layers; our work extends this approach to systematically determine which portions of networks can be retrained to achieve adversarial robustness.
{\bf Image-denoising-based defenses}~\cite{akhtar2018defense,naseer2020selfsup,folz2020robustness_s2s,li2020enhancing} attempt to remove the noise introduced by adversarial attacks; the success of such approaches suggests that low-level layers in networks may be important for adversarial defenses but does not exclude the possibility that high-level layers may be important as well.
Recent results indicate that vision transformers may be more robust to adversarial attacks than convolutional architectures~\cite{shao2021vitrobust} when trained adversarially.
Work on {\bf adversarial example detection}~\cite{roth2019odds} attempts to identify adversarial examples based on distinctive feature vectors.
Other work~\cite{mao2019metric,li2020towards,bai2021improving} also compares the activations in the penultimate layer of networks between adversarial and non-adversarial samples. In this paper, we compare and analyze activations from adversarial and non-adversarial samples at different depths and for different training modalities, yielding new insights into the origin of adversarial samples.
\section{Methods}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.9\textwidth}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\graphicswithlegend{\columnwidth}{Figures/condition_split_new/imagenette/imagenette_adversarial_training_upto_cutoff_label_attack_clean_prob_0.8.pdf}
{0.1}{Figures/illustrations/training_type_adv_upto.png}{0.88}{0.7}
\vspace{-4mm}
\caption{Adv. retraining up to cut-off}
\end{subfigure}
~
\begin{subfigure}[b]{0.32\textwidth}
\graphicswithlegend{\columnwidth}{Figures/condition_split_new/imagenette/imagenette_adversarial_training_after_cutoff_label_attack_clean_prob_0.5.pdf}
{0.1}{Figures/illustrations/training_type_adv_after.png}{0.88}{0.7}
\vspace{-4mm}
\caption{Adv. retraining after cut-off}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\graphicswithlegend{\columnwidth}{Figures/condition_split_new/imagenette/imagenette_conventional_training_after_cutoff_label_attack.pdf}
{0.1}{Figures/illustrations/training_type_clean_after.png}{0.88}{0.7}
\vspace{-4mm}
\caption{Con. retraining after cut-off}
\end{subfigure}
~
\begin{subfigure}[b]{0.32\textwidth}
\graphicswithlegend{\columnwidth}{Figures/condition_split_new/imagenette/imagenette_conventional_training_upto_cutoff_label_attack.pdf}
{0.1}{Figures/illustrations/training_type_clean_upto.png}{0.88}{0.7}
\vspace{-4mm}
\caption{Con. retraining up to cut-off}
\end{subfigure}
\end{subfigure}
\hspace{-25mm}
\begin{subfigure}[b]{0.09\textwidth}
\raisebox{0.5\height}{\includegraphics[width=\textwidth]{Figures/illustrations/training_type_legend_vertical.pdf}}
\end{subfigure}
\caption{Results on Imagenette are largely consistent with the findings on the CIFAR-10 dataset despite a larger input size of $224 \times 224$. However, since the objects are larger in this case, the importance of the higher-level modules increases.}
\label{fig:imagenette_results}
\end{figure*}
Layers in a network are often thought of as operating at different semantic levels: initial layers respond to basic features such as edges or gradients, while higher layers represent complete objects or their prominent parts~\cite{yosinski2015DeepVisToolbox}.
To analyze the role of different layers of the network in terms of their susceptibility to adversarial noise, we use a block-wise retraining protocol. Given a model (ResNet or VGG in our case), we split the model into different modules. Both ResNet and VGG models are naturally dissected into six modules by the down-sampling layers within the network (max-pooling in the case of VGG and convolutional layers with a stride of 2 in the case of ResNets).
An overview of the method is presented in Fig.~\ref{fig:overview}.
We first pretrain the complete network either conventionally or adversarially.
Conventional training refers to training on clean images, while adversarial training refers to training on adversarial images computed using a particular attack method.
We follow the adversarial training recipe of Madry et al. (2017)~\cite{madry2017towards}, where the model is trained on adversarial images computed using the Projected Gradient Descent (PGD) attack.
Once the model is pretrained, we reinitialize and retrain a set of modules of the network adversarially for the conventionally pretrained model or conventionally for the adversarially pretrained model, while keeping the weights for the rest of the modules fixed.
Our main experiments rely on a single splitting point for the network, where we retrain only the layers before or after the cut-off point.
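A minimal sketch of this retraining protocol is shown below, assuming a PyTorch model whose top-level children correspond to the modules described above; the function name and the one-child-per-module assumption are illustrative.
\begin{verbatim}
import torch.nn as nn

def partially_retrain(model: nn.Module, cutoff: int, retrain_before: bool):
    """Reinitialize and unfreeze the modules on one side of the cut-off;
    freeze the pretrained weights on the other side."""
    blocks = list(model.children())          # assumes one child per module
    for idx, block in enumerate(blocks):
        selected = idx < cutoff if retrain_before else idx >= cutoff
        for layer in block.modules():
            if selected and hasattr(layer, 'reset_parameters'):
                layer.reset_parameters()     # fresh weights for retraining
        for p in block.parameters():
            p.requires_grad = selected       # freeze everything else
    return model
\end{verbatim}
Only the unfrozen parameters are then handed to the optimizer, with clean or adversarial batches depending on the training setting.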
\subsection{Datasets}
We validate our findings on three different datasets: CIFAR-10~\cite{krizhevsky2014cifar}, ImageNet~\cite{ILSVRC15} and Imagenette~\cite{imagenette}.
CIFAR-10~\cite{krizhevsky2014cifar} is a 10-class dataset comprising low-resolution images ($32 \times 32$) with 50000 training and 10000 test samples.
We also include the large-scale high-resolution ImageNet dataset with 1.28M training and 50000 validation samples to evaluate our hypothesis\footnote{The validation set serves the purpose of the test set in our case as direct access to the test set is not available. Therefore, no hyperparameters are tuned directly on the validation set.}.
Finally, we include a small subset of ImageNet called Imagenette with only 10 classes but high-resolution images (9469 training and 3925 test samples).
\subsection{Experimental Protocol}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.9\textwidth}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\graphicswithlegend{\columnwidth}{Figures/condition_split_new/imagenet/imagenet_adversarial_training_upto_cutoff_label_attack.pdf}
{0.1}{Figures/illustrations/training_type_adv_upto.png}{0.88}{0.7}
\vspace{-4mm}
\caption{Adv. retraining up to cut-off}
\end{subfigure}
~
\begin{subfigure}[b]{0.32\textwidth}
\graphicswithlegend{\columnwidth}{Figures/condition_split_new/imagenet/imagenet_adversarial_training_after_cutoff_label_attack.pdf}
{0.1}{Figures/illustrations/training_type_adv_after.png}{0.88}{0.7}
\vspace{-4mm}
\caption{Adv. retraining after cut-off}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\graphicswithlegend{\columnwidth}{Figures/condition_split_new/imagenet/imagenet_conventional_training_after_cutoff_label_attack.pdf}
{0.1}{Figures/illustrations/training_type_clean_after.png}{0.88}{0.7}
\vspace{-4mm}
\caption{Con. retraining after cut-off}
\end{subfigure}
~
\begin{subfigure}[b]{0.32\textwidth}
\graphicswithlegend{\columnwidth}{Figures/condition_split_new/imagenet/imagenet_conventional_training_upto_cutoff_label_attack.pdf}
{0.1}{Figures/illustrations/training_type_clean_upto.png}{0.88}{0.7}
\vspace{-4mm}
\caption{Con. retraining up to cut-off}
\end{subfigure}
\end{subfigure}
\hspace{-25mm}
\begin{subfigure}[b]{0.09\textwidth}
\raisebox{0.5\height}{\includegraphics[width=\textwidth]{Figures/illustrations/training_type_legend_vertical.pdf}}
\end{subfigure}
\caption{ImageNet results with ResNet-50 architecture are consistent with the results on Imagenette or CIFAR-10, but we observe a shift in the cut-off point resulting in a rise in the importance of the higher-level modules. This can be partly explained by the larger size of ImageNet where initial modules themselves are insufficient to provide robustness since most of the parameters lie in the higher-level modules.}
\label{fig:imagenet_results}
\end{figure*}
All the CIFAR models were trained on a single GPU (NVIDIA RTX A6000) with the Adam optimizer, an initial learning rate of 0.001 and a batch size of 128. The models were trained for 300 epochs, with a cosine decay of the learning rate after every epoch.
All our results are based on $L_{\infty}$ norm-based attacks.
For adversarial training, we used an epsilon of 8/255 and a per-iteration step size of 2/255. We used PGD-based adversarial training~\cite{madry2017towards,xie2019featuredenoising} with the number of iterations fixed to 7.
We use identical training settings for Imagenette.
For the ImageNet dataset, we use fast adversarial training~\cite{wong2020fast} to speed up the training process.
Fast adversarial training uses a random start followed by a single step in the direction of the gradient, which makes it equivalent to PGD-1. We use an epsilon of 4/255 for ImageNet as per common practice~\cite{wong2020fast,xie2020smooth}. The model was trained using 8 GPUs (NVIDIA RTX A6000) with synchronized batch-norm, using SGD with an initial learning rate of 0.256, a momentum of 0.875, and a batch size of 256, where the learning rate was reduced by a factor of 10 after the $30^{th}$, $60^{th}$ and $90^{th}$ epoch.
We used a weight decay of 0.0001 to train all our models.
For both Imagenette and CIFAR-10, we add clean samples to the batch during adversarial retraining (we use a ratio of 50:50 for clean and adversarial samples in the batch).
This inclusion of clean examples helps maintain the clean accuracy of the model.
However, we do not include any clean samples when retraining ResNet-50 on ImageNet.
For model evaluation, we use PGD-200~\cite{xie2019featuredenoising,xie2020smooth,madry2017towards} with a single restart.
Stronger attacks do exist~\cite{croce2020autoattack}, but the objective of this paper is to determine the relative susceptibility of layers.
Therefore, absolute numbers are not particularly important in our case.
It is important to mention that during evaluation we attack the model using the actual target label rather than its prediction.
This ensures that the robust accuracy is not accidentally inflated when the attack moves a wrong prediction to the correct label.
However, we still attack the prediction of the model during adversarial training to avoid the \textit{label-leaking} effect~\cite{kurakin2016labelleaking} as per the common adversarial training recipe.
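For reference, the following is a simplified sketch of the $L_\infty$ PGD attack used in such evaluations; it attacks the true label as described above, while details such as restarts and step schedules follow the cited recipes rather than this sketch.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=200):
    """L-inf PGD: maximize the loss with respect to the true label y."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()           # ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
\end{verbatim}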
\subsection{Model Architectures}
We evaluated the VGG~\cite{vgg} and ResNet~\cite{resnet} model families imported from the TorchVision~\cite{torchvision} model repository. We specifically evaluated VGG-11 and VGG-16, both equipped with batch-norm. In order for these architectures to work on CIFAR, we replace the average pooling layer before the classification head (which outputs a $7 \times 7$ tensor) with a Global Average Pooling (GAP) layer, which reduces the spatial dimensionality to $1 \times 1$. This is similar to the residual architecture~\cite{resnet}.
We include ResNet-18 and ResNet-50 within the residual family~\cite{resnet} for our experiments with identical architecture across datasets.
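A minimal sketch of the pooling replacement for the torchvision VGG models is given below; the attribute names follow torchvision's implementation, while the simplified single-layer head is only illustrative of the resulting dimensionality change.
\begin{verbatim}
import torch.nn as nn
from torchvision.models import vgg16_bn

def vgg16_for_cifar(num_classes=10):
    model = vgg16_bn()
    # Replace the 7x7 average pool before the head with global average pooling.
    model.avgpool = nn.AdaptiveAvgPool2d((1, 1))
    # The head now receives 512 features instead of 512 * 7 * 7.
    model.classifier = nn.Linear(512, num_classes)
    return model
\end{verbatim}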
\section{Key Results}
We evaluate the block-wise susceptibility of four models (ResNet-18, ResNet-50, VGG-11, VGG-16) belonging to two major model families (ResNet~\cite{resnet} and VGG~\cite{vgg}) on three image recognition datasets (CIFAR-10~\cite{krizhevsky2014cifar}, Imagenette~\cite{imagenette} and ImageNet~\cite{ILSVRC15}) using our block-wise retraining protocol.
The primary results are divided into four different training settings: (i) adversarial retraining before the cut-off, (ii) adversarial retraining after the cut-off, (iii) conventional retraining before the cut-off, and (iv) conventional retraining after the cut-off.
\subsection{CIFAR-10}
Our results on CIFAR-10 are visualized in Fig.~\ref{fig:cifar10_results}.
In all four different settings, it is evident that adversarial performance improves with adversarial training of early layers.
This also indicates that the main discrepancy between conventionally and adversarially trained models lies at the lower-level features.
For conventional retraining, retraining just the initial two modules eliminates the robustness of the network, while having only a marginal effect on clean accuracy.
This highlights the fact that the initial modules are the most distinct ones between conventionally and adversarially trained models.
\subsection{Imagenette}
We see a retraining trend on Imagenette similar to CIFAR-10, where adversarial retraining of the initial modules is important for obtaining robustness (Fig.~\ref{fig:imagenette_results}).
However, we see a relative increase in the importance of higher-level modules as compared to CIFAR-10.
This can be attributed to the larger image size, where the object occupies a larger fraction of the image, requiring a larger effective receptive field of the network.
\subsection{ImageNet}
The results on ImageNet are again consistent with our prior results on CIFAR-10 and Imagenette.
However, the cut-off point is shifted: mid-level modules (m\_2, m\_3, and m\_4) also play a dominant role in robustness, as evident from Fig.~\ref{fig:imagenet_results}.
Our evaluation is limited to ResNet-50 trained using fast adversarial training~\cite{wong2020fast} on ImageNet~\cite{ILSVRC15}.
Looking at the performance in the case where the model is retrained after the cut-off point, we see the same trend where excluding the first two modules results in poor robustness of the model.
\section{Analysis}
Based on our preliminary findings, we analyze particular aspects of the models in more detail.
This includes extending our analysis to every layer, evaluating all possible combinations of modules as well as evaluating layer robustness to reinitialization.
\subsection{Per-Layer Analysis}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Figures/layer_training/cifar10_resnet18_relu_layer_line_comb_legend_out.pdf}
\caption{Results decomposed at the level of layers rather than modules for the ResNet-18 architecture on CIFAR-10. These results are consistent with the module-based results as the initial layers are part of the initial modules of the network. We do not observe any significant contribution from a single layer, but rather, the contribution from different layers accumulates.}
\label{fig:cifar10_resnet18_layer_results}
\end{figure}
We perform a more fine-grained layer-wise retraining evaluation for ResNet-18 on CIFAR-10, where we move away from complete modules, which comprise a different number of layers at each level, and focus on the individual layers themselves.
This analysis decouples the aggregation artifacts and highlights whether there are other layers that are as important as the initial layers in the network.
The results for the per-layer cut-off experiment are visualized in Fig.~\ref{fig:cifar10_resnet18_layer_results}.
The analysis shows that the gains flatten out after the inclusion of the first seven layers, which is consistent with the results from the module-based experiment, as these layers form the initial modules of the network. We observe the most significant gain for the first layer in the network, which is referred to as \textit{m\_0} in our previous results.
It is interesting to note that there is a sudden drop in clean accuracy in Fig.~\ref{fig:cifar10_resnet18_layer_results} when retraining module 4 adversarially. This is an interesting phenomenon that is likely related to the fact that for small input images like those found in CIFAR-10, module 4 already performs a kind of \textit{global classification}. The effect does not seem to be related to adversarial robustness. However, this observation highlights that module/layer-wise retraining is a technique that allows us to discover and analyze other unexpected effects in deep neural networks.
\subsection{Module Combinations}
In our first set of experiments, we considered a single split in the network where we either train the network before the cut-off or after the cut-off while keeping the other part fixed.
However, this does not preclude the possibility that a combination of lower-level and higher-level modules provides better robustness as compared to just a single split.
In order to test this, we trained ResNet-18 (CIFAR-10) on all the different possible combinations of modules.
These results are summarized in Fig.~\ref{fig:cifar10_resnet18_module_results} where we plot the median accuracy for all possible combinations of the modules with or without a particular module.
These results are qualitatively consistent with single cut-off experiments, where adversarial training of the initial modules is essential for robustness to adversarial samples.
This indicates that no specific combination of lower-level and high-level modules is responsible for robustness; rather, just the initial set of layers is important for this purpose.
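Enumerating the retraining configurations for this experiment is straightforward (a sketch; each subset is retrained with the same protocol as before):
\begin{verbatim}
from itertools import combinations

MODULES = ['m_0', 'm_1', 'm_2', 'm_3', 'm_4', 'm_fc']

# Every non-empty subset of modules; the selected modules are reinitialized
# and retrained while the remaining ones keep their pretrained weights.
subsets = [list(c) for r in range(1, len(MODULES) + 1)
           for c in combinations(MODULES, r)]
\end{verbatim}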
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{Figures/module_comb/cifar10_resnet18_clean_pretrained_median_line.pdf}
\caption{Adversarial retraining}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{Figures/module_comb/cifar10_resnet18_adv_pretrained_median_line.pdf}
\caption{Conventional retraining}
\end{subfigure}
\caption{Median accuracy when considering the distribution of accuracies (ResNet-18 on CIFAR-10) when either including or excluding a particular module. Median robust performance drops whenever the initial layers are not adversarially trained in the end.}
\label{fig:cifar10_resnet18_module_results}
\end{figure}
\subsection{Layer-wise Reinitialization Robustness}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{Figures/layer_sensitivity/cifar10_resnet50_relu_complete_sensitivity.pdf}
\caption{Conventionally pretrained (CIFAR-10)}
\label{fig:reinit_con_cifar10}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{Figures/layer_sensitivity/cifar10_resnet50_relu_complete_adv_sensitivity.pdf}
\caption{Adversarially pretrained (CIFAR-10)}
\label{fig:reinit_adv_cifar10}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{Figures/layer_sensitivity/imagenet_resnet50_relu_complete_sensitivity.pdf}
\caption{Conventionally pretrained (ImageNet)}
\label{fig:reinit_con_imagenet}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{Figures/layer_sensitivity/imagenet_resnet50_relu_complete_adv_sensitivity.pdf}
\caption{Adversarially pretrained (ImageNet)}
\label{fig:reinit_adv_imagenet}
\end{subfigure}
\caption{Layer-wise reinitialization robustness for ResNet-50 on CIFAR and ImageNet with either conventional or adversarial training. In the figure, \textit{none} refers to the performance of the original model, while \textit{layer index} refers to the index of the convolutional layer in TorchVision's ResNet-50~\cite{torchvision}. Adversarially pretrained ResNet-50 demonstrates the high sensitivity of the initial layers in contrast to the conventionally pretrained model, which indicates that the initial layers change significantly to cater for the adversarial noise, highlighting their importance in obtaining robust models.}
\label{fig:layer_sensitivity_results}
\end{figure*}
We attempt to understand the behavior described by Zhang et al. (2019)~\cite{zhang2019layersensitivity} for adversarial samples, where they found some layers to be much more important than others for overall classification performance.
Although we reproduce their results on clean samples, from the point of view of adversarial training, we find (Fig.~\ref{fig:reinit_con_cifar10}) that reinitialization of low-level layers tends to induce significant adversarial robustness. This is consistent with our other findings, namely that adversarial samples are a phenomenon associated with low-level layers. In addition, it suggests that susceptibility to adversarial samples is associated with training~\cite{ilyas2019adversarialexamplesarenotbugs}. Conversely, susceptibility to adversarial samples is never significantly increased due to layer reinitialization (Fig.~\ref{fig:reinit_adv_cifar10}). Since ResNet-50 models trained on full ImageNet are much more susceptible to layer reinitialization, the results are more difficult to interpret. However, we still observe that reinitialization of initial layers tends to induce higher robustness to adversarial samples (Fig.~\ref{fig:reinit_con_imagenet}).
Furthermore, while non-adversarial accuracy is usually strongly affected by reinitialization, adversarial accuracy is usually less affected in comparison (Fig.~\ref{fig:reinit_adv_imagenet}).
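A hedged sketch of this reinitialization probe is shown below; the evaluation function is a placeholder, and the exact evaluation protocol follows the cited work rather than this simplified version.
\begin{verbatim}
import copy

def reinit_robustness(model, conv_layer_names, evaluate):
    """Reset one convolutional layer at a time (all other weights keep
    their pretrained values) and re-evaluate clean/adversarial accuracy."""
    results = {'none': evaluate(model)}
    for name in conv_layer_names:
        probe = copy.deepcopy(model)            # keep the original intact
        dict(probe.named_modules())[name].reset_parameters()
        results[name] = evaluate(probe)
    return results
\end{verbatim}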
\section{Feature Distributions}
Above, we have seen the differential effects of early and late layers on adversarial robustness.
Adversarial attacks on a network might operate by changing one type of feature into another, leaving the overall distribution of feature vectors the same, or by producing novel feature vectors that do not occur in non-adversarial samples.
These changes might occur only in late layers or both in early and late layers.
This distinction is important both for understanding the nature of adversarial attacks and to devise possible defenses.
Prior work visualizing the activities in hidden convolutional layers has primarily focused on visualizing the aggregate or per-filter activity~\cite{rauber2016visualizing}. In contrast, we visualize the distribution of activations for all the filters simultaneously across many images and layers, under both adversarial and non-adversarial conditions, using nonlinear dimensionality reduction by picking a single vector across spatial dimensions of the activation ($\mathbf{z} \in \mathbb{R}^{C}$ where $C$ is the number of filters in a layer).
Representative results are shown in Fig.~\ref{fig:activation_vec_dim_red} using t-SNE dimensionality reduction. The panels show feature vectors sampled after blocks one through four of a ResNet-50 network, choosing 100 random samples from each of 1000 different images.
These results are consistent across three dimensionality reduction techniques (UMAP, TriMap, t-SNE) as well as two different datasets which we tested (ImageNet as well as CIFAR-10).
Each scatterplot is overlaid with a kernel density estimate in the dimensionality-reduced space, with green regions corresponding to non-adversarial samples and red regions corresponding to adversarial samples.
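A simplified sketch of how such per-location feature vectors can be collected and embedded is given below; module handles, sample counts and the use of scikit-learn's t-SNE are illustrative and may differ from the exact pipeline used for the figures.
\begin{verbatim}
import numpy as np
import torch
from sklearn.manifold import TSNE

def collect_feature_vectors(model, block, images, n_per_image=100):
    """Hook one block, run the images, and sample spatial positions of the
    activation map as C-dimensional feature vectors."""
    feats = []
    handle = block.register_forward_hook(
        lambda m, i, o: feats.append(o.detach()))
    with torch.no_grad():
        model(images)
    handle.remove()
    act = feats[0]                                     # (N, C, H, W)
    n, c, h, w = act.shape
    act = act.permute(0, 2, 3, 1).reshape(n, h * w, c)
    idx = torch.randint(0, h * w, (n, n_per_image))
    sampled = torch.stack([act[i, idx[i]] for i in range(n)])
    return sampled.reshape(-1, c).cpu().numpy()

def embed_clean_vs_adversarial(clean_feats, adv_feats):
    """2-D t-SNE embedding of the pooled clean and adversarial vectors."""
    return TSNE(n_components=2).fit_transform(
        np.vstack([clean_feats, adv_feats]))
\end{verbatim}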
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{1.0\textwidth}
\includegraphics[width=\textwidth]{Figures/dim_red_act/clean_tsne_plot.jpg}
\caption{Conventionally pretrained ResNet-50 (ImageNet)}
\label{fig:act_con_imagenet}
\end{subfigure}
\begin{subfigure}[b]{1.0\textwidth}
\includegraphics[width=\textwidth]{Figures/dim_red_act/adv_tsne_plot.jpg}
\caption{Adversarially pretrained ResNet-50 (ImageNet)}
\label{fig:act_adv_imagenet}
\end{subfigure}
\caption{Dimensionality reduced activation vectors using t-SNE~\cite{tsne} of four (\textit{m\_1} to \textit{m\_4}) modules in the network. These plots highlight the substantial differences between clean and adversarial activations, which are amplified upon propagation through higher-level modules. Adversarial training minimizes these differences between activations.}
\label{fig:activation_vec_dim_red}
\end{figure*}
Fig.~\ref{fig:act_con_imagenet} shows that the distribution of feature vectors differs substantially between non-adversarial and adversarial samples. In fact, after block 4 (high-level features), the distributions of adversarial and non-adversarial samples are almost completely non-overlapping, showing that adversarial samples do not imitate non-adversarial activation patterns or outputs, but generate very different collections of high-level features. This difference is particularly striking since there is little overlap even in the feature vectors that might correspond to background regions in the image. The second striking phenomenon observable in Fig.~\ref{fig:act_con_imagenet} is that substantial distributional differences are present even after the first block, i.e., in low-level features. This is consistent with our findings above, namely that differences between non-adversarial and adversarial samples must occur already in the early layers of the network and are responsible for susceptibility to adversarial samples.
Fig.~\ref{fig:act_adv_imagenet} illustrates the same distributions for adversarially trained networks. What we find here is that the distributions of feature vectors associated with non-adversarial and adversarial samples are much closer, not just overlapping in general, but reproducing peaks and regions of high density in substantial detail. That is, adversarial training has to adjust not just the high-level convolutional layers to match distributions between adversarial and non-adversarial samples, but also the low-level feature extraction layers (and from our above experiments, we already know that this change to the distribution of low-level feature vectors is both necessary and sufficient).
State-of-the-art reductions in susceptibility to adversarial samples through adversarial training result in feature vector distributions for non-adversarial and adversarial samples that closely match each other. Nevertheless, the resulting models still have substantial susceptibility to adversarial samples.
This implies that the remaining successful adversarial attacks probably work by transforming feature vectors into each other while staying within the distribution of non-adversarial feature vectors.
In other words, adversarial attacks on adversarially trained networks are qualitatively different from adversarial attacks on undefended networks.
\section{Conclusion}
We have described selective retraining of networks as a technique for localizing susceptibility to adversarial samples in deep neural networks.
Furthermore, we have demonstrated that dimensionality reduction of sets of activation vectors in different layers can be a useful tool for understanding the statistics and relations of adversarial and non-adversarial vectors.
Our experimental results demonstrate that susceptibility of deep neural networks to adversarial samples is associated with the early, non-specific layers of such networks.
That is, we have shown that adversarial samples generate differences in feature distributions in those layers and that training networks to be robust to adversarial samples largely eliminates those distributional differences.
Practically, this means that in order to achieve robustness to adversarial samples, it is both necessary and sufficient to retrain only the early layers where feature vectors are not yet highly class specific.
Our experiments also show that adversarial samples can be detected and visualized easily as
anomalies or outliers.
However, a substantial gap between human performance and deep neural networks remains even after adversarial training. Our results show that these differences are not merely quantitative in nature; rather, in the absence of adversarial training, adversarial samples succeed by generating novel feature vectors, while after adversarial training, adversarial samples mimic the feature distribution of non-adversarial samples, suggesting that different defense mechanisms may be required.
Our work has a number of practical implications: (1) it shows that we cannot quickly transform non-robust networks into robust networks by retraining, (2) adversarially trained networks are still susceptible to some adversarial attacks, but our results show that the methods used for detecting attacks on undefended networks fail for adversarially trained networks, and (3) we can likely achieve better detection of adversarial samples by analyzing unit outputs as a distribution of feature vectors rather than as a single vector.
The techniques described should prove useful in future work on understanding the statistical origins of adversarial samples, as well as devising practical techniques for defending against adversarial samples.
\section{Broader Impact}
Our investigation aims to help understand the causes of the existence of adversarial examples, which is a major failure mode of current deep learning models.
Deep learning-based visual recognition systems have been deployed in a range of different areas, including self-driving cars and security systems.
Improving the robustness of these systems is critical.
Furthermore, a better understanding of robustness can help us achieve robustness in an efficient way without going through the compute-intensive process of adversarial training.
However, on the flip side, these robust systems can potentially be used in a negative context such as mass surveillance.
\section*{Acknowledgements}
The authors would like to acknowledge useful discussions with Iuri Frosio on adversarial robustness.
This work is in part supported by the BMBF project DeFuseNN (Grant 01IW17002) and the NVIDIA AI Lab (NVAIL) program.
\bibliographystyle{plain}
\section{Introduction} \label{sec:intro}
Large volumes of real-time data are generated within Uber’s data centers. This data originates in different sources such as end-user applications (driver/rider/eater) or the backend microservices. Some of this data consists of application or system logs continuously emitted as part of day-to-day operation. Many services also emit special events for tracking things such as trip updates, driver status changes, order cancellations and so on. Some of it is also derived from the OnLine Transactional Processing (OLTP) database changelog used internally by such microservices. As of October 2020, trillions of messages and petabytes of such data were generated per day across all regions.
Real-time data processing plays a critical role in Uber’s technology stack and empowers a wide range of use cases. At a high level, real-time data processing needs within Uber consist of three broad areas: 1) a messaging platform that allows communication between asynchronous producers and subscribers, 2) stream processing that allows applying computational logic on top of such streams of messages, and 3) OnLine Analytical Processing (OLAP) that enables analytical queries over all this data in near real time. Each area has to deal with three fundamental scaling challenges within Uber:
\begin{itemize}
\item Scaling data: The total incoming real-time data volume has been growing exponentially year over year, produced by several thousand microservices. In addition, Uber deploys its infrastructure in several geographical regions for high availability, which multiplies the aggregate data volume to be handled. Each real-time processing system has to handle this data volume increase while maintaining SLAs around data freshness, end-to-end latency and availability.
\item Scaling use cases: As Uber’s business grows, new use cases emerge from various business verticals and groups. Different parts of the organization have varying requirements for the real time data systems, which are often competing in nature. For instance, dynamic pricing\cite{chen2016dynamic} for a given Uber product (such as rides or eats) is a highly complex real-time workflow involving multi-stage stream processing pipelines that run various machine learning algorithms along with a fast key-value store. This system is designed for favoring freshness and availability over data consistency, and it’s implemented entirely by engineers. On the other hand, monitoring real-time business metrics around orders and sales requires a SQL like interface used by data scientists with more emphasis given to data completeness.
\item Scaling users: The diverse users interacting with the real-time data system span a broad spectrum of technical skills, from operations personnel with no engineering background to advanced users capable of orchestrating complex real-time computational data pipelines. As Uber’s personnel grows, the platform teams also face increasing user-imposed complexities, such as safe client-side version upgrades for a large number of applications and managing an increasing number of user requests.
\end{itemize}
In short, the biggest challenge is to build a unified platform with standard abstractions that can work for all such varied use cases and users at scale, instead of creating custom solutions. A key decision we made to overcome such challenges was to adopt open-source solutions in building this unified platform. Open-source software adoption has many advantages, such as development velocity, cost effectiveness, and the power of the crowd. Given the scale and rapid development cycles at Uber, we had to pick technologies that were mature enough to scale with Uber’s data as well as extensible enough to integrate into our unified real-time data stack.
Figure \ref{fig:data-flow} depicts the high-level flow of data inside Uber's infrastructure. Various kinds of analytical data are continuously collected from Uber’s data centers across multiple regions. These streams of raw data form the source of truth for all analytics at Uber. Most of these streams are incrementally archived in batch processing systems and ingested in the data warehouse. This is then made available for machine learning and other data science use cases. The Real Time Data Infra component continuously processes such data streams for powering a variety of mission critical use cases such as dynamic pricing (Surge), intelligent alerting, operational dashboards and so on. This paper focuses on the real time data eco-system.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{uber-infra.png}
\caption{The high-level data flow at Uber infrastructure}
\label{fig:data-flow}
\end{figure}
The paper is organized as follows. In Section \ref{sec:requirements}, we list the requirements derived from the use cases which are used to guide the design decisions for various real-time use cases. In Section \ref{sec:abstractions}, we provide an overview of the high-level abstractions of the real-time data infrastructure at Uber. In Section \ref{sec:overview}, we present the open source technologies that we adopted for each component in the architecture. More importantly, we describe the enhancements and improvements on the open source solutions to overcome the scaling challenges faced by Uber. In Section \ref{sec:use-case}, we analyze several real-time use cases at Uber and show how their solutions are shaped differently due to unique design requirements. We then discuss a few other important aspects of the real-time data infrastructure in Section \ref{sec:all-active} and Section \ref{sec:backfill}. Then in Section \ref{sec:related} we discuss related work and in Section \ref{sec:lessons}, we reflect on lessons we learned about building and operating real-time systems at Uber. Finally, we conclude in Section \ref{sec:conclusion} and show the future work in Section \ref{sec:future}.
\section{Requirements} \label{sec:requirements}
Each category of use case mentioned in Section \ref{sec:intro} has its own special requirements pertaining to real-time data infrastructure which are often competing with those of other use cases. The different requirements generally include the following aspects:
\begin{itemize}
\item Consistency: Mission-critical applications such as financial dashboards require data to be consistent across all regions. This includes zero data loss in the inter-region and intra-region dispersal and processing mechanisms, de-duplication as well as ability to certify data quality.
\item Availability: The real-time data infrastructure stack must be highly available, with a 99.99\% guarantee. Loss of availability has a direct impact on Uber’s business and may result in significant financial losses. For instance, dynamic pricing leverages the real-time data infrastructure component for calculating demand and supply ratios per geo-fence, which in turn is used to influence the price of a trip or UberEats delivery.
\item Data Freshness: Most of the use cases require seconds level freshness. In other words a given event or log record must be available for processing or querying, seconds after it has been produced. This is a critical requirement to ensure ability to respond to certain events such as security incidents, demand-supply skews, business metric alerts and so on.
\item Query latency: Some use cases need the ability to execute queries on the raw data stream and require the p99th query latency to be under 1 second. For instance, site facing or external analytical tools such as UberEats Restaurant manager\cite{rm} will execute several analytical queries for each page load. Each such query must be very fast to provide a good experience for the restaurant owner.
\item Scalability: The raw data streams constitute petabytes of data volume collected per day across all regions. This data is constantly growing based on the organic growth of our user base, new lines of business deployed by Uber, as well as new real-time analytics use cases that arise over time. The ability to scale with this ever-growing data set in a seamless manner, without requiring users to re-architect the processing pipelines, is a fundamental requirement of the real-time data infrastructure stack.
\item Cost: Uber is a low margin business. We need to ensure the cost of data processing and serving is low and ensure high operational efficiency. This influences a variety of design decisions such as amount of data kept in memory, tiered storage, pre-materialization vs runtime computation and so on.
\item Flexibility: We need to provide programmatic as well as declarative (SQL like) interface for expressing computational logic to accommodate the diverse user groups. In addition, some use cases need a push-based model which is semi-stateful and continuously emits generated results whereas others might need a stateful pull-based model where the user can execute queries on the raw data stream. For instance, users can create intelligent alerts in case of business rule violation using push-based stream processing pipelines. Whereas, dashboarding and triaging will require a pull-based SQL interface for the same datasets.
\end{itemize}
It’s easy to observe that guaranteeing all these requirements for the same use case is not possible. For instance, in the dynamic pricing use case, we cannot guarantee both data consistency and freshness (availability) at Uber’s scale, per the CAP theorem\cite{gilbert2002brewer}. To minimize business impact, we must therefore prioritize freshness over consistency. Subsequently, each technology chosen for building this use case must be finely tuned to favor freshness. Such tradeoffs are discussed in detail in Section \ref{sec:use-case} by analyzing several real-time use cases at Uber.
\section{Abstractions} \label{sec:abstractions}
The diagram in Figure \ref{fig:abstractions} illustrates the logical building blocks that constitute a real-time analytics stack. The different components (from bottom up) are as follows:
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{abstraction-new.png}
\caption{An abstraction of the real-time data infrastructure and the overview of the components}
\label{fig:abstractions}
\end{figure}
{\bfseries Storage.} This provides a generic object or blob storage interface for all the layers above it with a read after write consistency guarantee. This is primarily used for long term storage of data and should be optimized for high write rate. Reads are less frequent and used for cases such as bootstrapping data in an OLAP table or a stream, data backfills and so on.
{\bfseries Stream.} This provides a publish-subscribe interface to the higher layers. Users of this layer can produce events to a particular stream or topic. Any other user subscribing to this stream can consume the data one event at a time. This system should be optimized for low latency for reads and writes. The minimum requirements from this layer include ability to partition the data and at least once semantics between producer and subscriber.
{\bfseries Compute.} This provides the ability to perform arbitrary computation on the underlying stream and storage layers. When computing over a stream, processing happens for each individual event, whereas computation done directly over storage can be done in batches. It’s important to note that we can choose the same or different technologies for stream processing vs storage or batch processing. Choosing the same technology will result in a simpler abstraction for the higher layers but higher complexity to implement, thus adding significant operational overhead. Whereas choosing two different technologies makes the individual components feasible but delegates the task of federation to the higher layers. The minimum requirements from this layer include at least once semantics between the data source and sink.
{\bfseries OLAP.} This layer provides a limited SQL capability over data coming from stream or storage. The system should be optimized for serving analytical queries including filtering, aggregations with group by, order by in a high throughput, low latency manner. The minimum requirements from this layer for the vast majority of use cases include at least once semantics while ingesting data from the different sources. Exactly once data ingestion based on a primary key is a must have for a small set of critical use cases.
{\bfseries SQL.} This refers to a full SQL query layer on top of OLAP as well as compute. When used with the compute layer, the SQL statement is compiled into a compute function which can be applied to the underlying stream or storage. When used with the OLAP layer, it will do additional processing on top of the limited SQL provided, to fill in the gaps. For instance, most real-time OLAP databases have limited or no join support, and this can be done at this SQL layer. It’s interesting to note that joins can also be done in lower layers (pre-materialize at the compute layer) and served by the OLAP layer without need for additional processing - albeit at a higher cost. The minimum requirements from this layer include SQL semantics which are closer to ANSI SQL with extensions applicable for stream processing (for instance - window functions).
{\bfseries API.} This provides a programmatic way to access the stream or specify a compute function for the higher layer applications. This is to be used by advanced users for whom the SQL interface is not sufficient. It’s important to note that the choice of technologies in the layers below will have a direct impact on the simplicity of this API.
{\bfseries Metadata.} This provides a simple interface to manage all kinds of metadata required for all the aforementioned layers. For instance, the schema that describes structured data managed by storage or stream will be stored here. Minimum requirements include ability to version the metadata and have checks for ensuring backward compatibility across versions.
\section{System Overview} \label{sec:overview}
Each following subsection introduces the open source systems we have adopted for the corresponding logical building block as shown in Figure \ref{fig:overview}. We then subsequently describe Uber’s unique contributions in each domain and explain how it bridges the gaps to meet Uber’s unique scale and requirements.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{architecture-new.png}
\caption{Overview of the real-time data infrastructure at Uber}
\label{fig:overview}
\end{figure}
\subsection{Apache Kafka for streaming storage} \label{sec:kafka}
Kafka\cite{kreps2011kafka} is a popular open-source distributed event streaming system that is widely used in the industry. When we adopted it in 2015, Kafka was already a popular solution known for good performance. A more recent performance study can be found in a benchmark report from Confluent\cite{confluent-test}, which compared Kafka, Pulsar\cite{pulsar} and RabbitMQ\cite{rabbitmq} on system throughput and latency, the primary performance metrics for event streaming systems in production. Besides performance, there were several other important factors to consider for adoption, such as operational simplicity, open-source ecosystem maturity, size of the open-source community, and adoption rate across the industry. Looking at those together, Kafka was the clear winner among the queuing and event streaming systems.
Today at Uber, we have one of the largest deployments of Apache Kafka in the industry, with trillions of messages and petabytes of data per day. As the transport mechanism for sending streaming data to both batch and real-time systems, Kafka at Uber empowers a large number of different workflows, such as propagating event data from the rider and driver apps, enabling a streaming analytics platform (e.g. Apache Samza\cite{samza}, Apache Flink), streaming database changelogs to downstream subscribers, and ingesting all sorts of data into Uber’s Apache Hadoop data lake. Due to Uber’s large scale, fault-tolerance considerations and some unique requirements, we customized Kafka and added the following enhancements.
\subsubsection{Cluster federation}
To improve the availability and to tolerate single-cluster failure, we at Uber developed a novel federated Kafka cluster setup which hides the cluster details from producers/consumers. Users do not need to know which cluster a topic resides in and the clients view a “logical cluster”. A metadata server aggregates all the metadata information of the clusters and topics in a central place, so that it can transparently route the client’s request to the actual physical cluster. In addition to reliability, cluster federation also improves scalability to support business growth. Based on our empirical data, the ideal cluster size is less than 150 nodes for optimum performance. With federation, the Kafka service can scale horizontally by adding more clusters when a cluster is full. New topics are seamlessly created on the newly added clusters. Lastly, cluster federation also brings the ease of topic management. Inside Uber there are a large number of applications and clients, and it’s challenging to migrate a topic with live consumers between clusters. Typically, this requires manual user coordination to shift their traffic to the new cluster, resulting in job restart. Cluster federation enables consumer traffic redirection to another physical cluster without restarting the application.
\subsubsection{Dead letter queue}
There are cases when some messages fail to be processed by the downstream application, for example due to message corruption or unexpected behavior. In Apache Kafka’s model, there are two options to handle such failed messages: either drop them or retry indefinitely, which blocks processing of the subsequent messages. However, there are many scenarios at Uber that demand neither data loss nor clogged processing, such as trip receipt processing. To accommodate such use cases, a Dead Letter Queue (DLQ) strategy was built on top of the Kafka interface\cite{dlq}. If a consumer of the topic cannot process a message after several retries, it publishes that message to the dead letter topic. The messages in the dead letter topic can be purged or merged (i.e. retried) on demand by the users. This way, the unprocessed messages remain separate and therefore are unable to impede live traffic.
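A minimal consumer-side sketch of this pattern is shown below, using the kafka-python client purely for illustration; the topic names, retry count and processing function are hypothetical, and the actual DLQ implementation at Uber lives behind its Kafka client libraries.
\begin{verbatim}
from kafka import KafkaConsumer, KafkaProducer

MAX_RETRIES = 3
consumer = KafkaConsumer('trip-receipts', bootstrap_servers='broker:9092')
producer = KafkaProducer(bootstrap_servers='broker:9092')

def process(payload):
    """Placeholder for the application's message handling logic."""
    ...

for record in consumer:
    for attempt in range(MAX_RETRIES):
        try:
            process(record.value)
            break
        except Exception:
            if attempt == MAX_RETRIES - 1:
                # Park the message in the dead letter topic instead of
                # dropping it or blocking the partition.
                producer.send('trip-receipts-dlq', record.value)
\end{verbatim}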
\subsubsection{Consumer Proxy}
Open source Kafka includes a consumer library which packages sophisticated batching and compression logic. Though such client-side optimizations improve the consumer throughput, they bring a big challenge to large organizations like Uber regarding client management. With tens of thousands of Kafka applications running, it’s tremendously difficult for the platform team to support the users in troubleshooting and debugging. In addition, it slows down the development of the client library, as it takes months to upgrade the client library in all the applications. Moreover, large organizations use many programming languages, so it’s hard to provide multi-language support when the clients are complex. Lastly, due to Kafka’s architecture limitations, open-source Kafka limits the number of instances in a consumer group to no more than the number of the topic’s partitions, and therefore puts a cap on the consumer’s level of parallelism.
To address these challenges, we built a proxy layer that consumes messages from Kafka and dispatches them to a user-registered gRPC service endpoint for all the pub/sub use cases. The complexities of the consumer library are encapsulated in the proxy layer, and applications only need to adopt a very thin, machine-generated gRPC client. In particular, the consumer proxy provides sophisticated error handling. When the downstream service fails to receive or process some messages, the consumer proxy can retry the dispatch, and send them to the DLQ if several retries fail.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{KCP.png}
\caption{Overview of the Kafka Consumer Proxy at Uber}
\label{fig:kcp}
\end{figure}
Another noticeable benefit of the consumer proxy is the change of delivery mechanism from message polling to push-based message dispatching, as shown in Figure \ref{fig:kcp}. Most pub/sub use cases inside Uber do not assume any dependencies among the messages. As a result, a push-based dispatching mechanism can greatly improve the consumption throughput by enabling higher parallelism for slow consumers with negligible latency overhead. This addresses Kafka’s consumer group size issue and allows significantly more concurrent processing opportunities to the applications.
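A rough sketch of the push-based dispatch idea follows; the gRPC call is abstracted behind a placeholder function, offset management is omitted, and the worker count is illustrative. The key point is that processing parallelism is no longer bounded by the number of partitions.
\begin{verbatim}
from concurrent.futures import ThreadPoolExecutor
from kafka import KafkaConsumer

consumer = KafkaConsumer('events', bootstrap_servers='broker:9092')
pool = ThreadPoolExecutor(max_workers=64)   # may exceed the partition count

def dispatch(payload):
    """Placeholder for the gRPC call to the user-registered endpoint;
    failures would be retried and eventually routed to the DLQ."""
    ...

for record in consumer:
    pool.submit(dispatch, record.value)      # push to the downstream service
\end{verbatim}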
\subsubsection{Cross-cluster replication} \label{sec:kafka-cross-dc}
Given the large-scale use of Kafka within Uber, we ended up using multiple clusters in different data centers. With this setup, cross-cluster replication of Kafka messages is necessary for two reasons. First, we need a global view of this data for a variety of use cases. For example, in order to compute business metrics related to trips, we need to gather information from all data centers and analyze it in one place. Second, Kafka is also replicated for redundancy to tolerate cluster and data center failures. To achieve this, we built and open-sourced a robust and performant replicator across Kafka clusters called uReplicator\cite{ureplicator}. uReplicator is designed for strong reliability and elasticity. It has a built-in rebalancing algorithm that minimizes the number of affected topic partitions during rebalancing. Moreover, uReplicator is adaptive to the workload, so that when there is bursty traffic it can dynamically redistribute the load to standby workers for elasticity.
On top of this, to ensure there is no data loss from the cross-cluster replication, we also developed and open-sourced an end-to-end auditing service called Chaperone\cite{chaperone}. Chaperone collects key statistics, like the number of unique messages in a tumbling time window, from every stage of the replication pipeline. The auditing service compares the collected statistics and generates alerts when a mismatch is detected.
\bigskip
With these improvements, we built a standardized and reliable streaming and messaging platform on top of Kafka to empower various real-time use cases. The future work in this area includes scaling to multiple regions across on-prem data centers and the cloud, as well as a more cost-efficient architecture, which is discussed in Section \ref{sec:future}.
\subsection{Apache Flink for stream processing} \label{sec:flink}
In order to process all the real-time data coming through Kafka, we have built a stream processing platform on top of Apache Flink\cite{katsifodimos2016apache}. Apache Flink is an open-source, distributed stream processing framework with a high-throughput, low-latency engine widely adopted in the industry. We adopted Apache Flink for a number of reasons. First, it is robust enough to continuously support a large number of workloads, with built-in state management and checkpointing features for failure recovery. Second, it is easy to scale and can handle back-pressure efficiently when faced with a massive input Kafka lag. Third, it has a large and active open-source community as well as a rich ecosystem of components and toolings. Based on our comparisons done in 2016 with Apache Storm, Apache Spark and Apache Samza, Flink was deemed the better choice of technology for this layer. Storm performed poorly in handling back-pressure when faced with a massive input backlog of millions of messages, taking several hours to recover, whereas Flink only took 20 minutes. Spark jobs consumed 5-10 times more memory than a corresponding Flink job for the same workload. Samza had a strict dependency on Kafka for maintaining its internal state, which induced a significant operational overhead.
At Uber, we use Flink heavily both for facilitating customer-facing products and for powering internal analytics, with a wide range of insights captured from across the world and at all times, from city-specific market conditions to global financial estimations. The stream processing logic can be expressed by the users in two ways: a SQL dialect or a set of low-level APIs. The SQL dialect is commonly used by different categories of users, technical and non-technical, such as engineers, data scientists, operations personnel, product managers and so on. The more advanced users prefer to use the API for expressing complex logic as well as connecting to external systems such as databases, RPC endpoints, caches and so on. To better support Uber use cases, we made the following contributions and improvements to Apache Flink.
\subsubsection{Building streaming analytical applications with SQL}
One of the most important contributions we made was to introduce a layer on top of the Flink framework known as FlinkSQL\cite{athenax}. This is now contributed back to Apache Flink project and it provides the ability to transform an input Apache Calcite\cite{calcite} SQL query into an efficient Flink job. The SQL processor compiles the queries to reliable, efficient, distributed Flink applications, and manages the full lifecycle of the application, allowing users to focus solely on their business logic. Internally, it converts the input SQL query into a logical plan, runs it through the query optimizer and creates a physical plan which can be translated into a Flink job using the corresponding Flink API. As a result, users of all technical levels can run their streaming processing applications in production in a span of mere hours regardless of scale.
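For illustration, the kind of query a user submits is sketched below through the open-source PyFlink Table API; the connector options, topic and field names are hypothetical, and Uber’s internal FlinkSQL deployment differs from this sketch.
\begin{verbatim}
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Declare a Kafka-backed source table (requires the Kafka SQL connector).
t_env.execute_sql("""
    CREATE TABLE trip_events (
        city_id BIGINT,
        status  STRING,
        ts      TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'trip-events',
        'properties.bootstrap.servers' = 'broker:9092',
        'format' = 'json'
    )
""")

# A windowed aggregation expressed purely in SQL.
result = t_env.execute_sql("""
    SELECT city_id, COUNT(*) AS demand
    FROM trip_events
    WHERE status = 'requested'
    GROUP BY city_id, TUMBLE(ts, INTERVAL '1' MINUTE)
""")
\end{verbatim}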
These internal details are hidden from the user which has a big trade-off. It makes adoption very easy since all the users need to understand is the data source, e.g. input Kafka topic, and the SQL syntax. However, it adds significant operational overhead for the platform team to tune and maintain the production jobs. In particular, we had to overcome the following challenges:
{\bfseries Resource estimation and auto-scaling} The resource configurations, such as allocated CPU and memory, are important for job health and also for cost efficiency. We used empirical analysis to establish a correlation between the common job types and the corresponding resource requirements. For instance, a stateless Flink job which does not maintain any aggregation windows is CPU-bound, whereas a stream-stream join job is almost always memory-bound. We also observed that the job load may vary during peak and off-peak hours. To maximize cluster utilization, we employ continuous monitoring of the job load and garbage collection statistics, and perform auto-scaling when necessary.
{\bfseries Job monitoring and automatic failure recovery} Since the end user does not know about the underlying Flink job and its job status, the platform needs to monitor the job and provide a strong reliability guarantee. To address this, we built a component for automatically handling job failures when it detects a certain condition. It is a rule-based engine which compares the Flink job’s key metrics such as resource usage against the desired state and takes corrective action such as restarting a stuck job or auto scaling.
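A toy sketch of such a rule-based engine is shown below; the metric names, thresholds and actions are placeholders for the actual platform logic.
\begin{verbatim}
RULES = [
    # (predicate over job metrics, corrective action)
    (lambda m: m['checkpoint_age_sec'] > 600, 'restart_job'),
    (lambda m: m['kafka_lag'] > 1e7 and m['cpu_util'] > 0.9, 'scale_up'),
    (lambda m: m['heap_util'] > 0.85, 'increase_memory'),
]

def evaluate_job(metrics):
    """Return the corrective actions triggered by the current metrics."""
    return [action for predicate, action in RULES if predicate(metrics)]
\end{verbatim}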
Note that FlinkSQL has different semantics from batch processing SQL systems such as Presto. FlinkSQL is a stream processing engine wherein both the input and output are unbounded streams, whereas batch processing engines query bounded datasets and output a bounded dataset. One piece of future work for FlinkSQL is to unify the streaming/batch processing semantics, as discussed in Section \ref{sec:future}.
\subsubsection{Unified architecture for deployment, management and operation} \label{sec:unified-flink-architecture}
Since we have provided two platforms to the users for building and managing the stream processing pipelines, we identified commonalities between the two and converged them into a unified architecture for deployment, management and operation. The new unified platform also addressed several other challenges and resulted in a layered architecture for better extensibility and scalability as depicted in Figure \ref{fig:flink}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{flink.png}
\caption{The layers of the Unified Flink architecture at Uber}
\label{fig:flink}
\end{figure}
The Platform layer handles organizing the business logic and integration with other external platforms such as Machine learning feature training, workflow management and SQL compilation. It consists of multiple business-specific components and can easily be extended to support new business requirements with additional components. It transforms the specific business logic into standard Flink job definitions, and passes this to the next layer for validation and management.
The job management layer manages the Flink job's lifecycle including validation, deployment, monitoring and failure recovery. It offers a set of unified API abstractions for the platform layer (such as Start/Stop/List a job) and persists the job information including the state checkpoints and the metadata. In addition, it serves as the proxy layer to the physical clusters and dispatches the jobs based on the job type, importance and priority. Lastly, a shared component in the job management server continuously monitors the health of all jobs and automatically recovers the jobs from the transient failures.
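A hypothetical sketch of the kind of unified abstraction exposed to the platform layer is shown below; the type and method names are invented for illustration, and the real service persists considerably more metadata.
\begin{verbatim}
// Sketch of the unified job-management abstraction (names are illustrative).
import java.util.List;

interface JobManager {
  String startJob(JobDefinition definition);   // returns a job id
  void stopJob(String jobId);
  List<JobInfo> listJobs(String owner);
}

// A standardized job definition produced by the platform layer.
record JobDefinition(String name, String owner, String flinkSql, int parallelism) {}

// Persisted metadata, including the latest checkpoint location.
record JobInfo(String jobId, String name, String state, String latestCheckpointPath) {}

class Demo {
  public static void main(String[] args) {
    JobDefinition def = new JobDefinition(
        "demand-agg", "marketplace-team",
        "SELECT city_id, COUNT(*) FROM rider_events GROUP BY city_id", 4);
    System.out.println("would submit: " + def);
  }
}
\end{verbatim}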
The bottom layer is the infrastructure layer consisting of the compute clusters and storage backend. It also provides the abstraction of the physical resources for flexibility and extensibility, regardless of the hosting infrastructure being on-prem or cloud. For example, the compute clusters can be paired with different resource schedulers such as YARN and Uber’s Peloton\cite{peloton}. Similarly, the storage backend can adopt HDFS, Amazon S3\cite{s3} or Google Cloud Storage\cite{gcs} for the state checkpoints, to meet the various requirements for storage choice.
With these improvements, Flink has emerged as the de facto stream processing platform within Uber, powering several thousand jobs with a 30\% year-over-year growth rate. Based on user feedback, the current challenges in this layer include the need for seamless data backfills without writing additional code, which is discussed in detail in Section \ref{sec:backfill}. Furthermore, some critical use cases need the ability to restart a Flink job without any downtime, which is an active area of investigation in the Flink community.
\subsection{Apache Pinot for OLAP} \label{sec:pinot}
Apache Pinot\cite{im2018pinot} is an open-source, distributed, OnLine Analytical Processing (OLAP) system designed for performing low latency analytical queries on terabyte-scale data. Pinot employs the lambda architecture to present a federated view between real-time and historical (offline) data. As a column store, Pinot supports a number of fast indexing techniques, such as inverted, range, sorted and startree indexes\cite{im2018pinot}, to answer low-latency OLAP queries. Pinot takes a scatter-gather-merge approach to query large tables in a distributed fashion: data is chunked by time boundary and grouped into segments, the query is decomposed into sub-plans which execute on the distributed segments in parallel, and the partial results are then aggregated and merged into a final result.
We decided to adopt Apache Pinot as our OLAP solution for several reasons. At the time (2018), the only other options available were Elasticsearch\cite{elastic} and Apache Druid\cite{yang2014druid}. Based on our experimental evaluation and outside research, Apache Pinot has a smaller memory and disk footprint and supports significantly lower query latency SLAs. In particular:
\begin{itemize}
\item Elasticsearch: With the same amount of data ingested into Elasticsearch and Pinot, Elasticsearch's memory usage was 4x higher and its disk usage was 8x higher than Pinot's. In addition, Elasticsearch's query latency was 2x-4x higher than Pinot's, benchmarked with a combination of filter, aggregation and group by/order by queries.
\item Apache Druid: Pinot is similar in architecture to Apache Druid but has incorporated optimized data structures, such as bit-compressed forward indices, for lowering the data footprint. It also uses specialized indices for faster query execution, such as the Startree \cite{im2018pinot}, sorted and range indices, which can result in an order-of-magnitude difference in query latency. Recent studies done outside of Uber also confirmed this performance benefit of Pinot over Druid \cite{compareOlap} \cite{confluera}.
\end{itemize}
At Uber, Pinot powers a number of real-time analytics use cases. Various products build their customized dashboards on Pinot for visualizing and exploring important metrics such as rides demand-supply and UberEats order statistics. Another category of use cases stems from the need to execute analytical queries as part of many backend services. The primary distinguishing requirements for such use cases are data freshness and query latency, both of which need to be real-time in nature. For example, identifying rider cancellations or abandoned UberEats carts instantly enables quick corrective action in the form of messaging and incentives. We have contributed the following enhancements to Apache Pinot to handle Uber's unique requirements around high availability, rich query support and exactly-once semantics (i.e., upserts).
\subsubsection{Upsert support}
Upsert is a common requirement by many use cases inside Uber, such as correcting a ride fare and updating a delivery status. We designed and developed a scalable upsert solution on Pinot, so that records can be updated during the real-time ingestion into the OLAP store. To the best of our knowledge, Apache Pinot is the only open-source real-time OLAP store that supports upsert. The key technical challenge for upsert is tracking the locations of the records with the same primary key. In a real-time system, it’s very complicated and inefficient to keep track of these locations in a centralized manner and coordinate with distributed storage nodes. To overcome this challenge, we organize the input stream into multiple partitions by the primary key, and distribute each partition to a node for processing. As a result, all the records with the same primary key are assigned to the same node. On top of that, we introduced a new routing strategy that dispatches subqueries over the segments of the same partition to the same node to ensure the integrity of the query result. Together they lead to a shared-nothing solution to this problem in Pinot. This shared-nothing solution has many advantages, including better scalability, elimination of single point of failure, and ease of operation.
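The following minimal sketch, with invented names and partition counts, illustrates the core idea: a stable hash of the primary key determines the partition, so every version of a record lands on the same node, and query routing mirrors the same assignment.
\begin{verbatim}
// Sketch of key-based partition assignment and matching query routing.
public class UpsertPartitioner {
  // All records with the same primary key map to the same partition.
  static int partitionFor(String primaryKey, int numPartitions) {
    return Math.floorMod(primaryKey.hashCode(), numPartitions);
  }

  // Query routing mirrors ingestion: subqueries over segments of the
  // same partition are dispatched to the same serving node.
  static String nodeFor(int partition, String[] servers) {
    return servers[partition % servers.length];
  }

  public static void main(String[] args) {
    String[] servers = {"pinot-server-1", "pinot-server-2", "pinot-server-3"};
    int partition = partitionFor("ride-12345", 8);
    System.out.println("partition=" + partition + " node=" + nodeFor(partition, servers));
  }
}
\end{verbatim}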
\subsubsection{Full SQL support }
Pinot is an OLAP system that excels at low-latency queries, backed by a rich set of indexing techniques. However, it lacks several notable SQL features such as subqueries and joins. To fill this gap, we integrated Pinot with Presto to enable standard PrestoSQL queries on Pinot tables\cite{pinot-sql}. In fact, Presto is the de facto query engine for interactive queries within Uber. This combination works well because we can combine Pinot's seconds-level data freshness with Presto's flexibility for complex queries. In addition, predicate and aggregation function pushdowns enable us to achieve sub-second latencies for such PrestoSQL queries, which is not possible with standard backends such as HDFS/Hive.
\subsubsection{Integration with the rest of Data ecosystem}
In large corporations like Uber, it’s a priority to improve engineering productivity and development velocity, in an environment where every product evolves at a fast pace. Towards this goal, we’ve spent a lot of time integrating Pinot with the rest of the Data ecosystem to ensure a seamless user experience\cite{operate-pinot}. Pinot integrates with Uber’s schema service to automatically infer the schema from the input Kafka topic and estimate the cardinality by sampling the messages. Pinot also integrates with FlinkSQL as a data sink, so customers can simply build a SQL transformation query and the output messages can be “pushed” to Pinot. Similar integrations have been added to Piper, Uber’s data workflow management system\cite{piper}, to create Pinot offline tables from Hive datasets via Spark.
\subsubsection{Peer-to-peer segment recovery}
The original design of Apache Pinot introduced a strict dependency on an external archival or ``segment store'' such as HDFS, Amazon S3, Google GCS and so on. During real-time data ingestion, completed segments had to be synchronously backed up to this segment store to recover from any subsequent failures. In addition, this backup was done through one single controller. Needless to say, this was a huge scalability bottleneck and caused data freshness violations. Moreover, any segment store failure caused all data ingestion to come to a halt. Our team designed and implemented an asynchronous solution wherein server replicas can serve the archived segments in case of failures. Thus, we replaced a centralized segment store with a peer-to-peer scheme, while still maintaining the same guarantees around data and query consistency. Lastly, this also solved the single node backup bottleneck and significantly improved overall data freshness.
\bigskip
With these improvements, Pinot adoption has grown significantly within Uber. In the two years since it was introduced into our data stack, the data footprint has grown from dozens of GBs to several hundreds of TBs. At the same time, the query workload has grown from a few hundred QPS (Queries Per Second) to tens of thousands of QPS. Our team continues to address the growing needs within Uber and is currently working on the following challenges:
{\bfseries Ability to perform low latency joins}: Currently joins are performed by Presto, which federates query execution across Pinot and Hive. However, this is done entirely in-memory in the Presto worker and cannot be used for critical use cases. We are contributing the ability to perform lookup joins to Pinot to support joining tables with commonly used dimension tables.
{\bfseries Semistructured (e.g. JSON) data support}: Users currently rely on a Flink job to preprocess an input Kafka topic with nested JSON format into a flattened-schema Kafka topic for Pinot ingestion. We are working with the community in building native JSON support for both ingestion and queries.
\subsection{HDFS for archival store }
At Uber, we use HDFS as the long-term storage for all data. Most of this data comes from Kafka in Avro format and is persisted in HDFS as raw logs. These logs are then merged into the long-term Parquet data format using a compaction process and made available via standard processing engines such as Hive, Presto or Spark. Such datasets constitute the source of truth for all analytical data and are used to backfill data in Kafka, Pinot and even some OLTP or key-value store data sinks. In addition, HDFS is used by other platforms for managing their own persistent storage. For instance, Apache Flink uses HDFS for maintaining job checkpoints, which consist of all the input stream offsets as well as snapshots of the Flink job's internal state per container. Furthermore, Apache Pinot uses HDFS for long-term segment archival, which is crucial for correcting failed replicas or during server bootstrap.
\subsection{Presto for Interactive Query}
Traditionally, in the big data ecosystem a distributed SQL query engine such as Hive\cite{hive} is used for processing batch datasets, where the emphasis is on query flexibility rather than ingestion or query latency. In recent years, there has been increasing demand for interactive analytics workloads that derive insights quickly, and at Uber we adopted Presto\cite{presto} as the interactive query engine. Presto is an open-source, distributed query engine originally developed by Facebook. It was designed from the ground up for fast analytical queries against large-scale datasets by employing a Massively Parallel Processing (MPP) engine and performing all computations in memory, thus avoiding the materialization overhead of writing intermediate results to disk.
Moreover, Presto is designed to be flexible and extensible. It provides a Connector API with a high-performance I/O interface to multiple data sources, including Hadoop data warehouses, RDBMSs and NoSQL systems. At Uber, data scientists and engineers often want to explore real-time data to enhance the sensitivity of the corresponding features or models. To achieve this, we leveraged Presto's connector model and built a Pinot connector that deeply integrates with Apache Pinot so that we can execute standard Presto SQL queries on fresh data. One challenge we overcame during this connector development was being intelligent and selective about which parts of the physical plan can be pushed down to the Pinot layer. Our first version of the connector only included predicate pushdown, given the limited connector API. In order to lower query latency and leverage Pinot's fast indexing, we enhanced Presto's query planner and extended the Presto Connector API to push as many operators down to the Pinot layer as possible, such as projection, aggregation and limit.
\section{Use cases analysis} \label{sec:use-case}
In this section, we present several real-time use cases across the four broad categories (as in Figure \ref{fig:abstractions}) in production at Uber, show how they use the different systems to achieve their business goals, and discuss the design tradeoffs considered by these use cases.
\subsection{Analytical Application: Surge Pricing}
The surge\cite{garg2019driver} use case is a dynamic pricing mechanism in Uber ride-hailing marketplace to balance the supply of available drivers with the demand for rides. On the rider side, surge pricing reduces the demand to match the level of available drivers and maintains the reliability of the marketplace. On the driver side, it encourages drivers to drive during certain hours and locations, as drivers earn more during surge.
Surge pricing is essentially a streaming pipeline that computes pricing multipliers per hexagon-area geofence based on the trip data and the rider and driver status in a time window. The surge pricing pipeline ingests streaming data from Kafka, runs a complex machine-learning-based algorithm in Flink, and stores the result in a sink key-value store for quick lookup. Surge pricing favors data freshness and availability over data consistency: late-arriving messages do not contribute to the surge computation, and the pipeline must meet a strict end-to-end latency SLA on the calculation per time window. This tradeoff is reflected in the design: the surge pricing pipeline uses the Kafka cluster configured for higher throughput rather than lossless delivery, as well as an active-active setup for higher availability, which is described in Section \ref{sec:all-active}.
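As a rough illustration of the shape of such a pipeline, and not of the production implementation, which runs a far more complex model, the following Flink DataStream sketch reads trip events from a hypothetical Kafka topic, keys them by geofence and aggregates per window; the key extraction, aggregation and sink are stand-ins.
\begin{verbatim}
// Simplified surge-style pipeline: Kafka -> keyBy(geofence) -> window -> aggregate.
// Topic, broker and field names are hypothetical.
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class SurgeSketch {

  // Counts trip events per hexagon-area key within a window.
  static class CountEvents implements AggregateFunction<String, Long, Long> {
    public Long createAccumulator() { return 0L; }
    public Long add(String event, Long acc) { return acc + 1; }
    public Long getResult(Long acc) { return acc; }
    public Long merge(Long a, Long b) { return a + b; }
  }

  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    KafkaSource<String> trips = KafkaSource.<String>builder()
        .setBootstrapServers("kafka:9092")              // hypothetical brokers
        .setTopics("trip-events")                       // hypothetical topic
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();

    env.fromSource(trips, WatermarkStrategy.noWatermarks(), "trip-events")
        .keyBy(SurgeSketch::hexagonId)                  // group by geofence
        .window(TumblingProcessingTimeWindows.of(Time.minutes(2)))
        .aggregate(new CountEvents())                   // stand-in for the pricing model
        .map(demand -> "windowed demand: " + demand)    // in production: a key-value store sink
        .print();

    env.execute("surge-pricing-sketch");
  }

  // In the real pipeline the hexagon id is parsed from the event payload.
  static String hexagonId(String rawEvent) { return rawEvent; }
}
\end{verbatim}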
\subsection{Dashboards: UberEats Restaurant Manager}
Dashboards are popular for observing trends and spotting anomalies at a glance, and at Uber many engineering teams build customized dashboards on top of the real-time analytics systems. Among them, the UberEats Restaurant Manager is a good representative example. This dashboard enables a restaurant owner to get insights from UberEats orders regarding customer satisfaction, popular menu items, sales and service quality, via generated interactive, slice-and-dice queries.
At a high level, Restaurant Manager demands fresh data and low query latency, but does not require much flexibility, since the patterns of the generated queries are fixed. To meet these requirements, we used Pinot with efficient pre-aggregation indices over the large volume of raw records in order to reduce serving time. We also built preprocessors in Flink, performing aggressive filtering, partial aggregation and roll-ups, to further reduce the processing time in Pinot and meet the latency SLA. With such preprocessing, we trade the query flexibility required for ad-hoc exploration, and additional complexity in query evolution, for lower latency.
In general, there is a tradeoff between processing at the transformation time, as done by Flink, and processing at query time, as done by Pinot. The preprocessing during transformation time can create optimized indices and reduce the amount of data for serving, but it reduces the query flexibility on the serving layer.
\subsection{Machine Learning: Real-time Prediction Monitoring} \label{sec:ml-monitoring}
Machine learning (ML) has been playing a crucial role within Uber to create seamless, impactful experiences for our customers\cite{michelangelo}. To ensure ML model quality, it is critical to monitor its predictions so as to ensure that the data pipelines are continuing to send accurate data. To address this, a real-time prediction monitoring pipeline is set up that joins the predictions to the observed outcomes (or labels) generated by the data pipeline, creating ongoing, live measurements of model accuracy.
The key requirement from this use case is scalability, due to the high volume and high cardinality of data to be processed. With thousands of ML models deployed and each model having hundreds of features, there are several hundreds of thousands of time series with millions of data points computed per second, far beyond the capability of the time-series database inside Uber. Thanks to the horizontal scalability of Flink, we deployed a large streaming job to aggregate the metrics and detect prediction abnormalities. To boost query performance over the large number of data points, the Flink job also writes pre-aggregated results to Pinot tables.
This real-time prediction monitoring pipeline represents a large number of use cases that build real-time OLAP cubes with pre-aggregates and indices in Pinot, to speed up query execution time and throughput for large scale datasets.
\begin{table}
\caption{The components used by the example use cases}
\label{tab:components}
\begin{tabular}{ m{4em} | m{0.8cm} | m{1.3cm} | m{1.6cm} | m{1.3cm} }
\toprule
& Surge & Restaurant Manager & Real-time Prediction Monitoring & Eats Ops Automation\\
\midrule
API & Y & & Y & \\
SQL & & Y & Y & Y \\
OLAP & & Y & Y & Y \\
Compute & Y & Y & Y & Y \\
Stream & Y & Y & Y & Y \\
Storage & & Y & Y & \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Ad-hoc Exploration: UberEats Ops Automation}
The UberEats team needed a way to execute ad-hoc analytical queries on real-time data generated by couriers, restaurants and eaters. Once an insight was discovered, a subsequent need was to productionize the query in a rule-based automation framework. This was a critical component used by the Ops team to combat COVID-19 and keep restaurants open in different geographical regions such as Europe. To comply with regulation and safety rules, Uber needed to limit the number of customers and couriers at a restaurant. The Ops team was able to identify such metrics using Presto on top of real-time data managed by Pinot and then inject such queries into the automation framework.
This framework uses Pinot to aggregate needed statistics for a given geographical location in the past few minutes and then generates alerts and notifications to the couriers and restaurants accordingly. Thus the same infrastructure provided a seamless path from ad-hoc exploration to production rollout. Needless to say, the underlying system has to be extremely reliable and scalable since this decision making process is critical not only to the business but also for the safety of the customers. Pinot, Presto and Flink were able to scale easily with the organic data growth and performed reliably during peak hours.
\bigskip
To summarize, Table ~\ref{tab:components} shows the components in the real-time infrastructure used by the representative use case for each category.
\section{All-active strategy} \label{sec:all-active}
Providing business resilience and continuity is a top priority for Uber. Disaster recovery plans are built carefully to minimize the business impact from natural and man-made disasters, such as power outages, catastrophic software failures and network outages. At Uber, we rely on a multi-region strategy that ensures services are deployed with backups in geographically distributed data centers, so that when the physical infrastructure in one region is unavailable, the service can stay up and running from other regions.
The foundation of this multi-region real-time architecture is a multi-region Kafka setup that provides data redundancy and traffic continuation support for its clients. In fact, the majority of the services in the stack above depend on Kafka for the active/active setup. For example, Figure \ref{fig:active-active} below shows how Uber’s dynamic pricing service (i.e. surge pricing) uses active-active Kafka to build the disaster recovery plan. All the trip events are sent over to the Kafka regional cluster and then aggregated into the aggregate clusters for the global view. Then in each region a complex Flink job with large-memory footprint will compute the pricing for different areas. Each region has an instance of ‘update service’ and one of them is labelled as primary by an all-active coordinating service. The update service from the primary region stores the pricing result in an active/active database for quick lookup.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{active-active.png}
\caption{The active-active setup for surge pricing}
\label{fig:active-active}
\end{figure}
When disaster strikes the primary region, the active-active service assigns another region to be the primary, and the surge pricing calculation fails over to another region. It’s important to note that the computation state of the Flink job is too large to be synchronously replicated between regions, and therefore its state must be computed independently from the input messages from the aggregate clusters. Given that the input to the Flink job from aggregate Kafka is consistent across all regions, the output state converges. This approach is compute intensive since we’re running redundant pipelines in each region.
The other strategy is to consume Kafka in an active/passive mode: only one consumer (identified by a unique name) is allowed to consume from the aggregate clusters in one of the regions designated as the primary region at a time. When disaster happens, the service can fail over to another region and resume its consumption progress. Such active/passive mode is desirable for the services that favor strong consistency such as payment processing and auditing.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{active-passive.png}
\caption{The active-passive setup for stronger consistency}
\label{fig:active-passive}
\end{figure}
As the consumption progress is represented by the offset of the Kafka topic, the key challenge of the active/passive strategy is offset synchronization of consumers across regions. Because many services at Uber cannot accept any data loss, in case of a failover the consumer can neither resume from the high watermark (i.e., the latest messages), which would skip unprocessed data, nor from the low watermark (i.e., the earliest messages), which would create too large a backlog. In order to overcome the challenge of offset mapping across regions, we developed a sophisticated offset management service at Uber. As shown in Figure \ref{fig:active-passive}, when uReplicator (introduced in Section \ref{sec:kafka-cross-dc}) replicates messages from the source cluster to the destination cluster, it periodically checkpoints the offset mapping from source to destination in an active-active database. Meanwhile, an offset sync job periodically synchronizes the offsets between the two regions for the active-passive consumers. So when an active/passive consumer fails over from one region to another, it can take the latest synchronized offset and resume consumption.
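The failover step itself can be illustrated with a small, hypothetical sketch: given the latest synchronized offsets for each partition, the consumer in the newly active region seeks to those offsets before resuming. The offset map here stands in for a lookup against the offset management service.
\begin{verbatim}
// Sketch of resuming an active/passive consumer in a new region from
// synchronized offsets (the offset map is a stand-in for the offset
// management service).
import java.time.Duration;
import java.util.Collection;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ActivePassiveFailover {

  static void resumeInNewRegion(KafkaConsumer<byte[], byte[]> consumer,
                                Collection<TopicPartition> partitions,
                                Map<TopicPartition, Long> syncedOffsets) {
    consumer.assign(partitions);
    for (TopicPartition tp : partitions) {
      // Resume from the latest offset synchronized across regions,
      // rather than from the high or low watermark.
      consumer.seek(tp, syncedOffsets.getOrDefault(tp, 0L));
    }
    ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
    System.out.println("resumed with " + records.count() + " records in the first poll");
  }
}
\end{verbatim}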
\section{Backfill} \label{sec:backfill}
There is a recurring need to go back in time and reprocess the stream data at Uber, for several reasons. First, a new data process pipeline often needs to test against the existing data, or a new machine learning model often needs to be trained with a few months of data. To save time and be able to iterate faster, the testing or training is done on historic data that is already available. Second, sometimes a bug may be discovered in a real-time application that has already processed the data for a period. In such cases there could be a desire to reprocess some/all of the data separately after fixing the bug. Third, similar to the previous case, there can be a change of stream processing logic that necessitates reprocessing of old data.
The backfill problem appears to be a common problem wherever there is realtime big data processing. Lambda\cite{lambda} and Kappa\cite{kappa} architectures have been proposed in this respect but both suffer from limitations. Lambda architecture maintains two separate systems: one for batch, and one for stream processing. This leads to maintenance and consistency issues when trying to keep both implementations in sync. Kappa architecture improves upon this by using the same streaming code for both real-time and backfill processing but requires very long data retention in Kafka and may not be very efficient in terms of processing throughput. Given the scale of data generated into Kafka at Uber and the operational concerns regarding node replacement, we limit Kafka retention to only a few days. Therefore, we're unable to adopt the Kappa architecture.
At Uber, we built a solution for ease of backfill for stream processing use cases using Flink which has 2 modes of operations:
\begin{itemize}
\item SQL based: We added the ability to execute the same SQL query on both real-time (Kafka) and offline datasets (Hive). In this case, the FlinkSQL compiler will translate the SQL query to two different Flink jobs: one using DataStream API and the other using DataSet API. Although this is similar to Lambda architecture, the user does not need to maintain 2 distinct jobs.
\item API based: This solution is internally named Kappa+\cite{kappa}. The Kappa+ architecture reuses the stream processing logic just like the Kappa architecture, but can directly read archived data from offline datasets such as Hive. Kappa+ addresses several issues of processing batch datasets with streaming logic, such as identifying the start/end boundary of the bounded input, handling the higher throughput from historic data with throttling, and fine-tuning job memory, since offline data can be out of order and therefore demands larger buffering windows. Effectively, using Kappa+ we can execute the same code with minor configuration changes on both streaming and batch data sources (a sketch of this shared-logic idea follows the list).
\end{itemize}
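A minimal sketch of the shared-logic idea behind both modes is shown below: the transformation is written once and bound either to an unbounded Kafka source or to a bounded stand-in for an archived dataset. The source and topic names are hypothetical, and the throttling, ordering and boundary handling of Kappa+ are omitted.
\begin{verbatim}
// One transformation, two sources: unbounded (Kafka) or bounded (backfill stand-in).
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BackfillSketch {

  // The business logic, shared by real-time and backfill runs.
  static DataStream<String> transform(DataStream<String> events) {
    return events.filter(e -> !e.isEmpty()).map(String::toUpperCase);
  }

  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    boolean backfill = args.length > 0 && args[0].equals("--backfill");

    DataStream<String> source;
    if (backfill) {
      // Stand-in for a bounded, archived dataset (e.g. Hive partitions).
      source = env.fromElements("event-a", "event-b", "event-c");
    } else {
      KafkaSource<String> kafka = KafkaSource.<String>builder()
          .setBootstrapServers("kafka:9092")            // hypothetical brokers
          .setTopics("rider-events")                    // hypothetical topic
          .setValueOnlyDeserializer(new SimpleStringSchema())
          .build();
      source = env.fromSource(kafka, WatermarkStrategy.noWatermarks(), "rider-events");
    }

    transform(source).print();
    env.execute(backfill ? "backfill-run" : "real-time-run");
  }
}
\end{verbatim}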
This is an active area of investigation and there are lots of edge cases that need to be handled in both these solutions. A full evaluation of each approach is out of scope of this paper.
\section{Related Work} \label{sec:related}
Real-time data infrastructure spans a wide range of components, and there are plentiful related systems in each area.
{\bfseries Messaging systems.} Traditional enterprise messaging systems such as ActiveMQ\cite{activemq}, RabbitMQ\cite{rabbitmq}, Oracle Enterprise Messaging Service\cite{oraclems} and IBM WebSphere MQ\cite{ibmsq} have existed for a long time and often play a critical role as an event bus for processing asynchronous data flows. However, none of them is comparable to Kafka in features, ecosystem and system performance. Recently, a new messaging system, Apache Pulsar\cite{pulsar}, emerged with a novel tiered architecture\cite{ramasamy2019unifying} that decouples data serving and data storage for better elasticity and easier operation. However, Pulsar is still relatively new and not as mature as Kafka.
{\bfseries Stream Processing Systems.} The need for highly-scalable stream processing systems has led to the creation of a number of systems in recent years, including both open-source software like Storm\cite{storm}, Samza\cite{samza}, Heron\cite{kulkarni2015twitter}, Spark Streaming\cite{zaharia2013discretized} and Apex\cite{apex}, and home-grown ones from large internet companies like Google's Photon\cite{ananthanarayanan2013photon}, Facebook's Puma/Swift\cite{chen2016realtime} and Amazon's Kinesis\cite{kinesis}. In addition to overcoming the challenges of scalability, efficiency and fault tolerance, another important ongoing trend for these systems is the unification of streaming and batch processing. Systems like Apache Flink\cite{carbone2015apache} are expanding their architecture to support batch processing use cases, while frameworks like Dataflow\cite{akidau2015dataflow} and Apache Beam\cite{beam} approach this via an abstraction layer over different processing engines.
{\bfseries Real-time OLAP Systems.} Real-time OLAP systems have become popular in recent years, as modern businesses need to quickly transform freshly obtained data into insights. Apache Druid\cite{yang2014druid} and Clickhouse\cite{clickhouse} are the open-source systems commonly adopted in the industry. Like Pinot, they buffer ingested streams and utilize column stores to achieve efficient column scans. Helios\cite{potharaju2020helios} is a similar real-time OLAP store developed and used at Microsoft. Another way to improve the performance of an OLAP system is to pre-aggregate data into cubes and then execute queries on the pre-aggregated data \cite{kylin}. However, such performance improvements come at the expense of query flexibility. HTAP databases are another category of emerging systems that unify transactional and analytical processing in a single system \cite{kemper2011hyper} \cite{huang2020tidb} \cite{farber2012sap}. However, one challenge is cleanly separating the two so that analytical queries do not interfere with the operational workload\cite{psaroudakis2014scaling}, despite some recent attempts on this problem\cite{makreshanski2017batchdb}.
{\bfseries SQL systems.} Systems that run SQL against large datasets have become popular over the past decade. Each of these systems presents a unique set of tradeoffs, and a comprehensive examination of the space is outside the scope of this paper. Apache Hive\cite{hive} was originally developed at Facebook to provide a SQL-like interface over data stored in HDFS\cite{hdfs}, and Dremel\cite{melnik2020dremel} is an exa-scale columnar SQL query engine used at Google that is optimized for large, complex ad-hoc queries. Spark SQL\cite{armbrust2015spark} is a more modern system built on the popular Spark engine, addressing many of the limitations of MapReduce\cite{dean2008mapreduce}. Systems like MySQL\cite{mysql}, Impala\cite{impala} and Drill\cite{drill} are common open-source systems that can be used for analytical purposes. In recent years, more SQL systems have extended support for querying real-time data. Procella\cite{chattopadhyay2019procella} is a highly scalable and performant SQL engine used in YouTube with native support for the lambda architecture and low-latency data ingestion. Trill\cite{chandramouli2014trill} is a query processor from Microsoft that handles streaming and relational queries with early results, across the latency spectrum from real-time to offline. We chose the open-source Presto for its interactiveness, flexibility and extensibility, which make it easy to integrate with other data sources and databases via the Connector API, and enhanced it with real-time data availability via Pinot.
Real-time data powers many use cases at other very large-scale internet companies. Chen et al.\cite{chen2016realtime} presented the real-time data processing and analytics infrastructure at Facebook. It shares a similar full-stack architecture with ours, operates at a similar scale and targets latencies of seconds. Most components in their architecture were developed in house and remain proprietary, while at Uber we tried to adopt open-source solutions and leverage the wider community. F1 Lightning\cite{yang2020f1} is an HTAP solution from Google that provides analytical processing over change data streamed from transactional stores, and highlights a federated query engine loosely coupled with multiple transactional data stores such as F1 DB\cite{shute2012f1} and Spanner\cite{corbett2013spanner}. The real-time data infrastructure we built at Uber not only integrates with transactional data via Change Data Capture (CDC), but also works directly over natively generated streaming data. Recently, hybrid serving and analytical processing (HSAP) has emerged as a new kind of architecture that fuses analytical processing and serving as well as online and offline analysis. Alibaba's Hologres\cite{jiang2020alibaba} is an example of such an architecture that powers Alibaba's internal big data stack as well as its public cloud offerings. Though such a hybrid architecture can lead to more efficiency and enable more optimization, at Uber we chose to employ loosely coupled, independent systems for ease of customization and evolution of each component.
\section{Lessons learned} \label{sec:lessons}
We have learned many lessons in our journey of building and scaling the real-time data infrastructure at Uber.
\subsection{Open source adoption}
As seen before, most of the real-time analytics stack and in fact the larger data stack in Uber has been built on open source technologies. The primary reason behind this philosophy is the need to iterate quickly. The engineering needs at Uber are constantly evolving and the ability to deliver a quick solution is crucial. Relying on open source gives us a strong foundation to build upon and reduces the time to market. Naturally, this also helps in handling churn in a graceful way.
However, this is not without its challenges. In our experience, most open-source technologies were built for a specific purpose, and at Uber we had to make them work across many dimensions, such as a wide spectrum of use cases, programming languages, Uber's underlying infrastructure, security aspects and so on. For instance, Apache Kafka (circa 2014) was meant to be used primarily for log propagation with Java applications. We had to build a RESTful ecosystem around Kafka to make it work with the four languages in use at Uber: Java, Python, Go and NodeJS. This also meant we had to invest in building our own throttling mechanism, metadata discovery, client-side failure handling and so on. In addition, we customized the core routing and replication mechanisms to handle specialized use cases such as zero data loss for financial data and a dead letter queue on top of Kafka. Other examples of customization include integrating with Uber's container ecosystem and security policies, building a full SQL layer on top of Apache Pinot for our non-engineering audience, and seamless backfill using Apache Flink.
\subsection{Rapid system development and evolution}
For a large company like Uber, it’s common to see multiple driving forces to the architecture evolution, such as new business requirements, industrial trends, regulation and compliance, and growth. As a result, one lesson we learned is on the importance of enabling rapid software development so that each system can evolve quickly and independently.
On the client side, it's important to set up best practices to manage the large fleet of applications. First, interface standardization is critical so that a clean boundary is established between services, minimizing the risk of breaking clients. At Uber, we leverage Monorepo\cite{monorepo} to manage all projects in a single code repository, so that changes can be reviewed by the stakeholders and issues detected early. Second, a thin client is always preferred in order to reduce the frequency of client upgrades. For example, upgrading Kafka clients used to take several months prior to the introduction of a RESTful, thin Kafka client. Third, language consolidation is another strategy we employ to reduce the number of clients and ways of interacting with the system. For low-level programming languages, we purposely reduced support to only two languages, Java and Golang; for high-level SQL, we chose PrestoSQL as the common language for the majority of the use cases, and built connectors to other databases (e.g., Pinot, MySQL).
On the server side, we integrated all our infrastructure components with Uber’s proprietary CI/CD (Continuous Integration/ Continuous Deployment) framework. This ensures that open source software updates as well as internal feature additions are continuously tested and deployed in a staging environment. This also enables continuous end-to-end testing for the mission critical applications and minimizes any production issues.
\subsection{Ease of operation and monitoring}
Scaling the infrastructure is always a challenge. With rapid business growth, the engineering teams constantly revisit capacity and add more nodes, clusters and data centers. Typically, the speed of scaling physical infrastructure is much faster than scaling the engineering team. As a consequence, lots of manual operations today must be automated in order to sustain the business growth. In fact, at Uber we strategically invested in automation and built declarative frameworks to orchestrate the system deployments. System operators express high-level intentions on operations like cluster turn up and down, resource reallocation, or traffic rebalancing, and the frameworks carry out the instructions without engineer intervention via techniques like configuration generation, containerization and predefined maintenance workflows.
Real-time monitoring and alerting is critical for system reliability and minimizing negative business impact. In addition to cluster wide monitoring, we also provide automated dashboards, alerts and chargeback mechanisms for each use case pertaining to Kafka, Flink or Pinot. This enables the use case owner to monitor health as well as optimize resource utilization.
\subsection{Ease of user onboarding and debugging}
Given the small number of engineering teams maintaining the underlying technologies, it's important to build a self-serve system that automates most of the user onboarding, failure handling and triaging for our users. With this in mind, we invested in the following areas to overcome the challenge of scaling users:
{\bfseries Data discovery.} We use a centralized metadata repository within Uber which is the source of truth for schemas across both realtime and offline systems such as Kafka, Pinot and Hive. This makes it very convenient for users to discover the required datasets. In addition, this system also tracks the data lineage representing flow of data across these components.
{\bfseries Data auditing.} Business events generated by applications are constantly audited in micro batches from the source all the way to archival. Each event is decorated by the Kafka client with additional metadata, such as a unique identifier, application timestamp, service name and tier. As the events flow from Kafka (regional, aggregate) to Flink, Pinot or Hive, this metadata is used for tracking data loss and duplication at every stage of the data ecosystem, as described in Section \ref{sec:kafka-cross-dc}. This makes it very easy for users to detect issues across all of Uber's data centers.
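A small, hypothetical sketch of this decoration step is shown below, attaching audit metadata to each event as Kafka record headers; the header names are illustrative and do not reflect Uber's actual audit schema.
\begin{verbatim}
// Sketch: attach audit metadata as Kafka record headers so downstream
// stages can track loss and duplication (header names are illustrative).
import java.nio.charset.StandardCharsets;
import java.util.UUID;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AuditDecorator {

  static ProducerRecord<String, byte[]> decorate(String topic, byte[] payload,
                                                 String serviceName) {
    ProducerRecord<String, byte[]> record = new ProducerRecord<>(topic, payload);
    record.headers()
        .add("audit.uuid", UUID.randomUUID().toString().getBytes(StandardCharsets.UTF_8))
        .add("audit.app_timestamp",
             Long.toString(System.currentTimeMillis()).getBytes(StandardCharsets.UTF_8))
        .add("audit.service", serviceName.getBytes(StandardCharsets.UTF_8))
        .add("audit.tier", "tier-1".getBytes(StandardCharsets.UTF_8));
    return record;
  }

  public static void main(String[] args) {
    ProducerRecord<String, byte[]> r =
        decorate("rider-events", "{}".getBytes(StandardCharsets.UTF_8), "rider-service");
    r.headers().forEach(h -> System.out.println(h.key()));
  }
}
\end{verbatim}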
{\bfseries Seamless onboarding.} Kafka topics used for application logs are automatically provisioned when the corresponding service is deployed in the production environment. These topics are also automatically expanded as the usage increases along with quota enforcement for limiting the maximum capacity. In a similar vein, users can automatically create Flink and Pinot pipelines using a convenient drag and drop UI that hides the complex sequence of provisioning and capacity allocation\cite{uworc}.
\section{Conclusion} \label{sec:conclusion}
As seen in this paper, the real-time data infrastructure has proliferated at Uber, and the whole stack is powering a lot of mission-critical use cases within Uber. This stack has been optimized for flexibility and scale for different user categories and has been running reliably in production for several years, processing multiple petabytes of data per day. The adoption of open source technologies saved a lot of engineering cost and drastically reduced time to market for our analytical products. The unique contributions by Uber’s engineering teams to all these technologies helped overcome the 3 fundamental scaling challenges, which is summarized below:
{\bfseries Scaling Data} Introduction of Kafka thin client libraries, cluster federation and other techniques discussed above have enabled seamless adoption of Kafka by every service in Uber, making it one of the largest deployments in the world. It provides a robust foundation for orchestrating Flink and Pinot data pipelines that are being leveraged for mission critical use cases across all business units. Flink job automation in terms of deployment and failure recovery has promoted widespread adoption with low operational overhead. We were also able to overcome the lack of high availability SLA for our data archival (HDFS) layer with the investments in Flink’s robust checkpoints and Pinot’s peer-to-peer segment recovery scheme.
{\bfseries Scaling use cases} We invested heavily in the flexibility of individual technologies for powering the varied use cases described above. For instance, with the same client protocol (Apache Kafka consumer) we are able to serve a wide spectrum of use cases, from logging, which trades off data consistency for high availability, to disseminating financial data, which needs zero-data-loss guarantees in a multi-region ecosystem. Similarly, Pinot provides a low-latency OLAP layer for mission-critical use cases as well as real-time data exploration via Presto integration. Each such technology can be finely tuned depending on the exact set of requirements.
{\bfseries Scaling users} Finally, we were able to add a layer of indirection between our users and the underlying technologies using abstractions and standard interfaces, greatly reducing the user support cost. For instance, the introduction of the FlinkSQL layer enabled data scientists and operations personnel to spin up complex Flink pipelines in a matter of a few hours with just basic SQL knowledge. Anyone within Uber can use PrestoSQL to query data across Pinot and other data systems (e.g., Hive) in a seamless manner. Backfilling data across regions is as simple as clicking a button to execute the same query or code over historical data. Moreover, these abstractions provide an extensible framework for us to evolve the underlying technologies and implement future optimizations such as tiered storage.
\section{Future work} \label{sec:future}
Our systems continue to evolve to serve our users better. We have identified a few areas that we will invest strategically in and provide better solutions.
{\bfseries Streaming and batch processing unification} There are several use cases that demand both batch and stream processing, such as the lambda architecture and offline/real-time feature computing for machine learning. It’s common for the users to express the same processing logic twice in different languages and run on different compute frameworks. A unified processing solution will ease the development and pipeline management.
{\bfseries Multi-region and multi-zone deployments} We are working on a multi-region-multi-zone strategy to push the scalability and reliability of our real-time data infrastructure to the next level to tolerate zone-level and region-level disasters. The biggest challenge here is to optimize data placement in order to balance data redundancy for reliability and storage cost due to excessive copies.
{\bfseries On-prem and cloud agnostic} In recent years, the ability to run system infrastructure in the cloud environment has gained a lot of importance. At Uber, we are also looking at cloud adoption and are investigating ways of converting the systems to be agnostic of data centers or cloud, so that we can move freely from on-prem to cloud.
{\bfseries Tiered storage} Storage tiering improves both cost efficiency by storing colder data in a cheaper storage medium as well as elasticity by separating data storage and serving layers. We are actively investigating tiered storage solutions for both Kafka and Pinot and collaborating closely with the open source community in this regard.
\section{Acknowledgement}
Real-time data infrastructure at Uber is an evolving architecture built through a multi-year effort by several teams. Many engineers, PMs and management leaders contributed to our systems, and we would like to thank them for their contributions.
\bibliographystyle{ACM-Reference-Format}
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
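For reference, a complete entry in the \verb|.bib| file might look like the following (all field values here are illustrative placeholders, not a real reference):
\begin{verbatim}
@article{example:2024,
  author  = {Ada Lovelace and Charles Babbage},
  title   = {A Purely Illustrative Article Title},
  journal = {Journal of Examples},
  year    = {2024},
  volume  = {1},
  number  = {2},
  pages   = {10--19},
  doi     = {10.0000/0000000},
}
\end{verbatim}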
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
|
1,314,259,995,622 | arxiv | \section{Introduction}
The quantum nature of gravity is one of the greatest open questions in fundamental physics, and despite decades of effort no complete theory of quantum gravity has been forthcoming. A major obstacle in this endeavour has been the paucity of experimental data constraining potential quantum gravitational effects in any meaningful way, which is a consequence of the incredible weakness of gravity compared to the other known fundamental forces. Indeed, significant quantum gravity effects are generally only expected at the Planck scale, meaning extremely high energies ($E \sim M_{\rm{Planck}} \sim 1.2 \times 10^{19}$ GeV, i.e. the \textit{Planck mass}), or small distances ($L \sim L_{\rm{Planck}} \sim 1.6 \times 10^{-35}$ m, i.e. the \textit{Planck length}).
In recent years however, experimental constraints on potential Planck scale quantum gravity effects have been achieved using high-energy particles of cosmological origin, exploiting observations of photons from distant gamma ray bursts (GRBs), quasars and quiescent gas clouds~\cite{Lieu:2003ee, Abdo2009, HESS:2011aa, Perlman_2015, Vasileiou2015, PhysRevD.99.083009, Cooke:2020rco}, and the high-energy astrophysical neutrinos observed by neutrino telescopes~\cite{AMELINOCAMELIA2016318, ELLIS2019352, PhysRevD.102.063027, Wei:2018ajw} such as the IceCube neutrino observatory~\cite{Aartsen:2016nxy}. These measurements have achieved sensitivity to very weak effects due to the vast distances traversed by the observed particles, potentially allowing even weak effects to accumulate into measurable signals.
In the absence of an accepted model of quantum gravity, heuristic models of the potential characteristics or effects of quantum gravity are often invoked in experimental searches. A common expectation of quantum gravity is that the structure of space-time itself could be subject to the uncertainty principle and fluctuate at very small distance scales~\cite{PhysRev.97.511, misner1973gravitation}. For instance, the very geometry or curvature of space-time may fluctuate, in turn introducing intrinsic uncertainty/fluctuations in defining distance and time. Additionally, it has been conjectured that the fluctuating nature of space-time could manifest as \textit{virtual black holes} (VBH)~\cite{Hawking:1995ag, tHooft:2018waj}, the quantum gravitational analogue of the virtual electron-positron pairs in the well-known phenomenon of \textit{vacuum polarisation} in quantum electrodynamics (QED). This uncertain/fluctuating space-time is variously referred to as \textit{space-time foam}, \textit{quantum foam} or \textit{fuzzy space-time}~\cite{Hawking, PhysRev.97.511}.
A direct consequence of these space-time fluctuations is so-called \textit{lightcone fluctuations}, i.e. an intrinsic variability in the travel distance/time -- or indeed velocity -- for a particle propagating through this fluctuating space-time~\cite{PauliLightcone, RevModPhys.29.417, PhysRevLett.13.114, Ford_1995}. This variability can in principle produce measurable signals, such as a variability in arrival times of particles from distant sources such as GRBs~\cite{AMELINOCAMELIA2016318, Vasileiou2015}, and interference effects between otherwise coherent wave-like phenomena such as image degradation in $\gamma$-ray astronomy~\cite{Lieu:2003ee, Perlman_2015} or neutrino flavour decoherence~\cite{Hawking, Ellis:1983jz, Mavromatos2005, Anchordoqui:2005gj}. Lightcone fluctuation effects have been proposed in the contexts of D-brane recoils~\cite{Ellis:1999jf}, compactified space-times~\cite{Yu_2009}, gravitons~\cite{Ford_1995} and loop quantum gravity~\cite{PhysRevD.59.124021}.
Searches for signatures of lightcone fluctuations offer one of only a handful (and arguably the most model independent) avenues to experimentally probe quantum gravity. To date, constraints on lightcone fluctuations resulting from fluctuating space-time largely derive from astrophysical photon observations. Neutrino signals however are less well explored, and offer a number of advantages over other cosmic messenger particles. The feeble interactions between neutrinos and matter allow them to travel vast distances completely unhindered, unlike photons for which the Universe is opaque at high energies. Additionally (and relatedly), astrophysical neutrinos are observed at energies far in excess of cosmological photon sources, reaching PeV and potentially EeV energies (compared to TeV for photons).
In a previous work~\cite{PhysRevD.102.115003} we investigated quantum gravity signals resulting from neutrino interactions with VBHs, demonstrating that sensitivity to Planck scale physics is achievable with atmospheric neutrinos (travelling terrestrial baselines). Here we instead investigate neutrino signals from lightcone fluctuations in a heuristic model of fluctuating space-time, including arrival time spread and neutrino decoherence, and consider the expected size of these signals from `natural' Planck scale physics and their detection prospects. In particular, we evaluate for the first time the impact of travel distance uncertainty models employed in $\gamma$-ray quantum gravity searches on neutrino flavour measurements, determining an operator representing decoherence from lightcone fluctuations in the formalism of open quantum systems. This allows experimental constraints on neutrino decoherence to be interpreted with respect to underlying Planck scale fluctuations, and directly compared to $\gamma$-ray results.
\section{Lightcone fluctuations}
\label{sec:lightcone_fluctuations}
Here we present a heuristic model of lightcone fluctuations, specifically of the accumulated uncertainty in a particle's travel distance as a function of distance and particle energy.
The fundamental parameter of this model is the distance uncertainty, $\delta L_{0}$, associated with a particle travelling a reference distance, $L_0$. The accumulation of this uncertainty over a distance $L$ is expressed as:
\begin{equation}
\delta L(L) = \delta L_{0} \left( \frac {L} {L_0} \right)^m ,
\label{eq:deltaL_no_E_dep}
\end{equation}
\noindent where the distance dependence is assumed to follow a power-law characterised by the index $m$, which is a free parameter of the model. $m$ can be predicted for a given concrete fluctuating space-time model, or instead can be fitted to data. Interpretation of the value of $m$ is discussed in Section \ref{sec:distance_dependence}.
We additionally consider the possibility that this distance uncertainty has a dependence on the particle's energy, given that Planck scale physics is commonly expected to be suppressed at energies below $M_{\rm{Planck}}$. An intuitive picture of this is that lower energy particles are less able to resolve the microscopic fluctuating nature of space-time. We therefore modify \Cref{eq:deltaL_no_E_dep} to include a power-law energy dependence characterised by the index $n$, which like $m$ can be either predicted or fit to data, and a reference energy scale, $E_0$:
\begin{equation}
\delta L(E, L) = \delta L_{0} \left( \frac {L} {L_0} \right)^m \left( \frac {E} {E_0} \right)^n .
\label{eq:deltaL}
\end{equation}
Similar phenomenological forms for the energy dependence of Planck scale physics have been assumed in neutrino decoherence searches~\cite{PhysRevLett.85.1166, Anchordoqui:2005gj, Coloma:2018idr, PhysRevD.102.115003}.
When considering Planck scale physics, a `natural' choice of reference values is $E_0 = M_{\rm{Planck}}$ and $L_0 = L_{\rm{Planck}}$, yielding:
\begin{equation}
\delta L(E, L) = \delta L_{\rm{Planck}} \left( \frac {L} {L_{\rm{Planck}}} \right)^m \left( \frac {E} {M_{\rm{Planck}}} \right)^n ,
\label{eq:deltaL_planck}
\end{equation}
\noindent where $\delta L_{\rm{Planck}}$ then represents the uncertainty in travelling one Planck length. This parameter can be fit to experimental data, and given that the Planck length is expected to represent the smallest measurable distance in Nature, a `natural' expectation would be:
\begin{equation}
\delta L_{\rm{Planck}} = L_{\rm{Planck}}
\label{eq:deltaLplanck_natural}
\end{equation}
which leads to the following `natural' distance-uncertainty expression:
\begin{equation}
\delta L(E, L) = L_{\rm{Planck}}^{1-m} L^m \left( \frac {E} {M_{\rm{Planck}}} \right)^n .
\label{eq:deltaL_natural}
\end{equation}
In the absence of energy dependence (i.e. $n=0$), \Cref{eq:deltaL_natural} reduces to a form used in a number of previous works~\cite{Ng:2003ag, Ng:2004qq, Perlman_2015, Cooke:2020rco}, characterised by a single free parameter $\alpha$, referred to as the \textit{accumulation parameter}, which plays the role of $m$ in this work (the two are related by $\alpha = 1 - m$).
We caution that the energy scale of quantum gravity may differ from $M_{\rm{Planck}}$, and therefore experimental searches should keep an open mind as to the value of $E_0$.
\subsection{Interpretation of distance dependence}
\label{sec:distance_dependence}
The distance dependence defined in \Cref{eq:deltaL} can be characterised as~\cite{Ng:2004qq}:
\begin{itemize}
\item $m = 0$: The distance uncertainty has no distance dependence, i.e. does not accumulate. This implies either that the uncertainty is fundamentally distance independent (e.g. one might consider that the Planck length is the fundamental measurement precision limit of the Universe, regardless of the actual distance being measured), or that the fluctuations experienced by the particle as it travels are fully anti-correlated and cancel.
\item $m = 1/2$: The distance uncertainty accumulates as $\delta L(L) \propto L^{1/2}$, which is characteristic of the accumulation of uncorrelated fluctuations (the so-called \textit{random walk} model).
\item $m = 1$: The distance uncertainty accumulates as $\delta L(L) \propto L$. Such a scenario is expected if the fluctuations experienced by the particle are fully-correlated.
\end{itemize}
The cases $m = 0$ and $m = 1$ are therefore the bounding cases, representing the most pessimistic and optimistic scenarios respectively. The case $m=1/2$ can be considered a relatively `natural' scenario, implying that fluctuations in one region of space are independent of those in another spatially separated region. A mildly anti-correlated scenario consistent with the \textit{holographic principle} is given by $m=1/3$~\cite{Ng:2003jk, Perlman_2015}. The $m = 1/2$ and $m = 1$ scenarios are explicitly tested via Monte Carlo (MC) simulations in Section \ref{sec:decoh_mc}.
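These scalings can be seen from a simple accumulation argument. If a baseline $L = N L_0$ is built up from $N$ elementary steps, each with uncertainty $\delta L_{0}$, then uncorrelated (random walk) fluctuations add in quadrature whilst fully-correlated fluctuations add linearly:
\begin{displaymath}
\delta L_{\rm{uncorr}} = \sqrt{N} \, \delta L_{0} = \delta L_{0} \left( \frac{L}{L_0} \right)^{1/2} , \qquad
\delta L_{\rm{corr}} = N \, \delta L_{0} = \delta L_{0} \left( \frac{L}{L_0} \right) ,
\end{displaymath}
\noindent reproducing the $m=1/2$ and $m=1$ cases of \Cref{eq:deltaL_no_E_dep} respectively.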
\subsection{Natural expectation for distance uncertainty}
\label{sec:distance_uncertainty_natural}
\Cref{fig:delta_L_vs_L} shows the accumulation of distance uncertainty for the `natural' expectation defined by \Cref{eq:deltaL_natural}, for a range of different accumulation scenarios (i.e. $m$ values). The assumed particle energy is $E=M_{\rm{Planck}}$, i.e. this is a maximal bounding case, or alternatively represents particles of any energy if an energy independent scenario (i.e. $n = 0$) is assumed.
\begin{figure}[htp]
\centering
\includegraphics[trim=0.5cm 0.0cm 0.0cm 0.0cm, clip=true, width=\linewidth]{lightcone_fluc_delta_L_vs_L.pdf}
\caption{Distance uncertainty expected for the `natural' scenario given by \Cref{eq:deltaL_natural} for a particle with $E=M_{\rm{Planck}}$ (or $n=0$), as a function of particle travel distance. A number of reference distances are shown with dashed lines. Scenarios with differing $m$ are shown, with their interpretations discussed in Section \ref{sec:distance_dependence}.}
\label{fig:delta_L_vs_L}
\end{figure}
The distance uncertainty accumulated over large distances varies greatly depending on $m$. The most pessimistic case, $m=0$, yields an uncertainty of $L_{\rm{Planck}}$ regardless of distance, which is essentially unmeasurable. At the other extreme, the optimistic scenario, $m=1$, results in $\delta L \sim L$, which is only viable if such effects are suppressed at energies below the Planck scale (i.e. $n>0$).
For the more `natural' uncorrelated $m=1/2$ case, the distance uncertainty accumulated over cosmological distances is $\order{ \rm{\micro m - mm} }$, even for a particle with Planckian energy. This effect, although small, is potentially feasible to study. However, even a weak suppression with energy would render these effects unmeasurable at the particle energies we are able to observe.
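As a rough illustrative estimate, taking $m=1/2$, $n=0$ and a representative cosmological baseline of $L \approx 1$ Gpc $\approx 3.1 \times 10^{25}$ m in \Cref{eq:deltaL_natural} gives
\begin{displaymath}
\delta L \approx \sqrt{ L_{\rm{Planck}} L } \approx \sqrt{ (1.6 \times 10^{-35}\,\rm{m}) \times (3.1 \times 10^{25}\,\rm{m}) } \approx 2 \times 10^{-5}\,\rm{m} ,
\end{displaymath}
\noindent i.e. a few tens of microns, consistent with \Cref{fig:delta_L_vs_L}. If instead even a linear energy suppression is present ($n=1$), a PeV neutrino has $E/M_{\rm{Planck}} \sim 10^{-13}$ and the same baseline yields $\delta L \sim 10^{-18}$ m, far below any conceivable measurement.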
\subsection{Velocity fluctuations}
\label{sec:velocity_fluctuations}
In addition to fluctuations in travel distance, a related form of lightcone fluctuation that has been considered in the context of fluctuating space-time is velocity fluctuations~\cite{Vasileiou2015}, also referred to as \textit{stochastic Lorentz invariance violation}, which would result from any stochastic modifications to a particle's dispersion relation. Such a scenario is phenomenologically similar to distance fluctuations, as both result in fluctuations to a particle's travel time between two points. However, in the case of distance fluctuations, the particle's velocity remains unchanged from the particle's own perspective, whereas an observer sees an apparent fluctuation in velocity due to the fluctuating distance. The inverse is true when velocity fluctuations are the underlying mechanism.
A phenomenological form for velocity fluctuations proposed in \cite{Vasileiou2015} is:
\begin{equation}
\delta v = \delta v_{0} \left( \frac {E} {E_0} \right)^n ,
\label{eq:delta_v}
\end{equation}
\noindent where $\delta v_{0}$ represents velocity fluctuation for a particle with energy $E_0$, with the energy dependence characterised in a similar manner to the distance fluctuations in \Cref{eq:deltaL}.
From standard uncertainty propagation we see that apparent velocity fluctuations resulting from underlying distance fluctuations are given by:
\begin{equation}
\delta v = v \frac{\delta L}{L} ,
\label{eq:delta_v_vs_delta_L}
\end{equation}
\noindent which combined with \Cref{eq:deltaL} gives:
\begin{equation}
\delta v = v \frac{\delta L_{0} L^{m-1}}{L_0^m} \left( \frac {E} {E_0} \right)^n .
\label{eq:delta_v_vs_L}
\end{equation}
The distance independent velocity fluctuation expression in \Cref{eq:delta_v} is recovered from \Cref{eq:delta_v_vs_L} when $m=1$ and $\delta L_{0} = L_0$, implying $\delta v_{0} = v$ in this case. This is consistent with the natural scenario $\delta v(M_{\rm{Planck}}) = c$ proposed in \cite{Vasileiou2015}.
We therefore see that velocity fluctuations can also be represented in terms of the distance fluctuation model proposed in this work (i.e. \Cref{eq:deltaL}), even if distance fluctuations are not the underlying mechanism, and thus experimental constraints on the parameters of this model constrain both velocity and distance fluctuation scenarios.
Note that an implication of this is that the distance independent velocity fluctuations of the form in \Cref{eq:delta_v} can only be the result of distance fluctuations with $m=1$, i.e. $\delta L \propto L$. As discussed in Section \ref{sec:distance_dependence}, this corresponds to the highly optimistic fully-correlated distance fluctuation scenario. Distance independent velocity fluctuations are therefore unlikely to result from underlying distance fluctuations, and thus some other underlying fuzzy space-time mechanism is likely required to explain such a phenomenon, such as interactions with virtual black hole or string/brane backgrounds. Such scenarios can be constrained by placing experimental constraints on $\delta v_{0}$ and/or $E_0$.
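For orientation, the apparent velocity fluctuations implied by \Cref{eq:delta_v_vs_L} are extremely small for the less optimistic accumulation scenarios. Taking the natural choice $\delta L_{0} = L_0 = L_{\rm{Planck}}$ with $m=1/2$ and $n=0$, a baseline of $L \approx 1$ Gpc gives, as an illustrative estimate,
\begin{displaymath}
\frac{\delta v}{v} = \sqrt{ \frac{ L_{\rm{Planck}} }{ L } } \approx 7 \times 10^{-31} ,
\end{displaymath}
\noindent so only strongly correlated ($m \rightarrow 1$) scenarios, or a distinct underlying mechanism, can produce appreciable velocity fluctuations.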
\section{Neutrino signals of lightcone fluctuations}
\label{sec:neutrino_signals}
We now consider the influence of the lightcone fluctuations described in Section \ref{sec:lightcone_fluctuations} on neutrino propagation, and the potential observable signals that could result. We explore two possibilities here: neutrino decoherence and arrival time fluctuations.
\subsection{Neutrino decoherence}
One of the major consequences of lightcone fluctuations is the loss of coherence of wave-like phenomena due to the variability in particle propagation distances/times, resulting in the potentially detectable degradation of superposition phenomena.
One of the key proposed observable consequences of such effects is blurring/degradation of images in high energy photons from cosmological sources. For example, fluctuating photon propagation would degrade the wavefront at a telescope aperture, potentially preventing the formation of Airy disks~\cite{Lieu:2003ee, Perlman_2015}. More generally, lightcone fluctuations would blur photon point source images and ultimately render them undetectable once the fluctuations are comparable in scale to the photon wavelength~\cite{Perlman_2015}. Studies of these effects have enabled distance fluctuations to be constrained at the natural Planck scale for correlated, uncorrelated and even some anti-correlated scenarios, albeit only in cases where the effects are not suppressed by energy (i.e. $n=0$)~\cite{Perlman_2015}. Similar arguments also predict the degradation of a narrow FeII absorption line in photon spectra, with a recent study~\cite{Cooke:2020rco} also yielding a natural Planck scale constraint on such effects (again only for energy independent scenarios).
Far less explored is the impact of the loss of coherence in neutrino propagation resulting from lightcone fluctuations in fluctuating space-time scenarios. A neutrino propagates as a superposition of three quantum states, known as \textit{mass eigenstates}. These are distinct from and misaligned with respect to the states in which the neutrinos undergo interactions via the weak nuclear force, known as \textit{flavor eigenstates}. This misalignment, together with the differing masses of the mass states, produces the phenomenon of \textit{neutrino oscillations}, whereby a neutrino produced in one flavor state may be detected as another. A neutrino therefore acts as a quantum interferometer, and is intrinsically sensitive to the fluctuations considered in this work.
Neutrinos propagating through fluctuating space-time will become increasingly and stochastically out of phase with one another. This loss of coherence results in a damping of neutrino oscillations over distance, in a phenomenon known as \textit{neutrino decoherence}. Neutrino decoherence has been the subject of a number of experimental searches which are often cited as sensitive to quantum gravity, but the connections of these measurements to potential underlying models (heuristic or otherwise) are little explored. In previous work~\cite{PhysRevD.102.115003} we studied neutrino decoherence resulting from neutrino interactions with VBH, and here we instead quantitatively assess the influence of lightcone fluctuations.
\subsubsection{Simulating neutrinos propagating in fluctuating space-time}
\label{sec:decoh_mc}
To test the influence of space-time fluctuations on neutrino propagation and the resulting neutrino decoherence, we implement a simulation of propagating neutrino states and stochastically inject travel distance fluctuations. This simulation software is also described in our previous study of neutrino decoherence from $\nu$-VBH interactions~\cite{PhysRevD.102.115003}. The neutrino mass states are propagated in discrete distance steps, with the states given by:
\begin{equation}
\label{eqn:plane_wave_perturbed}
\ket{\nu_{j}(L)} = \exp{ -i \left( \frac{ m_j^2 }{ 2 E } \left[ L + \Delta L(L) \right] \right) } \ket{\nu_{j}(0)} ,
\end{equation}
\noindent where $\ket{\nu_{j}}$ is the neutrino mass state $j$ ($j=1,2,3$ in the $3\nu$ paradigm) of mass $m_j$, with $E$ being the neutrino energy. Our lightcone model (defined in Section \ref{sec:lightcone_fluctuations}) specifies the uncertainty, $\delta L_{0}$, of each distance $L_0$ travelled by a particle, which we represent in these simulations by evolving the neutrino states in discrete distance steps of size $L^{\prime}_0$, where the value of $L^{\prime}_0$ is a random number drawn from a normal distribution with mean $L_0$ and standard deviation $\delta L_{0}$. The accumulated distance travelled is the sum of these steps, given by $L^\prime = \sum L^\prime_0 = L + \Delta L(L)$, where $L$ is the travel distance in the absence of fluctuations and $\Delta L(L)$ is the accumulated change in distance for a particular neutrino. This expression yields standard neutrino propagation when $\delta L_{0} = 0$ (and thus $\Delta L(L) = 0$).
The flavor transition probability after a given distance is determined by rotating the current state to the neutrino flavor basis, as defined by the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixing matrix~\cite{Pontecorvo:1957qd, Maki:1962mu}, $U$, and projecting onto the desired final flavor state according to:
\begin{equation}
\label{eqn:osc_prob_projection}
P( \nu_\alpha \rightarrow \nu_\beta ) \equiv P_{\alpha \beta} = |\braket{\nu_{\beta}(L)}{\nu_{\alpha}(0)}|^2 ,
\end{equation}
\noindent where $\alpha,\beta$ represent flavor indices ($e,\mu,\tau$ in the $3v$ paradigm).
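As a concrete illustration of this procedure (a minimal sketch on our part, not the exact code used to produce the figures below), the following propagates an ensemble of 2-state neutrinos with normally distributed step lengths and projects back onto the initial flavour state. The mixing angle, mass splitting, energy and step size match the toy values introduced below, whilst the ensemble size, number of steps, random seed and the particular $\delta L_{0}$ value are arbitrary illustrative choices:
\begin{verbatim}
import numpy as np

# Toy 2-flavour values used below (illustrative choices; see text)
theta   = np.radians(30.0)    # mixing angle
dm2     = 0.012               # mass splitting [eV^2]
E       = 1.0                 # neutrino energy [GeV]
L0      = 1.0                 # nominal step size [km]
dL0     = 2.0                 # step-size fluctuation [km], uncorrelated case
n_steps = 2000                # total baseline = n_steps * L0 [km]
n_nu    = 500                 # ensemble size

# 2x2 rotation from the mass to the flavour basis
U = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# Phase advance per km of each mass state, m_j^2/(2E); only the
# difference matters (2*1.267 converts eV^2/GeV to rad/km)
phase_per_km = np.array([0.0, 2.0 * 1.267 * dm2 / E])

rng = np.random.default_rng(0)
psi = np.tile(U[0, :].astype(complex), (n_nu, 1))   # start as pure nu_alpha

P_aa = np.zeros((n_nu, n_steps))
for i in range(n_steps):
    # uncorrelated case: each step length fluctuates independently
    step = rng.normal(L0, dL0, size=n_nu)
    psi *= np.exp(-1j * np.outer(step, phase_per_km))
    # survival probability: project back onto the alpha flavour state
    P_aa[:, i] = np.abs(psi @ U[0, :]) ** 2

print("ensemble-averaged survival probability at the end:",
      P_aa[:, -1].mean())
\end{verbatim}
\noindent Averaging \texttt{P\_aa} over the ensemble at each step yields the damped oscillation curves discussed below; the fully-correlated case is obtained by drawing a single fluctuated step length per neutrino and reusing it for all subsequent steps.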
To probe the phenomenology of this system, we first test a 2-state system with toy (i.e. unrealistic) parameters chosen for clear visualisation (listed in \Cref{table:2nu_params}). Two fluctuation scenarios are considered. In the first case, each distance step is fluctuated independently, i.e. the fluctuations are uncorrelated as considered in the $m=1/2$ scenario described in Section \ref{sec:distance_dependence}. In the second case, the first step is fluctuated randomly as for the uncorrelated case, but all subsequent steps are then fluctuated by the same amount. This represents a fully-correlated ($m=1$) distance fluctuation scenario (or equivalently a scenario where velocity instead fluctuates).
\begin{table}[htp]
\begin{tabular}{|c| c|}
\hline
Parameter & Value \\
\hline
\hline
\# states & 2 \\ \hline
Mixing angle, $\theta$ & 30\degree \\ \hline
$\lambda$ & 200\,km \\ \hline
$\Delta m^2$ & 0.012\,$\rm{eV}^2$ \\ \hline
$E$ & 1\,GeV \\ \hline
$L_0$ & 1\,km \\ \hline
Initial flavor & $\nu_\alpha$ \\ \hline
$L_{\rm{coh}}$ & 500\,km \\ \hline
\end{tabular}
\caption{Parameters used for the propagating 2-state system. The mass states are labelled $0,1$ and the flavor states $\alpha,\beta$. The parameter values are chosen to produce clear demonstrations of the behaviour, rather than to represent realistic neutrino parameters. The mass splitting $\Delta m^2$ is chosen to give the desired oscillation wavelength $\lambda$.}
\label{table:2nu_params}
\end{table}
The neutrino survival probabilities resulting from these simulations in both the uncorrelated and fully-correlated scenarios are shown in the upper and lower panels of \Cref{fig:toy_model_2flav} respectively. In both cases, the translucent coloured lines represent individual neutrinos, whilst the dashed coloured line shows the average behaviour of the neutrino ensemble. It is this average behaviour to which a neutrino counting experiment is ultimately sensitive. $\delta L_{0}$ is chosen for the two cases such that they have the same \textit{coherence length} (see Section \ref{sec:decoh_analytic_2flav}), which in practice means a far smaller step size fluctuation for the fully-correlated case, since the accumulation effect is much stronger. In both cases the expected damping of oscillations that is characteristic of neutrino decoherence is clearly observed, verifying that decoherence does indeed result from lightcone fluctuations.
\begin{figure}[htp]
\centering
\includegraphics[trim=0.cm 0.0cm 0.0cm 0.0cm, clip=true, width=\linewidth]{toy_model_2flav.pdf}
\caption{Decoherence in a MC simulation of neutrino propagation in the presence of lightcone fluctuations, resulting in the damping of neutrino oscillations. The upper panel shows an uncorrelated fluctuation scenario whilst the lower panel shows fully-correlated fluctuations; $\delta L_{0}$ is chosen such that the coherence length is the same in both cases. A 2-flavor system is shown with toy parameters selected for clarity, see \Cref{table:2nu_params}. }
\label{fig:toy_model_2flav}
\end{figure}
We see in \Cref{fig:toy_model_2flav} that for both scenarios the large $L$ limit (i.e. when coherence is completely lost at large distances) is the average of the unfluctuated oscillations, given by $\overline{P_{\alpha \beta}} = \sum_j |U_{\alpha j}|^2 |U_{\beta j}|^2$. This is distinct from so-called \textit{relaxation} scenarios~\cite{GUZZO2016408, Gomes:2020muc} where the limiting case is equal populations of all flavors, as was identified in our previous work for certain $\nu$-VBH interaction models~\cite{PhysRevD.102.115003}. Differences in these large $L$ limits can in principle be used to distinguish between decoherence scenarios should a signal be detected.
An important distinction between the uncorrelated and fully-correlated cases is the functional form of the damping, visualised by the purple damping envelopes shown in \Cref{fig:toy_model_2flav}. This is expected given the differing distance dependence of the two scenarios. For the uncorrelated case, the envelope follows a $e^{-L}$ trend, whilst for the fully-correlated case we instead see damping of the form $e^{-L^2}$. This is explored further in the next section.
\subsubsection{Connecting the simulations and distance fluctuation parameterisation}
\label{sec:decoh_analytic_2flav}
We now seek an analytic description of the decoherence phenomenon observed in the simulations presented in Section \ref{sec:decoh_mc}, and by extension neutrino decoherence from lightcone fluctuations more generally. This description should relate the damping effects to the underlying distance fluctuations parameterised by \Cref{eq:deltaL}.
The damping effect occurs as the spread in neutrino travel distances due to lightcone fluctuations grows, and the effect is expected to become large when $\delta L \sim \lambda$, where $\lambda$ is the oscillation wavelength. Given that we observe $e^{-L}$ damping for the uncorrelated $\delta L \propto L^{1/2}$ (i.e. $m=1/2$) case and $e^{-L^2}$ damping for the fully-correlated $\delta L \propto L$ (i.e. $m=1$) case, this implies a damping envelope of the form:
\begin{equation}
\exp{ - \left( \frac{\delta L}{\lambda} \right)^2 } ,
\label{eq:envelope_vs_deltaL_propto}
\end{equation}
which given \Cref{eq:deltaL} yields:
\begin{equation}
\exp{ - \left[ \frac{ \delta L_{0} }{\lambda} \left( \frac {L} {L_0} \right)^m \left( \frac {E} {E_0} \right)^n \right]^2 } .
\label{eq:envelope_vs_L_propto}
\end{equation}
Damping envelopes of this form are shown by purple solid curves in \Cref{fig:toy_model_2flav}, defined as:
\begin{equation}
P_{\alpha \alpha} = \overline{P_{\alpha \alpha}} + \left( 1 - \overline{P_{\alpha \alpha}} \right) \exp{ - \left( \frac{\delta L}{ \eta \lambda} \right)^2 } ,
\label{eq:envelope_vs_deltaL}
\end{equation}
\noindent where $\eta$ is a $\order{1}$ dimensionless constant of proportionality defined such that the damping term is $e^{-1}$ when $\delta L = \eta \lambda$, and can be thought of as defining the fraction of the oscillation wavelength that the distance uncertainty must accumulate to in order to produce strong decoherence effects. The value of $\eta$ will depend on the specific functional form of the fluctuations, and for the normally distributed step size fluctuations in our simulations we find $\eta \sim 0.23$ (i.e. $\delta L$ must reach roughly a quarter of the wavelength). In common with other works~\cite{PhysRevD.102.115003}, we define the distance after which the damping term is $e^{-1}$ as the \textit{coherence length}, $L_{\rm{coh}}$, which is given by:
\begin{equation}
L_{\rm{coh}} = L_0 \left(\frac{\eta \lambda}{\delta L_{0}}\right)^{\frac{1}{m}} \left(\frac{E_0}{E}\right)^{\frac{n}{m}} .
\end{equation}
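As an illustrative cross-check of this expression using the toy parameters of \Cref{table:2nu_params} (with no energy suppression, $E = E_0$): the common coherence length $L_{\rm{coh}} = 500$\,km and $\eta \lambda \approx 0.23 \times 200\,\rm{km} = 46$\,km imply a step-size fluctuation of $\delta L_{0} = \eta \lambda \sqrt{L_0 / L_{\rm{coh}}} \approx 2$\,km for the uncorrelated ($m=1/2$) case, but only $\delta L_{0} = \eta \lambda \, L_0 / L_{\rm{coh}} \approx 0.09$\,km for the fully-correlated ($m=1$) case, quantifying the far smaller per-step fluctuation noted in Section \ref{sec:decoh_mc}.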
The $\delta L^2$ dependence observed in \Cref{eq:envelope_vs_deltaL_propto,eq:envelope_vs_L_propto,eq:envelope_vs_deltaL} can be understood by noticing that distance fluctuations are equivalent to frequency fluctuations for a sine wave, i.e. $\Delta \omega \equiv ( \Delta L / L ) \omega \implies \omega \left[ L + \Delta L (L) \right] = \left[ \omega + \Delta \omega (L) \right] L$, where $\omega$ is the angular frequency of the wave. The sum of an infinite series of sine waves with differing frequencies but common amplitude and phase indeed features the same squared damping effect, and is directly analogous to the case of a neutrino propagating in fluctuating space-time.
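This connection can be made more explicit with a simple Gaussian average (a heuristic argument we add here for illustration): if the accumulated shift $\Delta L$ is normally distributed with zero mean and standard deviation $\delta L$, the ensemble average of the interference term is
\begin{equation}
\left\langle \cos\left[ \omega \left( L + \Delta L \right) \right] \right\rangle = \cos( \omega L ) \, \exp{ - \frac{ \omega^2 \, \delta L^2 }{ 2 } } , \qquad \omega = \frac{ 2 \pi }{ \lambda } ,
\end{equation}
\noindent i.e. the oscillation amplitude is suppressed by $\exp{ - 2 \pi^2 \, \delta L^2 / \lambda^2 }$. Written in the form of \Cref{eq:envelope_vs_deltaL}, this corresponds to $\eta = 1 / ( \sqrt{2} \, \pi ) \approx 0.23$, consistent with the value extracted empirically from the simulations in Section \ref{sec:decoh_mc}.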
\subsubsection{Analytic decoherence operator}
\label{sec:decoh_analytic_3flav}
Now that we have expressed the damping effects we observe in these simulations in terms of our distance fluctuation parameterisation, we proceed to define a full decoherence operator suitable for describing neutrino propagation in fluctuating space-time.
Neutrino decoherence is often represented using an open quantum system formalism~\cite{Benatti_2000, gago2002study, PhysRevLett.85.1166, PhysRevD.96.093009, Mavromatos:2006yy, Buoninfante:2020iyr, PhysRevD.99.075022, OHLSSON2001159, Farzan:2008zv, Coloma:2018idr, Carpio:2018gum, Carpio:2017nui, Anchordoqui:2005gj, PhysRevD.95.113005, PhysRevD.91.053002, PhysRevD.76.033006, PhysRevLett.118.221801, GUZZO2016408, Morgan:2004vv, Abbasi:2009nfa, Nieves:2019izk, Gomes:2020muc, Ohlsson:2020gxx, PhysRevD.101.056004, de_Holanda_2020, Nieves:2020jjg} considering both the neutrino and its environment, and beyond neutrinos this formalism has also been employed to study decoherence resulting from gravitational sources more generally~\cite{Anastopoulos:2013zya, Oniga:2017pyq, Bassi:2017szd}. The stochastic processes we consider in this work cause our knowledge of the neutrino to degrade over time, which in the language of open quantum systems constitutes the evolution from an initially \textit{pure} quantum state to a \textit{mixed} quantum state. Both mixed and pure quantum states can be mathematically expressed using the density matrix formalism, where the density matrix, $\rho$, for a statistical mixture of states $\psi_j$ with probabilities $p_j$ is given by:
\begin{equation}
\label{eqn:density_matrix}
\rho = \sum_j p_j \ket{\psi_j} \bra{\psi_j} .
\end{equation}
The time (or equivalently distance) evolution of an open quantum system is given by the Lindblad master equation~\cite{lindblad1976}:
\begin{equation}
\label{eqn:decoh_master}
\dot{\rho} = -i[H,\rho] - \mathcal{D}[\rho] ,
\end{equation}
\noindent where $H$ is the Hamiltonian of the system (in which conventional oscillation effects are encoded) and $\mathcal{D}[\rho]$ is a \textit{decoherence operator} defining stochastic/decoherence effects in the system. For a 3-flavour neutrino system (e.g. Nature as we currently know it), $\rho$, $H$ and $\mathcal{D}[\rho]$ are $3 \times 3$ matrices. The neutrino flavor transition probability for such a system is given by:
\begin{equation}
\label{eqn:density_matrix_transition_prob}
P_{\alpha \beta} = \mathrm{Tr}[\rho_\alpha(t)\rho_\beta(0)] .
\end{equation}
The operator $\mathcal{D}[\rho]$ encodes the decoherence effects in the system. In many existing studies, simple forms for this operator have been assumed, with manageable numbers of free parameters to test against experimental data, although in some cases the decoherence effects have been derived from first principles (see e.g.~\cite{Nieves:2019izk, PhysRevD.101.056004, Nieves:2020jjg}). In this work we seek to determine the form of $\mathcal{D}[\rho]$ representing the distance fluctuations we are considering, which must ultimately reproduce the damping effects we observe. From \Cref{eq:envelope_vs_L_propto} we see the need for a solution to \Cref{eqn:decoh_master} whose off-diagonal elements damp as $\rho \propto \exp{ - L^{2m} }$, implying a decoherence term of the form $\mathcal{D}[\rho] \propto 2m L^{2m-1} \rho$ acting on those elements, such that integrating \Cref{eqn:decoh_master} yields the desired damping form. Taking into account the full distance fluctuation parameterisation \Cref{eq:deltaL}, for a 3-flavour system the decoherence operator is:
\begin{widetext}
\begin{equation}
\label{eqn:Drho_lightcone_fluctuations}
\renewcommand{\arraystretch}{2}
\mathcal{D}[\rho] = \frac{ 2m (\delta L_0)^2 L^{2m-1} }{ L^{2m}_0 } \left( \frac{E}{E_0} \right)^{2n} \begin{pmatrix}
0 & \dfrac{\rho_{21}}{ (\eta \lambda_{21})^2} & \dfrac{\rho_{31}}{ (\eta \lambda_{31})^2} \\
\dfrac{\rho_{21}}{ (\eta \lambda_{21})^2} & 0 & \dfrac{\rho_{32}}{ (\eta \lambda_{32})^2} \\
\dfrac{\rho_{31}}{ (\eta \lambda_{31})^2} & \dfrac{\rho_{32}}{ (\eta \lambda_{32})^2} & 0 \\
\end{pmatrix} ,
\end{equation}
\end{widetext}
\noindent where $\lambda_{ij}$ is the oscillation wavelength corresponding to the mass splitting $\Delta m^2_{ij}$, given in vacuum by $\lambda_{ij} = 4 \pi E / \Delta m^2_{ij}$. There are three wavelengths to consider here, rather than the single wavelength of the 2-flavour system considered in Section \ref{sec:decoh_analytic_2flav}.
\Cref{eqn:Drho_lightcone_fluctuations} is one of the primary results of this work, and provides an operator characterising the general case of neutrino decoherence from lightcone fluctuations, including those resulting from fluctuating space-time models. This allows neutrino transition probabilities to be computed given some underlying distance fluctuation parameters, e.g. $\{L_0, \delta L_{0}, m, n\}$, which can then be tested against experimental data. Alternatively, existing constraints on neutrino decoherence can be re-interpreted in terms of this underlying model, and the results compared to corresponding constraints on space-time fluctuations from $\gamma$-ray observations~\cite{Vasileiou2015, Perlman_2015, Cooke:2020rco}.
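To make the use of this operator concrete, the following minimal sketch (again ours, not the code used for the figures in this work) integrates \Cref{eqn:decoh_master} with \Cref{eqn:Drho_lightcone_fluctuations} in vacuum for an initial $\nu_\mu$ using a fixed-step RK4 scheme. The fluctuation parameters $\{L_0, \delta L_{0}, m, n\}$ and the integration step are illustrative assumptions, with $\delta L_{0}$ chosen so that $L_{\rm{coh},31} \approx 3 L_\oplus$, and matter effects are ignored:
\begin{verbatim}
import numpy as np

# Oscillation inputs as in Table III; fluctuation parameters assumed
E_GeV = 25.0
dm2 = {"21": 7.39e-5, "31": 2.528e-3}            # eV^2
dm2["32"] = dm2["31"] - dm2["21"]
th12, th13, th23, dcp = np.radians([33.82, 8.60, 48.6, 221.0])

s12, c12 = np.sin(th12), np.cos(th12)
s13, c13 = np.sin(th13), np.cos(th13)
s23, c23 = np.sin(th23), np.cos(th23)
# Standard PMNS parameterisation: rows = (e, mu, tau), columns = mass states
U = (np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]]) @
     np.array([[c13, 0, s13 * np.exp(-1j * dcp)], [0, 1, 0],
               [-s13 * np.exp(1j * dcp), 0, c13]]) @
     np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]]))

# Vacuum Hamiltonian in the mass basis [rad/km]; 2*1.267 converts eV^2/GeV -> rad/km
H = np.diag([0.0, dm2["21"], dm2["31"]]) * 2.0 * 1.267 / E_GeV
# Oscillation wavelengths lambda_ij = 4*pi*E/dm2_ij, in km
lam = {k: 2.0 * np.pi * E_GeV / (2.0 * 1.267 * v) for k, v in dm2.items()}

# Assumed fluctuations: uncorrelated (m=1/2), energy independent (n=0),
# dL0 tuned so that L_coh for lambda_31 is ~3 Earth diameters
eta, m, n = 0.23, 0.5, 0.0
L0, dL0, E0 = 1.0, 29.0, E_GeV                   # km, km, GeV

G = np.array([[0.0,            1/lam["21"]**2, 1/lam["31"]**2],
              [1/lam["21"]**2, 0.0,            1/lam["32"]**2],
              [1/lam["31"]**2, 1/lam["32"]**2, 0.0]]) / eta**2

def drho_dL(L, rho):
    # Lindblad-type evolution: -i[H,rho] minus the decoherence operator
    pref = (2.0*m * dL0**2 * max(L, 1e-9)**(2.0*m - 1.0) / L0**(2.0*m)
            * (E_GeV/E0)**(2.0*n))
    return -1j * (H @ rho - rho @ H) - pref * G * rho

# Initial nu_mu: rho_jk = U*_{mu j} U_{mu k} in the mass basis
rho = np.outer(U[1].conj(), U[1])
rho_e0 = np.outer(U[0].conj(), U[0])

L, L_max, h = 0.0, 3 * 12700.0, 50.0             # 3 Earth diameters, 50 km steps
while L < L_max:                                 # fixed-step RK4 integration
    k1 = drho_dL(L, rho)
    k2 = drho_dL(L + h/2, rho + h/2 * k1)
    k3 = drho_dL(L + h/2, rho + h/2 * k2)
    k4 = drho_dL(L + h, rho + h * k3)
    rho += h/6 * (k1 + 2*k2 + 2*k3 + k4)
    L += h

# Transition probability P(nu_mu -> nu_e) = Tr[rho(L) rho_e(0)]
print(np.real(np.trace(rho @ rho_e0)))
\end{verbatim}
\noindent Evaluating the trace against $\rho_\beta(0)$ for each flavour at intermediate distances reproduces curves of the type shown in \Cref{fig:toy_model_vs_lindblad_3flav}.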
To verify and demonstrate this operator, we now simulate distance fluctuations as described in Section \ref{sec:decoh_mc}, but for a full 3-flavour system with realistic neutrino mixing parameters (listed in \Cref{table:3nu_params}), and compare the resulting neutrino transition probabilities with those computed using $\mathcal{D}[\rho]$. \Cref{fig:toy_model_vs_lindblad_3flav} shows these results for the $\nu_\mu \rightarrow \nu_{e,\mu,\tau}$ channels, where the neutrino energy and baseline are chosen to be representative of atmospheric neutrinos. Both fully-correlated and uncorrelated fluctuations are shown, with the injected step size fluctuation (i.e. $\delta L_{0}$) chosen such that both share a common coherence length with respect to $\lambda_{31}$. The Lindblad analytic form exactly matches the simulation results, again with $\eta \sim 0.23$ as was the case in the 2-flavour system.
\begin{figure}[htp]
\centering
\includegraphics[trim=0.cm 0.0cm 0.0cm 0.0cm, clip=true, width=\linewidth]{toy_model_vs_lindblad_3flav_dm2_atmo.pdf}
\caption{Decoherence in a MC simulation of neutrino propagation in the presence of lightcone fluctuations for a 3-flavour system with realistic mixing parameters in the atmospheric neutrino parameter space (see \Cref{table:3nu_params}). Both uncorrelated (UC) and fully-correlated (FC) fluctuations are shown. Solid lines show simulation results, whilst dotted lines show the corresponding analytic expression computed using $\mathcal{D}[\rho]$. The travel distance is expressed as the number of Earth diameters, $L_\oplus \sim$ 12,700 km, traversed (an atmospheric neutrino experiment is only sensitive to neutrinos crossing a single diameter). Matter effects are not included.}
\label{fig:toy_model_vs_lindblad_3flav}
\end{figure}
\begin{table}[htp]
\begin{tabular}{|c| c|}
\hline
Parameter & Value \\
\hline
\hline
$\Delta m^2_{21}$ & $7.39 \times 10^{-5}$ $\rm{eV}^2$ \\ \hline
$\Delta m^2_{31}$ & $2.528 \times 10^{-3}$ $\rm{eV}^2$ \\ \hline
Mass ordering & Normal \\ \hline
$\theta_{12}$ & 33.82$\degree$ \\ \hline
$\theta_{13}$ & 8.60$\degree$ \\ \hline
$\theta_{23}$ & 48.6$\degree$ \\ \hline
$\delta_{CP}$ & 221$\degree$ \\ \hline
$E$ & 25 GeV \\ \hline
$L_{\rm{coh},31}$ & $3 L_\oplus$ \\ \hline
\end{tabular}
\caption{Parameters used for evaluating atmospheric neutrino oscillations. Neutrino oscillation parameters are taken from NuFit 4.1 global fit results (normal mass ordering, SuperKamiokande data included)~\cite{NuFit41}.}
\label{table:3nu_params}
\end{table}
One interesting aspect of decoherence resulting from lightcone fluctuations is the differing coherence lengths for different oscillation frequencies. In \Cref{fig:toy_model_vs_lindblad_3flav} the damping of the higher frequency oscillations resulting from the atmospheric mass splitting $\Delta m^2_{31/2}$ (with $\lambda \sim 2 L_\oplus$) is clearly seen, with an injected coherence length of $L_{\rm{coh}} = 3 L_\oplus$. However, the flavour transition probability still continues to change with distance due to the lower frequency oscillations resulting from the solar mass splitting, $\Delta m^2_{21}$.
\Cref{fig:lindblad_3flav_long} shows the $\nu_\mu$ survival channel over larger distances where this second (solar) oscillation frequency is clearly visible even after the first (atmospheric) frequency has damped. Even over the larger distance these lower frequency oscillations have not damped, although it can be seen that the fully-correlated case is damping more quickly with distance than the uncorrelated case (despite having identical coherence lengths for the higher frequency oscillations), which is expected due to the differing distance dependence of the damping terms ($e^{-L^2}$ and $e^{-L}$ respectively). The large difference in coherence length between the two oscillation frequencies is a consequence of the orders of magnitude difference between the solar and atmospheric mass splittings.
\begin{figure}[htp]
\centering
\includegraphics[trim=0.cm 0.0cm 0.0cm 0.0cm, clip=true, width=\linewidth]{lindblad_3flav_dm2_solar.pdf}
\caption{The $\nu_\mu$ survival probability as shown in the central panel of \Cref{fig:toy_model_vs_lindblad_3flav}, but shown over a longer distance such that the lower frequency oscillations resulting from $\Delta m^2_{21}$ can be seen. Only the analytic Lindblad curves are shown.}
\label{fig:lindblad_3flav_long}
\end{figure}
This variation of the coherence length between oscillation frequencies differs from the $\nu$-VBH interaction case we considered in our previous work, which produced uniform damping in all channels. It therefore provides a potential discriminating factor between different scenarios should a decoherence signal be discovered experimentally.
Another key difference between decoherence from lightcone fluctuations and other possible sources relates to the energy dependence. The $\lambda_{ij}^{-2}$ dependence of the damping effects results in an intrinsic $E^{-2}$ dependence (since $\lambda \propto E$). This means that in the absence of any explicit energy dependence in the fluctuations themselves (i.e. $n = 0$), lower energy neutrinos offer greater sensitivity to these decoherence effects, and even a suppression of $\delta L$ below the Planck scale ($n > 0$) does not necessarily reverse this trend. Indeed, only in cases with $n > 1$ (given the $E^{2n}$ term) do the decoherence effects start to grow with neutrino energy.
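Explicitly, combining the $(E/E_0)^{2n}$ factor in \Cref{eqn:Drho_lightcone_fluctuations} with $\lambda_{ij} = 4 \pi E / \Delta m^2_{ij}$, the damping exponent scales with energy as
\begin{equation}
\left( \frac{ \delta L }{ \eta \lambda_{ij} } \right)^2 \propto \left( \frac{E}{E_0} \right)^{2n} \left( \frac{ \Delta m^2_{ij} }{ 4 \pi E } \right)^2 \propto E^{ 2 ( n - 1 ) } .
\end{equation}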
\subsubsection{Sensitivity to natural Planck scale effects}
We have established that neutrino decoherence results from lightcone fluctuations, and shown that the resulting damping effects become large when $\delta L(L) \sim \lambda_{ij}$. We now consider the energies and baselines of neutrinos observed from various sources to determine where a potential signal would be expected to be strongest. Given that evidence of fluctuating space-time has not yet been observed, such effects likely only manifest over very large distances, and/or are suppressed at energies below the Planck scale.
\Cref{fig:decoherence_sensitivity} shows the underlying model parameters required to produce strong decoherence effects for a variety of neutrino sources. Damping of both the higher (atmospheric) and lower (solar) frequencies is shown. The upper panel considers the case of energy independent uncorrelated distance fluctuations, for which sensitivity to natural Planck scale effects (i.e. $\delta L_{0} = L_{\rm{Planck}}$) has been achieved using astrophysical photon observations~\cite{Perlman_2015, Cooke:2020rco}. The y axis indicates the size of fluctuation required for each Planck length travelled to produce strong decoherence effects, which is inversely proportional to energy in this case due to the wavelength dependence. We see that for all neutrino sources tested, even those with cosmological baselines, the required fluctuation is orders of magnitude larger than the natural expectation, giving little prospect of a signal detection unless lightcone fluctuations significantly exceed this natural expectation. This is a consequence of the macroscopic oscillation wavelengths of neutrinos, whereas the microscopic wavelengths of high energy photons make them more susceptible to the effects of lightcone fluctuations. The prospects for a neutrino signal would be further reduced in the case of any energy-suppression ($n>0$) of the effects.
\begin{figure}[htp]
\centering
\includegraphics[trim=0.0cm 0.0cm 0.0cm 0.0cm, clip=true, width=\linewidth]{lightcone_fluctuation_nu_sensitivity_summary.pdf}
\caption{Underlying model parameters producing strong decoherence effects for neutrinos from a range of sources. The upper panel shows the travel distance fluctuation requirement for each Planck length traversed in the case of uncorrelated, energy independent distance fluctuations. The lower panel shows the new physics energy scale required to produce strong decoherence effects in a velocity fluctuation scenario with energy suppression $\propto E/M_{\rm{Planck}}$. Upper limits for travel distance are assumed, e.g. one Earth diameter for atmospheric neutrinos, etc. }
\label{fig:decoherence_sensitivity}
\end{figure}
The lower panel of \Cref{fig:decoherence_sensitivity} instead shows the case of distance independent velocity fluctuations as discussed in Section \ref{sec:velocity_fluctuations}, suppressed by a single energy power ($n=1$). Natural Planck scale limits for such a scenario have been achieved by constraining the arrival time spread of high energy photons from a short GRB~\cite{Vasileiou2015} (see Section \ref{sec:propagation_time_fluctuations}). In this case we vary the energy scale $E_0$ of the new physics producing the velocity fluctuations, where the natural expectation for quantum gravity is $E_0 = M_{\rm{Planck}}$. We see that significant decoherence effects are indeed expected in this scenario for neutrinos travelling cosmological and possibly even galactic baselines, yielding a possible detection channel.
However, there are major challenges in observing such a signal. The majority of high energy astrophysical neutrinos, observed by the IceCube neutrino observatory~\cite{Aartsen:2013jdh}, have not been associated with a particular source, but instead appear as an approximately isotropic diffuse flux. This flux likely results from many individual sources at unknown distances, and is thus incoherent even in the absence of lightcone fluctuations, producing an oscillation averaged flavor composition at the Earth exactly as would be expected in the lightcone fluctuation signal case\footnote{This is contrary to decoherence scenarios resulting in equal flavour populations, which can in principle be distinguished even with an incoherent source~\cite{PhysRevD.102.115003}.}. This is further compounded by the finite energy resolution of IceCube and other neutrino telescopes, which also degrades and averages the oscillation signal~\cite{OHLSSON2001159}. The only hope for detecting such a signal would therefore be the observation of neutrinos from a coherent astrophysical source (i.e. one with an emission region that is compact compared to the oscillation wavelength).
We therefore see that the prospects of detecting neutrino decoherence from natural Planck scale lightcone fluctuations via flavour-based measurements do not look promising, given the macroscopic scale of oscillation wavelengths and challenges in observing decoherence in astrophysical neutrinos. We now also consider an alternative potential detection channel.
\subsubsection{Comparison to other studies}
\label{sec:comparison_to_other_work}
In this work we have considered scenarios where the consequence of fluctuating/uncertain space-time is fluctuations in the travel distance/time between two points, and the resulting decoherence effects in propagating neutrinos. In \cite{Mavromatos:2006yy}, an alternative but comparable model is studied where the space-time metric itself experiences fluctuations, also resulting in lightcone fluctuations and neutrino decoherence. An analytic treatment is applied to quantify the average damping effects, as opposed to the simulation-based methods employed here.
The study considers fluctuations of a $(1+1)$D metric tensor (one time dimension and one spatial dimension, aligned with the particle travel direction) of the form:
\begin{equation}
g^{\prime} = O g O^T ,
\end{equation}
\begin{equation}
O = \begin{pmatrix}
a_1 + 1 & a_2 \\
a_3 & a_4 + 1
\end{pmatrix} ,
\end{equation}
where $a_i$ represent perturbations to the metric, which are Gaussian random variables with an average value of zero and standard deviation $\sigma_i$. $\sigma_i$ are free parameters characterising the fluctuations and can be considered the analogue of $\delta L_{0}$ in our work. $g$ and $g^{\prime}$ are the unfluctuated and fluctuated metric tensors respectively, with $g$ being taken as the Minkowski metric representing flat space-time.
The case $\sigma_1 = \sigma_2 = \sigma_3 = 0$, $\sigma_4 > 0$ corresponds to pure distance fluctuations along the particle direction of travel, which is directly comparable to the model we have proposed in this work. The resulting damping term (in the two flavour neutrino system considered) has the form\footnote{\cite{Mavromatos:2006yy} also considers possible matter effects resulting from neutrinos interacting with a VBH background, characterised by a potential, $V$. This is a distinct effect from the model considered in this article, and we neglect these terms in this comparison (e.g. set $V=0$). These effects can however be compared to our previous study on neutrino-VBH interactions~\cite{PhysRevD.102.115003}.}:
\begin{equation}
\exp{ - \frac{ \left( \Delta m^2 \right)^2 }{ 2 E^2 } \sigma_4 L^2 } \sim \exp{ - \frac{ 1 }{ \lambda^2 } \sigma_4 L^2 } .
\label{eq:metric_fluc_damping}
\end{equation}
This is very similar to the damping term in the two flavour scenario derived in this work in the case where $m=1$ (and $n=0$, since \cite{Mavromatos:2006yy} does not consider energy-dependence in the fluctuations), where \Cref{eq:envelope_vs_L_propto} becomes:
\begin{equation}
\exp{ - \frac{1}{\lambda^2} \left( \frac {\delta L_{0}} {L_0} \right)^2 L^2 } .
\label{eq:envelope_vs_L_propto_energy_independent_m1}
\end{equation}
We thus see that both approaches (distance vs. metric fluctuations, simulation vs. analytic methods) produce qualitatively the same neutrino decoherence effects, with $\exp{-L^2}$ damping\footnote{Referred to as \textit{Gaussian damping} in \cite{Mavromatos:2006yy}.} dependent on $1/\lambda^2$ and a parameter characterising the uncertain/fluctuating space-time. This agreement serves to verify both approaches.
\subsubsection{Comparison to wave packet decoherence}
\label{sec:comparison_to_wp_decoh}
Neutrino mass states propagate as wave packets, which physically separate over large enough distances due to their differing masses, degrading the superposition producing neutrino oscillations and resulting in the damping of flavour transitions and neutrino decoherence~\cite{Nussinov:1976uw}. The coherence length of neutrinos in current neutrino oscillation experiments is expected to far exceed the measurement baselines however, meaning such effects can typically be neglected.
It is however interesting to compare the decoherence resulting from lightcone fluctuations considered in this work to the case of wave packet decoherence, where the damping can be expressed as~\cite{Giunti:2003ax}:
\begin{equation}
\exp{ - \frac{ \left( \Delta m^2 \right)^2 }{ 32 E^4 } \frac{1}{\sigma_x^2} L^2 } \sim \exp{ - \frac{ 1 }{ \lambda^2 } \frac{1}{ E^2 \sigma_x^2} L^2 } ,
\label{eq:wp_decoh_damping_term}
\end{equation}
where $\sigma_x$ is the spatial width of the neutrino wave packet along the direction of travel. Wave packet decoherence in curved space-time can also be considered, which for the case of a neutrino propagating along radial geodesics in the Schwarzschild metric yields nearly identical effects to the flat space-time case, except that the travel distance is increased due to the space-time curvature, enhancing the damping effects~\cite{Chatelain:2019nkf}.
Comparisons between Equations (\ref{eq:wp_decoh_damping_term}) and (\ref{eq:envelope_vs_L_propto_energy_independent_m1}) indicate that wave packet decoherence is phenomenologically similar to the lightcone fluctuation decoherence considered in this work in the case of $m=1$ (i.e. fully-correlated fluctuations), with the damping term depending on both $L^2$ and $1/\lambda^2$. Notably however, wave packet decoherence features an intrinsic $1/E^2$ dependence (in addition to the energy-dependence from the $1/\lambda^2$ term) not present in the lightcone fluctuation scenario. The wave packet and lightcone fluctuation cases therefore become largely degenerate when the size of the lightcone fluctuations is assumed to have an extrinsic $n=-2$ energy-dependence, and the two scenarios cannot easily be separated. However, such an inverse energy-dependence is not typical of Planck scale physics scenarios, where effects are instead expected to be suppressed at lower energies ($n>0$).
We therefore see that despite some phenomenological similarities, wave packet decoherence and lightcone fluctuation decoherence from Planck scale physics can in principle be distinguished via their energy- and distance-dependence, aside from the special case where $m=1, n=-2$ for the lightcone fluctuations.
\subsection{Propagation time fluctuations}
\label{sec:propagation_time_fluctuations}
Fluctuations in neutrino travel distance would correspondingly produce fluctuations in a particle's propagation time between two points, given by:
\begin{equation}
\delta t = \frac{\delta L}{v} .
\label{eq:delta_t}
\end{equation}
Variations in the arrival time of cosmic messenger particles (mainly high energy photons) from cosmological sources have been searched for extensively in an effort to detect possible quantum gravity signals~\cite{Abdo2009, Vasileiou2015, ELLIS2019352, PhysRevD.102.063027, PhysRevD.99.083009, AMELINOCAMELIA2016318, Wei:2018ajw, HESS:2011aa}, with no signal observed to date. Searches have mainly focused on deterministic variations in particle velocity via modified dispersion relations, typically motivated by the prospect that Lorentz invariance or the weak equivalence principle is violated in quantum gravity. Neutrino-based constraints~\cite{ELLIS2019352, PhysRevD.102.063027, Wei:2018ajw} have more recently been derived from the multi-messenger observations of the first identified astrophysical high energy neutrino point source, the flaring blazar TXS 0506+056~\cite{IceCube:2018cha, IceCube:2018dnn}.
Stochastic modifications to particle propagation times, as would result from lightcone fluctuations, are less well explored. Planck scale constraints on velocity fluctuations have been derived~\cite{Vasileiou2015} from the arrival times of $\gamma$-rays from the distant GRB 090510~\cite{Ackermann_2010} in a weakly energy-suppressed scenario ($n=1$), but no limits of this kind exist for neutrinos.
Short GRBs are a powerful probe of particle travel time variations~\cite{AmelinoCamelia1998, AmelinoCamelia:2009pg}, since they have been detected at cosmological distances, display high energy photon emission in the keV-TeV energy range and crucially feature an initial prompt emission period of $\order{\rm{1\:s}}$. The duration of the observed emission in this prompt phase constrains variations in travel time due to lightcone fluctuations. The prompt emission additionally shows evidence of temporal sub-structure of $\order{\rm{\mu s - ms}}$~\cite{Ackermann_2010, Vasileiou2015} which can in principle yield even tighter constraints.
GRBs are a candidate production site of high energy neutrinos~\cite{Aartsen_2016}, although to date there has been no significant correlation of GRBs with neutrinos\footnote{It has been shown that GRBs can at most account for a small fraction of the diffuse astrophysical neutrino flux observed by the IceCube neutrino observatory~\cite{Aartsen_2016}. The source of the majority of this flux remains unknown. An alternative possibility is that lightcone fluctuations have prevented associations of $\gamma$-rays and neutrinos from GRBs in these studies due to arrival time fluctuations.}. Neutrino astronomy is still a new field however, and GRB neutrino emission, if detected, offers a number of advantages in searches for Planck scale physics compared to photons. Astrophysical neutrinos are observed at energies far in excess of those seen for photons, up to PeV~\cite{glashow_nature} and potentially EeV, and can therefore better overcome any suppression of quantum gravity effects below the Planck scale. Unlike neutrinos, high energy photons have a limited range due to absorption by the cosmic microwave background (CMB)~\cite{PhysRev.155.1408}. Additionally, the highest energy $\order{\rm{TeV}}$ photon emission observed from GRBs is typically not detected during the initial burst, but instead from subsequent longer duration processes (the so-called \textit{afterglow})~\cite{MAGIC_GRB, HESS_GRB}. This is a consequence of the small field of view of the imaging atmospheric Cherenkov telescopes (IACTs) used in these observations, meaning that they typically only observe GRBs when triggered by other telescopes with larger fields of view such as Fermi-LAT~\cite{Atwood_2009} (sensitive only to lower energy emission). The prompt emission is missed during the time taken to respond to the alert and point the telescopes, which is $\order{1 \: \rm{min}}$. This longer time scale emission is far less powerful in constraining arrival time fluctuations. Neutrino telescopes however typically observe the entire sky continuously, meaning that even prompt neutrino emission can be detected.
To ascertain the sensitivity of neutrino arrival time fluctuation searches to quantum gravity effects, \Cref{fig:delta_t_vs_L} shows the scale of time fluctuations vs. particle travel distance in the `natural' Planck scale distance fluctuation scenario discussed in Section \ref{sec:distance_uncertainty_natural}. This is the time analogue of the distance fluctuations shown in \Cref{fig:delta_L_vs_L}, and shows the case of a neutrino with Planck scale energy, or equivalently an energy independent ($n=0$) scenario. This is an upper limit on the size of the effects in energy-suppressed scenarios.
\begin{figure}[htp]
\centering
\includegraphics[trim=0.5cm 0.0cm 0.0cm 0.0cm, clip=true, width=\linewidth]{lightcone_fluc_delta_t_vs_L.pdf}
\caption{Propagation time variation expected for the `natural scenario' given by \Cref{eq:deltaL_natural} for a particle with $E=M_{\rm{Planck}}$ (or $n=0$), as a function of particle propagation distance. A number of reference distances are shown with dashed lines. Scenarios with differing $m$ are shown, with their interpretations discussed in Section \ref{sec:distance_dependence}.}
\label{fig:delta_t_vs_L}
\end{figure}
For the uncorrelated distance fluctuation case ($m=1/2$), even over cosmological propagation distances the distance fluctuations accumulate to only $\leq \order{\rm{ps}}$ in scale, before any possible energy-suppression at observable particle energies is even considered. Since no source of high energy neutrino emission with such a short time scale is known, there are currently no real prospects of detecting such a scenario via arrival time fluctuations.
However, in velocity fluctuation ($m=1$) or correlated distance fluctuation ($m>1/2$) scenarios, the prospects are much improved. \Cref{fig:delta_t_vs_E_GRB} shows the arrival time fluctuations expected when $m=1$ as a function of particle energy for two energy-suppressed scenarios, $n=1, 2$. For the particle travel distance, the source is assumed to be GRB 090510 (redshift, $z = 0.903$~\cite{McBreen2010}), which has been the subject of a number of quantum gravity motivated arrival time studies~\cite{Abdo2009, Vasileiou2015} and has $\order{\rm{ms}}$ sub-structure in its prompt photon time distribution which would be degraded and ultimately unresolved if travel time fluctuations exceed this scale.
\begin{figure}[htp]
\centering
\includegraphics[trim=0.5cm 0.0cm 0.0cm 0.0cm, clip=true, width=\linewidth]{lightcone_fluc_delta_t_GRB.pdf}
\caption{Propagation time variation expected for the `natural scenario' given by \Cref{eq:deltaL_natural} for a particle travelling from GRB 090510 to Earth, as a function of particle energy. Fully-correlated ($m=1$) distance fluctuations or equivalently distance independent velocity fluctuations are assumed. Both $E / M_{\rm{Planck}}$ (e.g. $n=1$) and $\left ( E / M_{\rm{Planck}} \right )^2$ (e.g. $n=2$) energy suppression scenarios are shown. The horizontal grey line indicates the approximate time sub-structure in the time profile of the prompt emission from the GRB, with $\delta t$ above this line yielding sensitivity to natural Planck scale physics. }
\label{fig:delta_t_vs_E_GRB}
\end{figure}
From \Cref{fig:delta_t_vs_E_GRB} we see that travel time fluctuations would exceed this $\order{\rm{ms}}$ scale for $\gtrsim$GeV particles in the `natural' scenario shown when the energy suppression is $E / M_{\rm{Planck}}$ ($n=1$), which has enabled Planck scale limits on velocity fluctuations to be set using $\order{\rm{GeV}}$ $\gamma$-rays from this GRB~\cite{Vasileiou2015}. Should $\gtrsim$TeV neutrino emission be eventually detected from distant GRBs, we see that the sensitivity to Planck scale physics will significantly exceed that currently available from GeV photon observations, allowing the possible detection of quantum gravity effects beyond the reach of current measurements (for example if the energy scale of quantum gravity exceeds $M_{\rm{Planck}}$, or $\delta L_{0} < \delta L_{\rm{Planck}}$). In fact, even longer duration sources (seconds or minutes) could still yield Planck scale physics signals for $>$TeV particles. Combined analyses of neutrino emission from multiple GRBs and multiple neutrino telescopes could be used to help overcome the low statistics inherent in neutrino point source observations when compared to $\gamma$-rays.
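To put rough numbers on this (an order-of-magnitude estimate on our part, taking the light travel time from GRB 090510 to be a few $\times 10^{17}$\,s for $z \approx 0.9$), the natural $m=1$, $n=1$ scenario gives
\begin{equation}
\delta t \approx \frac{L}{c} \, \frac{E}{M_{\rm{Planck}}} \sim 20 \, \rm{ms} \times \left( \frac{E}{\rm{GeV}} \right) ,
\end{equation}
\noindent so GeV photons sit just at the $\order{\rm{ms}}$ sub-structure scale, whilst a TeV neutrino from the same source would accumulate arrival time fluctuations of order tens of seconds, far in excess of it.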
For the $\left ( E / M_{\rm{Planck}} \right )^2$ ($n=2$) energy suppression scenario, the prospect of detecting Planck scale physics is far weaker, requiring observations of multiple EeV neutrinos from GRBs (for example by future large scale radio neutrino observatories). Since the Universe is opaque to photons at these energies however, neutrinos still offer the best detection prospect in this scenario.
Aside from the potential gains from the high energies observed in astrophysical neutrinos, constraining Planck scale physics with different messenger particles is also inherently desirable, as it is also possible that the effects of fluctuating space-time differ for different particle types~\cite{ELLIS2004669, PhysRevD.59.116008, Coleman:1998en}.
\section{Summary and conclusions}
In this work we have presented a heuristic parameterisation of particle propagation distance fluctuations, with the aim of probing the expectation that space-time fluctuates if gravity is a quantum force. This parameterisation accounts for both distance and energy dependence, unlike previous work that has considered only one or the other individually, and has also been shown to be representative of velocity fluctuation scenarios.
The influence of these lightcone fluctuations on neutrino propagation was studied, considering both the loss of neutrino coherence (and corresponding damping of neutrino oscillations), and the broadening of neutrino arrival times from short duration distant astrophysical sources. Using simulations of propagating neutrino states in the presence of distance fluctuations, we have quantified the decoherence effects resulting from lightcone fluctuations, and determined an operator representing these effects in the framework of open quantum systems. This operator allows experimental searches for neutrino decoherence to be connected to potential underlying fluctuations in space-time, and compared to results from $\gamma$-ray astronomy.
Due to their macroscopic oscillation wavelengths, we have seen that neutrinos only experience significant decoherence effects in a small number of (optimistic) lightcone fluctuation scenarios, and even then only over cosmological or perhaps galactic distances. The incoherent nature of the diffuse astrophysical neutrino flux is, unfortunately, not well suited to constraining such effects, further limiting the potential of neutrino decoherence measurements to constrain natural Planck scale physics.
However, we have seen that should $\gtrsim$TeV neutrinos be detected in association with GRBs, sensitivity to lightcone fluctuations via the arrival time spread of the observed particles (neutrinos, $\gamma$-rays, ...) can likely be significantly enhanced beyond present limits from $\gamma$-ray observations, potentially even far beyond the Planck scale. Current or next generation neutrino telescopes like IceCube, IceCube-Gen2~\cite{Aartsen:2020fgd} and KM3Net~\cite{MARGIOTTA201483} may therefore make crucial contributions in the ongoing search for a quantum theory of gravity.
\section*{Acknowledgements}
\noindent The authors thank Markus Ahlers for feedback on the paper draft, and Jason Koskinen, Subir Sarkar, Jo\~{a}o Coelho, Mauricio Bustamante, Shashank Shalgar, Mohamed Rameez, Peter Denton and Pilar Coloma for valuable conversations. This work was supported by a Carlsberg Young Researcher Fellowship grant `NuFront: Neutrinos at the Physics Frontier' [case no. CF19-0652] and by VILLUM FONDEN (project no. 13161). The authors would like to acknowledge the contribution of the COST Action CA18108.
\nocite{*}
I think it is appropriate to ask what are some of the "Big Questions" in particle and nuclear physics,
and how does this meeting address them. The choice of questions
reflects my view of the field, and it is certainly not universally held.
\begin{itemize}
\item{\bf What are the fundamental interactions?}
We understand electromagnetic interactions, and weak interactions up to
a scale of around $100~GeV$. Classical gravity is understood, and
quantum gravity is the subject of much theoretical speculation with little experimental
input.
QCD is the central subject of this meeting. The basic interactions of QCD are understood,
and one has a wide variety of experimental tests of QCD at high energy and large momentum
transfer. This is the short distance limit of the theory.
\item{\bf What is the structure of matter?}
Strongly interacting matter can take many forms. Strong interactions bind the quarks and gluons
into hadrons, and make nuclei from nucleons. Part of the subject of this
meeting is how matter is formed from the collisions of particles at very high energy.
This involves the formation of quarks and gluons from energy in the initial collision process, and
ultimately the hadronization of quarks and gluons.
\item{\bf What are the different forms this matter can take?}
One of the remarkable features of the high energy limit of QCD is that it appears to
be described by new forms of matter. The part of the wavefunction which controls
the high energy limit of QCD is composed of gluons in a very high energy density, highly coherent state,
the Color Glass Condensate. This matter is liberated upon collision, and has properties
like a plasma, except with high density color electric and magnetic fields: the so-called Glasma.
In nuclear collisions, and also possibly in pp collisions at the highest energies, this matter thermalizes
and forms a Quark Gluon Plasma.\cite{cgcreview}-\cite{zakopane}
These new forms of matter, and perhaps other forms of matter not yet thought of, are probed by studying the multiparticle dynamics of
high energy collisions. Insofar as this matter has universal properties, the study of this matter is of fundamental interest.
In order to understand the matter formed in these collisions, we need to develop fully the space-time
description of high energy collisions. This understanding is good in some cases, such
as the formation of jets, or the relatively late stage evolution of a Quark Gluon Plasma. It
is the subject of much more speculation when describing hadronization, or the early time
development of a Glasma.
\end{itemize}
Before proceeding, I should say that it is impossible to be fully comprehensive in a summary talk.
It is even harder in the written version of the talk, as there is even a tighter space limitation.
So please forgive me if I have not discussed, nor fully elaborated, on the subject of your presentation.
Also, please be tolerant of my lack of deep understanding of many of the topics presented here.
I also have taken the liberty of referring to the original literature collectively through
some very nice review papers. Please find references to the original literature in these
reviews.\cite{cgcreview}-\cite{whitepaper}
The contributions to the conference which I discuss below are
of course part of these conference proceedings.\cite{ismd} Also, please look for references to some of the original
literature described in these talks in the corresponding written contributions.
\section{Beyond the Standard Model}
There were two talks at this meeting about the physics beyond the standard
model. The first, by Bill Gary, concerned CP violation in B decays, $ B^0 \rightarrow \overline K^{*0} K^0$.
One of the reasons for this study is to test the unitarity of the CKM model of CP violation.
If a lack of unitarity was found, then this would imply that the Standard Model must be extended.
There are a variety of inputs needed to draw such a conclusion, including high precision
lattice computations of weak matrix elements.
The other talk was by Horst Stoecker, who has been using extra dimensions and TeV scale gravity
to explore the possible new particles which might be produced at the LHC. This work is
very speculative, since extra dimensions might not exist, and if they did, the size of such
an extra dimension could be anywhere from the Compton wavelength associated with the TeV energy scale up to the Planck length. There is also a virtual continuum of mass ranges and properties
of the particles predicted by extra dimensions. The test of all of this
will be the LHC. If there is something there associated with TeV mass black holes, theoretical
physics will be changed in a deep and fundamental way. If there is nothing there, few of us would lose sleep. Such is the nature of speculation.
\section{QCD Works at Short Distances}
QCD describes strong interaction processes well at short distances. One of course has to choose
infrared safe observables, and for many jet computations one typically uses Monte-Carlo fragmentation
models. In the presentation by Maramidas, the cross section predicted by NLO pQCD was shown to
describe well the D production data in deep inelastic scattering measured by Zeus. In the presentation
by Mesropian, CDF jet production was compared to pQCD computations. This is shown in Fig.~\ref{mesropian}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{mesropian-incl-jets-5}
\caption{A Comparison of pQCD computations with CDF jet production data. }
\label{mesropian}
\end{center}
\end{figure}
There were presentations showing the agreement of NLO computations with photon-gluon fusion
and photon-quark scattering to produce pairs of jets
in deep inelastic scattering, by Efremenko, as seen in the H1 experiment. Messinia from CDF presented data on associated jet production in the production of W bosons. In both cases the agreement with pQCD is quite good. Soares presented data on multi-jet production at Zeus. In this case, the agreement is not as good as in the other pQCD comparisons, but this is presumably because
the theory computations have not been done with corresponding accuracy.
The remarkable agreement between pQCD and short distance processes in QCD forces us
to ask:
\vspace{0.1in}
\noindent{\bf Where are the frontiers of our knowledge?}
\vspace{0.1in}
One of the issues here is that the distribution and fragmentation functions in QCD are largely phenomenological. One can use DGLAP equations to describe their evolution, for high
$Q^2$ and not too small x. Perhaps the BFKL evolution works at small $x$ and not too
large $Q^2$. How these distribution functions originate either from boundary conditions
in solving the evolution equations, or as universal fixed points of the evolution equations is
not fully understood.
This means that as a matter of first principle in QCD, we have full understanding neither of
the origin of the distribution of quarks and gluons inside a hadron wavefunction nor of how hadrons are produced
from these quarks and gluons.
There are also a wide variety of machines, some operating, some almost operating,
and some proposed which can help us to understand these issues. Hera has provided hints
about the nature of matter which controls the physics at the highest energies. This comes
from both deep inelastic scattering and diffractions at small values of x. RHIC
has produced a strongly interacting Quark Gluon Plasma, and we are beginning to learn
some of its properties. It has also provided hints about the nature of small x matter produced
at Hera, and about how this matter is converted from the wavefunction of a nucleus into
matter which evolves and ultimately becomes the Quark Gluon Plasma. If the theoretical
speculations concerning these results from RHIC and Hera are more or less correct,
we have (in my opinion) the beginnings of a first principles understanding of the high energy limit of QCD,
and the remarkable conclusion that it is due to the universal properties of matter made
in these collisions.
Soon, we will have the LHC with unprecedented range in
$x$ and $Q^2$, with potential for both new discovery, and perhaps turning some of the hints seen
at Hera and RHIC into substantial scientific discovery. An electron-ion collider dedicated to
QCD studies may provide detailed quantitative tests of hypotheses about the nature of such
matter.
\section{Deep Inelastic Scattering and Diffraction}
Glazov presented the latest results from Hera on the distribution of quarks and gluons.
In Fig. \ref{glue}, the distribution of gluons extracted from the measurements of quarks is shown.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{glueathera}
\caption{The gluon distribution function measured at Hera. }
\label{glue}
\end{center}
\end{figure}
The rise of the gluon density at small x has led to the idea that the gluon density gets quite
large. Ultimately, at any fixed $Q^2$, general arguments require the gluon density
to stop growing so rapidly with decreasing $x$. This phenomenon is called saturation.
It is a consequence of the repulsive interactions of a high density state of gluons.
Much theoretical speculation as to the nature of this saturated matter has arisen,
and the most popular of these ideas is that the high density gluons form a Color Glass Condensate.
This is a highly coherent distribution of gluons with properties similar to those of Bose condensates and
spin glasses.\cite{cgcreview}
Janssen presented data on the cross section for diffractive deep inelastic scattering. Diffractive scattering is like deep inelastic scattering, except that the final state has no particles with
$x$ values roughly between that of the photon and that of the target. One produces a few
particles with $x$ close to that of the photon, and then there is a gap with no particles
at intermediate $x$ ranges. One of the predictions of saturation models is that
the ratio of deep inelastic diffraction to deep inelastic scattering cross sections
is roughly constant. Data on these cross section ratios are shown in Fig.~\ref{difratio}. Polini presented data on diffractive production of vector mesons.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{diffratio}
\caption{The ratio of deep inelastic to diffractive cross sections measured at H1}
\label{difratio}
\end{center}
\end{figure}
Kugeratiski presented a theoretical analysis of the diffractive data within a saturation
model based on ideas related to the Color Glass Condensate.\cite{zakopane} He argued that diffractive
scattering on nuclei provides sensitive tests of these ideas. The physical reason for this is
that saturation predicts a black disk scattering limit at the $Q^2$ values for which
there is saturation. Diffractive scattering is strongest when scattering from black disks.
Nuclei, since they have higher gluon densities, allow one to probe at higher values of
$Q^2$ where theoretical computations are better under control.
Ducatti and Machado also presented a dipole model of deep inelastic scattering. The cross
section for deep inelastic scattering at small x has a scaling property
\begin{eqnarray}
\sigma_{\gamma^* p} = F(Q^2/Q_{sat}^2(x)),
\end{eqnarray}
and does not have a separate
dependence on $x$. All of the dependence on $x$ comes through the saturation momentum,
and its dependence on $x$. This dependence may be inferred phenomenologically,
or by theoretical computation. The scaling property was established by Golec-Biernat, Kweicinski
and Stasto and well describes deep inelastic data for $x \le 10^{-2}$.\cite{zakopane} Machado and Ducatti
established that geometric scaling also works for scattering from nuclei.
\section{Exotic Resonances}
The exotic resonances seen in BABAR at SLAC and in CLEO, Belle and Focus were the
subject of Nielsen's talk. She argued that these states might be interpreted as charm molecules.
Her analysis requires a reinterpretation of some low lying hadronic states as molecular
states. This aspect of QCD, molecular bounds states or states containing glue is fascinating
as it probes the multiparticle dynamics of QCD. It is nevertheless very difficult, since
there is always mixing between quark-antiquark states and gluons, and this is difficult to disentangle.
\section{Matter in the Earliest Stages of Hadronic Collisions}
The matter in the initial wavefunctions of colliding hadrons is very coherent. That is the nature
of a bound state wavefunction. This matter must somehow become decoherent in the scattering
process and form distributions of quarks and gluons. In the Color Glass Condensate description,
there is a collision of two sheets of colored glass, which then melt into gluons. During
this melting, there are highly coherent color electric and color magnetic fields. These fields
carry a topological charge density.
This matter thermalizes in the collisions of large nuclei, and probably also in the collisions of protons at the highest energies. The matter at intermediate times between the initial collision and
the ultimate formation of a Quark Gluon Plasma is called the Glasma.\cite{cgcreview}-\cite{zakopane}
Of course there are alternative frameworks to that of the Color Glass Condensate, and they share
the common goals and many common features of the Color Glass Condensate description.
Gustafson gave presentations where he used the Lund string model to attempt a generic understanding
of the formation of matter. He argued that Lund kinematic diagrams provide an understanding
of anomalous dimensions. He also addressed one of the unresolved problems of QCD:
Pomeron Loops or Ploops. Pomerons can be thought of as collective excitations of the Color Glass, or
more generally the matter in the initial state hadronic wavefunction. In collisions, one
excites these modes. Such modes should have quantum fluctuations, and in diagrammatic
language, these are loops. The Gribov Reggeon Calculus was one attempt to make a theory
of Pomeron loops. In either the Color Glass or the Lund String model, one should ultimately
have a complete theory of such Ploops.
There are haunting similarities between the Lund String Model and the Color Glass Condensate
descriptions. Perhaps in future years, these descriptions will somehow merge.
Strikland presented arguments that during the Glasma phase, the initially approximately
boost invariant distribution of particles is unstable with respect to formation of rapidity
dependent density fluctuations. The seeds of these fluctuations arise in the quantum
wavefunctions for the hadrons, and over time become amplified to a magnitude
typical of the Glasma fields. It is not established whether there is sufficient time in
collisions of nuclei either at RHIC or LHC for this effect to thermalize the produced matter.
This is an area where there is much progress and excitement.
\section{The Quark Gluon Plasma}
\subsection{Hydrodynamics}
If the matter produced in heavy ion collisions forms a well thermalized Quark Gluon Plasma,
then the evolution of this matter should be well described by perfect fluid hydrodynamics.
There are a variety of approaches. Some are phenomenological such as the Blast Wave Model.
Some are fundamental, but make assumptions on the initial conditions which are inflexible,
such as the Buda-Lund Hydro Model. Some such as the SPheRIO model use state of the art
numerical methods and can solve the hydrodynamic equations for arbitrary initial conditions
and equations of state.
Csorgo presented the state of the art for Buda-Lund Hydro. This theory, for very specific
initial conditions, allows for analytic solutions. It can be directly compared to the more
phenomenological results of Blast Wave presented by Kiesal. Buda-Lund provides a nice
theoretical laboratory where one can study the effects of various equations of state. It also may
be a fixed point of the more general hydrodynamic solutions at large time.
Grassi presented beautiful results from the SPheRIO model. She showed that fluctuations
in the initial conditions can affect the extraction of v2, and argued that this may affect
previous extractions.
Koide and Wolschin presented the state of the art for attempts to include
the effect of viscosity in relativistic hydrodynamics. This causes problems
with either negative entropy production or with causality. It seems that these problems
are controllable.
\subsection{The High Density Quark Gluon Plasma}
The Quark Gluon Plasma at high baryon number density is the subject of renewed interest.
As shown in Fig. \ref{lowdensity}, there is an expected critical point at some value
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{lowdensity}
\caption{The phase diagram for QCD as a function of baryon density
and temperature. }
\label{lowdensity}
\end{center}
\end{figure}
of baryon number density and temperature. Roland showed that experiments
to search for this critical point were feasible at RHIC. He also presented the
plot of $K^+/\pi^+$ from the SPS heavy ion experiments which shows a sharp peak
at an energy of 5-10 GeV. This has been argued to be a hint for the critical point, but one should
be cautious as precisely where the peak occurs, one is joining together data from different
experiments.
Lacey argued that either the low $p_T$ $K/\pi$ fluctuation, or a minimum in the viscosity to entropy ratio, may
provide a signal for the critical point. He presented provocative arguments that such a minimum
might be seen.
\subsection{Jet Energy Loss}
In experiments at RHIC energy, jet energy loss has provided strong indications
that one has produced a strongly interacting Quark Gluon Plasma. Lajoie presented the latest
data from Phenix concerning this energy loss. One has a good phenomenological
understanding of jet energy loss for light quarks, but a detailed quantitative theory
is difficult.\cite{whitepaper}
One of the outstanding mysteries is the apparent large amount of energy loss of
charmed particles. This is also related to the large flow of charm. It appears that
in spite of the large mass of the charmed particle, it slows down in a medium like a light particle.
This is a surprise since in the rest frame of a charmed quark, the typical energy exchange
in a hadronic interaction, should be of the QCD scale, and the fractional energy loss
decreases as the inverse of the charm mass. Boosting to the fast moving frame of
a charmed quark, the fractional energy loss remains invariant, so the heavy quark
does not slow down much due to a collision.
Charm quark energy loss was underscored as one of the major problems with jet energy
loss calculations in the talk by Vitev. He also presented some beautiful calculations which show
the dependence of the jet suppression factor on the number of participating nucleons in a collision,
$\ln(R_{AA}) \sim -\kappa A^{2/3}$.
Gossiaux emphasized that our lack of understanding of heavy quarks is great. Predictions
for the cross section of open charm production do not agree with RHIC data. We
do not have a comprehensive picture of $J/\Psi$ production. Energy loss and flow data do
not agree with expectations.
Perhaps some of the problem concerning heavy quark energy loss might be resolved
if there were a smaller contribution from bottom than expected. Suade argued that
electron-hadron correlations might be useful here.
Xu argued that the fragmentation of gluon jets and quark jets should be different. Quark
jets and gluon jets should have different energy loss mechanisms in the Quark Gluon Plasma.
It is therefore mysterious why the fragmentation products in AA collisions appear to be the
same as in pp collisions.
It is difficult to judge how serious the problems are here. There is still no consensus about how
to compute jet energy loss, or about which mechanisms are dominant. Nevertheless,
there are very bright people thinking about these problems, and the field is young.
\subsection{Global Properties of Heavy Ion Collisions}
Fachini gave an excellent overview of the global properties of heavy ion collisions.
She argued that flow is well described by hydrodynamic computations up to transverse
momenta of $1~GeV$. She argued that perhaps the saturation of hydrodynamic
bounds at RHIC may be accidental and that at higher energies these bounds might be exceeded.
These bounds arise from assuming Glauber type initial conditions, and Color Glass Condensate
initial conditions allow for more flow. This implies that viscous effects would be non-negligible
at RHIC, if the Color Glass Initial Conditions were used.
Fachini also argued that the flow behaviour at $p_T \ge 1~GeV$ might be explained by
participant scaling. Here one takes the v2 of an observed particle and divides by the number
of its participants. Then one takes either the transverse kinetic energy or the transverse momentum
and divides also by the number of participants. The resulting distribution is universal and
independent of observed particle up to several GeV.
This can be explained in coalescence models. Such models however violate energy conservation.
The scaling behaviour is nevertheless remarkably good, better than I would expect from the
models. Even so, this basic underlying coalescence mechanism is strongly suggested.
Fachini also discussed a possible mass shift of the $\rho$ meson. In peripheral
$Au+Au$ collisions in Star there is a mass shift but no broadening. This may be due
to interferences between different channels producing the $\rho$ and is likely a final state effect.
In NA60 central $In+In$ collisions, there is a broadening of the $\rho$ but no mass shift.
Are these measurements in agreement?
Rapp argued that one can understand the $\rho$ broadening in $In+In$ collision at the SPS.
It is due to in media interactions of the $\rho$ meson.
Finally, Fachini analyzed thermal models of particle production. Thermal models provide a
remarkably good description of particle ratios. One can also estimate transverse
flow velocities and temperatures at decoupling by the shapes of $p_T$ distributions.
Wiik argued that there should be fluctuations in the Hagedorn spectrum. This would put
the SPS data in better agreement with experiment. Such an adjustment is not required for the
RHIC data.
\section{Hanbury-Brown-Twiss Interferometry}
By studying the correlations of identical particles, it is possible to experimentally determine
the time and spatial region over which particles stop interacting. This is the so called
surface of decoupling. (In fact, for an evolving system such as a heavy ion collision,
it is not really a surface, since at each time there is a spread-out surface
due to fluctuations in the last scattering position, and the
shape of the surface evolves in time.)
Metzger presented a remarkable analysis of data from the L3 detector at LEP. By
a very detailed analysis he showed that deviations from a Gaussian parameterization
demonstrate that the space-time surface of decoupling is consistent with
inside-outside cascade dynamics. This picture is at the heart of the description
of heavy ion collisions. A description which incorporates this dynamics is that of Csorgo
and Zimanyi.\cite{csorgo}
Ukleja presented a comparison of typical size scales associated with LEP and HERA.
Chung presented an analysis which suggests that the non-Gaussian tail of the distribution
measured at RHIC may, as was the case at LEP, provide non-trivial information on
the decoupling surface. This has the potential to modify conclusions drawn from
a Gaussian analysis.
In my opinion, there is still much to be learned from HBT analysis of heavy ion collisions.
It is clear that Gaussian fits miss much of the physics. Cramer argued that there may
be coherent scattering combined with absorption at late time in the collision, and that
this can modify the conclusions. Perhaps some of the ideas described by Florkowski concerning
multiple correlations and jets may be useful in sorting all of this out.
\section{The Highest Energies}
The highest energy collisions observed remain those of cosmic rays.
Licinio argued that one may have access to new physics in such collisions,
but this is difficult since the properties of showers are largely determined by the physics
of the fragmentation region. So long as there is limiting fragmentation, the shower
properties are determined. Nevertheless, Escobar argued that cosmic rays
at the highest energies may allow us to do astronomy. At the highest energies,
cosmic rays are not bent much by galactic and extra-galactic magnetic fields.
This allows in principle the identification of the highest energy source of cosmic rays.
Candidates for such sources include active galactic nuclei (black holes) and neutron stars.
\section{Acknowledgements}
I thank the organizers for inviting me to give this summary, and for the wonderful
atmosphere in Paraty. It was a very good meeting, where the full range
of physics associated with strong interactions at high energies was presented.
This manuscript has been authored under Contract No. DE-AC02-98CH0886
with the U. S. Department of Energy.
It is well known that the formation rate per unit mass of low-mass X-ray binaries (LMXBs) is orders
of magnitude greater in globular clusters (GCs) than in the Galactic field (Katz 1975; Clark 1975). The
high formation rate of LMXBs in GCs is attributed to the frequent dynamical interactions in the dense
stellar environment. As an additional formation channel, the binaries in a GC can also be
a result of a standard evolutionary path identified for MSP formation in the Galactic
field. This has stimulated many theoretical and observational studies to investigate the relative
contribution of these two formation processes of compact binaries in the population of GCs (e.g. Fregeau
2008; Pooley et al. 2003; Pooley \& Hut 2006).
With the superior sub-arcsecond spatial resolution of the \emph{Chandra X-Ray Observatory}, remarkable
progress has been made in the understanding of the formation processes of close binaries in GCs. For
example, Pooley et al. (2003) found a positive correlation between the number of close X-ray binaries
in GCs and the stellar encounter rate, $\Gamma_{\rm c}$. Specifically, Pooley et al. (2003)
found an approximately linear relationship between the number of LMXBs and $\Gamma_{\rm c}$, indicating a
dependence on the properties of GCs. A similar relationship has also been reported by Gendre et al. (2003)
and, taken together with the results of Pooley et al. (2003), provides evidence for the dynamical origin of
LMXBs in GCs.
Since millisecond pulsars (MSPs) have long been proposed as the descendants of LMXBs, they are also
expected to have a dynamical origin in GCs.
Thanks to extensive pulsar surveys, 140 MSPs
have been detected in 26 different clusters, and a statistical study of their relationship to cluster
parameters is desirable.
\footnote{For updated statistics, please refer to http://www2.naic.edu/$\sim$pfreire/GCpsr.html}
However, previous studies were not successful in finding evidence for the dynamical origin of MSPs in
the clusters due to the lack of a relation between the pulsar population and $\Gamma_{\rm c}$
in the GCs (e.g. Ransom 2008).
This can be ascribed to the observational bias in the pulsar searches. As the distances
of the GCs span a rather wide range (cf. Harris 1996), the sensitivity of the observations can differ
in the searches toward different clusters and hence induce selection effects in the observed sample
(Ransom 2008). Therefore, the observed number of MSPs is not representative of an unbiased sample for the
analysis.
In this paper, we present a method to alleviate the aforementioned problem and investigate the
possible relationship between the number of MSPs and the cluster properties. In \S2, an investigation of
the cumulative luminosity distribution functions of MSPs in a number of selected GCs is carried out. We
subsequently use the obtained results in a correlation analysis in \S3 and discuss the physical
implications of the possible correlation in \S4.
\section{CUMULATIVE RADIO LUMINOSITY FUNCTIONS \label{clf}}
Among the GCs, nine contain more than three MSPs for which their radio flux densities are reported. This
enables one to create the cumulative luminosity distribution functions (CLFs) of MSPs for these clusters
and to estimate their logarithmic slopes by a power law fit. The physical properties of these nine selected
GCs are summarized in Table~\ref{gc_info}. As the observations for an individual cluster were conducted at
different frequencies, we convert all flux densities to the values at 1.4 GHz by assuming a spectral index
of $-1.8$, which is the mean value reported by Maron et al. (2000).
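As an illustration of this conversion (not part of the original analysis), the scaling to 1.4~GHz and the formation of the pseudo-luminosity $L_{\rm 1.4~GHz}=S_{\rm 1.4~GHz}\,d^{2}$ could be written as in the following sketch; the numerical values in the example call are placeholders, not survey measurements.
\begin{verbatim}
# Illustrative sketch (Python); not the code used in this work.
import numpy as np

def pseudo_luminosity_1p4GHz(s_mjy, nu_ghz, d_kpc, alpha=-1.8):
    """Scale a flux density measured at nu_ghz to 1.4 GHz assuming
    S_nu ~ nu**alpha, then form the pseudo-luminosity S_1.4 * d**2."""
    s_1p4 = s_mjy * (1.4 / nu_ghz) ** alpha   # mJy at 1.4 GHz
    return s_1p4 * d_kpc ** 2                 # mJy kpc^2

# placeholder values: 0.3 mJy measured at 2.0 GHz, cluster at 4.5 kpc
print(pseudo_luminosity_1p4GHz(0.3, 2.0, 4.5))
\end{verbatim}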
We model the CLFs with a form of $N(>L_{\rm 1.4~GHz})=N_{0}L_{\rm 1.4~GHz}^q$, where $N_{0}$ represents
the number of MSPs with the pseudo radio luminosity $>$ 1 mJy~kpc$^{2}$ at 1.4~GHz.
Since no obvious turn-off is observed from the distribution of individual cluster, we have
taken the entire sample into account. Following Hessels et al. (2007), we adopt the
square root of $N(>L_{\rm 1.4~GHz})$ as the uncertainties in the analysis and
fit $\log N(>L_{\rm 1.4~GHz})$ versus
$\log L_{\rm 1.4~GHz}$ with a linear regression analysis that takes the uncertainties into account.
The best-fit parameters are tabulated in Table~\ref{gc_lf_par}.
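A minimal sketch of this fitting procedure is given below, assuming only that the CLF is built by ranking the detected pulsars in luminosity and that the $\sqrt{N}$ uncertainties are propagated into $\log N$; the luminosities in the example are placeholders rather than the measured values.
\begin{verbatim}
# Illustrative sketch (Python); placeholder data, not the survey values.
import numpy as np

def fit_clf(lum_mjy_kpc2):
    """Fit log N(>L) = log N0 + q log L, adopting sqrt(N) as the
    uncertainty on N(>L) and weighting the regression accordingly."""
    L = np.sort(np.asarray(lum_mjy_kpc2))[::-1]      # descending luminosities
    N = np.arange(1, L.size + 1)                     # N(>L) at each pulsar
    sigma_logN = 1.0 / (np.sqrt(N) * np.log(10.0))   # sqrt(N) error in log space
    q, logN0 = np.polyfit(np.log10(L), np.log10(N), 1, w=1.0 / sigma_logN)
    return q, 10.0 ** logN0

q, N0 = fit_clf([1.2, 1.8, 2.5, 4.0, 7.5, 12.0])
print(q, N0)
\end{verbatim}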
It can be seen that the best-fit values of $q$ lie in a range $\sim -0.6$ to $\sim -1.6$ with the steepest slope
inferred for the MSPs in M~3. In this analysis, we have also taken the
uncertainties associated with the cluster distance determination into account. In reviewing different
methods in determining the distances to GCs, it has been concluded that these various methods imply an
uncertainty of $\pm6\%$ in the cluster distance (Krauss \& Chaboyer 2003; Chaboyer 2008). Adopting the
uncertainty of distance in constructing the CLFs, this results in an additional error in the normalization
beyond the statistical error from the fits.\footnote{We adopted $68\%$ confidence intervals in all the
reported analysis.} Both errors are combined in quadrature which are quoted in
Table~\ref{gc_lf_par} and adopted in subsequent analysis in \S3.
By using the best-fit CLFs for extrapolation, we estimate the
number of MSPs in each cluster with $L_{\rm 1.4 GHz}>0.5$~mJy~kpc$^{2}$.
Almost all the MSPs in GCs considered in this study have their
radio luminosities above this threshold level. Since the errors of both $q$ and $N_{0}$
are considered in the extrapolation, the uncertainties of these estimates are larger than that of $N_{0}$.
These population estimates are also given in Table~\ref{gc_lf_par}.
For those clusters which have been searched deep enough that the sensitivity is close to this threshold,
e.g., 47~Tuc, the population estimates based on the CLFs are close to the observed number of uncovered MSPs.
On the other hand, for those clusters where pulsation searches have not yet reached this sensitivity level, our
results provide predictions for their MSP populations when searches towards these clusters become sufficiently deep.
Apart from estimating the CLFs for these individual GCs, we have also constructed the luminosity function
by combining all the cluster MSPs (i.e. 76 pulsars in total) used in this study. The combined CLFs of the
selected cluster MSP population are displayed in Figure~\ref{gal_gc_clf}.
In the combined distribution, we observe a turn-off for pseudo-luminosities smaller
than $\sim1.5$~mJy~kpc$^{2}$. Hessels et al. (2007) have also observed the same behaviour in an independent
analysis. In order to compare our result with Hessels et al. (2007), we followed their procedure and
used a minimum luminosity cut-off of $L_{\rm 1.4~GHz}=1.5$~mJy~kpc$^{2}$ for the fitting procedure. The results
of this analysis are summarized in Table~\ref{gal_gc_clf_par}. The logarithmic slope inferred
from this collective sample is $q=-0.83\pm0.05$, which is consistent with the value deduced by Hessels et
al. (2007) (i.e. $q=-0.77\pm0.03$) within $1\sigma$ error.
Hessels et al. (2007) have further compared the luminosity distribution of the MSP population in GCs with those of
other pulsar populations. They suggested that the distribution of cluster MSPs is marginally consistent with
that of the MSPs in the Galactic field reported by Cordes \& Chernoff (1997) and Lyne et al. (1998). However, the samples
adopted in these investigations contain no more than 22 pulsars. This relatively small sample size may introduce
bias into the analysis and inference. Thanks to the extensive pulsar searches in
the recent years, the whole pulsar population has been significantly increased. Hessels et al. (2007) have also
compared their results with a more updated pulsar sample reported by Lorimer et al. (2006) which
used 1008 pulsars in their
study. Lorimer et al. (2006) have found a distribution of $d\log N\sim -0.8d\log L$ for their sample,
which is very similar to that inferred from the cluster population as reported by Hessels et al. (2007) and this
paper. Nevertheless, the sample used by Lorimer et al. (2006) consists of non-recycled canonical pulsars.
It is more instructive to compare the CLF of the cluster MSP population and the MSPs in the Galactic field,
both of which have undergone the recycling processes in binary systems. Therefore, we construct the luminosity
function for the MSP population in the Galactic field with all the available data in the ATNF catalog
(Manchester et al. 2005).
To do this, we specifically selected all pulsars in the Galactic field with $P<$
20 ms. In total, there are 51 MSPs with flux densities reported in the catalog, enabling us to
construct the CLF. For comparison, the CLF of the Galactic population is over-plotted along with the
cluster population in Figure~\ref{gal_gc_clf}.
To be consistent with the analysis of the cluster population, we used a minimum luminosity cut-off of
$L_{\rm 1.4~GHz}=1.5$~mJy~kpc$^{2}$ in the fitting. The inferred slope of the Galactic population is
$q=-0.48\pm0.04$, which is found to be flatter than the value deduced from the cluster population.
To further investigate the difference between two populations, we separate each population into two
sub-samples, namely the binary and the solitary MSPs. The results are summarized in Table~\ref{gal_gc_clf_par}.
\footnote{For PSRs J1342+2822A and J1342+2822C located in M~3, it is still uncertain whether
they are binary or solitary and therefore they are omitted in the analysis of separate populations.}
By studying these sub-populations in GCs separately, we deduce the slopes to be $q=-0.73\pm0.08$ and
$q=-0.89\pm0.11$ for the binary and the solitary MSPs respectively. These values are consistent with those
inferred by Hessels et al. (2007) within $1\sigma$ uncertainties. In the Galactic field, we also
examine the luminosity functions separately for the binary and solitary MSPs. The slopes inferred
for the binary and solitary populations are $q=-0.49\pm0.05$ and $q=-0.24\pm0.11$ respectively,
which are rather different from those inferred from the cluster population.
For completeness, we have also repeated the above analysis with all the MSPs (i.e., without excluding those
with their pseudo-luminosities smaller than 1.5~mJy~kpc$^{2}$). The results are also tabulated in
Table~\ref{gal_gc_clf_par} for the sake of comparison.
\section{CORRELATION ANALYSIS \label{correlation}}
We have attempted to search for correlations between the observed number of MSPs in each GC and cluster
parameters. As the most promising correlation is expected with the two-body encounter rate, we begin our analysis
with this parameter. The two-body encounter rate in a cluster can be estimated as $\Gamma_{\rm c}\propto\rho_{0}^{2}
r_{\rm c}^{3}\sigma_{0}^{-1}$, where $\rho_{0}$ is the central luminosity density, $r_{\rm c}$
is the core radius and $\sigma_{0}$ is the velocity dispersion at the cluster center. In adopting
the central luminosity density as an estimate of the stellar density at the center, there is an underlying
assumption that the average luminosities of the stars in the cluster centers are approximately equal
to $\sim1~L_{\odot}$. Figure~\ref{obs_msp} shows the relation between the observed
populations with $\Gamma_{\rm c}$. However, no obvious correlations can be identified with this observed sample.
These two quantities are correlated at a confidence level of $\sim75\%$ only.
This conclusion is similar to that reported by Ransom (2008). Owing to the limited amount of telescope time, many
clusters that host only a single pulsar have not been searched to the same sensitivity level as those of the
specifically selected targets, such as 47~Tuc. Therefore, the selection effect biases the observed numbers of
MSPs in different clusters.
In order to alleviate this problem, we suggest the use of the CLFs of the investigated clusters.
With the best fits of the CLFs (see \S2), we are able to estimate the number of MSPs in these
GCs above a given luminosity threshold and thus obtain an unbiased sample.
Specifically, we take the best-fit values of $N_{0}$
in these GCs to estimate the numbers of MSPs in these clusters with pseudo-luminosities
above 1~mJy~kpc$^{2}$, and examine whether these numbers are related to different physical quantities of the clusters.
In this analysis, the possible
correlations between $N_{0}$ and the two-body encounter rate $\Gamma_{\rm c}$, metallicity [Fe/H], cluster
mass $M_{\rm GC}$, velocity dispersion $\sigma_{0}$ and escape velocity $v_{\rm escape}$ at the cluster
center are explored. All these quantities are speculated to have influence on the binary formation and hence
the MSP population in a cluster.
While the two-body encounter rate $\Gamma_{\rm c}$ is related to
the binary population resulting from dynamical interactions, the metallicity [Fe/H] of a cluster
can have a profound influence on the evolution of LMXBs (see Ivanova 2006 and the discussion in \S4).
On the other hand, if stellar encounters were not the major channel of the binary formation, one would
expect the binary population to be correlated with the cluster mass $M_{\rm GC}$.
Assuming a constant mass-to-light ratio, $M_{\rm GC}$ can be estimated from the absolute visual
magnitude $M_{V}$: $M_{\rm GC}\propto 10^{-0.4M_{V}}$.
We have also tested the correlation with $\sigma_{0}$ and
$v_{\rm escape}$ which may possibly be related to the retention of the neutron stars in a cluster.
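For concreteness, the two proxies used here could be evaluated as in the following sketch; the normalizations are arbitrary (only relative values matter), and the inputs in the example call are hypothetical rather than catalogue values.
\begin{verbatim}
# Illustrative sketch (Python); relative values only, arbitrary units.
def encounter_rate(rho0, r_c, sigma0):
    """Two-body encounter rate proxy, Gamma_c ~ rho0**2 * r_c**3 / sigma0."""
    return rho0**2 * r_c**3 / sigma0

def cluster_mass_proxy(M_V):
    """Cluster mass proxy for a constant mass-to-light ratio,
    M_GC ~ 10**(-0.4 * M_V)."""
    return 10.0 ** (-0.4 * M_V)

# hypothetical cluster parameters (rho0, r_c, sigma0) and absolute magnitude M_V
print(encounter_rate(1.0e5, 0.5, 11.0), cluster_mass_proxy(-9.4))
\end{verbatim}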
Without a priori knowledge of the distributions of the tested quantities, a nonparametric correlation
analysis is adopted. The computed Spearman rank correlation coefficients between $N_{0}$ and the various
quantities are tabulated in Table~\ref{correl}. Among all the tested quantities, the strongest correlation
is found between $N_{0}$ and $\Gamma_{\rm c}$. The corresponding Spearman correlation is 0.78 with a chance
correlation probability of 0.0125. The plot of $N_{0}-\Gamma_{\rm c}$ is displayed in Fig.~\ref{n_gamma_metal}a.
The correlation between $N_{0}$ and [Fe/H], with a Spearman coefficient of 0.72, has also been found to be
significant, with a chance correlation probability of 0.0298; it is plotted in Figure~\ref{n_gamma_metal}b.
By taking the errors of $N_{0}$ as the weight in the linear regression analysis, the logarithmic slopes of the
$N_{0}-\Gamma_{\rm c}$ and $N_{0}-$[Fe/H] relations are found to be $0.69\pm0.11$ and $0.72\pm0.11$ respectively.
For the other tested quantities, there are marginal correlations of $N_{0}$ versus $v_{\rm escape}$ and $\sigma_{0}$
at a confidence level $\gtrsim89\%$, though these are not sufficiently significant to secure the relations.
It is not surprising to note that the rank correlation coefficients are the same for these
two quantities, as Gnedin et al. (2002) have found that the ratio of $v_{\rm escape}$ to $\sigma_{0}$ has a
narrow range between $\sim3-5$.
Among all the tested quantities, the weakest correlation is found for the $N_{0}-M_{\rm GC}$ relation which has a
chance correlation probability over $60\%$.
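The nonparametric test and the weighted regression described above amount to the following calculation; this is a sketch only, and the arrays hold hypothetical values rather than the actual $N_{0}$ and $\Gamma_{\rm c}$ of the nine clusters.
\begin{verbatim}
# Illustrative sketch (Python); hypothetical data for nine clusters.
import numpy as np
from scipy.stats import spearmanr

N0        = np.array([25., 60., 12., 40.,  8., 30.,  5., 70., 18.])
sigma_N0  = np.array([ 6., 12.,  4.,  9.,  3.,  7.,  2., 15.,  5.])
log_gamma = np.array([1.8, 2.5, 1.2, 2.1, 0.9, 1.9, 0.6, 2.7, 1.5])

rho, p_chance = spearmanr(N0, log_gamma)   # rank coefficient, chance probability
print(rho, p_chance)

# weighted linear regression in log-log space (weights = 1/sigma of log10 N0)
w = N0 * np.log(10.0) / sigma_N0
slope, intercept = np.polyfit(log_gamma, np.log10(N0), 1, w=w)
print(slope)
\end{verbatim}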
As this choice of luminosity threshold is arbitrary, we further check the robustness of the correlation
analysis results by repeating the investigation with different thresholds.
We have repeated the analysis by adopting $N\left(L_{\rm 1.4 GHz}>0.5\right)$ in Table~\ref{gc_lf_par},
which provides the estimates
for the number of MSPs with $L_{\rm 1.4 GHz}>0.5$~mJy~kpc$^{2}$. Almost all the MSPs in GCs considered in this investigation
have their radio luminosities above this threshold. With these new values,
the correlations of the MSP number versus $\Gamma_{\rm c}$, [Fe/H], $M_{\rm GC}$, and $v_{\rm escape}$ (or $\sigma_{0}$) are found at
the confidence levels of $99.47\%$, $92.31\%$, $26.76\%$ and $84.56\%$ respectively.
For a further test of the robustness, by using the best-fit CLFs in Table~\ref{gc_lf_par},
we have also repeated the analysis for a minimum luminosity cut-off
of $L_{\rm 1.4 GHz}>2$~mJy~kpc$^{2}$. In this case, the correlations with $\Gamma_{\rm c}$, [Fe/H], $M_{\rm GC}$,
and $v_{\rm escape}$ (or $\sigma_{0}$) are confident at the levels of $98.32\%$, $98.68\%$, $40.03\%$ and $92.94\%$
respectively. Therefore, the degrees of correlation for the tested quantities are found to be insensitive
to the choice of the threshold. We conclude that the correlation between $\Gamma_{\rm c}$ and the MSP number is
the most robust among all the tested quantities, with a confidence level $>98\%$ regardless of the chosen threshold.
\section{SUMMARY \& DISCUSSION \label{discussion}}
The CLFs of nine GCs, each containing a population of MSPs, have been examined. Upon comparison of the MSP
population in GCs with that in the Galactic field, it has been found that the slopes of the CLFs inferred
in these two populations significantly differ.
It is natural to speculate that the CLF is somehow related to the magnetic field and spin of the MSPs.
Wang, Jiang \& Cheng (2005) have compared the distributions of
the spin period and the dipolar surface magnetic field for both cluster and disk populations (cf. Fig.~2 and
Fig.~3 in their paper). Despite the broader distribution for the disk population, their mean values
are not dissimilar in both populations and therefore cannot solely explain the difference of CLFs.
Apart from the radio luminosity functions, the X-ray emission properties of the MSPs in the GCs are also found
to be very different from those in the Galactic field. While the MSPs in the Galactic field generally require
a hot polar cap component plus a non-thermal power-law tail to model their X-ray spectra (cf. Zavlin 2006),
the X-rays from a majority of the MSPs in GCs are purely thermal in nature (see Hui et al. 2009 and the
references therein for a recent review). Cheng \& Taam (2003) suggest that the absence of non-thermal X-rays from
the cluster MSPs may be related to a complicated magnetic field structure. Since the stellar
interaction in GCs is much more frequent than that in the Galactic field, MSPs in the GCs can possibly
change their companion several times throughout their lives. Since the orientation of the
binary after each exchange can differ, the direction of the angular momentum
accreted during the mass transfer phase subsequent to each exchange can vary, possibly affecting the magnetic
field configuration at the neutron star surface. Such an evolution could lead to
a much more complicated magnetic field structure for the MSPs in the GCs than in the case in the Galactic field.
In such a complicated magnetic field,
Ruderman \& Cheng (1988) have argued that high energy curvature photons will be emitted and subsequently
converted into pairs to quench the accelerating region. This provides an explanation for the absence of non-thermal
emission in the cluster MSPs. For the same reason, the complicated magnetic field structure can also possibly
alter the coherent radio emission and result in a different radio luminosity of the cluster MSPs in comparison with
the disk population.
Adopting the best-fit normalization inferred from the CLF of each individual cluster as an unbiased estimate
of the number of MSPs, we have further examined the relationships between the pulsar population and the
physical properties of GCs. We have found positive correlations of $N_{0}$ versus $\Gamma_{\rm c}$ as well
as $N_0$ versus [Fe/H] at a relatively high confidence level. A marginal positive correlation between $N_{0}$
and $v_{\rm escape}$ is also suggested. Although a high escape speed implies the presence of a deeper
gravitational potential well and hence a higher neutron star retention, this correlation is not sufficiently
significant to warrant such an interpretation. Hence, we do not discuss this relation any further and focus
on the physical implications of the $N_{0}-\Gamma_{\rm c}$ and $N_{0}-$[Fe/H] relations.
Due to the different selection effects in the pulsar search surveys, it is not feasible to directly use the
detected MSP populations in GCs for a statistical analysis. Instead, we alleviate the problem by taking
$N_{0}$ as the estimator for the number of pulsars with pseudo radio luminosites at 1.4~GHz larger than
1~mJy~kpc$^{2}$. With this consideration, we have found a correlation between $N_{0}$ and $\Gamma_{\rm c}$ at a
confidence level $>98\%$. We have further found that the strength of this correlation is robust and
independent of the choice of the luminosity cut-off by repeating the analysis with different
thresholds.
This provides evidence for the dynamical formation of MSPs in GCs.
For a competing scenario that the MSPs have a binary origin similar to the Galactic field,
one should expect the number of MSPs
to scale with the cluster mass, $M_{GC}$, instead of $\Gamma_{\rm c}$. However, we do not find any convincing
relationship between $N_{0}$ and $M_{GC}$ (see Table~\ref{correl}). The absence of correlation with $M_{GC}$
provides additional support for the dynamical formation scenario. Taken together with the difference in
the X-ray
luminosity functions of LMXBs in the field and in globular clusters (see Voss et al. 2009; Kim et al. 2009), it is
likely that the MSPs have different origins/evolutions in globular clusters relative to the Galactic field.
We note that the logarithmic slope of the power-law fit in the $N_{0}-\Gamma_{\rm c}$ relationship (i.e.
$0.69\pm0.11$) is not dissimilar to that of the number of X-ray sources versus $\Gamma_{\rm c}$ ($0.74\pm0.36$;
Pooley et al. 2003). This dependence on the two-body encounter rate suggests a possible
relationship between the MSP population and close X-ray binaries in GCs. Apart from the whole X-ray
binary population, Pooley et al. (2003) and Gendre et al. (2003) have also examined the relationship
for the individual class of LMXBs which has a logarithmic slope of $0.97\pm0.5$. Although the large
uncertainty of this slope resulting from the limited sample of LMXBs precludes a definitive conclusion
concerning the link between LMXBs and MSPs, it is consistent with such an interpretation.
Theoretical arguments (Verbunt \& Hut 1987) suggest that the number of LMXBs is linearly proportional to
the stellar encounter rate of the cluster; however, a direct comparison of their relationship with the current
two-body encounter rate may be misleading.
As the MSPs are long lived and are produced by the previous generations of LMXBs, they can have a
different formation rate from the LMXB population currently observed. This point is important since
the relaxation time at the cluster core is generally longer than the lifetime of LMXBs (cf. Harris 1996).
Therefore, the continuous mass segregation at the cluster center can result in an evolution of the stellar
collision frequency and hence a varying formation rate of compact binary systems. Nevertheless,
the combination of X-ray and HST observations of Cen A (see Jord\'{a}n et al. 2007) indicate that globular
clusters with LMXBs are characterized by higher stellar encounter rates than those devoid of LMXBs.
In addition to the $N_{0}-\Gamma_{\rm c}$ relation, we have also found a positive correlation between $N_{0}$ and
the metallicity of the GCs. Observational evidence suggests that bright LMXBs are
preferentially formed in metal-rich clusters in our Milky Way as well as in other galaxies (e.g. Bellazzini et al.
1995; Maccarone et al. 2004; Jord\'an et al. 2004). Ivanova (2006) proposes that the absence of the outer
convective zone in metal-poor main sequence donor stars in the mass range of $0.85M_{\odot}$ - $1.25 M_{\odot}$,
in comparison to their metal rich counterparts can be responsible, since the absence of magnetic
braking in such stars precludes orbital shrinkage, thereby, significantly reducing the binary parameter
space for the production of bright LMXBs. For the conventional scenario that LMXBs are the progenitors of
MSPs, the positive correlation between $N_{0}$ and [Fe/H] is not unexpected since the MSP number should
scale with that of their progenitors.
While the stellar encounter rate has been widely accepted as a parameter to indicate which clusters
are likely to host a large MSP population, our study suggests that the metallicity can also be an
important parameter. To explore this hypothesis, we suggest that pulsar searches be carried out toward
metal-rich GCs, such as Liller~1 which has the highest metallicity ([Fe/H]=0.22) among all 150 GCs
in the Milky Way (cf. Harris 1996). Furthermore, its two-body encounter rate is estimated to be comparable
with that of 47~Tuc. Therefore, according to these parameters, it is very likely to host a considerable
number of MSPs. With a dedicated search, this hidden population may be revealed.
\acknowledgments KSC was supported by a GRF grant of Hong Kong Government under HKU700908P.
RET was supported in part by NSF grant AST-0703950 at Northwestern University and by the Theoretical
Institute for Advanced Research in Astrophysics (TIARA) operated under the Academia Sinica Institute
of Astronomy \& Astrophysics in Taipei, Taiwan.
Currently observable metal-poor stars ($[\mathrm{Fe}/\mathrm{H}]\footnotemark<-2$)\footnotetext{The relative abundance of element A with respect to element B is $\text{[A/B]}=\log\left(C_\text{A}/C_\text{B}\right)-\log\left(C_\text{A}/C_\text{B}\right)_\sun$ where $C$ is the number or mass fraction.} are relatively unevolved low-mass objects that formed within the first couple of billion years after the Big Bang. Because these stars have not modified their surface composition by internal nucleosynthesis, they are expected to carry the signature of the chemical evolution of these early epochs providing us with the means to study this long-gone era. The number of known metal-poor stars in our Galaxy has exploded thanks to large-scale photometric and spectroscopic surveys such as the HK survey \citep{1985AJ.....90.2089B,1992AJ....103.1987B}, and more recently the Hamburg/ESO survey \citep{2001A&A...375..366C,2008A&A...484..721C}, the Sloan Digital Sky Survey \citep[SDSS; e.g.][]{2000AJ....120.1579Y,2012ApJS..203...21A,2014ApJS..211...17A} and its sub-survey, the Sloan Extension for Galactic Understanding and Exploration \citep[SEGUE;][]{2009AJ....137.4377Y}. A common finding of these surveys is that a substantial fraction of all metal-poor stars are relatively carbon-rich with $[\mathrm{C}/\mathrm{Fe}]\gtrsim1.0$. This fraction is around 10\% at $[\mathrm{Fe}/\mathrm{H}]\approx-2$ and increasing towards lower metallicities \citep[e.g.][]{2006ApJ...652L..37L,2012ApJ...744..195C,2013AJ....146..132L,2014ApJ...797...21P}.
These so-called carbon-enhanced metal-poor (CEMP) stars are further classified into CEMP -\emph{no}, -\emph{s}, -\emph{r}, and -\emph{r}/\emph{s} sub-classes depending on whether they show enhancements of elements produced by slow (\emph{s}) or rapid (\emph{r}) neutron-capture nucleosynthesis. Most CEMP stars with $[\mathrm{Fe}/\mathrm{H}]>-3$ display significant \emph{s}-process element enrichment and are classified either as CEMP-\emph{s} ($[\mathrm{Ba}/\mathrm{Fe}]>1$ and $[\mathrm{Ba}/\mathrm{Eu}]>0.5$) or CEMP-\emph{r}/\emph{s} ($[\mathrm{Ba}/\mathrm{Fe}]>1$ and $0<[\mathrm{Ba}/\mathrm{Eu}]<0.5$) stars \citep{2005ARA&A..43..531B}.\footnote{Slightly different distinctions between CEMP stars enriched in \emph{s}-process elements have also been proposed \citep{2006A&A...451..651J,2010A&A...509A..93M,2012A&A...548A..34A}.} While the abundance patterns of these stars have been linked to nucleosynthesis occurring in asymptotic giant branch (AGB) stars \citep{2005ARA&A..43..435H,2008ARA&A..46..241S,2010MNRAS.404.1529B,2011MNRAS.418..284B,2012ApJ...747....2L}, most \emph{s}-process-rich CEMP stars are not luminous enough to be AGB stars. However, results from radial velocity monitoring are consistent with all of them being in binaries which is not the case for the other sub-classes \citep{2005ApJ...625..825L,2014MNRAS.441.1217S,2015A&A...583A..49H,2016A&A...586A.160H,2016A&A...588A...3H}. Therefore, these stars are generally thought to be a product of mass transfer from an AGB companion that later became a white dwarf.\footnote{\citet{2016A&A...588A...3H} have argued that four stars in their observed sample of 22 CEMP-\emph{s} stars appear to be single. Regardless, the mass transfer scenario should apply to most CEMP-\emph{s} stars.}As noted by \citet{2005ApJ...625..825L} this would make CEMP-\emph{s} stars the low-metallicity analogues of CH and Ba stars \citep{1990ApJ...352..709M,2016A&A...586A.158J}. If this is the case, observations of the secondary can in principle be used to infer the nucleosynthesis of low-metallicity AGB stars with initial masses around solar and above, which, although important for galactic and globular cluster chemical evolution, no longer exist in the Local Universe \citep[e.g.][]{1999ApJ...521..691T,2001ApJ...549..346T,2009MNRAS.397.1661V,2011MNRAS.414.3231K,2014ApJ...787...10B,2010MNRAS.407..854D,2014MNRAS.437.3274V}.
A comparison of AGB nucleosynthesis models with abundances of CEMP-\emph{s} stars is straightforward only if the accreted material remains on the surface of the star. This will certainly not be the case once it evolves off the main sequence and develops a deep convective envelope. But as demonstrated by \citet{2007A&A...464L..57S}, this is also unlikely on the main sequence because the higher mean molecular weight of the accreted material should trigger thermohaline mixing \citep{1972ApJ...172..165U,1980A&A....91..175K}. Furthermore, gravitational settling of heavier elements could both modify the extent of this mixing and the subsequent evolution of the secondary \citep{2008MNRAS.389.1828S,2008ApJ...677..556T,2009MNRAS.394.1051S}. If the overall effect is to dilute the accreted material by uniformly mixing it throughout some portion of the secondary, a comparison between AGB nucleosynthesis models and abundances of CEMP-\emph{s} stars is still possible provided this amount of dilution can be estimated \citep[e.g. as attempted by][]{2011MNRAS.418..284B,2012MNRAS.422..849B}. However, the rather impartial (leading to similar dilution of all elements) process of settling will be counteracted by the highly selective process of radiative levitation (also known as radiative accelerations), a process in which the ions in the stellar plasma gain a net outward momentum from absorption of the diffusing photons. Metal-poor stars with masses around $\mathrm{0.8~M}_{\sun}$ have very shallow convective envelopes and, in absence of any counteracting processes, large abundance anomalies can result \citep[e.g.][]{2002ApJ...580.1100R,2002ApJ...568..979R}. If radiative levitation is important during the post-mass-transfer evolution of CEMP-\emph{s} stars, the interpretation of their abundances in the context of AGB nucleosynthesis gets considerably more complicated. In this paper we model the main sequence evolution of CEMP-\emph{s} stars including the effect of radiative levitation to investigate whether this is the case.
We focus primarily on the evolution of carbon and iron surface abundances. Abundances of \emph{s}-process elements are not modelled. However, levitation is expected to have a much greater impact on iron than on carbon \citep{1995A&A...297..223G,1997MNRAS.289..700S,2007MNRAS.382..245S}. We therefore attempt to constrain its overall importance for CEMP-\emph{s} stars by investigating these two elements.
\section{Methods\label{sec:Methods}}
We use the stellar evolution code STARS originally written by \citet{1971MNRAS.151..351E,1972MNRAS.156..361E,1973A&A....23..325E} and since improved by many authors \citep[e.g.][]{1995MNRAS.274..964P,2009MNRAS.396.1699S}. The version used in this work tracks the abundances of the nuclear species $^{1}\mathrm{H}$, $^{3}\mathrm{He}$, $^{4}\mathrm{He}$, $\mathrm{^{12}\mathrm{C}}$, $^{14}\mathrm{N}$, $^{16}\mathrm{O}$, $^{20}\mathrm{Ne}$, $^{24}\mathrm{Mg}$, $^{28}\mathrm{Si}$, and $^{56}\mathrm{Fe}$, the last three of which were previously not tracked in detail. The mass fraction $X_{i}$ of each species \emph{i} is governed by an advection-diffusion equation:
\begin{equation}
\frac{\mathrm{d}X_{i}}{\mathrm{d}t}=\frac{\partial}{\partial m}\left[\left(4\pi r^{2}\rho\right)^2 D_{\mathrm{mix}}\frac{\partial X_{i}}{\partial m}\right]-\frac{\partial}{\partial m}\left(4\pi r^{2}\rho X_{i}w_{i}\right)+R_{i},\label{eq:dxdt}
\end{equation}
where the first term on the right-hand side accounts for convective mixing, thermohaline mixing, and concentration diffusion ($D_{\mathrm{mix}}$ is the sum of the individual diffusion coefficients $D_{\mathrm{conv}}$, $D_{\mu}$, and $D_{i}$, respectively), the second term describes the net effect from atomic diffusion, and the last term, $R_{i}$, accounts for nuclear processing.\footnote{Other symbols in Eq.~\eqref{eq:dxdt} have their usual meaning, namely: $t$ is time; $\rho$ is the mass density; $r$ and $m$ are the radial and mass coordinate, respectively. The diffusion velocity $w_{i}$ is defined in Eq.~\eqref{eq:wi}.} Inside convective regions $D_{\mathrm{conv}}$ is obtained from the mixing length theory (MLT; \citealt{1958ZA.....46..108B}) using a solar-calibrated value of $\alpha_{\mathrm{MLT}}=2.0$ that is fully consistent with the models presented in this paper (see Section~\ref{subsec:uncertainties} for details). Near the convective boundaries the convective mixing coefficient takes the form from \citet{1972MNRAS.156..361E} for numerical stability reasons. The diffusion coefficient for thermohaline mixing is taken from \citet{2010ApJ...723..563D} assuming a finger length-to-diameter ratio of 0.5 as constrained by their numerical simulations. This assumption of more blob-like than finger-like structures is in accord with \citet{1980A&A....91..175K} and results in relatively inefficient thermohaline mixing.
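To make the role of the first (diffusive) term of Eq.~\eqref{eq:dxdt} concrete, a toy explicit discretization on a Lagrangian mass grid could look as follows; this is for illustration only, and the STARS code itself solves the composition equations implicitly together with the stellar structure.
\begin{verbatim}
# Illustrative sketch (Python); an explicit toy update of the diffusive
# term of Eq. (1) only, not the implicit scheme used in the STARS code.
import numpy as np

def diffuse_step(X, m, r, rho, D_mix, dt):
    """One explicit step of dX/dt = d/dm[(4 pi r^2 rho)^2 D dX/dm]
    on a cell-centred mass grid m, with zero flux at both boundaries."""
    A = (4.0 * np.pi * r**2 * rho) ** 2 * D_mix
    A_face = 0.5 * (A[1:] + A[:-1])               # average onto cell faces
    flux = A_face * np.diff(X) / np.diff(m)       # (4 pi r^2 rho)^2 D dX/dm
    dm = np.gradient(m)
    dXdt = np.zeros_like(X)
    dXdt[1:-1] = np.diff(flux) / dm[1:-1]
    dXdt[0]    =  flux[0]  / dm[0]                # zero flux below the first cell
    dXdt[-1]   = -flux[-1] / dm[-1]               # zero flux above the last cell
    return X + dt * dXdt
\end{verbatim}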
Following \citet{2008MNRAS.389.1828S} we treat atomic diffusion in a trace approximation which allows the diffusion velocity of elements other than hydrogen to be written as \citep{2008EAS....32...81T}
\begin{equation}
w_{i}=\frac{D_{i}}{kT}\left[g\left(\mu-\mu_{i}\right)+\mu_{i}g_{\mathrm{r},i}\right]-D_{i}\alpha_{\mathrm{T},i}\frac{\partial\ln T}{\partial r},\label{eq:wi}
\end{equation}
where $g$ and $g_{\mathrm{r}}$ are the gravitational and radiative acceleration, respectively; $k$ is the Boltzmann constant; $T$ is temperature; $\mu$ is the mean molecular weight of the stellar plasma; $\mu_{i}=m_{i}/\left(1+\bar{Z}_{i}\right)$ is the molecular weight of element $i$ with an atomic mass $m_{i}$ and a mean charge $\bar{Z}_{i}$; and $\alpha_{\mathrm{T},i}$ is the thermal diffusion coefficient. The velocity of hydrogen follows from mass conservation: $X_{\mathrm{H}}w_{\mathrm{H}}=-\sum_{i\ne\mathrm{H}}X_{i}w_{i}$. The diffusion coefficients $D_{i}$ and $\alpha_{\mathrm{T},i}$ are taken from \citet{1986ApJS...61..177P}.
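Written out numerically, Eq.~\eqref{eq:wi} for a single trace element amounts to the following sketch, assuming cgs units with the molecular weights $\mu$ and $\mu_{i}$ expressed in grams per particle.
\begin{verbatim}
# Illustrative sketch (Python); cgs units, molecular weights in grams.
K_B = 1.380649e-16   # Boltzmann constant [erg/K]

def diffusion_velocity(D_i, T, g, g_rad_i, mu, mu_i, alpha_T_i, dlnT_dr):
    """Trace-element diffusion velocity, Eq. (2):
    w_i = D_i/(kT) [g (mu - mu_i) + mu_i g_rad_i] - D_i alpha_T_i dlnT/dr."""
    return (D_i / (K_B * T)) * (g * (mu - mu_i) + mu_i * g_rad_i) \
           - D_i * alpha_T_i * dlnT_dr
\end{verbatim}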
We simulate the accretion of AGB ejecta by adding mass of a given composition to our models. We fix the accretion rate to $10^{-6}~\mathrm{M}_{\sun}\thinspace\mathrm{yr^{-1}}$ following \citet{2007A&A...464L..57S}, and the accreted composition to the average composition of the ejecta from the models of \citet{2012ApJ...747....2L}. These yields together with the zero-age main sequence (ZAMS) abundances \citep{2009ARA&A..47..481A} are given in Table~\ref{tab:xinp}. Mass loss is not included in our models until Section~\ref{subsec:Mass-loss}.
\begin{table*}
\caption{Chemical composition of the secondaries on the zero-age main sequence \citep[ZAMS; abundance distribution from][scaled to $Z=10^{-4}$]{2009ARA&A..47..481A} and of the ejecta from the AGB models of \citet{2012ApJ...747....2L}. The second column lists the age when accretion of the corresponding composition begins ($t_{\text{mt}}$). Mass fractions of all elements other than helium are sums over their isotopes. \label{tab:xinp}}
\begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}>{\raggedright}p{0.09\columnwidth}>{\raggedright}m{0.07\columnwidth}llllllllllll>{\raggedright}m{1.5cm}}
\hline
\hline
\multirow{2}{0.09\columnwidth}{{\tiny{}Model}} & \multirow{2}{0.07\columnwidth}{{\tiny{}$t_{\mathrm{mt}}$ (Gyr)}} & \multicolumn{2}{l}{{\tiny{}Mass fraction}} & \multicolumn{8}{l}{{\tiny{}Mass fraction $\times10^{-6}$}} & \multicolumn{2}{l}{{\tiny{}Abundance}} & \multirow{2}{1.5cm}{{\tiny{}Mean mol. weight}}\tabularnewline
\cline{3-14}
& & {\tiny{}H} & {\tiny{}$^{4}\mathrm{He}$} & {\tiny{}$^{3}\mathrm{He}$} & {\tiny{}C} & {\tiny{}N} & {\tiny{}O} & {\tiny{}Ne} & {\tiny{}Mg} & {\tiny{}Si} & {\tiny{}Fe} & {\tiny{}{[}Fe/H{]}} & {\tiny{}{[}C/Fe{]}} & \tabularnewline
\hline
{\tiny{}ZAMS} & \emph{\tiny{}\ldots} & \emph{\tiny{}$0.75770$} & \emph{\tiny{}$0.24217$} & \emph{\tiny{}$30.30$} & \emph{\tiny{}$17.72$} & \emph{\tiny{}$5.190$} & \emph{\tiny{}$42.95$} & \emph{\tiny{}$9.390$} & \emph{\tiny{}$5.300$} & \emph{\tiny{}$4.980$} & \emph{\tiny{}$9.680$} & \emph{\tiny{}$-2.14$} & \emph{\tiny{}$0.00$} & {\tiny{}$0.5934$}\tabularnewline
\multicolumn{15}{c}{\tiny{}Composition of AGB ejecta}\tabularnewline
\emph{\tiny{}$\mathrm{0.90\thinspace M}_{\sun}$} & \emph{\tiny{}$9.10$} & \emph{\tiny{}$0.73302$} & \emph{\tiny{}$0.26222$} & \emph{\tiny{}$235.8$} & \emph{\tiny{}$3680$} & \emph{\tiny{}$135.1$} & \emph{\tiny{}$217.5$} & \emph{\tiny{}$457.0$} & \emph{\tiny{}$12.77$} & \emph{\tiny{}$4.943$} & \emph{\tiny{}$8.895$} & \emph{\tiny{}$-2.16$} & \emph{\tiny{}$2.35$} & {\tiny{}$0.6046$}\tabularnewline
\emph{\tiny{}$\mathrm{1.00\thinspace M}_{\sun}$} & \emph{\tiny{}$6.30$} & \emph{\tiny{}$0.74907$} & \emph{\tiny{}$0.24956$} & \emph{\tiny{}$261.4$} & \emph{\tiny{}$933.0$} & \emph{\tiny{}$21.47$} & \emph{\tiny{}$92.47$} & \emph{\tiny{}$37.29$} & \emph{\tiny{}$5.395$} & \emph{\tiny{}$4.913$} & \emph{\tiny{}$8.910$} & \emph{\tiny{}$-2.17$} & \emph{\tiny{}$1.76$} & {\tiny{}$0.5972$}\tabularnewline
\emph{\tiny{}$\mathrm{1.25\thinspace M}_{\sun}$} & \emph{\tiny{}$3.06$} & \emph{\tiny{}$0.71670$} & \emph{\tiny{}$0.27604$} & \emph{\tiny{}$228.9$} & \emph{\tiny{}$6032$} & \emph{\tiny{}$42.42$} & \emph{\tiny{}$305.3$} & \emph{\tiny{}$620.9$} & \emph{\tiny{}$14.96$} & \emph{\tiny{}$4.976$} & \emph{\tiny{}$8.869$} & \emph{\tiny{}$-2.15$} & \emph{\tiny{}$2.57$} & {\tiny{}$0.6122$}\tabularnewline
\emph{\tiny{}$\mathrm{1.50\thinspace M}_{\sun}$} & \emph{\tiny{}$1.80$} & \emph{\tiny{}$0.69878$} & \emph{\tiny{}$0.28562$} & \emph{\tiny{}$203.7$} & \emph{\tiny{}$12840$} & \emph{\tiny{}$56.60$} & \emph{\tiny{}$590.2$} & \emph{\tiny{}$1854$} & \emph{\tiny{}$40.11$} & \emph{\tiny{}$5.121$} & \emph{\tiny{}$8.821$} & \emph{\tiny{}$-2.14$} & \emph{\tiny{}$2.90$} & {\tiny{}$0.6212$}\tabularnewline
\hline
\end{tabular*}
\end{table*}
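The [Fe/H] and [C/Fe] entries of Table~\ref{tab:xinp} follow from the mass fractions once these are converted to number fractions; a short sketch of this bookkeeping is given below, assuming the \citet{2009ARA&A..47..481A} solar photospheric values $\log\epsilon_{\mathrm{C},\sun}=8.43$ and $\log\epsilon_{\mathrm{Fe},\sun}=7.50$.
\begin{verbatim}
# Illustrative sketch (Python); assumed solar values from Asplund et al. (2009).
import numpy as np

A_WEIGHT   = {'H': 1.008, 'C': 12.011, 'Fe': 55.845}   # atomic masses [amu]
LOGEPS_SUN = {'H': 12.00, 'C': 8.43,   'Fe': 7.50}     # log(N_X/N_H) + 12

def bracket(X, elem, ref='H'):
    """[elem/ref] from a dict of mass fractions X."""
    n_elem = X[elem] / A_WEIGHT[elem]
    n_ref  = X[ref]  / A_WEIGHT[ref]
    return np.log10(n_elem / n_ref) - (LOGEPS_SUN[elem] - LOGEPS_SUN[ref])

zams = {'H': 0.75770, 'C': 17.72e-6, 'Fe': 9.68e-6}     # ZAMS row of Table 1
print(bracket(zams, 'Fe'), bracket(zams, 'C', 'Fe'))    # ~ -2.14 and ~ 0.00
\end{verbatim}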
\subsection{Opacity and radiative accelerations}
We compute the radiative acceleration of each element using the monochromatic data from version 3.3 of the Opacity Project (OP) database \citep{2005MNRAS.360..458B,2007MNRAS.382..245S}. The OP data consists of cross-sections $\sigma_{i}(u\equiv h\nu/kT)$ and electron scattering corrections $a_{i}(u)$ for 17 chemical elements between H and Ni in a temperature range between $\log_{10}T=3.5$ and $\log_{10}T=8.0$. The monochromatic data are used to compute the Rosseland mean opacity $\kappa_{\mathrm{R}}$ and, for each element $i$, a factor $\gamma_{i}$ which is proportional to the radiative acceleration of the respective element:
\begin{equation}
\frac{1}{\kappa_{\mathrm{R}}}=\sum_{j}N_{j}m_{j}\int\frac{1}{\sum_{i}N_{i}\sigma_{i}(v)}\mathrm{d}v,\label{eq:kappar}
\end{equation}
\begin{equation}
\gamma_{i}=\int\frac{\sigma_{i}(u)\left[1-\exp\left(-u\right)\right]-a_{i}(u)}{\sum_{j}N_{j}\sigma_{j}(u)}\mathrm{d}v.\label{eq:gamma}
\end{equation}
Here $N_{i}$ is the number fraction of element $i$, and $v(u)$ is the OP frequency variable
\begin{equation}
v(u)=\frac{15}{4\pi^{4}}\int_{0}^{u}\frac{u^{4}\exp\left(-u\right)}{\left[1-\exp\left(-u\right)\right]^{3}}\mathrm{d}u.\label{eq:freqv}
\end{equation}
With these quantities the radiative accelerations are given by
\begin{equation}
g_{\mathrm{r,}i}=\frac{l_{\mathrm{r}}\kappa_{\mathrm{R}}}{4\pi cr^{2}}\frac{\gamma_{i}}{m_{i}}\sum_{j}N_{j}m_{j},\label{eq:grad}
\end{equation}
where $l_{\mathrm{r}}$ is the radiative luminosity, and $c$ is the speed of light.
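The way Eqs.~\eqref{eq:gamma}--\eqref{eq:grad} are evaluated from the monochromatic data can be sketched as follows; this is schematic only, with the cross-sections assumed to be supplied on a common $u$ grid, and the interpolation and table handling performed by OPserver are not reproduced.
\begin{verbatim}
# Illustrative sketch (Python); schematic evaluation of Eqs. (4)-(6) on a
# monochromatic u grid. Real OP/OPserver table handling is not reproduced.
import numpy as np

C_LIGHT = 2.99792458e10   # speed of light [cm/s]

def dv_du(u):
    """Weight dv/du of the OP frequency variable, Eq. (5)."""
    return 15.0/(4.0*np.pi**4) * u**4*np.exp(-u) / (1.0 - np.exp(-u))**3

def gamma_and_grad(u, sigma, a_corr, N, m, kappa_R, l_rad, r):
    """gamma_i (Eq. 4) and g_rad,i (Eq. 6). sigma and a_corr have shape
    (n_elem, n_u); N and m are number fractions and atomic masses."""
    denom = np.einsum('i,iu->u', N, sigma)              # sum_j N_j sigma_j(u)
    integrand = (sigma*(1.0 - np.exp(-u)) - a_corr) / denom
    gamma = np.trapz(integrand * dv_du(u), x=u, axis=1)
    grad = (l_rad*kappa_R / (4.0*np.pi*C_LIGHT*r**2)) * gamma/m * np.sum(N*m)
    return gamma, grad
\end{verbatim}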
The OP team have created the OPserver module \citep{2007MNRAS.378.1031M} which is intended to facilitate the computation of accelerations in stellar evolution calculations. We have made some changes to this module in coupling it to the STARS code. First, we have made it possible to store multiple opacity (and acceleration) tables in memory at the same time. This requires computing the opacity corresponding to a given chemical composition only once. When this composition is encountered during evolution, one only needs to interpolate to the required temperature and density in the corresponding table. The second modification is the same as made by \citet{2011MNRAS.418..195H} in their incorporation of the OPserver module in a version of the STARS code -- instead of calculating the acceleration for multiple relative abundances of a given element, we calculate the accelerations of all elements in a given mixture. Finally, we have added a routine that computes the mean charge of each element from the OP data. These charges are computed on the same temperature and density grid as the opacity and are used to calculate the molecular weights of the elements.
Calculating both the opacity and the accelerations from the monochromatic OP data makes the models self-consistent in that changes in relative abundances modify the structure of the star through the opacity, which, in turn, changes the accelerations. Unfortunately, the OP opacities do not include any contribution from conduction, which becomes important after the main sequence when the central regions of the star become increasingly degenerate. Since in many cases we follow the evolution all the way up the giant branch, we use the opacity tables introduced in the code by \citet{2004MNRAS.348..201E} for regions hotter than $\log_{\mathrm{10}}T=7.3$. These tables are based on the OPAL opacities \citep{1996ApJ...464..943I} supplemented by the low-temperature opacities of \citet{1994ApJ...437..879A} and the conductive opacities of \citet{1969ApJS...18..297H,1970ApJ...159..641C}. While switching to the tabulated OPAL opacities means that changes in the relative abundances no longer modify the structure at high temperatures, by then the effects of atomic diffusion have already started to disappear because of the first dredge-up (FDU), and none of our results depend on this choice (see Section~\ref{subsec:uncertainties}).
\subsection{Grid selection}
Our simulations cover a range of primary masses $M_{\mathrm{1}}$, accreted masses $\Delta M$, and initial secondary masses $M_{2,\mathrm{i}}$ (or, equivalently, final masses $M_{2,\mathrm{f}}$). In this work we consider those systems that are the most probable in the synthetic populations computed by \citet{2015A&A...581A..62A}. According to their work, typical masses are $M_{1}\simeq0.9\text{\text{--}}1.25~\mathrm{M}_{\sun}$, $M_{2,\mathrm{f}}\simeq0.8\text{--}0.9~\mathrm{M}_{\sun}$, and $\Delta M\simeq0.05\text{--}0.2~\mathrm{M}_{\sun}$. These accreted masses and final masses of the secondaries are larger than considered in a related earlier study by \citet{2008MNRAS.389.1828S}. Therefore, we also consider some systems with smaller $\Delta M$ values, namely $0.001$ and $0.01~\mathrm{M}_{\sun}$.
In summary, we evolve stellar models with initial masses of 0.60, 0.65, 0.70, 0.75, and $0.80~\mathrm{M}_{\sun}$ and metallicity $Z=10^{-4}$ ($\mathrm{[Fe/H]}=-2.14$) starting from the pre-main-sequence. At the ages listed in Table~\ref{tab:xinp}, somewhere between $0.001$ and $0.2~\mathrm{M}_{\sun}$ of material of the corresponding composition is added to the models at a rate of $10^{-6}~\mathrm{M}_{\sun}\thinspace\mathrm{yr^{-1}}$ yielding CEMP-\emph{s} stellar models with masses between $0.8$ and $0.95~\mathrm{M}_{\sun}$. These models are evolved up to the core helium flash or an age of 16~Gyr, whichever comes first.
\section{\label{sec:Results}Results}
Two sets of models were initially evolved: in one set only thermohaline mixing, gravitational settling, and thermal diffusion were active; in the other, radiative levitation was also included. Table~\ref{tab:Results_main} lists some properties of these systems, including the {[}C/Fe{]} ratio at the surface at key points of the evolution: after thermohaline mixing, at the point where the convective envelope is smallest in mass (near the turn-off), and after first dredge-up.
\subsection{An illustrative model sequence\label{subsec:A-typical-model}}
To understand how the evolution of surface abundances is influenced by the different physical processes included in our simulations, let us consider a particular model sequence in detail. Figure~\ref{fig:ms0750dm0050mp125} illustrates the case of a secondary with an initial mass of $0.75~\mathrm{M}_{\sun}$ that accretes $0.05~\mathrm{M}_{\sun}$ of material from a $1.25~\mathrm{M}_{\sun}$ primary. Multiple stages of evolution can be distinguished.
\begin{figure*}
\subfloat[Hertzsprung-Russell diagram]{\includegraphics[width=1\columnwidth]{fig1a}\label{fig:ms0750dm0050mp125-hrd}}\hspace{\columnsep}\subfloat[Evolution of carbon and iron surface mass fractions]{\includegraphics[width=1\columnwidth]{fig1b}\label{fig:ms0750dm0050mp125-XvsL}}
\subfloat[Interior profiles of carbon]{\includegraphics[width=1\columnwidth]{fig1c}\label{fig:ms0750dm0050mp125-XCvsDpth-gs}}\hspace{\columnsep}\subfloat[Interior profiles of iron]{\includegraphics[width=1\columnwidth]{fig1d}\label{fig:ms0750dm0050mp125-XCFeVSdpth-ra}}\caption{Evolution and abundances of a $M_{2,\mathrm{i}}=0.75~\mathrm{M}_{\sun}$ secondary accreting $\Delta M=0.05~\mathrm{M}_{\sun}$ of material from a $M_{1}=1.25~\mathrm{M}_{\sun}$ primary. The labels highlight specific parts of the evolution discussed in the text. The model sequences with and without radiative levitation overlap at this scale of the HRD. Interior abundance profiles are shown at a few of the stages indicated in the upper panels: before mass transfer (`1', solid); before thermohaline mixing (`4', long-dashed); after thermohaline mixing (`5', dotted); post-mass-transfer main sequence (`6', dot-dashed); minimum of convective envelope mass (`7', dot-dot-dashed); during first dredge-up (`9', short-dashed). The vertical lines in the lower panels indicate the position of the base of the convective envelope at the respective time. The interior profiles of carbon with and without levitation nearly coincide and only the case with levitation is shown.\label{fig:ms0750dm0050mp125}}
\end{figure*}
Prior to mass transfer the secondary slowly evolves as a $0.75~\mathrm{M}_{\sun}$ main sequence star (the part of the evolution labelled `1' in Figs.\,\ref{fig:ms0750dm0050mp125-hrd} and \ref{fig:ms0750dm0050mp125-XvsL}). During this stage gravitational settling dominates and the abundance of every element other than hydrogen decreases at the surface.
At $t=3.06$~Gyr mass transfer begins and the surface composition quickly becomes equal to that of the accreted material (`2'). During the accretion the star becomes hotter and more luminous. This is common for many system configurations in which the Kelvin-Helmholtz timescale of the secondary becomes comparable to the accretion timescale. In some models the effective temperature and luminosity can reach values as high as 9500~K and $30~\mathrm{L}_{\sun}$, respectively. But once accretion stops (`3'), both luminosity and temperature rapidly drop, resulting in loops in the Hertzsprung-Russell diagram (HRD) as the star settles back on the main sequence. Generally, these loops are more characteristic of secondaries with larger initial masses.
Shortly after accretion stops, the accreted material starts to mix with the original material of the secondary as a result of the thermohaline instability (`4'). As shown by Fig.~\ref{fig:ms0750dm0050mp125-XCvsDpth-gs}, some of the interior is already mixed by the time the surface abundances change. The mixing takes at most a few hundred million years (about $150~\mathrm{Myr}$ in this case) and is over before the star has settled back onto the main sequence (`5'). Ultimately, the surface carbon abundance is reduced by about 0.8~dex (regardless of radiative levitation), whereas the iron abundance is barely affected because it is virtually the same in the original and accreted compositions.
Over the rest of the post-mass-transfer main sequence lifetime the abundances are again modified by atomic diffusion (`6'). At first gravitational settling prevails over radiative levitation and the surface becomes increasingly hydrogen-rich as all heavier elements settle out of the surface convection zone. As the star nears the turn-off, this convection zone becomes ever more superficial and radiative effects become increasingly important (Fig.~\ref{fig:ms0750dm0050mp125-XCFeVSdpth-ra}). Once an element's radiative velocity at the base of the convection zone exceeds its settling velocity, the surface abundance of this element increases. This is typically the case with iron. In contrast, if an element's settling velocity is always greater than its radiative velocity, the surface abundance of this element continues to decrease (although less so than in the case when radiative effects are ignored). This is always the case with helium and carbon. The behaviour of other elements is not readily predicted because of the non-monotonic shape of the radiative accelerations (as a function of temperature) and the outward movement of the base of the envelope (decreasing temperature at the base). Therefore, the abundance of most elements alternates between increasing at those times when the radiative velocity exceeds the settling velocity and decreasing at others \citep[e.g. see figure 2 of][]{2002ApJ...568..979R}.
Eventually, the abundance anomalies, i.e. their values relative to those after thermohaline mixing, reach their maxima (`7'). At this stage the difference between the two sets of models is greatest -- compared to the abundances after thermohaline mixing, in models without levitation the abundances of all elements are reduced (dotted lines in Fig.~\ref{fig:ms0750dm0050mp125-XvsL}), whereas in models with levitation this is not always the case (solid lines in Fig.~\ref{fig:ms0750dm0050mp125-XvsL}; in both cases only carbon and iron are shown for clarity) and readily levitated elements can be over-abundant. The anomalies are maximal shortly after the turn-off when the convective envelope is smallest (Figs.~\ref{fig:ms0750dm0050mp125-XCvsDpth-gs} and \ref{fig:ms0750dm0050mp125-XCFeVSdpth-ra}).\footnote{In fact, the convective envelope has already slightly grown in mass. For a short time the diffusion timescale is shorter than the evolutionary timescale.} This occurs at the same time in models with and without levitation.
Next, as the convective envelope grows in mass, the material in it is mixed with that of the immediately adjacent, previously radiative layers. In models without diffusion no change in surface abundances would occur until the envelope reached depths where CN cycling had occurred (i.e. at first dredge-up). With diffusion, however, the composition of the envelope is different from that of the radiative layers below, and therefore the effect of the deepening of the envelope is to first undo all the work done by diffusion (`8'). When the envelope mass has reached a few thousandths of a solar mass, all surface evidence of atomic diffusion has been erased and the abundances are similar to those after thermohaline mixing (`9').
First dredge-up (FDU) homogenizes the composition in the layers above a mass coordinate of $0.3\text{--}0.35~\mathrm{M}_{\sun}$ ($0.34~\mathrm{M}_{\sun}$ in this case). What effect this has on the surface abundances depends on how this depth compares to the depth of thermohaline mixing ($m_{\text{thm}}=0.37~\mathrm{M}_{\sun}$ in this case). If thermohaline mixing is not as deep as the maximum depth reached by the envelope at FDU, the accreted material is further diluted with the original material of the secondary. Otherwise, most abundances do not change. However, some of the accreted carbon will then have been converted into nitrogen. As shown by \citet{2007A&A...464L..57S}, during late FDU (which ends at around $\log L\approx1.5$; `10') this nitrogen is dredged up to the surface.
Finally, after the luminosity bump (`11') $^{3}\mathrm{He}$-burning reduces the mean molecular weight above the hydrogen-burning shell. Thus, a $\mu$-inversion, which is magnified by the settling of $^{4}\text{He}$ \citep{2010A&A...510A.104M}, develops between the shell and the receding convective envelope -- a situation again unstable to thermohaline mixing. This alters the surface abundance of nitrogen by 0.1~dex at most. The much greater carbon abundance remains essentially unchanged. The abundance change after the bump is much smaller than found by \citet{2009MNRAS.396.2313S} because the thermohaline mixing coefficient in this work is about $10^{3}$ times smaller.
This model sequence illustrates the role each physical process plays in all models with atomic diffusion. We see that diffusion modifies the surface composition on the main sequence both before and after mass transfer. This modification is greatest around the turn-off, when the convective envelope is shallowest (point `7' in Fig.~\ref{fig:ms0750dm0050mp125}). We now turn to discussing the expected abundance changes for all CEMP-\emph{s} stars in this evolutionary stage.
\subsection{Abundance anomalies near the turn-off}
During the main sequence the mass of the convective envelope, $M_{\mathrm{env}}$, of a low-mass star decreases. Therefore, the timescale for atomic diffusion, which is roughly proportional to the square root of $M_{\mathrm{env}}$ \citep{1977Natur.266..433M}, also decreases. In nearly all of our CEMP-\emph{s} models the envelope mass reaches a minimum of less than $10^{-4}~\mathrm{M}_{\sun}$ around the turn-off. The corresponding timescales are then short enough compared to the nuclear timescale for atomic diffusion to notably modify the surface composition. Figure~\ref{fig:ab_vs_menv} summarizes the extent of the abundance variations in our models with diffusion (Table~\ref{tab:Results_main}). Specifically, the figure shows the {[}Fe/H{]}, {[}C/H{]}, and {[}C/Fe{]} abundances at the time when the convective envelope mass reaches its minimum in each of the CEMP-\emph{s} models. In models whose envelope mass always stays above about $2\times10^{-5}~\mathrm{M}_{\sun}$, gravitational settling prevails, but the abundances are reduced by no more than a factor of two from their values after thermohaline mixing. In models with even smaller $M_{\mathrm{env}}$ at the turn-off, the abundances are modified by a factor of ten or more and radiative levitation becomes important (Fig.~\ref{fig:ab_vs_menv-FeH}). The model discussed in Section~\ref{subsec:A-typical-model} is close to this limit -- its envelope mass has a minimum of about $1.2\times10^{-5}~\mathrm{M}_{\sun}$. At this minimum its {[}C/Fe{]} is $1.71$ when levitation is ignored versus $1.12$ when it is included (in both cases down from $1.78$ after thermohaline mixing), and {[}Fe/H{]} is $-2.74$ and $-2.11$, respectively.
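The $\sqrt{M_{\mathrm{env}}}$ scaling quoted above makes this sensitivity explicit. As a schematic estimate (it ignores the accompanying changes in temperature and density at the base of the envelope and the dependence on the species considered), the settling timescale behaves as
\begin{equation}
t_{\mathrm{diff}}\left(M_{\mathrm{env}}\right)\approx t_{\mathrm{diff}}\left(M_{\mathrm{env},0}\right)\left(\frac{M_{\mathrm{env}}}{M_{\mathrm{env},0}}\right)^{1/2},
\end{equation}
so a reduction of the envelope mass by four orders of magnitude shortens the settling timescale by a factor of about one hundred. This is why atomic diffusion only becomes competitive with the (nearly constant) nuclear timescale close to the turn-off.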
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.965\columnwidth]{fig2a}\label{fig:ab_vs_menv-FeH}}
\subfloat{\includegraphics[width=0.965\columnwidth]{fig2b}\label{fig:ab_vs_menv-CH}}
\subfloat{\includegraphics[width=0.965\columnwidth]{fig2c}\label{fig:ab_vs_menv-CFe}}
\caption{Symbols show {[}Fe/H{]} (a), {[}C/H{]} (b), and {[}C/Fe{]} (c) in each of the CEMP-\emph{s} models at the point where the mass of the convective envelope is smallest, i.e. just before first dredge-up. Models with and without radiative levitation are plotted with black and grey symbols, respectively. All values for envelope masses below $10^{-7}~\mathrm{M}_{\sun}$ denote upper or lower limits. The {[}Fe/H{]} plot also shows the metallicity evolution in a $0.8~\mathrm{M}_{\sun}$ model sequence with no accretion (solid lines). Similarly, the {[}C/Fe{]} plot shows the evolution in a model corresponding to $M_{1}=1.25~\mathrm{M}_{\sun}$, $M_{2,\text{i}}=0.8~\mathrm{M}_{\sun}$, and $\Delta M=0.05~\mathrm{M}_{\sun}$. Both sequences without levitation stop during the first dredge-up. The criterion for being classified as a carbon-enhanced metal-poor star ($[\mathrm{C/Fe}]\geq1.0$) is from \citet{2005ARA&A..43..531B}.\label{fig:ab_vs_menv}}
\end{figure}
The results from many simulations plotted in Figs.~\ref{fig:ab_vs_menv-FeH} and \ref{fig:ab_vs_menv-CFe} form two sequences corresponding to the model sets with and without levitation. As shown by the solid lines, to a good approximation we can interpret these sequences as describing the abundance evolution in a single simulation as the envelope mass changes. For example, as the envelope mass decreases from $10^{-5}$ to $10^{-6}~\mathrm{M}_{\sun}$, {[}C/Fe{]} decreases by about 2~dex in models with levitation because, while carbon continues to settle, iron is levitated. On the other hand, in models without levitation {[}C/Fe{]} does not change because both elements settle at similar rates (Figs.~\ref{fig:ab_vs_menv-FeH} and \ref{fig:ab_vs_menv-CH}). At still smaller envelope masses, {[}C/Fe{]} increases because of the low degree of ionization of iron at the base of the envelope. The small mean charge of iron gives a large diffusion coefficient, because of the approximately $D\sim\bar{Z}^{-2}$ dependence \citep{1986ApJS...61..177P}, and consequently a large settling velocity.
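For orientation, the $\bar{Z}^{-2}$ dependence can be traced back to the Coulomb cross-section between the trace ion and the protons of the background plasma. A rough order-of-magnitude scaling (not the detailed coefficients of \citealt{1986ApJS...61..177P}) is
\begin{equation}
D\propto\frac{T^{5/2}}{\rho\,\bar{Z}^{2}\ln\Lambda},
\end{equation}
where $\ln\Lambda$ is the Coulomb logarithm, so at fixed temperature and density a halving of the mean charge of iron roughly quadruples its diffusion coefficient and, with it, its settling velocity.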
Figure~\ref{fig:ab_vs_menv} shows that for given post-thermohaline-mixing abundances the abundance evolution prior to FDU can be parametrized as a function of only $M_{\text{env}}$. But once FDU starts and the envelope deepens, a kind of hysteresis is seen in that the abundances at a given $M_{\text{env}}$ are not the same as they were prior to FDU. This is because diffusion has modified the radiative layers below the envelope in the meantime.
Carbon-enhanced metal-poor stars are distinguished from other metal-poor stars based on their {[}C/Fe{]} value. Assuming that atomic diffusion is modelled correctly and that no additional mixing processes operate in the radiative regions below the envelope, Fig.~\ref{fig:ab_vs_menv-CFe} implies that, in models with levitation, such stars must have envelope masses larger than $10^{-5}~\mathrm{M}_{\sun}$; otherwise, they would not be classified as carbon-enhanced. Models without levitation do not show such a limit in {[}C/Fe{]}, but their metallicity decreases rapidly once the envelope mass falls below this value because of settling -- at $M_{\mathrm{env}}\approx10^{-6}~\mathrm{M}_{\sun}$ the surface $[\mathrm{Fe/H}]\approx-4$, which is much lower than typical of CEMP-\emph{s} stars.
For a given combination of AGB and CEMP-\emph{s} star masses ($M_{1}$ and $M_{2,\mathrm{f}}$, respectively) the convective envelope is deeper in models with larger accreted mass. For example, a $0.8~\mathrm{M}_{\sun}$ CEMP-\emph{s} star with an initial mass of $0.6~\mathrm{M}_{\sun}$ retains a more massive envelope than one with an initial mass of $0.7~\mathrm{M}_{\sun}$. This is due to the higher average opacity of these stars (more metal-rich stars maintain thicker convective envelopes for the same reason). The difference in $M_{\mathrm{env}}$ can be a factor of 2--10 (depending on $M_{1}$, $M_{2,\mathrm{f}}$, and the range of $M_{2,\mathrm{i}}$), which can lead to substantially different abundances when the envelopes are small (Fig.~\ref{fig:ab_vs_menv}).
As can be seen from Fig.~\ref{fig:ab_vs_menv} and Table~\ref{tab:Results_main}, diffusion is extremely efficient in many of our models, leading to unrealistic abundance anomalies (e.g. $\mathrm{[C/Fe]}<-1$ or $\mathrm{[C/Fe]}>4$). In most of our more massive CEMP-\emph{s} models ($M_{2,\mathrm{f}}\geq0.85~\mathrm{M}_{\sun}$) diffusion is so efficient that our code is incapable of resolving the steep abundance gradients developing at the base of the envelope, and we are forced to stop the computations before the main-sequence turn-off. Such massive CEMP-\emph{s} stars are nevertheless expected according to population synthesis calculations \citep{2015A&A...581A..62A} and would help explain the properties of some CEMP-\emph{s} RR Lyrae stars \citep{2013MNRAS.435..698S}, so we would like to explore their connection to observations. Therefore, we proceed by assuming that diffusion, and possibly thermohaline mixing, is inhibited throughout these stars for one reason or another (leaving open the nature and cause of the underlying mechanism) and evolve two sets of model sequences without atomic diffusion: one set with thermohaline mixing and one without. The results from these simulations are summarized in Table~\ref{tab:Results_massive}.
These models differ from the model sequences with atomic diffusion in some key global properties and surface abundances. First, the surface abundances do not change prior to mass transfer. More importantly, after thermohaline mixing has ceased, no further abundance changes occur until FDU (i.e. between the stages labelled `5' and `9' in Figs.~\ref{fig:ms0750dm0050mp125-hrd} and \ref{fig:ms0750dm0050mp125-XvsL}). The importance of FDU still depends on the depth of thermohaline mixing, as in models with diffusion. In models without thermohaline mixing the surface abundances do not change until FDU, during which the accreted material is invariably diluted by mixing throughout most of the star (down to a mass coordinate of $0.3\text{--}0.35~\mathrm{M}_{\sun}$).
In agreement with previous studies, models with diffusion are younger (by a few percent) at a given evolutionary stage and have lower effective temperatures at the turn-off than models without diffusion, primarily because of the gravitational settling of helium throughout the star \citep[e.g.][]{1999A&A...344...97C,2002ApJ...571..487V,2012MNRAS.427..127B}. Our non-accreting models with diffusion are about 150~K cooler than models without diffusion (compare Tables~\ref{tab:Results_main} and \ref{tab:Results_massive}). Among our CEMP-\emph{s} models, those with thermohaline mixing but without diffusion are generally 100 to 300~K hotter than models with neither process. The latter are cooler because of the high opacity of their outer layers, owing to the metal-rich accreted material. CEMP-\emph{s} models with diffusion likely fall somewhere in between, but we cannot make a proper comparison because our more massive models with diffusion do not reach the turn-off for numerical reasons.
\subsection{Thermohaline mixing}
The fraction of the star mixed by thermohaline convection, which we shall call the thermohaline mixing efficiency, essentially depends on the mean molecular weight of the accreted material and its total mass compared to the final mass of the star. The more helium- and metal-rich the accreted material, the greater its molecular weight compared to the initial composition, and the greater the portion of the star that gets mixed. From the last column of Table~\ref{tab:xinp} we therefore expect that, for a given amount of accreted material, thermohaline mixing should be most efficient when that material comes from a primary of $1.5~\mathrm{M}_{\sun}$ and least efficient when it comes from a primary of $1.0~\mathrm{M}_{\sun}$, which is indeed the case (Fig.~\ref{fig:thmix_eff}).
\begin{figure}
\includegraphics[width=1\columnwidth]{fig3}
\caption{Thermohaline mixing efficiency (fraction of the star that is mixed) as a function of the ratio between accreted mass and final mass for different primaries. Black and grey symbols correspond to models with and without diffusion, respectively. Each symbol represents a unique combination of $M_{1}$, $M_{2,\mathrm{f}}$, and $\Delta M$.\label{fig:thmix_eff}}
\end{figure}
Furthermore, the greater the amount of the high-$\mu$ material, the deeper the mixing must be for the $\mu$-gradient to be removed. If an amount $\Delta M$ of AGB ejecta with a mean molecular weight $\mu_{\text{a}}$ is mixed with $M_{2,\text{f}}-\Delta M-m_{\text{thm}}$ of the unpolluted material with an average molecular weight $\mu_{\text{i}}$ ($<\mu_{\text{a}}$) before the $\mu$-gradient is removed, a mixed region of mass $M_{2,\text{f}}-m_{\text{thm}}$ with molecular weight $\mu_{\text{f}}$ results (Fig.~\ref{fig:thmix_illustration}). Equating the states before and after mixing shows that removing the $\mu$-gradient implies a linear relationship between the accreted-to-final mass ratio and the mixing efficiency. Indeed, a linear relationship is a reasonable approximation in the range $0.05\lesssim\Delta M/M_{2,\mathrm{f}}\lesssim0.2$ of accreted-to-final mass ratios (Fig.~\ref{fig:thmix_eff}). However, higher and lower ratios of $\Delta M/M_{2,\mathrm{f}}$ require special consideration.
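Before turning to those limiting cases, the linear scaling itself can be made explicit. In a minimal sketch (assuming complete mixing of the region down to $m_{\text{thm}}$ and treating $1/\mu$, which is proportional to the number of particles per unit mass, as the quantity that averages linearly in mass), particle conservation before and after mixing gives
\begin{equation}
\frac{M_{2,\text{f}}-m_{\text{thm}}}{\mu_{\text{f}}}=\frac{\Delta M}{\mu_{\text{a}}}+\frac{M_{2,\text{f}}-\Delta M-m_{\text{thm}}}{\mu_{\text{i}}}.
\end{equation}
Writing the mixing efficiency as $f=(M_{2,\text{f}}-m_{\text{thm}})/M_{2,\text{f}}$ and solving for it yields
\begin{equation}
f=\frac{\mu_{\text{f}}\left(\mu_{\text{a}}-\mu_{\text{i}}\right)}{\mu_{\text{a}}\left(\mu_{\text{f}}-\mu_{\text{i}}\right)}\,\frac{\Delta M}{M_{2,\text{f}}},
\end{equation}
which is indeed linear in $\Delta M/M_{2,\text{f}}$ as long as $\mu_{\text{f}}$, set by the molecular weight at the depth where the stabilizing gradient becomes steep, varies little from model to model.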
\begin{figure}
\includegraphics[width=1\columnwidth]{fig4}
\caption{Schematic illustration of the structure of the star before (a) and after (b) thermohaline mixing. The shaded area around $m_\text{thm}$ and below indicates a region in which the mean molecular weight has been raised as a result of nuclear processing.\label{fig:thmix_illustration}}
\end{figure}
First, a high $\Delta M/M_{2,\mathrm{f}}$ corresponds to a case in which a large amount of mass is transferred to a low-mass star, which implies that the mass transfer takes place while the secondary is still close to the ZAMS. Two outcomes are possible. If the accreted material has a sufficiently high molecular weight compared to the molecular weight throughout the star, thermohaline mixing affects the whole star and the composition is nearly homogenized. For example, this occurs when a $0.65~\mathrm{M}_{\sun}$ secondary accretes $0.2~\mathrm{M}_{\sun}$ of material from a $1.5~\mathrm{M}_{\sun}$ primary. On the other hand, if the accreted material has a lower molecular weight than some central region of the star, increasing the accreted mass will not lead to much deeper mixing because of the steepness of the $\mu$-gradient near the center. The mixing efficiency can therefore decrease for higher $\Delta M/M_{2,\mathrm{f}}$ (although at this point almost all of the star will be mixed anyway).
Second, in models with diffusion, low accreted-to-final mass ratios lead to relatively inefficient thermohaline mixing. In contrast to models without diffusion, where the molecular weight is constant outside of nuclear burning regions, these models have a stabilizing $\mu$-gradient throughout the star owing to gravitational settling. This presents a ``$\mu$-barrier'' to thermohaline mixing that must be overcome for mixing to happen \citep{2008ApJ...677..556T}. If only a tiny amount of material is accreted ($\Delta M\lesssim0.001~\mathrm{M}_{\sun}$), thermohaline mixing can be almost completely inhibited (bottom-left corner of Fig.~\ref{fig:thmix_eff}). Nevertheless, even in these cases the surface carbon content is depleted by a factor of two or more because the mass of the mixed region, $M_{2,\text{f}}-m_{\text{thm}}$, is always greater than $\Delta M$.
For $\Delta M>0.01~\mathrm{M}_{\sun}$ the $\mu$-barrier is largely overwhelmed. Nevertheless, even for large amounts of accretion the mixing remains slightly less efficient than in models without diffusion. Similar conclusions were reached by \citet{2008MNRAS.389.1828S}. Overall, the surface carbon abundance is typically reduced by between 0.3 and 1~dex, depending on the relative amount of accreted material and its molecular weight. Since radiative accelerations have almost no influence on the molecular weight profile deep in the star, they have almost no influence on the efficiency of thermohaline mixing.
\section{\label{sec:Discussion}Comparison with observations}
We have presented four sets of models of CEMP-\emph{s} stars. Two of the sets comprise models with thermohaline mixing and atomic diffusion (one set with, one without radiative levitation). The other two sets comprise models without diffusion (one set with, one without thermohaline mixing). We now compare the four sets of models with observations of CEMP stars.
The largest data set of Galactic metal-poor stellar spectra currently available is that from the SDSS and SEGUE surveys. \citet{2013AJ....146..132L} used the spectra collected by SDSS/SEGUE to derive the stellar parameters and carbon abundances ({[}C/Fe{]}) in close to 250\,000 stars, around 10\,000 of which have $\text{[Fe/H]}<-2$. We now use this homogeneous metal-poor sample \citep[priv. comm.]{2013AJ....146..132L} to compare the observed carbon abundances with our models.
Comparing the observed {[}C/Fe{]} abundances with models must be done with care. Use of a fixed metallicity ({[}Fe/H{]}) range might not be adequate because of the diffusion of iron (Figs.~\ref{fig:feh_gsra_0.8} and \ref{fig:feh_gsra_0.85}). Only the $0.8~\mathrm{M}_{\sun}$ models with levitation generally predict {[}Fe/H{]} to remain within a factor of about two of the initial value ($\text{[Fe/H]}=-2.14$) throughout the evolution, whereas the models without levitation have $[\mathrm{Fe/H}]\lesssim-2.5$ near the turn-off. Meanwhile, most of the $0.85~\mathrm{M}_{\sun}$ models have $[\mathrm{Fe/H}]<-3$ (without levitation) or $[\mathrm{Fe/H}]\gtrsim-1$ (with levitation).
\begin{figure*}
\subfloat{\includegraphics[width=1\columnwidth]{fig5a}\label{fig:feh_gsra_0.8}}\hspace{\columnsep}\subfloat{\includegraphics[width=1\columnwidth]{fig5b}\label{fig:feh_gsra_0.85}}
\subfloat{\includegraphics[width=1\columnwidth]{fig5c}\label{fig:ch_gsra_0.8}}\hspace{\columnsep}\subfloat{\includegraphics[width=1\columnwidth]{fig5d}\label{fig:ch_gsra_0.85}}
\subfloat{\includegraphics[width=1\columnwidth]{fig5e}\label{fig:cfe_gsra_0.8}}\hspace{\columnsep}\subfloat{\includegraphics[width=1\columnwidth]{fig5f}\label{fig:cfe_gsra_0.85}}
\caption{Evolution of {[}Fe/H{]} (upper panels), {[}C/H{]} (middle panels) and {[}C/Fe{]} (lower panels) in CEMP-\emph{s} models of $0.8~\mathrm{M}_{\sun}$ (left panels) and $0.85~\mathrm{M}_{\sun}$ (right panels) with diffusion. Thick lines are models with radiative levitation, whereas thin lines are those without. Empty circles mark an age of 10~Gyr and filled circles mark an age of 13.8~Gyr, i.e. the part of the track between the circles covers the expected age range of CEMP-\emph{s} stars. The small, grey circles show the metal-poor stars observed in the Sloan Digital Sky Survey in the metallicity range $-2.5\leq\text{[Fe/H]}\leq-2.0$ \citep{2013AJ....146..132L}.\label{fig:feh_ch_cfe_gsra_0.8_0.85}}
\end{figure*}
It is safer to first consider {[}C/H{]} because the largest {[}C/H{]} values should be close to 0.5~dex, independent of metallicity. In the metallicity range typical of CEMP-\emph{s} stars ($-2.5\leq\text{[Fe/H]}\leq-2.0$), this is indeed the case for subgiants ($\log g\lesssim3.7$), but very few turn-off stars ($\log g\approx4$) seem to have $\text{[C/H]}>0$ (Figs.~\ref{fig:ch_gsra_0.8} and \ref{fig:ch_gsra_0.85}). Is this difference in the maximum {[}C/H{]} values between the two groups in the observational data ($\Delta\text{[C/H]}\approx0.5$) evidence of gravitational settling of carbon in CEMP stars? And does the similar 0.5~dex difference seen in the {[}C/Fe{]} data (Figs.~\ref{fig:cfe_gsra_0.8} and \ref{fig:cfe_gsra_0.85}) then imply that iron is levitated just enough that its abundance stays roughly constant throughout the evolution? This seems rather unlikely, because then the carbon-normal population should plausibly also have lower {[}C/H{]} (and {[}C/Fe{]}) values near the turn-off, which is not the case. On the contrary, the observations suggest that the carbon abundance in carbon-normal metal-poor stars increases along the main sequence up to the turn-off. This is exactly the opposite of the behaviour that atomic diffusion predicts. Moreover, there is no obvious candidate for a physical process that could cause the surface carbon abundance to increase on the main sequence.
The carbon-normal metal-poor stars listed in the Stellar Abundances for Galactic Archeology database \citep[SAGA;][]{2008PASJ...60.1159S,2011MNRAS.412..843S} do not seem to show a similar increase of the carbon abundance on the main sequence (C.~Abate, priv. comm.), although the small-number statistics (23 stars with $-2.5\leq\text{[Fe/H]}\leq-2.0$, $\text{[C/Fe]}<1$ and $\log g>4.0$) and the heterogeneity of the data could hide such a trend. Unfortunately, whether there is a real difference in the upper {[}C/H{]} and {[}C/Fe{]} values between turn-off stars and subgiants remains unclear.
Most of the models are at odds with the \citet{2013AJ....146..132L} data, regardless of whether the abundance differences between turn-off stars and subgiants are caused by atomic diffusion. For example, while some of the $0.8~\mathrm{M}_{\sun}$ models predict abundances that are consistent with the observations, they only do so at very late times, i.e. at ages exceeding the age of the Universe (13.8~Gyr; \citealt{2013ApJS..208...19H}). At earlier times stars with low-mass AGB companions (which thus accreted mass later) and/or with low initial masses (which were thus less evolved at the point of mass transfer) are still relatively unevolved. They should be observable as carbon-rich low-luminosity ($\log g\gtrsim4.1$) objects. But such objects are conspicuous by their absence in the \citet{2013AJ....146..132L} results (at all metallicities; their figure~6). Since there are plenty of carbon-normal low-luminosity stars in the data, it is difficult to imagine how this could be a selection effect, particularly since a few carbon-rich dwarfs have been found in the SDSS data in detailed abundance studies \citep{2008ApJ...678.1351A,2010A&A...513A..72B}.\footnote{Two examples are the CEMP-\emph{no} (or -\emph{r}) star SDSS0036-10 ($\text{[Fe/H]}=-2.4$, $\log g=4.5$, $\text{[C/Fe]}=2.3$, $\text{[Ba/Fe]}=0.3$) and the CEMP-\emph{s} (or -\emph{r}/\emph{s}) star SDSS2047+10 ($\text{[Fe/H]}=-2.1$, $\log g=4.5$, $\text{[C/Fe]}=2.0$, $\text{[Ba/Fe]}=1.5$) from \citet{2008ApJ...678.1351A}. \citet{2010A&A...513A..72B} present the CEMP-\emph{r}/\emph{s} star SDSSJ0912+0216 ($\text{[Fe/H]}=-2.5$, $\log g=4.5$, $\text{[C/Fe]}\approx1.5$, $\text{[Ba/Fe]}=1.5$, $\text{[Eu/Fe]}=1.2$), which may, however, be more evolved with $\log g\approx4.0$ \citep{2013AJ....145...13A}.} By contrast, the $0.85~\mathrm{M}_{\sun}$ CEMP-\emph{s} models are sufficiently evolved but diffusion is so efficient that unrealistic abundances (e.g. $[\mathrm{C/Fe}]<-0.5$ with levitation or $[\mathrm{C/Fe}]>3.5$ without levitation) are predicted in nearly all such stars around the turn-off (Fig.~\ref{fig:cfe_gsra_0.85}). Clearly, in these more massive CEMP-\emph{s} stars some physical process must be countering atomic diffusion, at least near their surface.
The disagreement between observations and models concerning the existence of low-luminosity carbon-rich stars has little to do with atomic diffusion -- even if we identify a process that neatly counteracts diffusion near the surface, models will still predict many carbon-rich unevolved objects. In fact, if this process were to counteract diffusion throughout the star, the tension with observations would increase, because diffusion starves the core of fuel and accelerates the evolution; without it the star would spend even more time on the main sequence. Perhaps the SDSS sample indicates that the mass ratio ($q\equiv M_{2,\mathrm{i}}/M_{1}$) in CEMP-\emph{s} progenitor systems is biased towards unity, contrary to the common assumption of a flat distribution. If $q$ is close to one, the mass transfer occurs relatively late, giving the secondary more time to evolve before it becomes a CEMP-\emph{s} star.
Overall, models without diffusion encompass the observational data better (Fig.~\ref{fig:cfe_notm_0.9_0.95}). For example, CEMP-\emph{s} models that have accreted less carbon-rich material (here coming from $1~\mathrm{M}_{\sun}$ primaries) have $[\mathrm{C/Fe}]<1.5$, with {[}C/Fe{]} increasing with accreted mass. Models that have accreted more carbon-rich material (from primaries with masses of $1.25$ and $1.5~\mathrm{M}_{\sun}$) can have $[\mathrm{C/Fe}]\gtrsim2$ throughout the evolution. Models with thermohaline mixing seem to be preferred because no sharp change in {[}C/Fe{]} at FDU is evident in the data. Lower-mass models ($M_{2,\text{f}}\approx0.8~\mathrm{M}_{\sun}$) without diffusion would predict a similar abundance evolution \citep{2007A&A...464L..57S,2008MNRAS.389.1828S}. However, most of them, coming from systems with relatively low-mass AGB companions compared to earlier works, would be consistent with the data only for $t>13.8~\text{Gyr}$.
\begin{longtab}
\begin{longtable}{ll>{\raggedright}p{1.2cm}l>{\raggedright}p{1.4cm}lllllll>{\raggedright}p{1.4cm}}
\caption{Results from simulations including atomic diffusion. The columns list
the initial mass of the secondary ($M_{2,\text{i}}$); accreted mass
($\Delta M$); whether levitation was included; the deepest mass coordinate
reached by thermohaline mixing ($m_{\text{thm}}$);
{[}C/Fe{]} after thermohaline mixing ends;
the age ($t$), luminosity ($L$), effective temperature ($T_{\text{eff}}$),
surface gravity ($g$), envelope mass ($M_{\text{env}}$), metallicity
({[}Fe/H{]}), and {[}C/Fe{]} when the envelope mass reaches a minimum;
{[}C/Fe{]} after first dredge-up. The
table is sectioned according to the initial primary mass, $M_{1}$.
\label{tab:Results_main}} \\
\hline
\hline
\multirow{2}{*}{{\tiny{}$M_{2,\mathrm{i}}$}} & \multirow{2}{*}{{\tiny{}$\Delta M$}} & \multirow{2}{1.2cm}{{\tiny{}Levitation?}} & \multirow{2}{*}{{\tiny{}$m_{\mathrm{\text{thm}}}$}} & \multirow{2}{1.6cm}{{\tiny{}{[}C/Fe{]} post-th.mix.}} & \multicolumn{7}{l}{{\tiny{}At the time when envelope mass is smallest}} & \multirow{2}{1.4cm}{{\tiny{}$\text{[C/Fe]}$\tablefootmark{b} post-FDU}}\tabularnewline
\cline{6-12}
& & & & & {\tiny{}$t$ (Gyr)} & {\tiny{}$\log(L/\mathrm{L}_{\sun})$} & {\tiny{}$T_{\mathrm{eff}}$} & {\tiny{}$\log g$} & {\tiny{}$M_{\mathrm{env}}$} & {\tiny{}$\text{[Fe/H]}$\tablefootmark{a}} & {\tiny{}$\text{[C/Fe]}$\tablefootmark{a}} & \tabularnewline
\hline
\endfirsthead
\caption{continued.}\\
\hline
\hline
\multirow{2}{*}{{\tiny{}$M_{2,\mathrm{i}}$}} & \multirow{2}{*}{{\tiny{}$\Delta M$}} & \multirow{2}{1.2cm}{{\tiny{}Levitation?}} & \multirow{2}{*}{{\tiny{}$m_{\mathrm{\text{thm}}}$}} & \multirow{2}{1.6cm}{{\tiny{}{[}C/Fe{]} post-th.mix.}} & \multicolumn{7}{l}{{\tiny{}At the time when envelope mass is smallest}} & \multirow{2}{1.4cm}{{\tiny{}$\text{[C/Fe]}$\tablefootmark{b} post-FDU}}\tabularnewline
\cline{6-12}
& & & & & {\tiny{}$t$ (Gyr)} & {\tiny{}$\log(L/\mathrm{L}_{\sun})$} & {\tiny{}$T_{\mathrm{eff}}$} & {\tiny{}$\log g$} & {\tiny{}$M_{\mathrm{env}}$} & {\tiny{}$\text{[Fe/H]}$\tablefootmark{a}} & {\tiny{}$\text{[C/Fe]}$\tablefootmark{a}} & \tabularnewline
\hline
\endhead
\hline
\endfoot
\multicolumn{13}{c}{{\tiny{}$M_{1}=0.9\thinspace\mathrm{M}_{\sun}$}}\tabularnewline
{\tiny{}$0.700$} & {\tiny{}$0.100$} & {\tiny{}no } & {\tiny{}$0.411$} & {\tiny{}$ 1.88$} & {\tiny{}$15.84$} & {\tiny{}$ 0.4461$} & {\tiny{}$6366$} & {\tiny{}$4.06$} & {\tiny{}$2.79(-5)$} & {\tiny{}$-2.54$} & {\tiny{}$ 1.82$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.410$} & {\tiny{}$ 1.88$} & {\tiny{}$15.83$} & {\tiny{}$ 0.4462$} & {\tiny{}$6365$} & {\tiny{}$4.06$} & {\tiny{}$2.90(-5)$} & {\tiny{}$-2.25$} & {\tiny{}$ 1.56$} & {\tiny{} \ldots } \tabularnewline
{\tiny{}$0.750$} & {\tiny{}$0.050$} & {\tiny{}no } & {\tiny{}$0.526$} & {\tiny{}$ 1.77$} & {\tiny{}$14.02$} & {\tiny{}$ 0.4573$} & {\tiny{}$6400$} & {\tiny{}$4.06$} & {\tiny{}$1.59(-5)$} & {\tiny{}$-2.61$} & {\tiny{}$ 1.71$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.525$} & {\tiny{}$ 1.78$} & {\tiny{}$14.00$} & {\tiny{}$ 0.4544$} & {\tiny{}$6402$} & {\tiny{}$4.06$} & {\tiny{}$1.69(-5)$} & {\tiny{}$-2.13$} & {\tiny{}$ 1.27$} & {\tiny{}$ 1.38$} \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.001$} & {\tiny{}no } & {\tiny{}$0.797$} & {\tiny{}$ 1.85$} & {\tiny{}$11.94$} & {\tiny{}$ 0.4986$} & {\tiny{}$6471$} & {\tiny{}$4.04$} & {\tiny{}$3.36(-6)$} & {\tiny{}$-3.05$} & {\tiny{}$ 1.86$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.797$} & {\tiny{}$ 1.83$} & {\tiny{}$11.93$} & {\tiny{}$ 0.4959$} & {\tiny{}$6465$} & {\tiny{}$4.04$} & {\tiny{}$3.98(-6)$} & {\tiny{}$-1.60$} & {\tiny{}$ 0.46$} & {\tiny{}$ 0.15$} \tabularnewline
{\tiny{}$0.800$\tablefootmark{\textasteriskcentered}} & {\tiny{}$0.010$} & {\tiny{}no } & {\tiny{}$0.760$} & {\tiny{}$ 1.76$} & {\tiny{}$11.79$} & {\tiny{}$ 0.4922$} & {\tiny{}$6464$} & {\tiny{}$4.05$} & {\tiny{}$5.28(-6)$} & {\tiny{}$-2.82$} & {\tiny{}$ 1.73$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.760$} & {\tiny{}$ 1.75$} & {\tiny{}$11.80$} & {\tiny{}$ 0.4930$} & {\tiny{}$6457$} & {\tiny{}$4.05$} & {\tiny{}$6.04(-6)$} & {\tiny{}$-1.73$} & {\tiny{}$ 0.67$} & {\tiny{}$ 0.72$} \tabularnewline
{\tiny{}$0.750$} & {\tiny{}$0.100$} & {\tiny{}no } & {\tiny{}$0.462$} & {\tiny{}$ 1.91$} & {\tiny{}$13.24$} & {\tiny{}$ 0.5482$} & {\tiny{}$6581$} & {\tiny{}$4.05$} & {\tiny{}$3.94(-7)$} & {\tiny{}$-5.32$} & {\tiny{}$ 2.72$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.461$} & {\tiny{}$ 1.91$} & {\tiny{}$13.25$} & {\tiny{}$ 0.5490$} & {\tiny{}$6555$} & {\tiny{}$4.04$} & {\tiny{}$4.66(-7)$} & {\tiny{}$-0.63$} & {\tiny{}$-1.67$} & {\tiny{} \ldots } \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.050$} & {\tiny{}no } & {\tiny{}$0.576$} & {\tiny{}$ 1.84$} & {\tiny{}$11.50$} & {\tiny{}$ 0.5567$} & {\tiny{}$6647$} & {\tiny{}$4.05$} & {\tiny{}$1.44(-7)$} & {\tiny{}$-7.71$} & {\tiny{}$ 3.95$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.578$} & {\tiny{}$ 1.76$} & {\tiny{}$11.48$} & {\tiny{}$ 0.5522$} & {\tiny{}$6616$} & {\tiny{}$4.05$} & {\tiny{}$1.56(-7)$} & {\tiny{}$-0.35$} & {\tiny{}$-2.60$} & {\tiny{}$ 1.34$} \tabularnewline
{\tiny{}$0.700$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{}$0.361$} & {\tiny{}$ 2.06$} & {\tiny{}$12.76$\tablefootmark{\dag}} & {\tiny{}$ 0.4587$} & {\tiny{}$6798$} & {\tiny{}$4.22$} & {\tiny{}$5.97(-8)$} & {\tiny{}$-$inf } & {\tiny{}inf } & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.360$} & {\tiny{}$ 2.05$} & {\tiny{}$12.99$\tablefootmark{\dag}} & {\tiny{}$ 0.4907$} & {\tiny{}$6781$} & {\tiny{}$4.18$} & {\tiny{}$4.14(-8)$} & {\tiny{}$-0.44$} & {\tiny{}$-3.90$} & {\tiny{} \ldots } \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.100$} & {\tiny{}no } & {\tiny{}$0.515$} & {\tiny{}$ 2.06$} & {\tiny{}$10.13$\tablefootmark{\dag}} & {\tiny{}$ 0.4557$} & {\tiny{}$6858$} & {\tiny{}$4.23$} & {\tiny{}$2.91(-8)$} & {\tiny{}$-$inf } & {\tiny{}inf } & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.518$} & {\tiny{}$ 1.49$} & {\tiny{}$10.54$\tablefootmark{\dag}} & {\tiny{}$ 0.5170$} & {\tiny{}$6874$} & {\tiny{}$4.18$} & {\tiny{}$1.03(-8)$} & {\tiny{}$-0.42$} & {\tiny{}$-3.17$} & {\tiny{} \ldots } \tabularnewline
\multicolumn{13}{c}{{\tiny{}$M_{1}=1.0\thinspace\mathrm{M}_{\sun}$}}\tabularnewline
{\tiny{}$0.600$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{}$0.342$} & {\tiny{}$ 1.55$} & {\tiny{}$16.00$\tablefootmark{\ddag}} & {\tiny{}$ 0.4132$} & {\tiny{}$6495$} & {\tiny{}$4.13$} & {\tiny{}$4.03(-6)$} & {\tiny{}$-3.05$} & {\tiny{}$ 1.50$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.341$} & {\tiny{}$ 1.55$} & {\tiny{}$16.00$\tablefootmark{\ddag}} & {\tiny{}$ 0.4120$} & {\tiny{}$6486$} & {\tiny{}$4.13$} & {\tiny{}$4.80(-6)$} & {\tiny{}$-1.84$} & {\tiny{}$ 0.38$} & {\tiny{} \ldots } \tabularnewline
{\tiny{}$0.700$} & {\tiny{}$0.100$} & {\tiny{}no } & {\tiny{}$0.477$} & {\tiny{}$ 1.44$} & {\tiny{}$14.66$} & {\tiny{}$ 0.4815$} & {\tiny{}$6489$} & {\tiny{}$4.06$} & {\tiny{}$2.21(-6)$} & {\tiny{}$-3.44$} & {\tiny{}$ 1.48$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.479$} & {\tiny{}$ 1.44$} & {\tiny{}$14.67$} & {\tiny{}$ 0.4854$} & {\tiny{}$6476$} & {\tiny{}$4.05$} & {\tiny{}$2.66(-6)$} & {\tiny{}$-1.48$} & {\tiny{}$-0.37$} & {\tiny{}$ 1.11$} \tabularnewline
{\tiny{}$0.750$} & {\tiny{}$0.050$} & {\tiny{}no } & {\tiny{}$0.602$} & {\tiny{}$ 1.36$} & {\tiny{}$13.45$} & {\tiny{}$ 0.4875$} & {\tiny{}$6498$} & {\tiny{}$4.06$} & {\tiny{}$1.82(-6)$} & {\tiny{}$-3.56$} & {\tiny{}$ 1.45$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.601$} & {\tiny{}$ 1.36$} & {\tiny{}$13.43$} & {\tiny{}$ 0.4871$} & {\tiny{}$6488$} & {\tiny{}$4.06$} & {\tiny{}$2.16(-6)$} & {\tiny{}$-1.37$} & {\tiny{}$-0.62$} & {\tiny{}$ 0.83$} \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.001$} & {\tiny{}no } & {\tiny{}$0.797$} & {\tiny{}$ 1.28$} & {\tiny{}$11.93$} & {\tiny{}$ 0.4992$} & {\tiny{}$6531$} & {\tiny{}$4.06$} & {\tiny{}$9.59(-7)$} & {\tiny{}$-4.11$} & {\tiny{}$ 1.56$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.797$} & {\tiny{}$ 1.28$} & {\tiny{}$11.92$} & {\tiny{}$ 0.4987$} & {\tiny{}$6515$} & {\tiny{}$4.05$} & {\tiny{}$1.18(-6)$} & {\tiny{}$-1.10$} & {\tiny{}$-1.25$} & {\tiny{}$ 0.03$} \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.010$} & {\tiny{}no } & {\tiny{}$0.778$} & {\tiny{}$ 1.34$} & {\tiny{}$11.73$} & {\tiny{}$ 0.5113$} & {\tiny{}$6558$} & {\tiny{}$4.06$} & {\tiny{}$5.99(-7)$} & {\tiny{}$-4.69$} & {\tiny{}$ 1.85$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.777$} & {\tiny{}$ 1.33$} & {\tiny{}$11.73$} & {\tiny{}$ 0.5109$} & {\tiny{}$6540$} & {\tiny{}$4.05$} & {\tiny{}$7.05(-7)$} & {\tiny{}$-0.85$} & {\tiny{}$-1.74$} & {\tiny{}$ 0.31$} \tabularnewline
{\tiny{}$0.650$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{}$0.380$} & {\tiny{}$ 1.56$} & {\tiny{}$13.16$\tablefootmark{\dag}} & {\tiny{}$ 0.4455$} & {\tiny{}$6764$} & {\tiny{}$4.20$} & {\tiny{}$7.90(-8)$} & {\tiny{}$-$inf } & {\tiny{}inf } & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.382$} & {\tiny{}$ 1.58$} & {\tiny{}$13.51$\tablefootmark{\dag}} & {\tiny{}$ 0.5007$} & {\tiny{}$6759$} & {\tiny{}$4.14$} & {\tiny{}$3.76(-8)$} & {\tiny{}$-0.42$} & {\tiny{}$-4.47$} & {\tiny{} \ldots } \tabularnewline
{\tiny{}$0.750$} & {\tiny{}$0.100$} & {\tiny{}no } & {\tiny{}$0.528$} & {\tiny{}$ 1.48$} & {\tiny{}$11.33$\tablefootmark{\dag}} & {\tiny{}$ 0.4298$} & {\tiny{}$6776$} & {\tiny{}$4.21$} & {\tiny{}$7.68(-8)$} & {\tiny{}$-$inf } & {\tiny{}inf } & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.528$} & {\tiny{}$ 1.48$} & {\tiny{}$11.78$\tablefootmark{\dag}} & {\tiny{}$ 0.5005$} & {\tiny{}$6783$} & {\tiny{}$4.15$} & {\tiny{}$2.78(-8)$} & {\tiny{}$-0.42$} & {\tiny{}$-4.82$} & {\tiny{} \ldots } \tabularnewline
{\tiny{}$0.800$\tablefootmark{\textasteriskcentered}} & {\tiny{}$0.050$} & {\tiny{}no } & {\tiny{}$0.655$} & {\tiny{}$ 1.42$} & {\tiny{}$10.32$\tablefootmark{\dag}} & {\tiny{}$ 0.4584$} & {\tiny{}$6812$} & {\tiny{}$4.20$} & {\tiny{}$3.87(-8)$} & {\tiny{}$-$inf } & {\tiny{}inf } & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.654$} & {\tiny{}$ 1.39$} & {\tiny{}$10.31$\tablefootmark{\dag}} & {\tiny{}$ 0.4566$} & {\tiny{}$6781$} & {\tiny{}$4.19$} & {\tiny{}$4.25(-8)$} & {\tiny{}$-0.44$} & {\tiny{}$-4.46$} & {\tiny{} \ldots } \tabularnewline
{\tiny{}$0.700$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{}$0.427$} & {\tiny{}$ 1.62$} & {\tiny{}$ 9.88$\tablefootmark{\dag}} & {\tiny{}$ 0.3482$} & {\tiny{}$6879$} & {\tiny{}$4.35$} & {\tiny{}$5.60(-8)$} & {\tiny{}$-$inf } & {\tiny{}inf } & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.428$} & {\tiny{}$ 1.58$} & {\tiny{}$10.35$\tablefootmark{\dag}} & {\tiny{}$ 0.3998$} & {\tiny{}$6918$} & {\tiny{}$4.31$} & {\tiny{}$1.68(-8)$} & {\tiny{}$-0.56$} & {\tiny{}$-7.12$} & {\tiny{} \ldots } \tabularnewline
\multicolumn{13}{c}{{\tiny{}$M_{1}=1.25\thinspace\mathrm{M}_{\sun}$}}\tabularnewline
{\tiny{}$0.600$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{}$0.158$} & {\tiny{}$ 2.21$} & {\tiny{}$14.35$} & {\tiny{}$ 0.4497$} & {\tiny{}$6302$} & {\tiny{}$4.04$} & {\tiny{}$6.44(-5)$} & {\tiny{}$-2.46$} & {\tiny{}$ 2.15$} & {\tiny{}$ 2.04$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.156$} & {\tiny{}$ 2.21$} & {\tiny{}$14.34$} & {\tiny{}$ 0.4487$} & {\tiny{}$6302$} & {\tiny{}$4.04$} & {\tiny{}$6.63(-5)$} & {\tiny{}$-2.34$} & {\tiny{}$ 2.04$} & {\tiny{}$ 2.04$} \tabularnewline
{\tiny{}$0.700$\tablefootmark{\textasteriskcentered}} & {\tiny{}$0.100$} & {\tiny{}no } & {\tiny{}$0.263$} & {\tiny{}$ 1.94$} & {\tiny{}$13.31$} & {\tiny{}$ 0.4596$} & {\tiny{}$6358$} & {\tiny{}$4.05$} & {\tiny{}$2.42(-5)$} & {\tiny{}$-2.60$} & {\tiny{}$ 1.87$} & {\tiny{}$ 1.80$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.264$} & {\tiny{}$ 1.95$} & {\tiny{}$13.28$} & {\tiny{}$ 0.4534$} & {\tiny{}$6359$} & {\tiny{}$4.05$} & {\tiny{}$2.59(-5)$} & {\tiny{}$-2.26$} & {\tiny{}$ 1.57$} & {\tiny{}$ 1.80$} \tabularnewline
{\tiny{}$0.750$} & {\tiny{}$0.050$} & {\tiny{}no } & {\tiny{}$0.370$} & {\tiny{}$ 1.78$} & {\tiny{}$12.68$} & {\tiny{}$ 0.4628$} & {\tiny{}$6400$} & {\tiny{}$4.06$} & {\tiny{}$1.19(-5)$} & {\tiny{}$-2.74$} & {\tiny{}$ 1.71$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.370$} & {\tiny{}$ 1.78$} & {\tiny{}$12.66$} & {\tiny{}$ 0.4598$} & {\tiny{}$6397$} & {\tiny{}$4.06$} & {\tiny{}$1.32(-5)$} & {\tiny{}$-2.11$} & {\tiny{}$ 1.12$} & {\tiny{}$ 1.58$} \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.001$} & {\tiny{}no } & {\tiny{}$0.796$} & {\tiny{}$ 1.90$} & {\tiny{}$11.90$} & {\tiny{}$ 0.4925$} & {\tiny{}$6457$} & {\tiny{}$4.04$} & {\tiny{}$3.56(-6)$} & {\tiny{}$-3.15$} & {\tiny{}$ 1.88$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.796$} & {\tiny{}$ 1.90$} & {\tiny{}$11.89$} & {\tiny{}$ 0.4914$} & {\tiny{}$6449$} & {\tiny{}$4.04$} & {\tiny{}$4.40(-6)$} & {\tiny{}$-1.69$} & {\tiny{}$ 0.50$} & {\tiny{} \ldots } \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.010$} & {\tiny{}no } & {\tiny{}$0.600$} & {\tiny{}$ 1.52$} & {\tiny{}$11.58$} & {\tiny{}$ 0.5006$} & {\tiny{}$6507$} & {\tiny{}$4.05$} & {\tiny{}$1.46(-6)$} & {\tiny{}$-3.74$} & {\tiny{}$ 1.64$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.600$} & {\tiny{}$ 1.51$} & {\tiny{}$11.57$} & {\tiny{}$ 0.4979$} & {\tiny{}$6495$} & {\tiny{}$4.05$} & {\tiny{}$1.88(-6)$} & {\tiny{}$-1.30$} & {\tiny{}$-0.62$} & {\tiny{}$ 0.92$} \tabularnewline
{\tiny{}$0.650$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{}$0.149$} & {\tiny{}$ 2.12$} & {\tiny{}$11.65$} & {\tiny{}$ 0.5482$} & {\tiny{}$6513$} & {\tiny{}$4.03$} & {\tiny{}$1.13(-6)$} & {\tiny{}$-3.96$} & {\tiny{}$ 2.32$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.150$} & {\tiny{}$ 2.12$} & {\tiny{}$11.62$} & {\tiny{}$ 0.5426$} & {\tiny{}$6501$} & {\tiny{}$4.03$} & {\tiny{}$1.47(-6)$} & {\tiny{}$-1.17$} & {\tiny{}$-0.29$} & {\tiny{} \ldots } \tabularnewline
{\tiny{}$0.750$} & {\tiny{}$0.100$} & {\tiny{}no } & {\tiny{}$0.309$} & {\tiny{}$ 1.96$} & {\tiny{}$10.90$} & {\tiny{}$ 0.5598$} & {\tiny{}$6589$} & {\tiny{}$4.04$} & {\tiny{}$3.02(-7)$} & {\tiny{}$-6.13$} & {\tiny{}$ 3.10$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.308$} & {\tiny{}$ 1.97$} & {\tiny{}$10.88$} & {\tiny{}$ 0.5578$} & {\tiny{}$6561$} & {\tiny{}$4.03$} & {\tiny{}$3.60(-7)$} & {\tiny{}$-0.56$} & {\tiny{}$-1.98$} & {\tiny{}$ 1.78$} \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.050$} & {\tiny{}no } & {\tiny{}$0.419$} & {\tiny{}$ 1.81$} & {\tiny{}$10.28$} & {\tiny{}$ 0.5552$} & {\tiny{}$6662$} & {\tiny{}$4.06$} & {\tiny{}$1.15(-7)$} & {\tiny{}$-$inf } & {\tiny{}inf } & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.421$} & {\tiny{}$ 1.81$} & {\tiny{}$10.30$} & {\tiny{}$ 0.5597$} & {\tiny{}$6625$} & {\tiny{}$4.05$} & {\tiny{}$1.27(-7)$} & {\tiny{}$-0.36$} & {\tiny{}$-3.19$} & {\tiny{} \ldots } \tabularnewline
\multicolumn{13}{c}{{\tiny{}$M_{1}=1.5\thinspace\mathrm{M}_{\sun}$}}\tabularnewline
{\tiny{}$0.600$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{}$0.000$} & {\tiny{}$ 2.37$} & {\tiny{}$12.70$} & {\tiny{}$ 0.2746$} & {\tiny{}$6166$} & {\tiny{}$4.18$} & {\tiny{}$1.12(-3)$} & {\tiny{}$-2.29$} & {\tiny{}$ 2.35$} & {\tiny{}$ 2.28$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.000$} & {\tiny{}$ 2.40$} & {\tiny{}$12.72$} & {\tiny{}$ 0.2751$} & {\tiny{}$6166$} & {\tiny{}$4.18$} & {\tiny{}$1.13(-3)$} & {\tiny{}$-2.28$} & {\tiny{}$ 2.34$} & {\tiny{}$ 2.28$} \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.010$} & {\tiny{}no } & {\tiny{}$0.495$} & {\tiny{}$ 1.67$} & {\tiny{}$11.56$} & {\tiny{}$ 0.4913$} & {\tiny{}$6449$} & {\tiny{}$4.05$} & {\tiny{}$4.15(-6)$} & {\tiny{}$-3.11$} & {\tiny{}$ 1.64$} & {\tiny{}$ 1.22$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.493$} & {\tiny{}$ 1.67$} & {\tiny{}$11.54$} & {\tiny{}$ 0.4886$} & {\tiny{}$6445$} & {\tiny{}$4.05$} & {\tiny{}$5.01(-6)$} & {\tiny{}$-1.73$} & {\tiny{}$ 0.35$} & {\tiny{}$ 1.22$} \tabularnewline
{\tiny{}$0.650$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{}$0.000$} & {\tiny{}$ 2.37$} & {\tiny{}$10.44$} & {\tiny{}$ 0.3817$} & {\tiny{}$6307$} & {\tiny{}$4.14$} & {\tiny{}$1.30(-4)$} & {\tiny{}$-2.41$} & {\tiny{}$ 2.32$} & {\tiny{}$ 2.25$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.000$} & {\tiny{}$ 2.38$} & {\tiny{}$10.42$} & {\tiny{}$ 0.3804$} & {\tiny{}$6305$} & {\tiny{}$4.14$} & {\tiny{}$1.34(-4)$} & {\tiny{}$-2.35$} & {\tiny{}$ 2.28$} & {\tiny{}$ 2.25$} \tabularnewline
\multicolumn{13}{c}{{\tiny{}no accretion}}\tabularnewline
{\tiny{}$0.750$} & {\tiny{}$0.000$} & {\tiny{}no } & {\tiny{}$0.000$} & {\tiny{}$ 0.00$} & {\tiny{}$15.06$} & {\tiny{}$ 0.3914$} & {\tiny{}$6345$} & {\tiny{}$4.08$} & {\tiny{}$2.59(-5)$} & {\tiny{}$-2.63$} & {\tiny{}$-0.09$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.000$} & {\tiny{}$ 0.00$} & {\tiny{}$15.06$} & {\tiny{}$ 0.3901$} & {\tiny{}$6345$} & {\tiny{}$4.09$} & {\tiny{}$2.75(-5)$} & {\tiny{}$-2.33$} & {\tiny{}$-0.24$} & {\tiny{} \ldots } \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.000$} & {\tiny{}no } & {\tiny{}$0.000$} & {\tiny{}$ 0.00$} & {\tiny{}$11.93$} & {\tiny{}$ 0.4951$} & {\tiny{}$6580$} & {\tiny{}$4.07$} & {\tiny{}$4.48(-7)$} & {\tiny{}$-5.38$} & {\tiny{}$ 0.73$} & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.000$} & {\tiny{}$ 0.00$} & {\tiny{}$11.94$} & {\tiny{}$ 0.4961$} & {\tiny{}$6558$} & {\tiny{}$4.07$} & {\tiny{}$5.27(-7)$} & {\tiny{}$-0.80$} & {\tiny{}$-3.33$} & {\tiny{}$-0.02$} \tabularnewline
{\tiny{}$0.850$} & {\tiny{}$0.000$} & {\tiny{}no } & {\tiny{}$0.000$} & {\tiny{}$ 0.00$} & {\tiny{}$ 8.35$\tablefootmark{\dag}} & {\tiny{}$ 0.3818$} & {\tiny{}$6819$} & {\tiny{}$4.27$} & {\tiny{}$6.75(-8)$} & {\tiny{}$-$inf } & {\tiny{}inf } & {\tiny{} \ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.000$} & {\tiny{}$ 0.00$} & {\tiny{}$ 8.51$\tablefootmark{\dag}} & {\tiny{}$ 0.4027$} & {\tiny{}$6814$} & {\tiny{}$4.25$} & {\tiny{}$5.28(-8)$} & {\tiny{}$-1.07$} & {\tiny{}$-4.25$} & {\tiny{} \ldots } \tabularnewline
\end{longtable}
\tablefoot{
All masses are in solar masses; other quantities are in cgs units unless indicated otherwise. Values of $M_\text{env}$ are given in the format $n(m)=n\times10^m$ for concision.
\tablefoottext{a} {An `inf' or `$-$inf' indicates that one of the mass fractions involved has dropped below $10^{-12}$.}
\tablefoottext{b} {Most of the model sequences stop before the end of first dredge-up; the corresponding entries are marked `\ldots'.}
\tablefoottext{\textasteriskcentered} {Systems (with levitation) used in resolution tests (see Section~\ref{subsec:uncertainties}).}
\tablefoottext{\dag} {Models stop before reaching the minimum of $M_{\text{env}}$. The listed values are from the last converged model.}
\tablefoottext{\ddag} {Models reach $t=16\thinspace\text{Gyr}$ before reaching the minimum of $M_{\text{env}}$. The listed values are for the final model.}
}
\end{longtab}
\begin{longtab}
\begin{longtable}{ll>{\raggedright}p{1cm}l>{\raggedright}p{1.4cm}lllllll>{\raggedright}p{1.4cm}}
\caption{Results from simulations without atomic diffusion. Columns have the
same meaning as in Table~\ref{tab:Results_main}, except for the third
column, which here indicates whether thermohaline
mixing is included.\label{tab:Results_massive}} \\
\hline
\hline
\multirow{2}{*}{{\tiny{}$M_{2,\mathrm{i}}$}} & \multirow{2}{*}{{\tiny{}$\Delta M$}} & \multirow{2}{1cm}{{\tiny{}Th. mixing?}} & \multirow{2}{*}{{\tiny{}$m_{\text{thm}}$\tablefootmark{a}}} & \multirow{2}{1.6cm}{{\tiny{}$\text{[C/Fe]}$\tablefootmark{a} post-th.mix.}} & \multicolumn{7}{l}{{\tiny{}At the time when envelope mass is smallest}} & \multirow{2}{1.4cm}{{\tiny{}$\text{[C/Fe]}$\tablefootmark{b} post-FDU}}\tabularnewline
\cline{6-12}
& & & & & {\tiny{}$t$ (Gyr)} & {\tiny{}$\log(L/\mathrm{L}_{\sun})$} & {\tiny{}$T_{\mathrm{eff}}$} & {\tiny{}$\log g$} & {\tiny{}$M_{\mathrm{env}}$} & {\tiny{}{[}Fe/H{]}} & {\tiny{}{[}C/Fe{]}} & \tabularnewline
\hline
\endfirsthead
\caption{continued.}\\
\hline
\hline
\multirow{2}{*}{{\tiny{}$M_{2,\mathrm{i}}$}} & \multirow{2}{*}{{\tiny{}$\Delta M$}} & \multirow{2}{1cm}{{\tiny{}Th. mixing?}} & \multirow{2}{*}{{\tiny{}$m_{\text{thm}}$\tablefootmark{a}}} & \multirow{2}{1.6cm}{{\tiny{}$\text{[C/Fe]}$\tablefootmark{a} post-th.mix.}} & \multicolumn{7}{l}{{\tiny{}At the time when envelope mass is smallest}} & \multirow{2}{1.4cm}{{\tiny{}$\text{[C/Fe]}$\tablefootmark{b} post-FDU}}\tabularnewline
\cline{6-12}
& & & & & {\tiny{}$t$ (Gyr)} & {\tiny{}$\log(L/\mathrm{L}_{\sun})$} & {\tiny{}$T_{\mathrm{eff}}$} & {\tiny{}$\log g$} & {\tiny{}$M_{\mathrm{env}}$} & {\tiny{}{[}Fe/H{]}} & {\tiny{}{[}C/Fe{]}} & \tabularnewline
\hline
\endhead
\hline
\endfoot
\multicolumn{13}{c}{{\tiny{}$M_{1}=0.9\thinspace\mathrm{M}_{\sun}$}}\tabularnewline
{\tiny{}$0.700$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$13.98$} & {\tiny{}$ 0.6032$} & {\tiny{}$6719$} & {\tiny{}$4.05$} & {\tiny{}$1.59(-6)$} & {\tiny{}$-2.16$} & {\tiny{}$ 2.35$} & {\tiny{}$ 1.89$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.329$} & {\tiny{}$ 1.91$} & {\tiny{}$13.93$} & {\tiny{}$ 0.6405$} & {\tiny{}$6990$} & {\tiny{}$4.08$} & {\tiny{}$1.37(-8)$} & {\tiny{}$-2.14$} & {\tiny{}$ 1.91$} & {\tiny{}$ 1.83$} \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.100$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$11.29$} & {\tiny{}$ 0.6136$} & {\tiny{}$6734$} & {\tiny{}$4.05$} & {\tiny{}$1.15(-6)$} & {\tiny{}$-2.16$} & {\tiny{}$ 2.35$} & {\tiny{}$ 1.58$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.461$} & {\tiny{}$ 1.72$} & {\tiny{}$11.23$} & {\tiny{}$ 0.6221$} & {\tiny{}$7079$} & {\tiny{}$4.12$} & {\tiny{}$6.39(-9)$} & {\tiny{}$-2.14$} & {\tiny{}$ 1.72$} & {\tiny{}$ 1.59$} \tabularnewline
{\tiny{}$0.750$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$12.20$} & {\tiny{}$ 0.6829$} & {\tiny{}$7019$} & {\tiny{}$4.07$} & {\tiny{}$1.07(-8)$} & {\tiny{}$-2.16$} & {\tiny{}$ 2.35$} & {\tiny{}$ 1.85$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.372$} & {\tiny{}$ 1.91$} & {\tiny{}$12.05$} & {\tiny{}$ 0.6799$} & {\tiny{}$7359$} & {\tiny{}$4.16$} & {\tiny{}$2.84(-10)$} & {\tiny{}$-2.14$} & {\tiny{}$ 1.91$} & {\tiny{}$ 1.82$} \tabularnewline
\multicolumn{13}{c}{{\tiny{}$M_{1}=1.0\thinspace\mathrm{M}_{\sun}$}}\tabularnewline
{\tiny{}$0.650$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$14.18$} & {\tiny{}$ 0.5809$} & {\tiny{}$6841$} & {\tiny{}$4.08$} & {\tiny{}$1.59(-7)$} & {\tiny{}$-2.17$} & {\tiny{}$ 1.76$} & {\tiny{}$ 1.34$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.343$} & {\tiny{}$ 1.39$} & {\tiny{}$14.14$} & {\tiny{}$ 0.5877$} & {\tiny{}$6928$} & {\tiny{}$4.10$} & {\tiny{}$3.49(-8)$} & {\tiny{}$-2.15$} & {\tiny{}$ 1.39$} & {\tiny{}$ 1.32$} \tabularnewline
{\tiny{}$0.750$} & {\tiny{}$0.100$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$12.39$} & {\tiny{}$ 0.5819$} & {\tiny{}$6836$} & {\tiny{}$4.08$} & {\tiny{}$1.73(-7)$} & {\tiny{}$-2.17$} & {\tiny{}$ 1.76$} & {\tiny{}$ 1.05$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.455$} & {\tiny{}$ 1.21$} & {\tiny{}$12.35$} & {\tiny{}$ 0.5838$} & {\tiny{}$6953$} & {\tiny{}$4.11$} & {\tiny{}$2.53(-8)$} & {\tiny{}$-2.15$} & {\tiny{}$ 1.21$} & {\tiny{}$ 1.06$} \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.050$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$11.17$} & {\tiny{}$ 0.5820$} & {\tiny{}$6849$} & {\tiny{}$4.08$} & {\tiny{}$1.38(-7)$} & {\tiny{}$-2.17$} & {\tiny{}$ 1.76$} & {\tiny{}$ 0.78$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.543$} & {\tiny{}$ 1.02$} & {\tiny{}$11.16$} & {\tiny{}$ 0.5861$} & {\tiny{}$6979$} & {\tiny{}$4.11$} & {\tiny{}$1.73(-8)$} & {\tiny{}$-2.14$} & {\tiny{}$ 1.01$} & {\tiny{}$ 0.79$} \tabularnewline
{\tiny{}$0.700$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$12.04$} & {\tiny{}$ 0.6436$} & {\tiny{}$7198$} & {\tiny{}$4.13$} & {\tiny{}$6.53(-10)$} & {\tiny{}$-2.17$} & {\tiny{}$ 1.76$} & {\tiny{}$ 1.30$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.381$} & {\tiny{}$ 1.39$} & {\tiny{}$11.98$} & {\tiny{}$ 0.6391$} & {\tiny{}$7291$} & {\tiny{}$4.16$} & {\tiny{}$3.65(-10)$} & {\tiny{}$-2.15$} & {\tiny{}$ 1.39$} & {\tiny{}$ 1.30$} \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.100$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$10.31$} & {\tiny{}$ 0.6459$} & {\tiny{}$7190$} & {\tiny{}$4.13$} & {\tiny{}$6.91(-10)$} & {\tiny{}$-2.17$} & {\tiny{}$ 1.76$} & {\tiny{}$ 1.01$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.511$} & {\tiny{}$ 1.20$} & {\tiny{}$10.24$} & {\tiny{}$ 0.6369$} & {\tiny{}$7321$} & {\tiny{}$4.17$} & {\tiny{}$3.14(-10)$} & {\tiny{}$-2.15$} & {\tiny{}$ 1.20$} & {\tiny{}$ 1.02$} \tabularnewline
{\tiny{}$0.850$} & {\tiny{}$0.050$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$ 9.17$} & {\tiny{}$ 0.6493$} & {\tiny{}$7193$} & {\tiny{}$4.12$} & {\tiny{}$6.71(-10)$} & {\tiny{}$-2.17$} & {\tiny{}$ 1.76$} & {\tiny{}$ 0.74$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.593$} & {\tiny{}$ 1.02$} & {\tiny{}$ 9.10$} & {\tiny{}$ 0.6384$} & {\tiny{}$7350$} & {\tiny{}$4.17$} & {\tiny{}$2.74(-10)$} & {\tiny{}$-2.14$} & {\tiny{}$ 1.02$} & {\tiny{}$ 0.75$} \tabularnewline
{\tiny{}$0.750$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$10.20$} & {\tiny{}$ 0.6613$} & {\tiny{}$7544$} & {\tiny{}$4.22$} & {\tiny{}$1.52(-10)$} & {\tiny{}$-2.17$} & {\tiny{}$ 1.76$} & {\tiny{}$ 1.27$} \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.433$} & {\tiny{}$ 1.38$} & {\tiny{}$10.13$} & {\tiny{}$ 0.6538$} & {\tiny{}$7631$} & {\tiny{}$4.25$} & {\tiny{}$1.21(-10)$} & {\tiny{}$-2.15$} & {\tiny{}$ 1.38$} & {\tiny{}$ 1.27$} \tabularnewline
\multicolumn{13}{c}{{\tiny{}$M_{1}=1.25\thinspace\mathrm{M}_{\sun}$}}\tabularnewline
{\tiny{}$0.700$} & {\tiny{}$0.200$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$10.18$} & {\tiny{}$ 0.6008$} & {\tiny{}$6677$} & {\tiny{}$4.04$} & {\tiny{}$3.31(-6)$} & {\tiny{}$-2.15$} & {\tiny{}$ 2.57$} & {\tiny{}\ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.194$} & {\tiny{}$ 2.03$} & {\tiny{}$10.08$} & {\tiny{}$ 0.6829$} & {\tiny{}$6963$} & {\tiny{}$4.03$} & {\tiny{}$2.76(-9)$} & {\tiny{}$-2.14$} & {\tiny{}$ 2.03$} & {\tiny{}$ 1.94$} \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.100$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$ 9.24$} & {\tiny{}$ 0.6103$} & {\tiny{}$6688$} & {\tiny{}$4.04$} & {\tiny{}$2.67(-6)$} & {\tiny{}$-2.15$} & {\tiny{}$ 2.57$} & {\tiny{}\ldots } \tabularnewline
{\tiny{} } & {\tiny{} } & {\tiny{}yes} & {\tiny{}$0.324$} & {\tiny{}$ 1.83$} & {\tiny{}$ 9.02$} & {\tiny{}$ 0.6225$} & {\tiny{}$7091$} & {\tiny{}$4.13$} & {\tiny{}$6.12(-9)$} & {\tiny{}$-2.14$} & {\tiny{}$ 1.82$} & {\tiny{}$ 1.75$} \tabularnewline
\multicolumn{13}{c}{{\tiny{}no accretion}}\tabularnewline
{\tiny{}$0.750$} & {\tiny{}$0.000$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$15.44$} & {\tiny{}$ 0.4033$} & {\tiny{}$6509$} & {\tiny{}$4.12$} & {\tiny{}$2.31(-5)$} & {\tiny{}$-2.14$} & {\tiny{}$ 0.00$} & {\tiny{}\ldots } \tabularnewline
{\tiny{}$0.800$} & {\tiny{}$0.000$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$12.17$} & {\tiny{}$ 0.5013$} & {\tiny{}$6741$} & {\tiny{}$4.11$} & {\tiny{}$1.16(-6)$} & {\tiny{}$-2.14$} & {\tiny{}$ 0.00$} & {\tiny{}$-0.01$} \tabularnewline
{\tiny{}$0.850$} & {\tiny{}$0.000$} & {\tiny{}no } & {\tiny{} \ldots } & {\tiny{} \ldots } & {\tiny{}$ 9.75$} & {\tiny{}$ 0.5897$} & {\tiny{}$7047$} & {\tiny{}$4.12$} & {\tiny{}$7.61(-9)$} & {\tiny{}$-2.14$} & {\tiny{}$ 0.00$} & {\tiny{}$-0.03$} \tabularnewline
\end{longtable}
\tablefoot{
Values of $M_\text{env}$ are given in the format $n(m)=n\times10^m$ for concision.
\tablefoottext{a} {Undefined for models without thermohaline mixing.}
\tablefoottext{b} {Some of the models stop earlier.}
}
\end{longtab}
\begin{figure}
\includegraphics[width=1\columnwidth]{fig6}
\caption{Similar to Figs.~\ref{fig:cfe_gsra_0.8} and \ref{fig:cfe_gsra_0.85} but here the lines correspond to CEMP-\emph{s} models of $0.9~\mathrm{M}_{\sun}$ without diffusion. Thick lines are models with thermohaline mixing, whereas thin lines are those without.\label{fig:cfe_notm_0.9_0.95}}
\end{figure}
Last, there are some objects with $[\mathrm{C/Fe}]\gtrsim2.5$ whose surface gravities ($\log g\lesssim3$) imply that they are close to the end of FDU if not past it. How might we explain such objects, assuming that they were polluted by an AGB companion? Since carbon is not produced in the star, it is hard to imagine how the surface carbon abundance could be above the one in the accreted material. This limits the possible primaries to those that are able to produce at least this much carbon. From the models of \citet{2012ApJ...747....2L} these are AGB stars with $1.25\lesssim M_{1}/\mathrm{M}_{\sun}\lesssim3$. Lower-mass AGB stars do not produce enough carbon; and higher-mass stars convert the carbon into nitrogen in the lower part of their convective envelope (a process known as hot-bottom-burning). Moreover, a large amount of mass must be transferred because the combined carbon reduction from thermohaline mixing and FDU can only be about $0.5~\text{dex}$ at most \citep[the maximum $\text{[C/Fe]}$ given by the AGB models of][is about $+3.2$]{2012ApJ...747....2L}. This implies an accreted-to-initial mass ratio, $\Delta M/M_{2,\text{i}}$, between about $0.25$, if all the mixing occurs during FDU (where the envelope mass grows to about $0.5~\mathrm{M}_{\sun}$), and $0.5$, if thermohaline mixing is efficient (as it will be in the absence of some inhibitory process because of how unevolved the progenitor of the CEMP-\emph{s} star must be). The respective accreted-to-final mass ratios, $\Delta M/M_{2,\text{f}}$, are between $0.2$ and $0.35$. Hence, the progenitor systems of the most carbon-rich evolved stars must be small mass-ratio binaries in which a lot of mass has been transferred to the secondary. It may be difficult to account for such stars without also predicting too many low-luminosity carbon-rich stars from cases where less mass has been transferred.
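As a quick check of these numbers, note that the accreted-to-final ratio follows from the accreted-to-initial one through $\Delta M/M_{2,\text{f}}=(\Delta M/M_{2,\text{i}})/(1+\Delta M/M_{2,\text{i}})$, since $M_{2,\text{f}}=M_{2,\text{i}}+\Delta M$; the minimal Python sketch below (pure bookkeeping, not part of our modelling code) reproduces the quoted range.
\begin{verbatim}
# Minimal bookkeeping sketch: convert the accreted-to-initial mass ratios
# quoted above into accreted-to-final ratios, using M_f = M_i + Delta M.
def accreted_to_final(ratio_initial):
    """Delta M / M_f, given Delta M / M_i."""
    return ratio_initial / (1.0 + ratio_initial)

for r_i in (0.25, 0.5):
    print(r_i, round(accreted_to_final(r_i), 2))
# -> 0.2 and 0.33, close to the 0.2--0.35 range quoted in the text.
\end{verbatim}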
\section{Discussion\label{subsec:uncertainties}}
The results presented in this paper may lead one to wonder whether the abundance anomalies predicted by our diffusion models are overestimated. We have run multiple tests to address this concern. First, we have tried to reproduce the results of \citet{2002ApJ...568..979R}. In particular, we have compared the abundance evolution in a $M=0.8~\mathrm{M}_{\sun}$ model with an initial composition taken from their table~1 ($Z=1.7\times10^{-4}$; $[\mathrm{Fe/H}]=-2.31$). We obtain good agreement in terms of the temperature and luminosity at the turn-off although our model is longer-lived by about $0.5~\text{Gyr}$. The abundance anomalies from settling and/or levitation of He, C, N, and Fe agree within 0.1~dex. For other elements the abundances differ by $0.3\text{--}0.4~\text{dex}$. Given that in our model the convective envelope mass is smaller by about a factor of two throughout most of the evolution (and the minimum size of the convective envelope in our model is only about $3.7\times10^{-6}~\mathrm{M}_{\sun}$ whereas \citet{2002ApJ...568..979R} get about $2.5\times10^{-5}~\mathrm{M}_{\sun}$), these differences are plausible. Judging from their figure~2, a smaller envelope mass in their model would lead to greater over-abundances of O, Ne, and Mg, and a smaller over-abundance of Si. All of these changes would reduce the discrepancies between their model and ours.
Second, we have tested whether the large abundance anomalies predicted by our models stem specifically from our simplified treatment of diffusion. For this purpose we have ported into our code the relevant parts from the code used by \citet[priv. comm.]{2010A&A...511A..87H}\footnote{In their code the full set of Burgers flow equations \citep{1969fecg.book.....B} is solved and their treatment of diffusion is thus valid for arbitrary compositions.} and run a $M=0.8~\mathrm{M}_{\sun}$, $Z=10^{-4}$ diffusive model without radiative levitation (with the ZAMS chemical composition from Table~\ref{tab:xinp}). In this model diffusion reduces the helium and metal abundances in the envelope on much shorter timescales. The same $M=0.8~\mathrm{M}_{\sun}$ model run with the MESA code \citep{2011ApJS..192....3P,2015ApJS..220...15P} yields similar results, which is reassuring given that the treatment of diffusion is based on the work of \citet{1969fecg.book.....B} and \citet{1994ApJ...421..828T} in both MESA and \citet{2010A&A...511A..87H}. The abundances in the MESA model are depleted to a much greater degree: 5~dex for helium (compared to our 2.5~dex) and 5--6~dex (3--4~dex) for metals, even though the envelope masses are in good agreement (the minimum envelope mass is $4.5\times10^{-7}~\mathrm{M}_{\sun}$ in STARS and $6.0\times10^{-7}~\mathrm{M}_{\sun}$ in MESA). The conclusion from all three of these tests is the same -- if anything, the diffusion models presented here underestimate the amount of diffusion that we would get from a more rigorous treatment.
We have performed spatial resolution tests in three systems (denoted by an asterisk in Table~\ref{tab:Results_main}) by varying the default number of meshpoints (999) by a factor of two. All models give consistent results (within a couple of percent) in terms of the global properties, depth of thermohaline mixing, abundance anomalies after turn-off, and post-FDU abundances. The size of the convective envelope at minimum is consistent within ten percent. We thus conclude that the models are sufficiently resolved.
Our approach of interpolating the opacities and accelerations from tables computed during the run time necessitates the introduction of some numerical parameters. These parameters control mainly the amount by which some species has to change to warrant the computation of a new table. We have done extensive tests to make sure that our results do not depend on the choice of these parameters, i.e. the tables are computed often enough. As stated earlier, we set the temperature above which we use the old opacity tables that include conduction \citep{2004MNRAS.348..201E} to $\log T=7.3$. We have since included the conductive opacities from \citet{2007ApJ...661.1094C} in our code and made sure that use of OP opacities above $\log T=7.3$ would have virtually no effect on any of our results.
The size of the convective envelope throughout the evolution depends somewhat on the choice of the mixing-length parameter with larger values resulting in more massive envelopes. Our value, $\alpha_{\mathrm{MLT}}=2.0$, is based on a calibration between the radius, effective temperature, and luminosity of a $Z=0.0142$, $M=1~\mathrm{M}_{\sun}$ diffusive model with OP opacities at an age of 4.56~Gyr and the Sun \citep[our $\alpha_\text{MLT}$ value is slightly smaller than the value of 2.025 presented by][because of the different opacities]{2016A&A...586A.119S}. Stars of masses, metallicities, and evolutionary stages different from the Sun should have other values of $\alpha_{\text{MLT}}$ \citep[e.g. ][]{2014MNRAS.445.4366T} but meaningful quantitative predictions are virtually impossible. In a $0.8~\mathrm{M}_{\sun}$, $Z=10^{-4}$ model increasing or decreasing $\alpha_{\mathrm{MLT}}$ by 5\% accordingly changes the envelope mass by about 50\%, which, given that $M_{\mathrm{env}}<10^{-6}~\mathrm{M}_{\sun}$, translates into substantial changes in the surface abundances (Fig.~\ref{fig:ab_vs_menv}). Since theoretical models suggest that at lower metallicities one should use lower $\alpha_{\mathrm{MLT}}$ values \citep[at least for $-0.6<\text{[Fe/H]}<+0.3$;][]{2012ApJ...755L..12B}, it is unlikely that we have overestimated the importance of diffusion by underestimating the value of $\alpha_{\mathrm{MLT}}$.
\subsection{Missing mixing processes\label{subsec:missing-phys}}
The strong abundance anomalies predicted by diffusive models are not observed in CEMP stars. This suggests that in real stars atomic diffusion is inhibited by some physical process that we have not included in our models. While we leave a more in-depth investigation of possible culprits to future work, we examine a simple test case here. We add a ``turbulent'' diffusion term to $D_{\mathrm{mix}}$ in Eq.~\eqref{eq:dxdt} as proposed by \citet{2000ApJ...529..338R,2005ApJ...619..538R}:
\begin{equation}
D_{\mathrm{T}}=D_{0}D_{\mathrm{He}}\left(T_{0}\right)\left[\frac{\rho}{\rho\left(T_{0}\right)}\right]^{-3}.\label{eq:dturb}
\end{equation}
This type of parametrization extends the surface mixing region down to where the local temperature is somewhat larger than $T_{0}$. \citet{2005ApJ...619..538R} find that observations of lithium abundances in population~II stars require $T_{0}\approx10^{6}~\text{K}$. Similar values have been found to reproduce the small systematic abundance differences between turn-off and giant stars in old globular clusters \citep{2006Natur.442..657K,2012ApJ...753...48N,2014A&A...567A..72G}. Nevertheless, the ad-hoc nature of this prescription should be kept in mind. With this prescription we can primarily constrain the depth to which some form of mixing must occur to reconcile the models with observations, but not the physical processes responsible for this mixing.
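For reference, Eq.~\eqref{eq:dturb} is straightforward to evaluate once the stellar model provides the density and the helium diffusion coefficient at the layer where $T=T_{0}$; the schematic sketch below (in Python, with illustrative placeholder values that are not taken from our models) shows how steeply the turbulent term falls off towards denser, deeper layers.
\begin{verbatim}
# Schematic evaluation of the turbulent diffusion coefficient defined
# above: D_T = D0 * D_He(T0) * [rho / rho(T0)]**(-3).
# D_He_T0 and rho_T0 must be read off the stellar model at T = T0;
# the numbers below are illustrative placeholders only.
def D_turb(rho, D0, D_He_T0, rho_T0):
    return D0 * D_He_T0 * (rho / rho_T0) ** -3

rho_T0, D_He_T0, D0 = 1.0e-2, 1.0e4, 400.0   # placeholder values
for rho in (1.0e-2, 3.0e-2, 1.0e-1):         # moving inwards (denser layers)
    print(rho, D_turb(rho, D0, D_He_T0, rho_T0))
# D_T drops by a factor of 27 for every factor of 3 increase in density,
# so the turbulent term mainly affects layers not much deeper than T ~ T0.
\end{verbatim}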
We test the effect of turbulent diffusion on the evolution of a $0.75\mathrm{~M}_{\sun}$ star accreting $0.1~\mathrm{M}_{\sun}$ of material from a $1~\mathrm{M}_{\sun}$ primary. While in the absence of turbulence the resulting $0.85~\mathrm{M}_{\sun}$ model shows extremely large abundance anomalies ($[\mathrm{Fe/H}]>-0.5$ with levitation and $[\mathrm{Fe/H}]<-9.2$ without levitation), turbulence with $D_{0}=400$ \citep[as used by][]{2005ApJ...619..538R} and $\log T_{0}=6.0$ completely negates them (Fig.~\ref{fig:turb-test}). Even much smaller turbulent diffusion coefficients (e.g. $D_{0}=1$) suffice to erase the anomalies. Indeed, the key parameter here is $T_{0}$ -- as long as the mixing region remains large enough ($\log T_{0}\gtrsim5.5$ or $M_{\mathrm{env}}\gtrsim10^{-4}~\mathrm{M}_{\sun}$), atomic diffusion is strongly suppressed. In terms of global properties, models with more pervasive turbulence are hotter and therefore more closely resemble models without diffusion. This can be seen from comparing the turbulent models with the model with thermohaline mixing only (solid grey line) in Fig.~\ref{fig:turb-test_hrd}.\footnote{The model with $D_{0}=400$ and $\log T_{0}=6.0$ is quite different from the basic model with no diffusion or thermohaline mixing, although their tracks almost coincide. In the basic model the accreted material remains on the surface of the star. In the turbulent model the material is diluted but not as much as in a model without diffusion because of the stabilizing $\mu$-gradient in layers where $T\gtrsim T_{0}$.}
\begin{figure}
\subfloat{\includegraphics[width=1\columnwidth]{fig7a}\label{fig:turb-test_C}}
\subfloat{\includegraphics[width=1\columnwidth]{fig7b}\label{fig:turb-test_Fe}}
\subfloat{\includegraphics[width=1\columnwidth]{fig7c}\label{fig:turb-test_hrd}}
\caption{Effect of turbulence on the evolution of carbon (a) and iron (b) mass fractions, and the HRD (c) of a $0.75\mathrm{~M}_{\sun}$ star accreting $0.1~\mathrm{M}_{\sun}$ of material from a $1~\mathrm{M}_{\sun}$ primary. Large abundance variations are expected after accretion in absence of turbulence (black and red dashes). Inclusion of turbulence as in Eq.~\eqref{eq:dturb} erases all abundance signatures of atomic diffusion on the post-mass-transfer main sequence (dark blue) even for small turbulent diffusion coefficients (orange). Only when the temperature parameter is reduced to $\log T_{0}\lesssim5.5$ do the abundance variations start to reappear (light blue and magenta). The thermohaline-mixing-only model (solid grey) is hotter than the models with turbulence because in the latter diffusion still modifies the layers with $T\gtrsim T_{0}$ and thermohaline mixing is not as deep. For clarity, the HRD only shows the post-mass-transfer part of the evolutionary tracks.\label{fig:turb-test}}
\end{figure}
Figure~\ref{fig:turb-test} also shows that while the turbulent diffusion prescription of \citet{2005ApJ...619..538R} can inhibit diffusion in the outer layers of a star, it has almost no influence on thermohaline mixing. This is not to say, however, that some form of turbulence \citep[e.g. rotationally driven horizontal turbulence;][]{2008ApJ...684..626D} could not inhibit thermohaline mixing as well. Rather, investigating this requires treating the different processes together instead of considering them as independent and simply adding the individual diffusion coefficients \citep{2013A&A...553A...1M}. Such a treatment is beyond the scope of this paper.
\subsection{Mass loss\label{subsec:Mass-loss}}
So far we have ignored mass loss. Simple estimates imply that it may be too important to neglect. With a mass-loss rate comparable to the current solar value, $2\text{--}3\times10^{-14}~\mathrm{M}_{\sun}\thinspace\text{yr}^{-1}$ \citep{1998ASPC..154..131W}, a star will lose more than $10^{-4}~\mathrm{M}_{\sun}$ over the roughly $10^{10}$ years it spends on the main sequence. This is a very large amount compared to the envelope masses of our models (Fig.~\ref{fig:ab_vs_menv}) and could greatly interfere with atomic diffusion.
In the absence of mass loss the convective envelope of a star moves outwards in mass until the beginning of FDU. But when mass loss erodes the surface, this outward movement is halted and eventually reversed while the star is still on the main sequence. As the envelope now moves inwards, the surface abundances reflect the composition of the progressively deeper layers that get exposed. Qualitatively, if the mass-loss rate is sufficiently high, the removal of the outer layers is so fast that diffusion has not had enough time to modify the newly exposed layers and only small abundance anomalies can develop \citep{1995ApJ...438L..87S}. On the other hand, mass-loss rates below some limit must be negligible and have essentially no effect on the surface abundances.
We now estimate what mass-loss rates are necessary to prevent the development of abundance anomalies and what mass-loss rates are negligible. For simplicity, we use the \citet{1975MSRSL...8..369R} mass-loss formula with different factors $\eta$:
\begin{equation}
\dot{M}=-4\times10^{-13}\eta\frac{LR}{M}\left(\frac{LR}{M}\right)_{\sun}^{-1}~\mathrm{M}_{\sun}\thinspace\text{yr}^{-1}.\label{eq:RML}
\end{equation}
Here we only consider metal-poor $0.8~\mathrm{M}_{\sun}$ and $0.85~\mathrm{M}_{\sun}$ models without accretion. We find that if $\eta\gtrsim0.1$, the effects of atomic diffusion are almost entirely erased in both models (Fig.~\ref{fig:mass-loss}). This translates to mass-loss rates of a few times $10^{-14}\text{--}10^{-13}~\mathrm{M}_{\sun}\thinspace\text{yr}^{-1}$ throughout the main sequence. In contrast, when the mass-loss rate falls below about $10^{-16}~\mathrm{M}_{\sun}\thinspace\text{yr}^{-1}$, the abundance evolution proceeds as in models without mass loss. Intermediate mass-loss rates result in less extreme but non-negligible abundance variations. Note that the lost material is assumed to have the same composition as the surface at that time. Depending on the mass-loss rate and mechanism, some elements may be lost more readily than others, leading to more complicated abundance variations \citep[e.g.][]{1987ApJ...322..302M,2010A&A...521A..62V}.
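For orientation, Eq.~\eqref{eq:RML} can be evaluated with $L$, $R$, and $M$ in solar units; the sketch below (with illustrative stellar parameters of our choosing, roughly appropriate for a metal-poor dwarf near $\log L\approx0$) reproduces the order of magnitude of the rates quoted above.
\begin{verbatim}
# Reimers-type mass-loss rate from the formula above, with L, R and M
# in solar units; the return value is in Msun/yr (negative = mass loss).
def mdot_reimers(eta, L, R, M):
    return -4.0e-13 * eta * L * R / M

# Illustrative parameters (our choice) for a ~0.8 Msun dwarf at log L ~ 0:
for eta in (0.1, 1.0):
    print(eta, mdot_reimers(eta, L=1.0, R=1.0, M=0.8))
# eta ~ 0.1 gives ~5e-14 Msun/yr, i.e. within the few times
# 1e-14--1e-13 Msun/yr range quoted in the text.
\end{verbatim}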
\begin{figure*}
\subfloat{\includegraphics[width=1\columnwidth]{fig8a}\label{fig:mass-loss_0.80_Menv-vs-L}}\hspace{\columnsep}\subfloat{\includegraphics[width=1\columnwidth]{fig8b}\label{fig:mass-loss_0.85_Menv-vs-L}}
\subfloat{\includegraphics[width=1\columnwidth]{fig8c}\label{fig:mass-loss_0.80_FeH-vs-L}}\hspace{\columnsep}\subfloat{\includegraphics[width=1\columnwidth]{fig8d}\label{fig:mass-loss_0.85_FeH-vs-L}}
\subfloat{\includegraphics[width=1\columnwidth]{fig8e}\label{fig:mass-loss_0.80_CFe-vs-L}}\hspace{\columnsep}\subfloat{\includegraphics[width=1\columnwidth]{fig8f}\label{fig:mass-loss_0.85_CFe-vs-L}}
\caption{Envelope mass and abundance evolution in metal-poor $0.8~\mathrm{M}_{\sun}$ (left panels) and $0.85~\mathrm{M}_{\sun}$ (right panels) models with different Reimers-type mass-loss rates. The values in the parentheses are the mass-loss rates at $\log L\approx0$; the average mass-loss rate during the main sequence is about 50\% higher as a result of the mass, radius, and luminosity scaling in Reimers' law.\label{fig:mass-loss}}
\end{figure*}
Is it reasonable to expect somewhat super-solar mass-loss rates from CEMP-\emph{s} stars on the main sequence? That is difficult to say. The form of mass loss in these stars is presumably the same as in normal low-mass main sequence stars -- as magnetized winds originating in a corona that is heated by turbulent dissipation of Alfv\'en waves \citep[e.g.][]{2007ApJ...659.1592S,2011ApJ...741...54C}. While mass-loss rates in these stars are too small to be directly observable, indirect measurements based on the interaction of the wind with the interstellar medium yield values within about an order of magnitude of the solar mass-loss rate \citep{2002ApJ...574..412W,2005ApJ...628L.143W}. Various theoretical models also normally predict mass-loss rates in this range \citep[e.g.][]{2007A&A...463...11H,2011ApJ...741...54C,2015A&A...577A..28J}. While most of these models concern stars with near solar metallicity, to first order the mass-loss rates are not expected to depend on metallicity. Note that the mass-loss rate in our test models increases over time because of the $LR/M$ scaling: there is no rotational dependence in Reimers' mass-loss law, which is reasonable given that it was derived from observations of red giants (i.e. slow rotators). But CEMP-\emph{s} stars could have high rotation rates after the mass transfer phase, if the transferred material carries with it some angular momentum. Over most of the post-mass-transfer main sequence evolution their mass-loss rate would then be decreasing as the wind carried away the excess angular momentum \citep[e.g. for solar-type stars][give $\dot{M}\sim\Omega^{1.3}\sim t^{-0.75}$]{2015A&A...577A..28J}, which should result in higher average mass-loss rates. This scenario may even have further complications aside from mass loss, because rapid rotation could directly lead to enhanced chemical mixing in the star.
In any case, the mass-loss rates of CEMP-\emph{s} dwarfs are likely at least a few times $10^{-15}~\mathrm{M}_{\sun}\text{yr}^{-1}$ throughout their evolution. Such mass-loss rates should at least moderate the effects of atomic diffusion and help explain why turn-off stars with extreme abundance anomalies are not observed. Additionally, or alternatively, some form of turbulence might play a role. As discussed by \citet{2010A&A...521A..62V}, two models, one with mass loss and one with turbulent diffusion, that predict the same surface abundances do not necessarily have the same internal abundance profiles. In a model with turbulence the abundance profiles are flat down to some depth (e.g. determined by $T_{0}$ in Eq.~\eqref{eq:dturb}). In a model with mass loss no mixing is enforced outside the convective region so the abundance profiles will not be flat unless the mass-loss rate is large ($\dot{M}\gtrsim10^{-13}~\mathrm{M}_{\sun}\thinspace\text{yr}^{-1}$). Asteroseismic measurements sensitive to the internal structure of a star might in principle be able to distinguish between the two types of models \citep[e.g.][]{2012ApJ...746...16V}. However, in practice the difference in the internal structure might be too small at these low metallicities for such measurements to be possible.
\section{\label{sec:Conclusions}Conclusions}
In this paper we present stellar evolution models of \emph{s}-process-rich carbon-enhanced metal-poor (CEMP-\emph{s}) stars under the assumption that they form when a low-mass metal-poor star accretes material from an AGB companion. Motivated by results from binary population synthesis calculations of \citet{2015A&A...581A..62A}, our models cover current CEMP-\emph{s} star masses between $0.8$ and $0.95~\mathrm{M}_{\sun}$ deriving from initial secondary masses between $0.6$ and $0.8~\mathrm{M}_{\sun}$, and initial primary masses between $0.9$ and $1.5~\mathrm{M}_{\sun}$. Our main focus is the post-mass-transfer evolution of the surface abundances of carbon and iron driven by thermohaline mixing and atomic diffusion, including radiative levitation.
Our simulations with atomic diffusion indicate that CEMP-\emph{s} stars should show large surface abundance variations on the main sequence, particularly as they approach the turn-off. This is because they have very shallow convective envelopes ($M_{\mathrm{env}}\lesssim10^{-4}~\mathrm{M}_{\sun}$ throughout most of the evolution and perhaps as little as $10^{-8}~\mathrm{M}_{\sun}$ near the turn-off) and, therefore, short diffusion timescales. In stars whose envelope masses fall below about $10^{-5}~\mathrm{M}_{\sun}$ (which happens in most of our models, including nearly all those with $M>0.8~\mathrm{M}_{\sun}$) the abundances should vary by a factor of about ten. This factor rapidly increases with decreasing envelope mass resulting in unrealistic abundances (Fig.~\ref{fig:ab_vs_menv}). But even though our treatment of diffusion is not as detailed as in some other works, we do not find evidence that the surface abundance variations predicted by our models are exaggerated.
Radiative levitation has only a minor influence on carbon but a large one on iron. Whereas in diffusive models without levitation the metallicity ({[}Fe/H{]}) of the star decreases until first dredge-up, in models with levitation the metallicity can increase as the star evolves along the main sequence. Consequently, models with levitation predict reduced carbon enhancements ({[}C/Fe{]}) around the turn-off. This implies a systematic difference between the {[}C/Fe{]} values of stars near the turn-off and those at the beginning of FDU. Unfortunately, whether there is any such difference is difficult to establish even from the largest homogeneous sample of observational data \citep[from SDSS;][]{2013AJ....146..132L}. Any such difference, however, would clearly be smaller than predicted by most of our models (Fig.~\ref{fig:feh_ch_cfe_gsra_0.8_0.85}). And while some of our $0.8~\mathrm{M}_{\sun}$ models do predict only a small variation in {[}C/Fe{]}, at ages typical of metal-poor halo stars most of them are still relatively unevolved and should be visible as carbon-rich low-luminosity objects. Very few such objects have been observed.
Although they too would predict many low-luminosity carbon-rich stars, models without atomic diffusion are generally much more successful at covering the range of observations (Fig.~\ref{fig:cfe_notm_0.9_0.95}). We thus conclude that atomic diffusion cannot be acting alone near the surface convection zone of real CEMP-\emph{s} stars and needs to be largely counteracted by some other physical process(es). For example, a turbulent diffusive process such as that proposed by \citet{2002ApJ...568..979R} can suppress surface abundance variations almost entirely by extending the mixing region to depths where the temperature exceeds about $10^{6}~\text{K}$ (about $10^{-4}~\mathrm{M}_{\sun}$ from the surface; Fig.~\ref{fig:turb-test}). Additionally, at least the most extreme abundance variations (corresponding to stars with the smallest envelopes) should also be moderated by mass loss. In fact, a mass-loss rate of a few times the current solar value sustained throughout the evolution could on its own prevent substantial abundance anomalies from developing (Fig.~\ref{fig:mass-loss}).
While this work has primarily dealt with carbon and iron, given how divergent their abundance evolution is expected to be, these conclusions should extend to other elements, including those produced by neutron capture. The common assumption that the material coming from the AGB companion has simply been diluted by some factor after accretion onto the CEMP-\emph{s} star is likely not too far from the truth.
\begin{acknowledgements}
We thank the anonymous referee for comments that have helped improve the clarity of the paper. We thank Carlo Abate for help with the model grid selection, constructive comments on this manuscript, and many useful discussions. We also thank Young Sun Lee and Timothy Beers for sharing the SDSS CEMP data, Haili Hu for sharing her code, and Evert Glebbeek and Olivier Richard for useful discussions. RJS is the recipient of a Sofja Kovalevskaja Award from the Alexander von Humboldt Foundation.
\end{acknowledgements}
\bibliographystyle{aa}
|
1,314,259,995,626 | arxiv | \subsection*{I. Interacting soft gluons in the small-$x_B$
region of DIS}
A number of striking phenomena have
been observed in recent
deep-inelastic electron-proton scattering experiments
in the small-$x_B$ region. In particular it is seen,
that the contribution of the gluons dominates\cite{r1},
and that large-rapidity-gap (LRG) events exist\cite{r2}.
The latter shows that the virtual photons in such processes may
encounter colorless objects originating from the proton.
The existence of LRG events in these scattering processes
has attracted much attention, and
there has been much discussion\cite{r2,r3,r4,r5,r6,r7,r8}
on problems associated with the origin and/or the
properties of such colorless objects.
Reactions in which ``exchange'' of such objects dominate
are known in the literature\cite{r3,r4,r5} as
``diffractive scattering processes''.
While the concepts and methods used by different
authors are in general very much different from one another,
all the authors in describing such processes
(experimentalists as well as theorists)
seem to agree on the following\cite{r5}
(see also Refs.
[\ref{r2}--\ref{r4}, \ref{r6}--\ref{r8}]):
(a) Interacting soft gluons play a dominating role in
understanding the phenomena in the small-$x_B$ region in
general, and in describing the properties of LRG events in particular.
(b) Perturbative QCD should be, and can be, used to describe the
LRG events associated with high transverse-momentum ($p_\perp$)
jets which have been observed at HERA\cite{r6} and at the Tevatron\cite{r7}.
Such events are, however, rather rare.
For the description of the bulk of LRG events, concepts
and methods beyond the perturbative QCD (for example,
Pomeron Models\cite{r4} based on Regge Phenomenology) are needed.
The question, whether or how
perturbative QCD plays a role in such non-perturbative approaches does
not have a unique answer.
In a previous paper\cite{r8}, we suggested that the observed dominance of
interacting soft gluons\cite{r1} and the existence of LRG events\cite{r2}
in the small-$x_B$
region are closely related to each other, and that the interacting soft
gluons may form colored
and colorless systems --- which we called
``gluon clusters''. Such gluon clusters have finite lifetimes
which (in the small-$x_B$ region) can be of the same order as the
interaction time $\tau_{\rm int}$ --- the time-interval in which the
virtual photon $\gamma^\star$ ``sees''
the cluster in the sense that it is absorbed by
the charged
constituents of
the latter.
In Ref.[\ref{r8}]
the lifetime of such a
gluon-cluster was {\em estimated} by using the uncertainty principle and
kinematical considerations --- without any {\em dynamical} input.
In analogy to the hadron structure function, a quantity $F_2^c$ which we called
``the structure function of the gluon cluster $c_0^\star$'' was introduced,
and then it was set to be a constant --- in accordance with
the purpose of that paper which is to discuss the {\em kinematical}
aspects of a statistical approach.
After having seen what phase-space considerations can, and cannot do,
we decided to go one step further, and study {\em the dynamical aspects}
of the interacting soft-gluons in these scattering processes.
In doing so,
we realized that the system of interacting
soft-gluons is
extremely complicated. It is not only too complicated (at least for us)
to take
the details of local interactions into account
(for example by describing
the reaction mechanisms in terms of Feynman diagrams),
but also too complicated to apply well-known concepts
and methods in conventional equilibrium statistical mechanics.
In fact, having the above-mentioned empirical facts about LRG events
and the basic properties of gluons prescribed by the QCD-Lagrangian
in mind, we are readily led to the following picture:
Such a system is
{\it an open dynamical system with many degrees of freedom},
and it is in general {\em far from equilibrium}.
This is because, once we accept that the colorless object (which the
virtual photon encounters) is a system of soft gluons whose interactions
are not negligible, we are also forced to accept
that, in such a system,
gluons can be emitted and absorbed
by the members of the system as well as
by gluons and/or quarks and antiquarks outside the system
(we note in particular that, since the gluons are soft,
their density in space is high,
and the distances between the interacting gluons are in general
not short, the ``running-coupling-constant''
can be very large). Furthermore, since in general more than one
gluon can be emitted or absorbed
by the members of the system, the system itself can remain to be a
color-singlet. This means in particular that, in such a system,
{\it neither the
number of gluons nor the energy of the system can be a
conserved quantity}.
Do we see comparable open, dynamical, complex systems in Nature?
If yes, what
are the characteristic
features of such systems?
\subsection*{II. Characteristic features of open dynamical complex systems}
Open dynamical complex systems are not difficult to find in Nature ---
at least not in the macroscopic world! Such systems have been studied,
and in particular the following have been observed by
Bak, Tang and Wiesenfeld (BTW) some time ago\cite{r9}:
Open
dynamical systems with many degrees of freedom may
evolve to
self-organized critical states which lead to
fluctuations extending over all length- and
time-scales, and that
such fluctuations manifest themselves in form of
spatial and temporal power-law scaling behaviors
showing properties
associated with fractal
structure and flicker noise respectively.
BTW\cite{r9} and many other
authors\cite{r10} proposed, and demonstrated by
numerical simulations, the following: Dynamical systems with local
interacting degrees of freedom can evolve into self-organized
structures of states which are barely stable. A local perturbation of a
critical state may ``propagate'', in the sense that it spreads to (some)
nearest neighbors, and then to the next-nearest neighbors, and so on in
a ``domino effect'' over all length scales; the size of
such an ``avalanche'' can be as
large as the entire
system. Such a ``domino effect'' eventually terminates after a total time $T$,
having reached a final amount of dissipative energy and having
effected a total spatial extension $S$. The quantity $S$ is called by
BTW the ``size'', and the quantity $T$ the ``lifetime'' of the
avalanche --- named by BTW a ``cluster''
(hereafter referred to as BTW-cluster). As we
shall see in more detail later on, it is of considerable importance to
note that a BTW-cluster {\it cannot}, and {\it should not}
be identified with a cluster in the usual sense.
It is an avalanche,
{\it not} a {\it static} object
with a fixed structure
which remains unchanged until it
decays after a time-interval (known as the lifetime in
the usual sense).
It has been
shown\cite{r9,r10} that the
distribution ($D_S$) of
the ``size'' (which is a measure of
the dissipative energy, $S$) and the distribution
($D_T$) of the lifetime
($T$) of BTW-clusters in such open
dynamical systems obey power-laws:
\begin{equation}
\label{e1}
D_S(S)\sim S^{-\mu},
\end{equation}
\begin{equation}
\label{e2}
D_T(T)\sim T^{-\nu},
\end{equation}
where $\mu$ and $\nu$ are positive real constants. In fact, such spatial and
temporal power-law
scaling behaviors can be, and have been, considered
as the universal signals --- the ``fingerprints'' --- of the
locally perturbed
self-organized critical states in such systems.
It is
expected\cite{r9,r10} that the general concept of self-organized
criticality (SOC), which is
complementary to chaos, may be
{\it the} underlying concept for temporal and spatial scaling in a wide class
of {\it open non-equilibrium systems} --- although it is not yet known
how the exponents in such power law can be calculated analytically.
SOC has been observed in
a large number of open dynamical complex systems in
non-equilibrium\cite{r9,r10,r12,r13,r14,r15}
among which the following examples are
of particular interest, because they illuminate several aspects of
SOC which are relevant for the discussion in this paper.
First, the well known Gutenberg-Richter law\cite{r11,r12}
for earthquakes as a special
case of Eq.(1):
In this case, $S$ stands for the released energy (the magnitude)
of the earthquakes. $D_S(S)$ is the number of
earthquakes at which an energy $S$ is released.
Such a simple law is known to be valid
for all earthquakes, large (up to $8$ or $9$ on the Richter scale)
or small! We note, the power-law behavior given by the
Gutenberg-Richter law implies in particular the following.
The question ``How large is a typical earthquake?'' does
not make sense!
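The absence of a ``typical'' size can be made quantitative: for a
power-law distribution with exponent $\mu\le 2$ the mean diverges,
so the sample mean never settles down, no matter how many events
are collected. The toy sketch below (plain Python, with an
arbitrarily chosen $\mu$, unrelated to any real earthquake
catalogue) illustrates this point.
\begin{verbatim}
# Toy illustration: for D(S) ~ S**(-mu) with mu <= 2 the mean of S
# diverges, so the running sample mean keeps drifting upwards
# instead of converging to a "typical" value.
import random

def sample_power_law(mu, s_min=1.0):
    # inverse-transform sampling of D(S) ~ S**(-mu), S >= s_min, mu > 1
    return s_min * (1.0 - random.random()) ** (-1.0 / (mu - 1.0))

mu = 1.5                      # arbitrary choice with mu <= 2
for n in (10**3, 10**4, 10**5, 10**6):
    mean = sum(sample_power_law(mu) for _ in range(n)) / n
    print(n, mean)            # the mean grows with the sample size
\end{verbatim}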
Second, the sandpile experiments\cite{r9,r10} which show
the simple regularities mentioned in Eqs.(1) and (2):
In this example, we see how local perturbation can be caused by the
addition of one grain of sand (note that we are dealing with
an open system!). Here,
we can also see how
the
propagation of perturbation in form of ``domino effect''
takes place, and
develops into avalanches of all possible sizes and durations.
The size- and duration-distributions are given by Eqs.(1)
and (2) respectively.
This example is indeed a very attractive one,
not only because such
experiments can be, and have been performed in labs\cite{r10}, but also
because they can
be readily simulated on a PC\cite{r9,r10}.
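A minimal version of such a simulation is sketched below (our own
schematic implementation of the standard two-dimensional sandpile
rules; the lattice size, the toppling threshold and the number of
added grains are arbitrary choices and are not taken from
Refs.[\ref{r9},\ref{r10}]).
\begin{verbatim}
# Minimal 2D sandpile sketch: add one grain at a random site; any site
# holding 4 or more grains topples and gives one grain to each of its
# 4 neighbours (grains leaving the edge are lost -- the system is open).
# For every added grain we record the avalanche size S (number of
# topplings) and lifetime T (number of relaxation sweeps); their
# histograms should follow the power laws of Eqs.(1) and (2).
import random

L = 50
pile = [[0] * L for _ in range(L)]

def add_grain():
    i, j = random.randrange(L), random.randrange(L)
    pile[i][j] += 1
    S, T = 0, 0
    while True:
        unstable = [(a, b) for a in range(L) for b in range(L)
                    if pile[a][b] >= 4]
        if not unstable:
            return S, T
        T += 1
        for a, b in unstable:
            pile[a][b] -= 4
            S += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= a + da < L and 0 <= b + db < L:
                    pile[a + da][b + db] += 1

avalanches = [add_grain() for _ in range(20000)]
\end{verbatim}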
Furthermore, it has been pointed out, and demonstrated
by simple models\cite{r10,r13,r14,r15},
that the concept of SOC can also be applied
to Biological
Sciences.
It is amazing to see how phenomena as complicated as Life
and Evolution can be simulated
by simple models such as the ``Game of Life''\cite{r13} and
the ``Evolution Model''\cite{r14,r15}.
Having seen that systems of interacting soft-gluons
are open dynamical complex systems,
and that a wide class of open systems with many degrees of
freedom in the macroscopic world
evolve to self-organized critical states which lead to
fluctuations extending over all length- and time-scales,
it seems natural to ask the following:
Can such states and such fluctuations
also exist in the microscopic world --- on the
level of quarks and gluons?
\subsection*{III. Are gluon-clusters hadron-like?}
How can we find out whether the general concept
of self-organized criticality
(mentioned in Section II)
plays a role
in diffractive deep-inelastic lepton-hadron scattering
processes (discussed in Section I)?
A simple and effective way
of doing this, is to check whether the ``fingerprints''
mentioned in Eqs.(\ref{e1}) and (\ref{e2}),
which can be considered as the necessary conditions for
the existence of self-organized criticality, show up
in the relevant
experiments.
For such a comparison, we need
the spatial and the temporal distributions of the gluon-clusters.
Hence, an important step in our quantitative study is
to obtain these distributions directly from
the experimental data --- if possible, without
{\em any} theoretical input.
Having this goal in mind, we now try to express such
cluster-distributions in terms of
the measured \cite{r3}
``diffractive structure function''
\mbox{$F_2^{D(3)}(\beta,Q^2;x_P)\equiv \int dt F_2^{D(4)}(\beta,Q^2;x_P,t)$}.
Here, we note that $F_2^{D(4)}(\beta,Q^2;x_P,t)$
is related \cite{r3,r4,r5,r6} to the
differential cross-section for large-rapidity-gap
events
\begin{equation}
\label{a3}
{d^4\sigma^D\over d\beta dQ^2 dx_P dt}={4\pi\alpha^2\over\beta
Q^4}(1-y+{y^2\over 2})F_2^{D(4)}(\beta,Q^2;x_P,t),
\end{equation}
in analogy to
the relationship between the corresponding quantities
[namely $d^2\sigma/(dx_B\,dQ^2)$ and $F_2(x_B,Q^2)$]
for normal deep-inelastic electron-proton scattering events
\begin{equation}
\label{a4}
{d^2\sigma\over dx_BdQ^2}={4\pi\alpha^2\over
x_BQ^4}(1-y+{y^2\over 2})F_2(x_B,Q^2).
\end{equation}
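In practice, Eqs.(3) and (4) are used in the opposite direction:
the structure functions are obtained from the measured differential
cross-sections by dividing out the kinematical prefactor. This is
sketched below (schematically; consistent units for the
cross-section are assumed, and $\alpha$ is the fine-structure
constant).
\begin{verbatim}
# Schematic inversion of Eq.(3): F2^D(4) is the measured differential
# cross-section divided by the kinematical prefactor
# (4 pi alpha^2 / (beta Q^4)) * (1 - y + y^2/2).
# Q2 in GeV^2; the cross-section must be given in consistent units.
import math

ALPHA = 1.0 / 137.036   # fine-structure constant

def prefactor(beta, Q2, y):
    return 4.0 * math.pi * ALPHA**2 / (beta * Q2**2) * (1.0 - y + 0.5 * y**2)

def f2_d4(d4sigma, beta, Q2, y):
    return d4sigma / prefactor(beta, Q2, y)
\end{verbatim}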
The kinematical variables, in particular $\beta$, $Q^2$, $x_P$ and $x_B$
(in both cases) are directly measurable quantities, the definitions
of which are shown in Fig.1 together with the corresponding
diagrams of the
scattering processes. We note
that, although these variables are
Lorentz-invariants, it is sometimes convenient to interpret them in a
``fast moving frame'', for example the electron-proton center-of-mass
frame where the proton's 3-momentum $\vec P$ is large (i.e. its
magnitude $|\vec P|$ and thus the energy $P^0\equiv (|\vec P|^2+M^2)^{1/2}$
is much larger than the proton mass $M$). While $Q^2$ characterizes
the virtuality of the space-like photon
$\gamma^\star$, $x_B$ can be interpreted,
in such a ``fast moving frame'' (in the framework
of the celebrated parton model), as the
fraction of proton's energy $P^0$ (or longitudinal momentum $|\vec P|$)
carried by the struck charged constituent.
We recall, in the framework
of the parton model, $F_2(x_B, Q^2)/x_B$ for ``normal events''
can be interpreted as the sum of the probability densities
for the above-mentioned $\gamma^\star$ to interact with such
a charged constituent inside the proton. In analogy to this,
the quantity
$F_2^{D(3)}(\beta,Q^2;x_P)/\beta$ for LRG events
can be interpreted as the sum of the probability
densities for $\gamma^\star$ to interact with
a charged constituent which
carries a fraction $\beta\equiv x_B/x_P$ of the energy (or longitudinal
momentum) of the colorless object,
under the condition that the colorless object
(which we associate with a system of interacting soft gluons) carries a
fraction $x_P$
of proton's energy (or longitudinal momentum).
We hereafter denote this
charged-neutral and color-neutral gluon-system by
$c^\star_0$ (in Regge pole models\cite{r4} this object is known as
the ``pomeron'').
Hence, by comparing Eq.\,(3) with Eq.\,(4) and by comparing the two
diagrams shown in Fig.\,1(a) and Fig.\,1(b), it is tempting to draw
the following conclusions:
The diffractive process is nothing else but
a process in which the virtual photon $\gamma^\star$
encounters a $c_0^\star$,
and $\beta$ is nothing else but the Bjorken-variable with respect to
$c_0^\star$ (this is why it is called $x_{BC}$ in Ref.[\ref{r8}]).
This means,
a diffractive $e^-p$ scattering event can be envisaged as an event in
which the virtual photon $\gamma^\star$ collides with ``a $c_0^\star$-target''
instead of ``the proton-target''.
Furthermore, since $c_0^\star$ is charge-neutral,
and a photon can only directly interact with an object
which has electric charges and/or magnetic moments,
it is tempting to assign $c_0^\star$ an
electromagnetic structure function $F_2^{c}(\beta, Q^2)$,
and study the interactions between the virtual photon and the quark(s)
and antiquark(s) inside $c_0^\star$.
In such a picture
(which should be formally the same as that of
Regge pole models\cite{r4},
if we would replace the $c_0^\star$'s by ``pomerons'')
we are confronted with the following two questions:
First, is it possible and meaningful to discuss the $x_P$-distributions of
the $c_0^\star$'s without knowing the intrinsic properties, in particular the
electromagnetic structures, of such objects?
Second, are gluon-clusters hadron-like, such that their electromagnetic
structures can be studied
in the same way as those for
ordinary hadrons?
We discuss the second question here, and leave the first question to
the next section.
We note, in order to be able to answer the second question
in the {\em affirmative},
we need to know {\em whether}
$F_2^{D(3)}(\beta,Q^2;x_P)$ can be factorized in the form
\begin{equation}
\label{eee1}
F_2^{D(3)}(\beta, Q^2;x_P)=f_c(x_P)F_2^c(\beta,Q^2).
\end{equation}
Here, $f_c(x_P)$ plays the role of a ``kinematical factor''
associated with the ``target $c_0^\star$'',
and $x_P$ is the fraction
of proton's energy (or longitudinal momentum) carried by
$c_0^\star$. [We could call $f_c(x_P)$
``the $c_0^\star$-flux'' --- in exactly the same
manner as in Regge pole models\cite{r4}, where it is called
``the pomeron flux''.] $F_2^c(\beta,Q^2)$ is
``the electromagnetic structure function of $c_0^\star$''
[the counterpart of $F_2(x_B,Q^2)$ of the proton] which
--- in analogy to proton (or any other hadron) ---
can be expressed as
\begin{equation}
\label{eee2}
\frac{F_2^c(\beta,Q^2)}{\beta}
= \sum_i e_i^2 [q_i^c(\beta,Q^2)+\bar q_i^c(\beta,Q^2)],
\end{equation}
where $q_i^c(\bar q_i^c)$ stands for the probability
density for $\gamma^\star$
to interact with a quark (antiquark) of flavor $i$ and electric
charge $e_i$ which carries a fraction $\beta$ of the energy
(or longitudinal momentum)
of $c_0^\star$. It is clear that
Eq.(6) should be valid for all $x_P$-values in this kinematical
region, that is, both the right- and the left-hand-side
of Eq.(6) should be independent of the energy (momentum) carried
by the ``hadron'' $c_0^\star$.
Hence, to find out experimentally whether the second question can be
answered in the affirmative, we only need to check whether the
data are in agreement with the assumption
that $F_2^c(\beta , Q^2)$ prescribed by Eqs.(5) and (6) exists.
For such a test,
we take the existing
data\cite{r3} and plot $\log [F_2^{D(3)}(\beta, Q^2;x_P)/\beta]$
against $\log\beta$ for different $x_P$-values.
We note, under the assumption
that the factorization shown in Eq.(5)
is valid, the $\beta$-dependence for a given $Q^2$ in
such a plot should have exactly the same form as that in the
corresponding
$\log [F_2^{c}(\beta, Q^2)/\beta]$ vs $\log \beta$ plot;
and that the latter is the analog of
$\log [F_2(x_B, Q^2)/x_B]$ vs $\log x_B$ plot for normal events.
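Schematically, the test can be organized as follows (a sketch of
the procedure only; ``points'' stands for a hypothetical list of
measured $(\beta, F_2^{D(3)})$ values, grouped into $x_P$-bins at
fixed $Q^2$): if Eq.(5) holds, the curves obtained in different
$x_P$-bins may be shifted vertically with respect to one another,
but they must all have the same shape.
\begin{verbatim}
# Sketch of the factorization test of Eq.(5): build the curve
# log10[F2^D(3)/beta] versus log10(beta) in each x_P bin (at fixed Q2)
# and compare the vertical offsets between two bins at common beta
# values; factorization requires these offsets to be ~constant.
import math
from collections import defaultdict

def curves_per_bin(points):
    # points: iterable of (xP_bin_label, beta, F2D3) at fixed Q2
    curves = defaultdict(dict)
    for xp_bin, beta, f2d3 in points:
        curves[xp_bin][round(math.log10(beta), 3)] = math.log10(f2d3 / beta)
    return curves

def vertical_offsets(curve_a, curve_b):
    common = sorted(set(curve_a) & set(curve_b))
    return [curve_a[b] - curve_b[b] for b in common]
\end{verbatim}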
In Fig.2 we show the result of such
plots for three fixed $Q^2$-values (3.5, 20 and 65 GeV$^2$,
as representatives of three different ranges in $Q^2$).
Our goal is to examine whether or
how the $\beta$-dependence of the function given in
Eq.(6) changes with $x_P$. In principle,
if there were enough data points, we should, and we could, do such
a plot for the data-sets associated with every $x_P$-value.
But, unfortunately, there are not enough data at present.
What we can do, however, is to consider
the $\beta$-distributions in different $x_P$-bins, and to vary
the bin-size of $x_P$,
so that we can explicitly
see whether/how the shapes of the $\beta$-distributions
change. The results are shown
in Fig.2. The $\beta$-distribution in the first
row, corresponds to the integrated value $\tilde{F}^D_2(\beta, Q^2)$
shown in the literature\cite{r3,r5}.
Those in the second and in the third row are obtained by considering
different bins and/or by
varying the sizes of the bins.
By joining the points associated with a given $x_P$-interval
in a plot for a given $Q^2$,
we obtain the $\beta$-distribution for a $c_0^\star$ carrying
approximately the amount of energy $x_P P^0$, encountered
by a photon of virtuality $Q^2$. Taken together with Eq.(6) we can
then extract the distributions $q_i^c(\beta, Q^2)$ and
$\bar{q}_i^c(\beta, Q^2)$ for this $Q^2$-value, provided
that $F_2^c(\beta, Q^2)/\beta$ is independent of $x_P$.
But, as we can see in Fig.2, the existing data\cite{r3,r5}
show that the $x_P$-dependence of this function is far from
being negligible!
Note in particular
that according to Eq.(\ref{eee1}), by choosing a suitable $f_c(x_P)$
we can shift the curves for different $x_P$-values in the vertical
direction (in this log-log plot); but {\em we can never change
the shapes of the $\beta$-distributions} which are different for
different $x_P$-values!
In order to see, and to realize, the meaning of the $x_P$-dependence
of the distributions of the charged constituents of $c^\star_0$
expressed in terms of $F_2^c(\beta, Q^2)/\beta$
in LRG events [see Eqs.(5) and (6)],
let us, for a moment, consider
normal deep-inelastic scattering events in the
$x_B$-region where quarks dominate ($x_B > 0.1$, say).
Here we can plot the data for
$\log [F_2(x_B, Q^2)/x_B]$ as a function of $\log x_B$ obtained
at {\em different incident energies ($P^0$'s)} of the proton.
{\em Suppose} we see, that
at a given $Q^2$, the data for $x_B$-distributions taken
at different values
of $P^0$ are very much different.
{\em Would} it still be possible to introduce $F_2(x_B,Q^2)$
as ``the electromagnetic structure function'' of the proton,
from which we can extract the $x_B$-distribution of the quarks
$q_i(x_B,Q^2)$ at a given $Q^2$?
\subsection*{IV. Distributions of the gluon-clusters}
After having seen that the existing data
are not in agreement with the picture in which
the colorless gluon-clusters ($c_0^\star$'s) are
hadron-like, we now come back
to the first question in Section III, and try to find out whether it is
nevertheless possible and meaningful to talk about the
$x_P$-distribution of $c_0^\star$. We shall see in this section,
the answer to this question is Yes!
Furthermore, we shall also see,
in order to answer this question in the
affirmative, we do not need the factorization mentioned
in Eq.(5); and we do not need to know whether the gluon-clusters are
hadron-like. But, as we shall show later on, it is of considerable importance
to discuss the second question in understanding
the nature of the $c_0^\star$'s.
In view of the fact that we do use the concept ``distributions
of gluons''
in deep-inelastic lepton-hadron scattering, although the gluons
do not directly interact with the virtual photons,
we shall try to introduce the notion ``distribution of
gluon-clusters'' in a similar manner.
In order to see what we should do for the introduction
of such distributions, let us recall the following:
For normal deep-inelastic $e^-p$ collision
events, the structure function $F_2(x_B, Q^2)$ can be expressed
in term of the distributions of partons, where the partons are
not only quarks and antiquarks, but also gluons which
can contribute to the structure function by quark-antiquark
pair creation and annihilation.
In fact, in order to satisfy energy-momentum-conservation
(in the electron-proton system),
the contribution of the gluons $x_gg(x_g,Q^2)$ has to be taken into account
in the energy-momentum sum rule
for all measured $Q^2$-values. Here, we denote by
$g(x_g,Q^2)$ the probability density
for the virtual photon $\gamma^\star$ (with virtuality $Q^2$) to meet a
gluon which carries the energy (momentum) fraction $x_g$ of the proton,
analogous to $q_i(x_B, Q^2)$ [or $\bar q_i(x_B, Q^2)$] which
stands for the probability density for this $\gamma^\star$
to interact with a quark (or an antiquark) of flavor $i$ and electric
charge $e_i$ which carries the energy (momentum) fraction $x_B$ of the
proton. We note, while both $x_B$ and $x_g$ stand for energy
(or longitudinal momentum) fractions carried by partons,
the former can be, but the latter {\em cannot} be directly
measured.
Having these, in particular the energy-momentum sum rule in mind,
we immediately see the following: In a given
kinematical region
in which the contributions of only
one category of partons (for example quarks for $x_B > 0.1$ or
gluons for $x_B < 10^{-2}$) dominate, the structure
function $F_2(x_B,Q^2)$ can approximately
be related to the
distributions of that particular kind of partons in a
very simple manner. In fact,
the expressions below can be, and have been,
interpreted as the probability-densities for the virtual photon $\gamma^\star$
(with virtuality $Q^2$) to meet a quark or a gluon which carries the energy
(momentum) fraction $x_B$ or $x_g$ respectively.
\begin{eqnarray}
\label{ee2}
{F_2(x_B,Q^2)\over x_B}\approx \sum_i e_i^2\, q_i(x_B,Q^2) &
\mbox{\hspace*{1cm}or\hspace*{1cm}} &
{F_2(x_B,Q^2)\over x_g}\approx g(x_g,Q^2)\mbox{\ .}
\end{eqnarray}
The relationship between $q_i(x_B,Q^2)$,
$g(x_g,Q^2)$ and
$F_2(x_B, Q^2)$ as they stand in Eq.(\ref{ee2})
are general
and formal (this is the case especially for that between $g$ and
$F_2$) in the following sense:
Both $q_i(x_B, Q^2)$ and $g(x_g,Q^2)$ contribute to the
energy-momentum sum rule and both of them are in accordance
with
the assumption that partons
of a given category
(quarks or gluons)
dominate a given kinematical region
(here $x_B>0.1$ and $x_B<10^{-2}$ respectively).
But, neither the dynamics which leads to the observed $Q^2$-dependence
nor the relationship between $x_g$ and $x_B$ are given. This means,
{\it without further theoretical inputs}, the simple expression for
$g(x_g, Q^2)$ as given by Eq.(7) is {\it practically useless}!
Having learned this, we now discuss what happens
if we assume, in diffractive lepton-nucleon scattering,
the colorless gluon-clusters ($c_0^\star$'s) dominate the
small-$x_B$ region ($x_B< 10^{-2}$, say). In this simple picture, we
are assuming that the following is approximately true:
The gluons in this region appear predominately in form
of gluon clusters. The interaction
between the struck $c_0^\star$
and the rest of the proton
can be neglected during the
$\gamma$-$c_0^\star$ collision such that
we can apply impuls-approximation to the $c_0^\star$'s in this
kinematical region.
That is, here we can
introduce
--- in the same manner as we do for
other partons
(see Eq.\ref{ee2}), a
probability density $D_S(x_P|\beta,Q^2)$ for $\gamma^\star$ in the
diffractive scattering process to ``meet'' a $c_0^\star$ which carries
the fraction $x_P$ of the proton's
energy $P^0=(|\vec{P}|^2+M^2)^{1/2} \approx |\vec{P}|$
(where $\vec{P}$ is the momentum and $M$ is the mass of the proton).
In other words,
in
diffractive scattering events
for processes in the kinematical region
$x_B < 10^{-2}$, we should have, instead of $g(x_g,Q^2)$, the following:
\begin{equation}
\label{ee3}
{F_2^{D(3)}(\beta,Q^2;x_P)\over x_P}\approx D_S(x_P|\beta,Q^2)\,.
\end{equation}
Here, $x_PP^0$ is the energy carried by $c_0^\star$,
and $\beta$ indicates the corresponding fraction carried by
the struck charged constituent in $c_0^\star$.
In connection with the similarities and the differences between
$q_i(x_B,Q^2)$, $g(x_g,Q^2)$ in (\ref{ee2}) and $D_S(x_P|\beta, Q^2)$
in (\ref{ee3}), it is
useful to note in particular the significant difference
between $x_g$ and $x_P$,
and thus that between
the $x_g$-distribution $g(x_g,Q^2)$ of the gluons and the
$x_P$-distribution $D_S(x_P|\beta, Q^2)$ of the $c^\star_0$'s: Both $x_g$ and
$x_P$ are energy (longitudinal momentum) fractions of charge-neutral
objects, with which
$\gamma^\star$ {\it cannot} directly interact. But, in contrast to $x_g$,
$x_P$ {\it can be directly measured in experiments}, namely by
making use of the kinematical relation
\begin{equation}
\label{ee4}
x_P\approx {Q^2+M_x^2\over Q^2+W^2},
\end{equation}
and
by measuring the quantities $Q^2$, $M_x^2$ and $W^2$
in every collision event. Here, $Q$, $M_x$
and $W$ stand respectively for the invariant momentum-transfer from
the incident electron, the invariant-mass of the final hadronic state
after the $\gamma^\star-c_0^\star$ collision, and the invariant mass of the
entire hadronic system in the collision between $\gamma^\star$ and the
proton. Note that $x_B\equiv\beta x_P$, hence $\beta$ is also
measurable. This means, in sharp contrast to $g(x_g,Q^2)$, {\it
experimental information} on $D_S(x_P|\beta, Q^2)$
in particular its $x_P$-dependence
can be obtained ---
{\it without further theoretical inputs}!
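As an illustration, the event-by-event reconstruction of $x_P$ and
$\beta$ from the measured quantities amounts to the following
(a schematic sketch; for $x_B$ we use the standard relation
$x_B=Q^2/(Q^2+W^2-M^2)$, which is not written out in the text).
\begin{verbatim}
# Event-by-event reconstruction of x_P and beta from the measured
# quantities Q^2, M_x^2 and W^2 (all in GeV^2), using Eq.(9) and
# beta = x_B/x_P, with x_B = Q^2/(Q^2 + W^2 - M^2), M = proton mass.
M2_PROTON = 0.938**2   # GeV^2

def diffractive_kinematics(Q2, Mx2, W2):
    x_P  = (Q2 + Mx2) / (Q2 + W2)          # Eq.(9)
    x_B  = Q2 / (Q2 + W2 - M2_PROTON)
    beta = x_B / x_P
    return x_P, beta

# example: Q2 = 20 GeV^2, M_x = 10 GeV, W = 200 GeV
print(diffractive_kinematics(20.0, 100.0, 4.0e4))   # -> small x_P
\end{verbatim}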
\subsection*{V. The first SOC-fingerprint: Spatial scaling}
We mentioned at the beginning of Section III, that in order
to find out whether the concept of SOC
indeed plays a role in diffractive DIS we need to check the
fingerprints of SOC shown in Section II, and that such tests
can be made by examining the
corresponding cluster-distributions obtained from experimental data.
We are now ready to do this, because we have
learned in Sections III and IV, that it is not only
meaningful but also possible to extract $x_P$-distributions from the
measured diffractive structure functions,
although the gluon-clusters {\em cannot} be treated as hadrons.
In fact, as we can explicitly see
in Eqs.(8) and (9), in order to extract the $x_P$-dependence of
the gluon-clusters from the data, detailed knowledge about the intrinsic
structure of the clusters is not necessary.
Having these in mind, we now
consider $D_S$ as a function of $x_P$ for given values
of $\beta$ and $Q^2$,
and plot
$F_2^{D(3)}(\beta,Q^2;x_P)/x_P$ against $x_P$
for different sets of $\beta$ and $Q^2$. The results of such
log-log plots are shown in Fig. 3. As we can see,
the data\cite{r3} suggest
that the probability-density for the virtual photon $\gamma^\star$ to
meet a color-neutral and charged-neutral object $c_0^\star$ with energy
(longitudinal momentum) fraction $x_P$ has a power-law behavior in
$x_P$, and the exponent of this power-law
depends very little on $Q^2$ and $\beta$.
This is to be compared with $D_S(S)$ in Eq.(\ref{e1}), where $S$,
the dissipative energy (the size of the BTW-cluster)
corresponds to the energy of the
system $c_0^\star$. The latter is $x_PP^0$, where $P^0$ is the
total energy of the proton.
It means, the existing data\cite{r3} show that
$D_S(x_P | \beta, Q^2)$ exhibits the same kind of power-law behavior
as the size-distribution of BTW-clusters.
This result is
in accordance with the expectation that
self-organized critical phenomena may exist
in the colorless systems of interacting soft gluons in
diffractive deep-inelastic electron-proton
scattering processes.
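The exponent of this power-law behavior can be estimated directly
from the data points of such a log-log plot by a straight-line fit
(a schematic sketch; ``points'' is a hypothetical list of measured
$(x_P,\,F_2^{D(3)}/x_P)$ pairs at fixed $\beta$ and $Q^2$).
\begin{verbatim}
# Least-squares straight line in the log-log plane: if
# D_S(x_P|beta,Q2) ~ x_P**(-alpha), then log10(D_S) is linear in
# log10(x_P) with slope -alpha.
import math

def power_law_exponent(points):
    xs = [math.log10(xp) for xp, _ in points]
    ys = [math.log10(d) for _, d in points]
    n = len(points)
    xm, ym = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) /
             sum((x - xm) ** 2 for x in xs))
    return -slope   # estimate of the exponent alpha
\end{verbatim}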
We note, up to now, we have only argued (in Section I) that
such gluon-systems are
open, dynamical, complex systems
in which SOC may occur, and we have mentioned (in Section II) the
ubiquitousness of SOC in Nature.
Having seen the first piece of experimental evidence
that one of the necessary conditions for the existence of
SOC is satisfied, let us now take a second look at the colorless
gluon-systems from a theoretical point of view: Viewed from a
``fast moving frame'' which can
for example be the electron-proton c.m.s. frame,
such
colorless systems
of interacting soft gluons are part of the proton
(although, as color-singlets, they can also be outside the confinement
region). Soft gluons can be
intermittently emitted or absorbed by gluons in such a system,
as well as by gluons,
quarks and antiquarks outside the system.
The emission- and absorption-processes are due to local interactions
prescribed by the well-known QCD-Lagrangian (here ``the running
coupling constants'' are in general large,
because the distances between the interacting colored objects
cannot be considered as ``short''; remember that the
spatial dimension of a $c_0^\star$ can be much
larger than that of a hadron!).
In this connection, it is however very useful to keep in mind that,
due to the complexity of the system,
details about the local interactions may be relatively
unimportant, while
general and/or global features --- for example
energy-flow between different parts (neighbors and neighbor's
neighbors $\ldots$) of the system ---
may play an important role.
How far can one go in neglecting dynamical details when one
deals with such open
complex systems? In order to see this, let us
recall how Bak and Sneppen\cite{r14}
succeeded in modelling
some of the essential aspects of
The Evolution in Nature.
They considered the ``fitness'' of different ``species'', related to one
another through a ``food chain'', and assumed
that the species with the lowest fitness
is most likely to disappear or mutate at the next time-step
in their computer simulations.
The crucial step in their simulations
that {\em drives} evolution is the adaptation of the individual species to
its present {\em environment} (neighborhood) through mutation
and selection of a
fitter variant.
Other interacting species form part of the {\em environment}.
This means, the neighbors will be influenced at
every time-step.
The result these authors
obtained strongly suggests
that the process of evolution is
a self-organized critical phenomenon. One of the essential
simplifications they made in their evolution models\cite{r14,r15}
is the following: Instead of the explicit
connection between
the fitness and the configuration of the
genetic codes,
they use random numbers for the fitness of the
species.
Furthermore, as they have pointed out in their papers, they
could in principle have chosen to model evolution on a less
coarse-grained scale by considering mutations at the individual
level rather than on the level of species, but that would make the
computation prohibitively difficult.
Having these in mind, we are naturally led to the questions:
Can we consider the creation and
annihilation processes of colorless
systems of interacting soft gluons associated
with a proton as ``evolution'' in a microscopic world?
Before we try to build models for a quantitative description
of the data, can we simply apply the existing evolution
models\cite{r14,r15} to such open, dynamical, complex
systems of interacting soft-gluons,
and check whether some of the essential features
of such systems
can be
reproduced?
To answer these questions, we now report on the result of our
first trial in this direction:
Based on the fact that we know {\em very little} about
the detailed reaction mechanisms in such gluon-systems and
{\em practically}
{\em nothing} about their structures, we simply {\em ignore} them,
and assume that they are self-similar in space
(this means, colorless gluon-clusters can be considered as clusters of
colorless gluon-clusters and so on). Next,
we divide them into an arbitrarily given number of subsystems $s_i$
(which may or may not have the same size). Each such subsystem is open,
in the sense that neither its energy $\varepsilon_i$ nor
its gluon-number $n_i$ has a fixed value. Since we do not
know, in particular, how large the $\varepsilon_i$'s are, we
use random numbers. As far as the $n_i$'s are concerned, since
we do not know how these numbers are associated with the energies
in the subsystems $s_i$, except that they are not conserved
quantities,
we just ignore them, and consider only the $\varepsilon_i$'s.
As in Ref.[\ref{r14}] or in Ref.[\ref{r15}], the random number of this
subsystem, as well as those of its fixed\cite{r14} or random (see the
first paper of Ref.[\ref{r15}]) neighbors, will be changed at every time-step.
Note, this is how we simulate the processes of energy flow due to
exchange of gluons between the subsystems, as well as those with
gluons/quarks/antiquarks outside the system. In other words, in the
spirit of Bak and Sneppen\cite{r14}, we neglect the dynamical
details {\it totally}.
Having in mind that,
in such systems,
the gluons as well as the
subsystems ($s_i$'s) of gluons are {\it virtual}
(space-like), we can ask:
``How long can such a colorless subsystem
$s_i$ of interacting soft gluons, which carries energy $\varepsilon_i$,
exist?''
According to the uncertainty principle,
the answer should be:
``The time interval
in which the subsystem $s_i$ can exist
is proportional to $1/\varepsilon_i$,
and this quantity can be considered as the lifetime $\tau_i$ of
$s_i$.'' In this sense, the subsystems of colorless gluons which are
associated with higher energies, and thus shorter ``lifetimes'',
are expected to have larger probabilities to mutate.
Note that the basic local interaction
in this self-organized evolution
process is the emission (or absorption) of gluons by gluons prescribed
by the QCD-Lagrangian --- although the detailed mechanisms
(which can in principle be explicitly written down by
using the QCD-Lagrangian)
do not play a
significant role.
In terms of the evolution model\cite{r14,r15}
we now call $s_i$ the ``species'' and identify
the corresponding
lifetime $\tau_i$ as the ``fitness of $s_i$''.
Because of the one-to-one correspondence between $\tau_i$ and
$\varepsilon_i$, where the latter is a random number,
we can also directly assign random numbers to the $\tau_i$'s
instead. From now on we can adopt the evolution model\cite{r14,r15}
and note that,
at the start of such a process (a simulation), the fitness on average
grows, because the least fit are always eliminated. Eventually the
fitness does not grow any further on average; all subsystems then have a fitness
above some threshold. At the next step, the least fit species (i.e. the
most energetic subsystem $s_i$ of interacting soft gluons),
which would be right at the threshold,
will be ``replaced''
and starts an
avalanche (or punctuation of mutation events), which is causally
connected with this triggering ``replacement''.
After a while, the avalanche will
stop, when all the fitnesses are again above that threshold.
In this sense, the evolution goes on, and on, and on.
As in Refs.[\ref{r14}] and [\ref{r15}], we can monitor the duration of
every avalanche, that is, the total number of mutation events in
every one of them, and count how many avalanches of each size are observed.
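
For concreteness, a minimal one-dimensional realization of such a simulation
can be sketched as follows (this is only an illustration of the rules
described above, with fixed nearest neighbors as in Ref.[\ref{r14}]; the
numbers of subsystems and of time-steps are arbitrary choices):
\begin{verbatim}
import numpy as np

# a minimal Bak-Sneppen-type simulation on a one-dimensional periodic chain
rng = np.random.default_rng(0)
N, steps = 200, 100000        # numbers of subsystems s_i and of time-steps
tau = rng.random(N)           # random "fitness" (lifetime tau_i) of each s_i

threshold = 0.0               # self-organized threshold; grows, then saturates
sizes, size = [], 0           # avalanche sizes = numbers of mutation events

for _ in range(steps):
    i = np.argmin(tau)        # least fit subsystem (shortest lifetime)
    if tau[i] > threshold:    # all fitnesses above the threshold:
        if size > 0:          # the current avalanche is over
            sizes.append(size)
        threshold, size = tau[i], 0
    # "mutation": the extremal subsystem and its neighbors get new random numbers
    for j in (i - 1, i, (i + 1) % N):
        tau[j] = rng.random()
    size += 1

# the histogram of avalanche sizes is expected to follow a power law, cf. Eq.(1)
hist, edges = np.histogram(sizes, bins=np.logspace(0, 4, 30))
\end{verbatim}
A log-log plot of this histogram is the numerical analogue of the
size-distributions discussed in Section II.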
The
avalanches mentioned here are
special cases of those discussed in Section II.
Their size- and lifetime-distributions are
given by Eq.(1) and Eq.(2) respectively. Note in particular that the
avalanches in the Bak-Sneppen model correspond to sets of subsystems
$s_i$, the energies ($\varepsilon_i$) of which are too high ``to be fit
for the colorless systems of low-energy gluons''. It means, in the
proposed picture, what the virtual photon in deep-inelastic
electron-proton scattering ``meets'' are those ``less fit'' ones ---
those which carry ``too much'' energy.
In a geometrical picture this means, it is
more probable for such ``relatively energetic'' colorless
gluon-clusters to be spatially
further away from the (confinement region of the)
proton.
There exist, in the meantime, already several versions of evolution
models\cite{r10,r15} based
on the original idea of Bak and Sneppen\cite{r14}.
Although SOC phenomena have been observed in all these cases\cite{r10,r14,r15},
the slopes of the power-law distributions for the avalanches are different
in different models --- depending on the rules applied to the mutations.
The values range from approximately $-1$ to approximately $-2$.
Furthermore, these models\cite{r10,r14,r15} seem to show that neither the
size nor the dimension of the system used for the computer simulation
plays a significant role.
Hence, if we identify
the colorless charge-neutral object $c_0^\star$ encountered by the
virtual photon $\gamma^\star$ with
such an avalanche,
we are identifying the
lifetime of $c_0^\star$ with $T$, and the ``size''
(that is the total amount of dissipative energy in this
``avalanche'') with the total amount of energy of $c_0^\star$.
Note that the latter is nothing else but $x_PP^0$, where $P^0$
is the total energy of the proton. This is how and why the
$S$-distribution in Eq. (\ref{e1}) and the $x_P$-distribution of
$D_S(x_P|\beta,Q^2)$ in Eq.(\ref{ee3}) are related to each other.
\subsection*{VI. The second fingerprint: Temporal scaling}
In this section we discuss in more detail the effects associated with
the time-degree-of-freedom. In connection with the two questions
raised in Section III,
one may
wish to know
{\em why} the parton-picture
does not {\em always} work when we apply it in a straightforward
manner --- not only to hadrons but also to gluon-clusters.
The answer is very simple:
The time-degree
of freedom cannot be ignored when we wish to find out whether
the impulse-approximation
is applicable, and the applicability of the latter
is the basis of the parton-model.
We recall that,
when we apply this model to stable hadrons,
the quarks, antiquarks and
gluons are considered as free and stable objects,
while the virtual photon $\gamma^\star$ is associated
with a given interaction-time $\tau_{\rm int}(Q^2,x_B)$ characterized
by the values $Q^2$ and $x_B$ of such scattering processes.
We note, however, that this is possible only when the interaction-time
$\tau_{\rm int}$ is much shorter than the
corresponding
time-scales (in particular the average
propagation-time of color-interactions inside the hadron).
Having these in mind, we see that we are confronted with
the following questions when we deal
with gluon-clusters associated with finite lifetimes:
Can we consider the $c_0^\star$'s as ``{\it free}'' and
``{\it stable}'' particles when
their lifetimes are {\it shorter} than the interaction-time $\tau_{\rm
int}(Q^2,x_B)$? Can we say that a $\gamma^\star-c_0^\star$
collision process takes place,
in which the incident $\gamma^\star$ is
absorbed by one, or a system, of the charged constituents of $c_0^\star$, when
the lifetime $T$ of $c_0^\star$ is {\it shorter} than
$\tau_{\rm int}(Q^2,x_B)$?
Since the notion
``stable objects'' or ``unstable objects'' depends on the
scale which is used in
the measurement, the question whether a $c_0^\star$ can
be considered as a parton
(in the sense that it can be considered as a free
``stable object'' during the $\gamma^\star$-$c_0^\star$ interaction)
depends very much on the interaction-time
$\tau_{\rm int}(Q^2, x_B)$.
Here, for
given values of $Q^2$, $x_B$, and thus $\tau_{\rm int}(Q^2, x_B)$,
only those $c^\star_0$'s whose lifetimes ($T$'s) are greater
than $\tau_{\rm int}(Q^2, x_B)$ can absorb the corresponding
$\gamma^\star$.
That
is to say, when we consider diffractive electron-proton scattering in
kinematical regions in which $c_0^\star$'s dominate,
we must keep in mind that the measured cross-sections (and thus the
diffractive structure function $F_2^{D(3)}$)
only include contributions from collision-events in which the
condition $T>\tau_{\rm int}(Q^2,x_B)$
is satisfied\,!
We note that $\tau_{\rm int}$ can be estimated by making use of the
uncertainty principle. In fact, by calculating $1/q^0$ in the
above-mentioned
reference frame,
we obtain
\begin{equation}
\label{e4}
\tau_{\rm int}={4|\vec P|\over Q^2}{x_B\over 1-x_B},
\end{equation}
which implies that, for given $|\vec P|$ and $Q^2$ values,
\begin{equation}
\label{eee3}
\tau_{\rm int}\propto x_B,\hskip 1cm \mbox{\rm for } x_B\ll 1.
\end{equation}
This means, for diffractive $e^-p$ scattering events in the
small-$x_B$ region at given $|\vec
P|$ and $Q^2$ values, $x_B$ is directly proportional to the interaction
time $\tau_{\rm int}$. Taken together with the relationship between
$\tau_{\rm int}$ and the minimum lifetime $T({\rm min})$ of the
$c_0^\star$'s mentioned above, we reach the following conclusion: The
distribution of this minimum value $T({\rm min})$ of the
$c_0^\star$'s which dominate the
small-$x_B$ ($x_B<10^{-2}$, say) region can be obtained by examining
the $x_B$-dependence of
$F_2^{D(3)}(\beta,Q^2;x_P)/\beta$ discussed in
Eqs. (5), (6) and in Fig. 2. This is because
this function is proportional to
the quark (antiquark) distributions $q^c_i(\bar{q_i}^c)$ which can be
directly probed by the incident virtual photon
$\gamma^\star$; hence, by measuring $F_2^{D(3)}(\beta,Q^2,x_P)/\beta$
as a function of $x_B\equiv \beta x_P$, we are in fact asking
the following question:
Do the distributions of the charged constituents of
$c_0^\star$ depend on the interaction time $\tau_{\rm int}$,
and thus on the minimum lifetime $T({\rm min})$ of the
gluon-clusters to be detected\,?
We use the identity $x_B\equiv\beta x_P$ and plot the quantity
$F_2^{D(3)}(\beta,Q^2;x_P)/\beta$ against the variable $x_B$
for fixed values of $\beta$ and $Q^2$.
The result of such a log-log plot is given in Fig.4. It shows
not only how the dependence on the
time-degree-of-freedom can be extracted from the existing
data\cite{r3}, but also that, for all the measured
values of $\beta$ and $Q^2$, the quantity
\begin{equation}
\label{e5}
p(x_B|\beta, Q^2) \equiv
{F_2^{D(3)}(\beta, Q^2; x_B/\beta)
\over \beta }
\end{equation}
is approximately
independent of $\beta$, and independent of $Q^2$.
This strongly suggests that the quantity given in Eq.(\ref{e5})
is associated with some {\em global} features of $c_0^\star$ ---
consistent with the observation made in Section III which shows
that it cannot be used to describe the structure of $c_0^\star$.
This piece of empirical fact can be expressed by setting
$p(x_B|\beta, Q^2)\approx p(x_B)$.
By taking a closer look at this $\log$-$\log$ plot, as well
as the corresponding plots for different sets of
fixed $\beta$- and $Q^2$-values (such plots are not
shown here, they are similar to those in Fig.3),
we see that they are straight lines indicating that
$p(x_B)$ obeys a power-law. What does this piece of
experimental fact tell us? What can we learn from
the distribution of the lower limit of the lifetimes (of the
gluon-systems $c_0^\star$'s)?
In order to answer these questions, let us,
for a moment, assume that we know the lifetime-distribution $D_T(T)$
of the $c_0^\star$'s. In such a case,
we can readily evaluate the integral
\begin{equation}
\label{e6}
I[\tau_{\rm int}(x_B)]\equiv\int^\infty_{\tau_{\rm int}(x_B)}D_T(T)dT,
\end{equation}
and thus obtain the number density of all those clusters which live
longer than the interaction time $\tau_{\rm int}(x_B)$.
Hence, under the statistical assumption
that the chance for a $\gamma^\star$
to be absorbed by one of those
$c_0^\star$'s of lifetime $T$
is proportional to $D_T(T)$ (provided that
$\tau_{\rm int}(Q^2,x_B)\le T$, otherwise this chance is
zero), we
can then interpret the integral in Eq.(13) as follows:
$I[\tau_{\rm int}(Q^2,x_B)]\propto p(x_B)$ is
the probability density for $\gamma^\star$ [associated with the
interaction-time $\tau_{\rm int}(x_B)$]
to be absorbed by $c_0^\star$'s.
Hence,
\begin{equation}
\label{e7}
D_T(x_B)\propto {d\over dx_B}p(x_B).
\end{equation}
This means in particular,
the fact that $p(x_B)$ obeys a power-law in $x_B$ implies that
$D_T(T)$ obeys a power-law in $T$.
Such a {\em behavior is similar} to
that shown in Eq.~(\ref{e2}).
In order to see the {\em quality} of this power-law behavior of $D_T$, and
the {\em quality} of its independence of $Q^2$ and $\beta$, we compare the
above-mentioned behavior with the existing data\cite{r3}. In Fig.5,
we show the log-log plots of
$d/dx_B[p(x_B)]$ against $x_B$. We note that $d/dx_B[p(x_B)]$ is
approximately $F_2^{D(3)}(\beta, Q^2; x_B/\beta)/(\beta x_B)$.
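This step can be made explicit: if the data are parametrized as
$p(x_B)\propto x_B^{-a}$ with a constant $a>0$, then
\[
D_T\;\propto\;\Big|\frac{d}{dx_B}\,p(x_B)\Big|\;=\;a\,\frac{p(x_B)}{x_B}
\;\propto\;x_B^{-(a+1)}\,,
\]
i.e. a power-law behavior of $p(x_B)$ translates into a power-law behavior
of $D_T$ with the exponent shifted by one unit, and the derivative is, up to
the constant $a$ and an overall sign, simply
$p(x_B)/x_B=F_2^{D(3)}(\beta,Q^2;x_B/\beta)/(\beta x_B)$.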
The quality of the power-law
behavior of $D_T$ is explicitly shown in Fig.5.
\subsection*{VII. $Q^2$-dependent exponents in the power-laws?}
We have seen, in Sections V and VI, that in
diffractive deep-inelastic
electron-proton scattering, the size- and the
lifetime-distributions of the gluon-clusters obey power-laws,
and that the exponents depend very little
on the variables $\beta$ and $Q^2$. We interpreted
the power-law behaviors as the fingerprints of SOC in the formation
processes of such clusters. Can such approximate
independence (or weak
dependence) of the exponents on $Q^2$ and $\beta$
be understood in a physical picture based on
SOC? In
particular, what do we expect to see in photoproduction
processes
where the
associated value for $Q^2$ is zero?
In order to answer these questions, let us
recall the space-time aspects of the
collision processes which are closely related
to the above-mentioned
power-law behaviors.
Viewed in a fast moving frame (e.g. the c.m.s. of the colliding
electron and proton), the states of the interacting soft gluons
originating from the
proton are self-organized.
The colorless gluon-clusters caused by local perturbations
and developed through ``domino effects'' are BTW-clusters.
That is, they are avalanches (see Sections I and V), the
size-distribution of which [see Eqs.(8) and (1)] is given in
Fig.3. This explicitly shows that
there are gluon-clusters of all sizes,
because a power-law
size-distribution implies that there is no scale in size.
Recall that, since such clusters are color-singlets, their
spatial extensions can be much larger than that of the proton,
and thus they can be ``seen'' also {\em outside} the proton
by a virtual
photon originating from the electron.
In other words, what the virtual photon encounters is a cloud of
colorless gluon-clusters
spatially extended in- and outside the proton.
The virtual photon, when it encounters a colorless
gluon-cluster, will be absorbed
by the charged constituents
(quarks and antiquarks due to fluctuation of the gluons)
of the gluon-system. Here it is useful to recall that in such a space-time
picture, $Q^2$ is inversely proportional to the transverse size,
and $x_B$ is a measure of the interaction time [See Eqs. (10) and (11)
in Section VI] of the virtual photon.
It is conceivable that the values for the cross-sections for virtual
photons (associated with a given $Q^2$ and a given $x_B$) to
collide with gluon-clusters (of a given size and a given
lifetime) may depend on these variables. But, since the
processes of self-organization (which produce such gluon-clusters)
take
place independent of the virtual photon (which originates from the
incident electron and enters ``the cloud'' to
look for suitable partners), the power-law behaviors of the size-
and lifetime-distributions of the gluon-clusters are expected to be
independent of the properties associated with the virtual photon.
This means, by using
$\gamma^\star$'s associated with different values
of $Q^2$ to detect clusters of various sizes,
we are moving up or down on the straight lines in the
log-log plots for the size- and lifetime distributions,
the slopes of which do not change.
In other words,
the approximate $Q^2$-independence of the slope is
a natural consequence of the SOC picture.
As far as the $\beta$-dependence is concerned, we recall the
results obtained in Sections III and IV,
which explicitly show the following:
The gluon-clusters ($c_0^\star$'s)
can {\em not} be considered as hadrons. In particular, it is
neither possible
nor meaningful
to talk about ``the electromagnetic structure of the gluon-cluster''.
This suggests that, by studying the $\beta$-dependence of
the ``diffractive structure functions'', we cannot expect to gain further
information about the structure
of the gluon-clusters or further insight about the reaction mechanisms.
Having seen these, we try to look for
measurable quantities in which the integrations over $\beta$
have already been
carried out.
A suitable candidate for this purpose is the differential cross-section
\begin{eqnarray}
\frac{1}{x_P}\,\frac{d^2\sigma^D}{dQ^2 dx_P}
& = &
\int d\beta \,\frac{4\pi\alpha^2}{\beta Q^4}\,
\left( 1-y+\frac{y^2}{2}\right)\,
\frac{F_2^{D(3)}(\beta, Q^2; x_P)}{x_P} \nonumber\\
& \approx &
\int d\beta \,\frac{4\pi\alpha^2}{\beta Q^4}\,
\left( 1-y+\frac{y^2}{2}\right)\,
D_S(x_P| \beta, Q^2)
\end{eqnarray}
Together with Eqs.(3) and (8), we see that this cross-section is
nothing else but the effective $\beta$-weighted
$x_P$-distribution $D_S(x_P|\beta, Q^2)$ of the
gluon-clusters. Note that the weighting factors shown on the
right-hand-side of Eq.(15) are simply results of QED!
Next, we use the data\cite{r3} for
$F_2^{D(3)}$ which are available at present,
to do a log-log plot for the integrand of the expression
in Eq.(15) as a function of $x_P$
for different values of $\beta$ and $Q^2$.
This is shown
in Fig.6a. Since the absolute values of this quantity depend
strongly on $\beta$, while the slopes of the curves depend on it only weakly,
we carry out the integration as follows:
We first fit every set of the data separately.
Having obtained the slopes and the intercepts,
we use the obtained fits to perform the integration over $\beta$.
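
In practice, this fit-and-integrate procedure can be summarized by the
following sketch (the numerical values below are placeholders standing for
the measured integrand of Eq.(15) at one value of $Q^2$, not the values of
Ref.\cite{r3}):
\begin{verbatim}
import numpy as np

# placeholder values of the integrand of Eq.(15) at one fixed Q^2:
# data[beta] = (x_P values, integrand values)
data = {
    0.04: (np.array([1e-3, 3e-3, 1e-2]), np.array([2.0e3, 2.1e2, 2.3e1])),
    0.10: (np.array([1e-3, 3e-3, 1e-2]), np.array([1.1e3, 1.2e2, 1.3e1])),
    0.40: (np.array([1e-3, 3e-3, 1e-2]), np.array([3.0e2, 3.2e1, 3.5e0])),
}

xp_grid = np.logspace(-3, -2, 20)   # common x_P grid for the beta-integration
betas   = np.array(sorted(data))
fits    = []
for b in betas:
    xp, integrand = data[b]
    slope, intercept = np.polyfit(np.log(xp), np.log(integrand), 1)
    fits.append(np.exp(intercept) * xp_grid**slope)  # fitted power law per beta

# integrating the fits over beta yields (1/x_P) d^2 sigma^D/(dQ^2 dx_P), Eq.(15)
dsigma = np.trapz(np.array(fits), betas, axis=0)
\end{verbatim}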
The results are shown in the
\begin{eqnarray*}
\log{\left(\frac{1}{x_P}\,\frac{d^2\sigma^D}{dQ^2\,dx_P}\right)}
& \mbox{\ \ versus\ \ \ } &
\log{(x_P)}
\end{eqnarray*}
plots of Fig.6b.
These results show that the $Q^2$-dependence of the slopes
is practically negligible, and that the slope
is approximately $-1.95$ for all values of $Q^2$.
Furthermore, in order to see whether the quantity introduced in
Eq.(15) is indeed
useful, and in order to perform a decisive test of the
$Q^2$-independence of the slope in the power-law behavior
of the above-mentioned size-distributions,
we now
compare the results in deep-inelastic
scattering\cite{r3} with those obtained in photoproduction\cite{r16},
where LRG events have
also been observed. This means, as in
diffractive deep-inelastic scattering, we again associate the
observed effects with colorless objects which are interpreted as
systems of interacting soft gluons originating from the proton.
In order to find out whether they are the same kind of
gluon-clusters as in deep-inelastic scattering, and whether
they ``look'' very different when we probe them with
real ($Q^2=0$) photons, we replot the existing
$d\sigma/dM_X^2$ data\cite{r16} for photoproduction
experiments performed at different total energies,
and note
the kinematical relationship between $M_X^2$, $W^2$ and $x_P$
for $Q^2\ll M^2$ and $|t|\ll M_X^2$:
\begin{eqnarray}
x_P \approx \frac{M_X^2 + t}{W^2 - M^2}\approx \frac{M_X^2}{W^2} & &
\end{eqnarray}
The result of the corresponding
\begin{eqnarray*}
\log{\left(\frac{1}{x_P}\,\frac{d\sigma}{dx_P}\right)}
& \mbox{\ \ versus\ \ \ } &
\log{(x_P)}
\end{eqnarray*}
plot is shown in Fig.7. The slope obtained from a least-square
fit to the existing data\cite{r16} is $-1.98\pm 0.07$.
The results obtained in diffractive
deep-inelastic electron-proton scattering
and those for diffractive photoproduction strongly suggest
the following: The formation processes of gluon-clusters
in the proton are due to self-organized criticality, and thus
the spatial distributions of such clusters
--- represented by the $x_P$-distribution ---
obey power-laws.
The exponents of
such power-laws are
independent of
$Q^2$. Since $1/Q^2$ can be interpreted,
in a geometrical picture, as a measure for the transverse
size of the incident virtual photon, the observed
$Q^2$-independence of the exponents can be
considered as further evidence for SOC ---
in the sense that the self-organized gluon-cluster formation
processes take place independent of the virtual
photon (which is ``sent in'' to detect the clusters).
\subsection*{VIII. Concluding remarks}
The existence of large rapidity gap (LRG) events\cite{r2,r3} in deep-inelastic
electron-proton scattering is one of the most striking
features, if not {\em the} most striking feature of
the experimental data obtained in the small-$x_B$ ($x_B<10^{-2}$, say)
region. Taken together with the empirical facts\cite{r1}
that gluons dominate
in this kinematical region and that their interactions are not
negligible, it seems quite natural to think that such events are due to
collisions between the virtual photons originating from
the lepton and colorless gluon-systems originating from the proton.
What we propose in the present paper is {\it a statistical
approach} to study such colorless gluon-systems.
The reasons why we think such an approach is useful can be
summarized as follows:
First, a number of theoretical arguments and experimental
indications suggest that such
a system of interacting soft-gluons
is a system with
the following properties:
(a) It is a complex system with many degrees of freedom,
because in general it has a large --- unknown --- number of
gluons.
(b) It is an open system. This is because the members of a colorless
gluon-system may interact (through emission and/or absorption
of soft gluons) not only with one another, but also with
gluons and/or quarks and antiquarks outside the system. Thus,
due to such interactions, neither the gluon-number
nor the energy of this system can remain constant.
(c) It is neither in chemical nor in thermal equilibrium.
This is because, as we can for example see in the analysis shown
in Section III, it is not possible to
consider the colorless gluon-cluster $c_0^\star$
as a hadron-like object which has a given structure. In this sense,
we are forced to consider it as
a dynamical system --- probably very far from
thermal and chemical equilibria.
(d) The basic interactions between the members of the system, as well as
those between a member and quarks or gluons outside the system, are local.
In fact, they are explicitly given by the well-known QCD-Lagrangian.
But, as is often the case in complex systems, whether
the local dynamical details or the general global features of
the system plays a more significant role is a different
question.
Second, it has been proposed by Bak, Tang and Wiesenfeld\cite{r9,r10}
some time ago
that a wide class of open dynamical complex systems far from equilibrium
may evolve in a self-organized manner to critical
states, which give rise to
spatial and temporal power-law scaling behaviors.
Such scaling behaviors are universal and robust,
in fact they can be considered as the ``fingerprints'' of self-organized
criticality (SOC).
In the macroscopic world, there are many open dynamical
complex systems which show this kind of scaling
behaviors\cite{r9,r10}.
Under the condition
(see above) that the colorless system of interacting gluons
can
indeed
be considered as an open, dynamical, complex system, it would
be of considerable interest to see whether there can be
self-organized criticality
also in the microscopic world --- at the level
of gluons and quarks.
Third, by using the existing data for
deep inelastic electron-proton scattering\cite{r3}
and those for photoproduction\cite{r16},
where colorless systems of interacting soft-gluons are expected to play a
dominating role, we checked the above-mentioned fingerprints.
The obtained results show that {\em the above-mentioned characteristic
features for SOC indeed exist}.
Furthermore, it is seen that the relevant exponents
in such power-laws
are {\em the same
for different reactions}.
The existence of SOC in systems of interacting soft gluons in such
reactions has a number of consequences. It seems worthwhile
to study them in more detail. In particular, it would be very helpful to
build realistic models and/or cellular automata to
do quantitative calculations.
Fourth, based on the obtained results, in
particular the validity of the power-law behaviors,
the physical picture for a colorless gluon-cluster
should be
as follows:
It is {\em not a hadron} with a given
structure. It has {\em neither a typical size, nor a typical
lifetime}, and its structure is changing all the time.
In fact, it has much in common with an earthquake or an avalanche
(mentioned in more detail in Sections II, IV and V).
Can we learn more about these objects by studying other
reactions\,? Can we use the same concepts and methods to treat
hadron-hadron and hadron-nucleus collision processes\,?
It is known that ``the exchange of colorless objects'' plays
an important role also in diffractive hadron-hadron collisions.
Shall we
see this kind of power-law behavior also in
diffractive inelastic hadron-hadron scattering processes?
Studies along this line are in progress.
The results will be published elsewhere, when they are ready.
\subsection*{Acknowledgments}
We thank P. Bak, X. Cai, D. H. E. Gross, C. S. Lam, Z. Liang,
K. D. Schotte, K. Tabelow and E. Yen
for helpful discussions, R. C. Hwa, C. S. Lam and J. Pan for
correspondence, and FNK der FU-Berlin
for financial support (FPS ``cluster'' der FU Berlin).
Y. Zhang also thanks Alexander
von Humboldt Stiftung for the fellowship granted to him.
|
1,314,259,995,627 | arxiv | \section{Introduction}
Radiative emission of neutrino pair (RENP) from atoms or molecules
has been considered as a novel tool in neutrino physics~\cite{Fukumi2012a}.
The standard model of particle physics predicts that an excited state
$|e\rangle$ of an atom deexcites to a ground (or lower-energy) state
$|g\rangle$ by emitting a neutrino-antineutrino pair and a photon,
$|e\rangle\to|g\rangle+\gamma+\nu\bar\nu$, as depicted in
Fig.~\ref{Fig:RENPscheme}.
A rate enhancement mechanism using coherence in a macroscopic ensemble
of atoms is proposed in order to overcome the rate suppression
\cite{Yoshimura:2008a}.
This macrocoherent enhancement mechanism is experimentally confirmed
in the QED process in which the neutrino pair is replaced by
another photon, $|e\rangle\to|g\rangle+\gamma+\gamma$
(paired superradiance, PSR~\cite{YoshimuraSasaoTanaka2012a}), and
a rate amplification of $O(10^{18})$ is achieved using parahydrogen
molecules~\cite{Miyamoto2014a,Miyamoto2015a}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figs/FigRENPschemeCrop.pdf}
\caption{A schematic description of RENP.
The intermediate state is denoted by $|p\rangle$.}
\label{Fig:RENPscheme}
\end{figure}
In atomic deexcitation processes, the energy of the system is conserved
but the momentum is not, as long as the recoil of the atom is
neglected. A peculiar feature of the macrocoherent enhancement is
that the kinematic configurations in which the momenta
of outgoing particles are balanced are selectively amplified.
In the RENP above, assigning momenta as
$|e\rangle\to|g\rangle+\gamma(p_\gamma)+\nu_j(p)\bar\nu_i(p')$,
one may write the total amplitude as
\begin{equation}\label{Eq:Amp}
\text{Amp.}\propto\sum_a e^{-i(\bm{p}_\gamma+\bm{p}+\bm{p'})\cdot\bm{x}_a}
\simeq\frac{N}{V}(2\pi)^3\delta^3(\bm{p}_\gamma+\bm{p}+\bm{p'})\,,
\end{equation}
where $a$ and $\bm{x}_a$ denote an atom and its position respectively,
the summation runs over $N$ atoms in the macroscopic target of volume $V$,
and the exponential factor represents the plane waves of the emitted particles.
All relevant atoms are supposed in an identical state
including their phases, so that the amplitudes of atoms are coherently
summed in Eq.~\eqref{Eq:Amp}. (See below for details.)
Thus, the macrocoherence implies momentum conservation as well as
the energy conservation in atomic processes.
The four-momentum conservation is represented as
$P^\mu=p_\gamma^\mu+p^\mu+p'^\mu$ with $(P^\mu):=(E_{eg},\bm{0})$,
where $E_{eg}:=E_e-E_g$ is the energy difference between the two atomic states.
The four-momentum $P^\mu$ is regarded as that of ``a parent particle''
at rest and then the kinematics of the macrocoherently amplified RENP
is equivalent to that of three-body decay of this virtual parent particle.
Thus the photon spectrum in the RENP enhanced by the macrocoherence
is expected to be sensitive to neutrino masses.
Within the standard model extended to include neutrino masses and mixings,
RENP spectra have been calculated
\cite{Fukumi2012a,DinhPetcovSasaoTanakaYoshimura2012a,YoshimuraSasao2013a}.
It is shown that the RENP spectrum gives information on
unknown neutrino properties, such as absolute masses,
Dirac/Majorana nature and Majorana phases
(if the neutrinos are Majorana fermions).
Quantitatively, the fine sensitivity to neutrino properties related to
masses is due to the fact that the invariant mass of the parent particle,
$\sqrt{P^2}=E_{eg}$, is typically $O(1)$ eV and thus closer to the neutrino
mass scale $\sim O(0.1)$ eV than the energy scales of other neutrino experiments.
In the present work, we further pursue this kinematic advantage of RENP
in order to increase the sensitivity to neutrino properties by taking
the effect of the initial spatial phase (ISP) into consideration.
As described in the following, the ISP imprinted in a macroscopic target
works as a spatial component of the momentum $P^\mu$ of the virtual parent
particle, so that the invariant mass becomes smaller than $E_{eg}$.
We refer to the RENP process from a macroscopic target with the ISP as
boosted RENP.
In Sec.~\ref{Sec:Kinematics}, we describe the spatial phase given to
a macroscopic target in the preparation process of initial coherent state
by two-photon absorption.
The kinematics of the boosted RENP is also examined.
We present a rate formula of the boosted RENP in Sec.~\ref{Sec:Rate}.
Section \ref{Sec:DM} is devoted to our numerical results on
enhanced power of Dirac-Majorana distinction in the boosted RENP
as well as increased sensitivity to Majorana phases.
In Sec.~\ref{Sec:CNB}, we discuss possible improvement in detecting
the cosmic neutrino background with spectral distortion in RENP.
Our conclusion is given in Sec.~\ref{Sec:Conclusion}.
\section{\label{Sec:Kinematics} Initial spatial phase and kinematics}
Prior to describing the ISP and its implication, we recapitulate the nature
of initial atomic states required for RENP.
In Eq.~\eqref{Eq:Amp}, the amplitudes of the individual atoms are assumed to
interfere with one another, in addition to the phase matching by
the momentum conservation. The interference among atoms is
realized if the initial state of each atom is a superposition of
$|e\rangle$ and $|g\rangle$, e.g.
$|\psi\rangle:=(|e\rangle+|g\rangle)/\sqrt{2}$.
Suppose that $N$ atoms are in the initial state
$\Pi_{a=1}^N|\psi\rangle_a$.
A deexcitation process such as the RENP is schematically expressed by
the lowering operator $\sum_{a=1}^N|g\rangle_a\,{_a\!\langle e|}$.
The action of this operator on the above initial state gives
the wave function of the final state,
$(1/\sqrt{2})\sum_{i=1}^N|\psi\rangle_1\cdots|g\rangle_i\cdots|\psi\rangle_N$.
One finds that all the states in the summation interfere with
each other. Therefore the deexcitation rate, which is proportional to
the square of the final wave function, behaves as $N^2$ when $N$ is
large.
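This counting can be made explicit. Writing
$|\chi_i\rangle:=|\psi\rangle_1\cdots|g\rangle_i\cdots|\psi\rangle_N$, one has
$\langle\chi_i|\chi_i\rangle=1$ and
$\langle\chi_i|\chi_j\rangle=|\langle g|\psi\rangle|^2=1/2$ for $i\neq j$,
so that
\[
\Big\|\frac{1}{\sqrt{2}}\sum_{i=1}^{N}|\chi_i\rangle\Big\|^2
=\frac{1}{2}\left[N+\frac{N(N-1)}{2}\right]=\frac{N(N+1)}{4}
\simeq\frac{N^2}{4}\quad\text{for}\quad N\gg1\,.
\]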
Atomic states including the one like $|\psi\rangle$ are conveniently
described by the density operator $\hat\rho$.
The offdiagonal element, $\langle e|\hat\rho|g\rangle$, provides
the coherence that leads to the above $N^2$ behavior.
An initial atomic state with such coherence in a target can be prepared
by the two-photon absorption process, $\gamma_1+\gamma_2+|g\rangle\to|e\rangle$,
with high quality lasers.
We note that
an electric dipole forbidden metastable state $|e\rangle$ is
preferable as an initial state in order to suppress ordinary fast QED
deexcitation processes. Thus the single photon excitation is disfavored
as well.
In the numerical illustration in Sec.~\ref{Sec:DM},
we consider a $0^-\to 0^+$ transition of ytterbium,
in which all multipole processes of single photon are forbidden.
\footnote{
The E1$\times$E1 two-photon process is prohibited by the parity.
The most serious QED process competing with
the RENP is the macrocoherently amplified three-photon emission.
It is shown that this three-photon process can be controlled with
a metal or photonic crystal
waveguide~\cite{YoshimuraSasaoTanaka2015a,TanakaTsumuraSasaoYosimura2017a}.}
In the two-photon absorption process, the energy is conserved
as $\omega_1+\omega_2=E_{eg}$, where $\omega_{1(2)}$ is the energy of
$\gamma_{1(2)}$, but the momentum need not be.
Instead, the sum of the photon momenta,
$\bm{p}_{eg}:=\bm{k}_1+\bm{k}_2$, where $\bm{k}_{1(2)}$ represents
the momentum of $\gamma_{1(2)}$, is memorized in the resulting state
of the macroscopic target as a spatial phase factor. Therefore,
in the continuum approximation, which is valid for high density targets,
one may write $\langle e|\hat\rho|g\rangle$ of the prepared target state
as a product of the slowly varying function of the position $\bm{x}$
and the ISP factor of rapid oscillation,
\begin{equation}\label{Eq:Coherence}
\langle e|\hat\rho|g\rangle
=n\rho_{eg}(\bm{x})e^{i\bm{p}_{eg}\cdot\bm{x}}\,,
\end{equation}
where $n$ is the number density of target\footnote{The target number
density may be a function of the position. Here we assume a uniform
target for simplicity.}
and $\rho_{eg}(\bm{x})$ represents the envelope.
This is called the slowly varying envelope approximation
in the literature.
We note that, in Eq.~\eqref{Eq:Amp}, the atomic state is implicitly
assumed to be prepared with the parent four-momentum
$(E_{eg},\bm{0})$ in the scheme of counter-propagating irradiation
of two identical lasers, $\omega_1=\omega_2=E_{eg}/2$ and
$\bm{p}_{eg}=\bm{k}_1+\bm{k}_2=0$.
It is apparent that $\bm{p}_{eg}$ in the ISP factor in
Eq.~\eqref{Eq:Coherence} fills the role of the initial momentum and
the four-momentum of the prepared initial state is expressed as
\begin{equation}
P^\mu=(E_{eg},\bm{p}_{eg})\,,
\end{equation}
in the rest frame of the target atomic system, and the $\delta$ function
in Eq.~\eqref{Eq:Amp} is replaced by
$\delta^3(\bm{p}_{eg}-\bm{p}_\gamma-\bm{p}-\bm{p'})$.
The RENP process with nonvanishing $\bm{p}_{eg}$ is referred to as boosted RENP.
We note that $\sqrt{P^2}\leq E_{eg}$, i.e. the invariant mass of
the virtual parent particle in the boosted RENP is smaller than
that in the case of vanishing boost ($\bm{p}_{eg}=0$).
It is expected that the boosted RENP exhibits a higher kinematic
sensitivity to properties of neutrinos related to their masses.
The energy-momentum conservation in the boosted RENP with a trigger
laser $\gamma$ is expressed as
\begin{equation}
q^\mu=p^\mu+p'^\mu\,,
\end{equation}
where the four-momentum of the neutrino pair $q^\mu$ is given by
\begin{equation}\label{Eq:qmu}
q^\mu=P^\mu-p_\gamma^\mu=(E_{eg}-E_\gamma,\bm{p}_{eg}-\bm{p}_\gamma)\,.
\end{equation}
In order for the RENP process to take place,
the invariant mass of the neutrino pair must be larger than the
sum of the masses of the emitted neutrinos:
\begin{align}
s&:=q^2=E_{eg}^2-\bm{p}_{eg}^2
-2E_\gamma(E_{eg}-|\bm{p}_{eg}|\cos\theta_\gamma)\,,\nonumber\\
&>(m_j+m_i)^2\,,
\end{align}
where $\theta_\gamma$ is the angle between $\bm{p}_{eg}$ and $\bm{p}_\gamma$.
The magnitude of the initial momentum $\bm{p}_{eg}$ is given by
$|\bm{p}_{eg}|=\omega_1-\omega_2$ in the coherence preparation
scheme of the counter-propagating two-photon absorption. Here
the convention of $\omega_1\geq\omega_2$ is employed.
We take $|\cos\theta_\gamma|=1$ assuming that the trigger photon is
(anti)parallel to $\bm{p}_{eg}$.
Then the trigger photon energy is expressed as
\begin{equation}
E_\gamma=\frac{1}{2}\left[E_{eg}\pm|\bm{p}_{eg}|
-\frac{s}{E_{eg}\mp|\bm{p}_{eg}|}\right]
=\omega_{1(2)}-\frac{s}{4\omega_{2(1)}}\,,
\end{equation}
and thus
\begin{equation}
\label{Eq:TrigE}
0<E_\gamma<\omega_{1(2)}-\frac{(m_j+m_i)^2}{4\omega_{2(1)}}\,.
\end{equation}
We note that the case of $\omega_1=\omega_2=E_{eg}/2$ is of
no boost and Eq.~\eqref{Eq:TrigE} reproduces the RENP threshold
$\omega_{ji}=E_{eg}/2-(m_j+m_i)^2/(2E_{eg})$ in
Refs.~\cite{DinhPetcovSasaoTanakaYoshimura2012a,Fukumi2012a}.
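As a numerical illustration (using the Yb level splitting
$E_{eg}=2.14348$ eV adopted in Sec.~\ref{Sec:DM}), a boost of magnitude
$b:=|\bm{p}_{eg}|/E_{eg}=0.95$ requires
\[
\omega_1=\frac{1+b}{2}E_{eg}\simeq 2.08989\ \text{eV}\,,\qquad
\omega_2=\frac{1-b}{2}E_{eg}\simeq 0.05359\ \text{eV}\,,
\]
since $\omega_1+\omega_2=E_{eg}$ and
$\omega_1-\omega_2=|\bm{p}_{eg}|=b\,E_{eg}$; the invariant mass of the
virtual parent is then reduced to
$\sqrt{P^2}=\sqrt{1-b^2}\,E_{eg}\simeq 0.67$ eV.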
\section{\label{Sec:Rate} Rate formula of the boosted RENP}
We present a rate formula of the boosted RENP introduced in the previous
section.
The differential rate is written as
\begin{equation}\label{Eq:DiffRate}
d\Gamma_{ji}=
n^2V \frac{(\bm{d}_{pg}\cdot\langle\rho_{eg}\bm{E}\rangle)^2}
{(E_{pg}-E_\gamma)^2}
\sum_{\nu\text{ hel.'s}}|\mathcal{M}_W|^2 d\Phi_2\,,
\end{equation}
where $\mathcal{M}_W$ is the weak matrix element, the two-body phase
space is given by
\begin{equation}
d\Phi_2=(2\pi)^4\delta^4(q-p-p')\frac{d^3p}{2p^0}\frac{d^3p'}{2p'^0}\,,
\end{equation}
and $\langle\rho_{eg}\bm{E}\rangle$ represents the average of
$\rho_{eg}(\bm{x})\bm{E}(\bm{x})$ over the target with
$\bm{E}(\bm{x})$ being the electric field in the target stimulated
by the trigger laser.
The single intermediate state $|p\rangle$ is assumed to dominate
with the dipole matrix element $\bm{d}_{pg}$,
and $E_{pg}:=E_p-E_g$ is introduced in the energy denominator.
The four-momentum of the neutrino pair $p+p'$ is subject to the
four-momentum conservation dictated by the macrocoherence as shown
by the delta function, $\delta^4(q-p-p')$. The four-momentum $q$
is given by Eq.~\eqref{Eq:qmu}.
Integrating over the neutrino phase space and summing over the mass
eigenstates, we obtain the following spectral rate,
\begin{align}
\Gamma(E_\gamma)=&\sum_{j,i}\int d\Gamma_{ji}\nonumber\\
=&\Gamma_0\sum_{j,i}\frac{\beta s}{6(E_{pg}-E_\gamma)^2}
\frac{E_\gamma}{E_{eg}}
\left[|c^A_{ji}|^2\left\{2-\frac{m_j^2+m_i^2}{s}
-\frac{(m_j^2-m_i^2)^2}{s^2}\right.\right.
\nonumber\\
&\left. +\frac{2}{3}\frac{\bm{q}^2}{s}
\left(1+\frac{m_j^2+m_i^2}{s}
-2\frac{(m_j^2-m_i^2)^2}{s^2}\right)
\right\}
\left.-6\delta_M\mathrm{Re}(c^{A2}_{ji})\frac{m_jm_i}{s}\right]\,,
\label{Eq:RENPrate}
\end{align}
where
\begin{equation}
\beta^2=1-2\frac{m_j^2+m_i^2}{s}+\frac{(m_j^2-m_i^2)^2}{s^2}\,,
\end{equation}
$c^A_{ji}:=U_{ej}^*U_{ei}-\delta_{ji}/2$ represents the neutrino mixing
factor, and $\delta_M=0(1)$ for Dirac (Majorana) neutrinos.
The overall rate $\Gamma_0$ for the target of number density $n$,
volume $V$ and dynamical activity $\eta$ is given by
\begin{equation}
\Gamma_0:=\frac{2G_F^2}{\pi}\langle\bm{s}\rangle^2 n^2V
|\bm{d}_{pg}\cdot\langle\rho_{eg}\bm{E}\rangle|^2
\frac{E_{eg}}{E_\gamma}
=(2J_p+1)C_{ep}G_F^2\frac{\gamma_{pg}E_{eg}}{E_{pg}^3}n^3V\eta\,,
\end{equation}
where $\bm{s}$ is the electron spin operator,
$J_p$ is the angular momentum of the intermediate state $|p\rangle$,
$C_{ep}$ denotes the spin matrix element,
and $\gamma_{pg}$ is the rate of the $|p\rangle\to|g\rangle$ E1 transition.
The dynamical activity factor $\eta$ of the target is
defined by\footnote{The energy density of the trigger field
is $|\bm{E}|^2/2$ and its value is $E_\gamma n$ when each atom in
the target emits a photon of $E_\gamma$,
while the maximal value of $|\rho_{eg}|$ is 1/2.
Hence $|\langle\rho_{eg}\bm{E}\rangle|^2\leq E_\gamma n/2$ and
$\eta\leq 1$ follows from this definition. The definition of $\eta$
in the present work is different from that in
Refs.~\cite{DinhPetcovSasaoTanakaYoshimura2012a,Fukumi2012a}.}
\begin{equation}
|\langle\rho_{eg}\bm{E}\rangle|^2=:\frac{1}{2}\eta E_\gamma n\,.
\end{equation}
For both Yb~\cite{DinhPetcovSasaoTanakaYoshimura2012a} and
Xe~\cite{Fukumi2012a}, $J_p=1$ and $C_{ep}=2/3$. Thus, we obtain
\begin{equation}
\Gamma_0=2G_F^2\frac{\gamma_{pg}E_{eg}}{E_{pg}^3} n^3V\eta\,.
\end{equation}
It is notable that we may discriminate Dirac and Majorana neutrinos
with the RENP spectrum in Eq.~\eqref{Eq:RENPrate} owing to
the Majorana interference shown by the last term in the square brackets.
Furthermore, if the neutrinos are Majorana fermions, there appear two extra
CP violating phases in the lepton sector. We have virtually
no experimental information on these Majorana phases at present.
One of the advantages of the boosted RENP is its good sensitivity to
Majorana phases as we show below.
The lepton mixing matrix appearing in $c^A_{ji}$ is represented
as a product of two unitary matrices~\cite{PDG2016}
\begin{equation}
U=VP\,,
\end{equation}
where the PMNS matrix $V$ is written in terms of three mixing angles
and the CP violating Dirac phase,
\begin{equation}
V=\left[\begin{array}{ccc}
c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta}\\
-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta} &
c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} &
s_{23}c_{13}\\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} &
-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta} &
c_{23}c_{13}
\end{array}
\right]\,,
\end{equation}
with $c_{ij}=\cos\theta_{ij}$ and $s_{ij}=\sin\theta_{ij}$.
The diagonal unitary matrix $P$ may be expressed as
\begin{equation}
P=\mathrm{diag.}(1,e^{i\alpha},e^{i\beta})\,,
\end{equation}
for Majorana neutrinos, and we can rotate away the phases $\alpha$
and $\beta$ for Dirac neutrinos resulting in the single CP violating phase.
In our numerical calculation, we employ
the best-fit results of NuFIT~\cite{Esteban2017a} for the neutrino mass and
mixing parameters.
The Majorana phases affect the RENP rate in Eq.~\eqref{Eq:RENPrate}
through the offdiagonal components of $\text{Re}(c^{A2}_{ji})$:
$\text{Re}(c^{A2}_{12})=c_{12}^2s_{12}^2c_{13}^4\cos 2\alpha$,
$\text{Re}(c^{A2}_{13})=c_{12}^2c_{13}^2s_{13}^2\cos 2(\beta-\delta)$,
$\text{Re}(c^{A2}_{23})=s_{12}^2c_{13}^2s_{13}^2\cos 2(\beta-\delta-\alpha)$.
We observe that the dependence of the RENP rate on $\beta-\delta$
is relatively weak because of the rather suppressed mixing angle
$s_{13}^2\sim 0.022$. The RENP experiment is complementary to
oscillation experiments that can probe
$\delta$~\cite{AbeT2K2017a,AdamsonNOvA2017a}.
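
To make the use of Eq.~\eqref{Eq:RENPrate} concrete, the spectral shape
$\Gamma(E_\gamma)/\Gamma_0$ can be evaluated numerically in a few lines.
The sketch below is meant only as an illustration of the formula: the mixing
angles, mass-squared differences and phases are representative numbers
standing in for the NuFIT best-fit values, the lightest mass ($m_0=1$ meV)
and the boost ($b=0.95$) match the illustration of the next section, and the
trigger photon is taken parallel to $\bm{p}_{eg}$.
\begin{verbatim}
import numpy as np

# representative neutrino parameters (placeholders for the NuFIT best fits)
s12sq, s13sq   = 0.31, 0.022        # sin^2(theta_12), sin^2(theta_13)
dm21sq, dm31sq = 7.4e-5, 2.5e-3     # mass-squared differences in eV^2 (NO)
m0             = 1e-3               # lightest neutrino mass in eV
m = np.array([m0, np.sqrt(m0**2 + dm21sq), np.sqrt(m0**2 + dm31sq)])
alpha, beta_M, delta = 0.0, 0.0, 0.0   # Majorana and Dirac phases
deltaM = 1.0                           # 1 for Majorana, 0 for Dirac neutrinos

# only the first row of U = V P enters c^A_ji
c12, s12 = np.sqrt(1 - s12sq), np.sqrt(s12sq)
c13, s13 = np.sqrt(1 - s13sq), np.sqrt(s13sq)
Ue = np.array([c12*c13,
               s12*c13*np.exp(1j*alpha),
               s13*np.exp(1j*(beta_M - delta))])
cA = np.outer(np.conj(Ue), Ue) - 0.5*np.eye(3)

# atomic parameters (Yb, in eV) and boost magnitude
Eeg, Epg, b = 2.14348, 2.23072, 0.95

def rate(Eg):
    """Gamma(E_gamma)/Gamma_0; trigger photon parallel to p_eg."""
    s  = (1 - b**2)*Eeg**2 - 2*Eg*Eeg*(1 - b)   # invariant mass^2 of the pair
    q2 = (b*Eeg - Eg)**2                        # spatial momentum^2 of the pair
    total = 0.0
    for j in range(3):
        for i in range(3):
            if s <= (m[j] + m[i])**2:
                continue                        # below the pair threshold
            msum, mdif = m[j]**2 + m[i]**2, m[j]**2 - m[i]**2
            beta_kin = np.sqrt(1 - 2*msum/s + (mdif/s)**2)
            bracket = (abs(cA[j, i])**2
                       * (2 - msum/s - (mdif/s)**2
                          + (2/3)*(q2/s)*(1 + msum/s - 2*(mdif/s)**2))
                       - 6*deltaM*np.real(cA[j, i]**2)*m[j]*m[i]/s)
            total += beta_kin*s/(6*(Epg - Eg)**2) * (Eg/Eeg) * bracket
    return total
\end{verbatim}
Scanning such a function over $E_\gamma$, once with $\delta_M=1$ and once
with $\delta_M=0$, yields spectra of the type discussed in the next section.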
\section{\label{Sec:DM}
Dirac-Majorana distinction and effect of Majorana phases}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/pDMa1NO.pdf}\
\includegraphics[width=0.45\textwidth]{figs/pDMa50NO.pdf}
\caption{Dirac-Majorana difference in the spectral shape,
$\Gamma(E_\gamma)/\Gamma_0$, with ($b=0.95$) and without
($b=0$) boost.
Yb, NO, $0<\alpha<\pi/2$ and $\beta=0$.
The smallest neutrino mass is chosen as $m_0=$
1 meV (left) and 50 meV (right).}
\label{Fig:DMNO}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/pDMa1IO.pdf}\
\includegraphics[width=0.45\textwidth]{figs/pDMa50IO.pdf}
\caption{Dirac-Majorana difference in the IO case. The other parameters
are the same as Fig.~\ref{Fig:DMNO}.}
\label{Fig:DMIO}
\end{figure}
We compare the boosted RENP spectra for Dirac and Majorana neutrinos
and examine the effect of Majorana phases.
Figure \ref{Fig:DMNO} shows the spectral shape $\Gamma(E_\gamma)/\Gamma_0$
in the case of Yb (%
$|g\rangle=\text{6s}^2\,{^1\text{S}}_0$,
$|e\rangle=\text{6s6p}\,{^3\text{P}}_0$,
$|p\rangle=\text{6s6p}\,{^3\text{P}}_1$,
$E_{eg}=2.14348$ eV and $E_{pg}=2.23072$ eV
\cite{DinhPetcovSasaoTanakaYoshimura2012a})
for the normal ordering (NO) of neutrino masses
with the smallest neutrino mass $m_0$ being 1 meV (left) and 50 meV (right).
The trigger is taken parallel to the ISP momentum $\bm{p}_{eg}$ and
the boost magnitude is $b:=|\bm{p}_{eg}|/E_{eg}=0.95$.
This is realized by choosing $\omega_1=2.08989$ eV and $\omega_2=0.05359$ eV.
The black solid lines represent the spectra of the Dirac case with this boost
and the spectra without boost ($b=0$) are also shown by the black
dashed lines for comparison. The endpoint for $b=0$ is $\sim E_{eg}/2$ and
that for $b=0.95$ is close to $E_{eg}$ as given in Eq.~\eqref{Eq:TrigE}.
As for the case of Majorana neutrinos,
we vary $\alpha$ while $\beta$ is fixed to zero. The red dash-dotted lines
represent the spectra of $\alpha=0$ and $\pi/2$, and the shaded regions
correspond to $\alpha$ between these two values. We also show the cases of
no boost as the red dotted lines for comparison. We note that
the boundaries of $\alpha\in [0,\pi/2]$ are indistinguishable in the cases
without boost and even with the boost for $m_0=1$ meV.
The case of inverted ordering (IO) is presented in Fig.~\ref{Fig:DMIO}.
We observe that enhancement of the Dirac-Majorana difference is possible
in the boosted RENP. In particular, near the endpoint
($E_\gamma\sim E_{eg}$), the difference becomes larger than 10 \%
although the rate itself is suppressed.
The effect of Majorana phases is also significantly enhanced by boosting.
A sizable effect in the rate, say 10 \% or more, is expected if
$m_{1,2}\sim 50\ \text{meV}$, which is always the case in the inverted
ordering.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/lprmuDMNO.pdf}
\hspace{0.05\textwidth}
\includegraphics[width=0.45\textwidth]{figs/lprmuDMIO.pdf}
\caption{Maximal figure of merit as a function of the boost magnitude,
$b=|\bm{p}_{eg}|/E_{eg}$.
Yb, NO (left) and IO (right), $\alpha=\beta=0$, and
$m_0$=1 meV (red dashed) and 50 meV (black solid).}
\label{Fig:FOMDM}
\end{figure}
In order to quantify the power of the boost by the ISP in discriminating
Dirac and Majorana cases, we introduce the following figure of
merit (FoM) function,
\begin{equation}
\mu(E_\gamma):=\frac{2A^2(E_\gamma)}{1+|A(E_\gamma)|}
\left[\Gamma_M(E_\gamma)+\Gamma_D(E_\gamma)\right]\,,
\end{equation}
where $\Gamma_{M}(E_\gamma)$ and $\Gamma_{D}(E_\gamma)$ denote
the Majorana and Dirac RENP rates respectively,
and the asymmetry $A(E_\gamma)$ is defined by
\begin{equation}
A(E_\gamma):=\frac{\Gamma_M(E_\gamma)-\Gamma_D(E_\gamma)}
{\Gamma_M(E_\gamma)+\Gamma_D(E_\gamma)}\,.
\end{equation}
To obtain the best sensitivity, we presume that, in an experiment,
the trigger energy is chosen to maximize $\mu(E_\gamma)$
for a given magnitude of the boost, $b=|\bm{p}_{eg}|/E_{eg}$.
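
Schematically, this amounts to evaluating the two spectral rates on a grid of
trigger energies and locating the maximum of $\mu$; a minimal sketch, with
\texttt{rate\_majorana} and \texttt{rate\_dirac} standing for implementations
of Eq.~\eqref{Eq:RENPrate} with $\delta_M=1$ and $\delta_M=0$ respectively,
reads:
\begin{verbatim}
import numpy as np

def fom(gamma_M, gamma_D):
    """Figure of merit mu(E_gamma) from the Majorana and Dirac spectral rates."""
    A = (gamma_M - gamma_D) / (gamma_M + gamma_D)
    return 2.0 * A**2 / (1.0 + np.abs(A)) * (gamma_M + gamma_D)

# schematic usage, away from the endpoint where both rates vanish:
#   Eg      = np.linspace(0.0, Eg_max, 2000)        # grid of trigger energies
#   mu      = fom(rate_majorana(Eg), rate_dirac(Eg))
#   Eg_best = Eg[np.argmax(mu)]                     # trigger maximizing the FoM
\end{verbatim}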
In Fig.~\ref{Fig:FOMDM}, we present the maximal value of $\mu(E_\gamma)$
as a function of the boost magnitude $b$ taking $\alpha=\beta=0$ for
an illustration. The ordinate is normalized so that the maximal
figure of merit is unity for the case of no boost.
The left and right panels show the NO and IO cases respectively.
We observe that the FoM is enhanced by almost a factor of 1000 by choosing
the best boost factor.
This means that we effectively gain statistics by a factor of
$\sim\sqrt{1000}$ using the boost.
\section{\label{Sec:CNB} Spectral distortion by cosmic neutrino background}
The standard cosmology predicts that the universe is filled with
background neutrinos, the cosmic neutrino background (CNB).
The neutrinos in a mass eigenstate follow the distribution,
\begin{equation}
f(\bm{p})=\frac{1}{1+e^{|\bm{p}|/T-\xi}}\,,
\end{equation}
where $\bm{p}$ is the neutrino momentum, $T\simeq 1.9\ \text{K}$
represents the neutrino temperature, and $\xi$ denotes the neutrino
degeneracy (assumed common to the three neutrino mass eigenstates),
whose absolute value is constrained to be $O(0.1)$ or less by primordial
nucleosynthesis
\cite{WagonerFowlerHoyle1966a,Rana1982a,KangSteigman1991a,SchwarzStuke2012a}.
The distribution of antineutrinos, $\bar f(\bm{p})$,
is obtained by changing the sign of $\xi$. We take $\xi=0$ in the following
numerical calculation.
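
For orientation, the size of the occupation numbers involved is easily
estimated (using $1\ \text{K}\simeq 8.617\times 10^{-5}$ eV; the momenta
below are arbitrary illustrative values):
\begin{verbatim}
import numpy as np

T  = 1.9 * 8.617e-5            # neutrino temperature in eV
xi = 0.0                       # neutrino degeneracy parameter

def f(p):
    """Fermi-Dirac occupation of the cosmic neutrino background."""
    return 1.0 / (1.0 + np.exp(p / T - xi))

for p in (1e-5, 1e-4, 1e-3):   # neutrino momenta in eV (illustrative)
    print(p, f(p))
\end{verbatim}
The blocking factors $1-f$ thus deviate appreciably from unity only for
neutrino momenta not much larger than the temperature, i.e. of order
$10^{-4}$ eV or below.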
As pointed out in Ref.~\cite{YoshimuraSasaoTanaka2014a},
the RENP spectrum is distorted by the CNB owing to the Pauli principle.
The differential rate in Eq.~\eqref{Eq:DiffRate} is modified
by the Pauli-blocking factors as
\begin{equation}
d\Gamma_{ji}=
n^2V \frac{(\bm{d}_{pg}\cdot\langle\rho_{eg}\bm{E}\rangle)^2}
{(E_{pg}-E_\gamma)^2}
\sum_{\nu\text{ hel.'s}}|\mathcal{M}_W|^2
\{1-f(\bm{p})\}\{1-\bar f(\bm{p}')\}d\Phi_2\,.
\end{equation}
The spectral rate is obtained by integrating over the neutrino phase
space and summing over the neutrino mass eigenstates,
\begin{align}
&\Gamma(E_\gamma;T,\xi)=\sum_{j,i}\int d\Gamma_{ji}\nonumber\\
&=\Gamma_0\frac{8\pi}{(E_{pg}-E_\gamma)^2}\frac{E_\gamma}{E_{eg}}
\sum_{j,i}\int d\Phi_2\{1-f(\bm{p})\}\{1-\bar f(\bm{p}')\}\times\nonumber\\
&\phantom{\Gamma_0}\left[|c^A_{ji}|^2\left\{\frac{2}{3}\bm{p}\cdot\bm{p}'+
\frac{1}{2}(s-m_j^2-m_i^2)\right\}
-\delta_M\mathrm{Re}(c^{A2}_{ji})m_j m_i\right]\,.
\end{align}
\begin{comment}
The phase space integration may be done in either the CM or lab frame.
In the CM frame of the neutrino pair, the phase space is given by
\begin{equation}
d\Phi_2=\frac{\beta}{16\pi}d\cos\theta\,,
\end{equation}
where $\theta$ denotes the polar angle of the neutrino with
the $z$ axis taken to be $-\bm{q}$. The lab frame momenta
$\bm{p}$ and $\bm{p}'$ are described in terms of the CM frame parameter
as
\begin{equation}
|\bm{p}|^2=\frac{\beta^2}{4}(s+|\bm{q}|^2\cos^2\theta)
+\frac{|\bm{q}|^2}{4}\left(1+\frac{m_j^2-m_i^2}{s}\right)^2
-\frac{\beta}{2}\left(1+\frac{m_j^2-m_i^2}{s}\right)
q^0|\bm{q}|\cos\theta\,,
\end{equation}
\begin{equation}
|\bm{p}'|^2=\frac{\beta^2}{4}(s+|\bm{q}|^2\cos^2\theta)
+\frac{|\bm{q}|^2}{4}\left(1-\frac{m_j^2-m_i^2}{s}\right)^2
-\frac{\beta}{2}\left(1-\frac{m_j^2-m_i^2}{s}\right)
q^0|\bm{q}|\cos\theta\,,
\end{equation}
and
\begin{equation}
\bm{p}\cdot\bm{p}'=-\frac{\beta^2}{4}(s+|\bm{q}|^2\cos^2\theta)
+\frac{|\bm{q}|^2}{4}
\left[1-\left(\frac{m_j^2-m_i^2}{s}\right)^2\right]
+\frac{\beta}{2}\frac{m_j^2-m_i^2}{s}
q^0|\bm{q}|\cos\theta\,.
\end{equation}
In the lab frame, we obtain
\begin{align}
\Gamma_{ji}
=&\Gamma_0\frac{1}{(E_{pg}-E_\gamma)^2}\frac{E_\gamma}{E_{eg}}
\frac{1}{|\bm{q}|}\int_{E_-}^{E_+} dp^0
\{1-f(\bm{p})\}\{1-\bar f(\bm{p}')\}\times
\nonumber\\
&\left[|c^A_{ji}|^2\frac{1}{3}
\left\{2p^0 p'^0+\frac{1}{2}(s-m_j^2-m_i^2)\right\}
-\delta_M\mathrm{Re}(c^{A2}_{ji})m_j m_i\right]\,,
\end{align}
where
\begin{equation}
E_\pm=\frac{q^0}{2}\left(1+\frac{m_j^2-m_i^2}{s}\right)
\pm\frac{\beta}{2}|\bm{q}|\,.
\end{equation}
\end{comment}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/p2CNB10meV.pdf}
\hspace{0.05\textwidth}
\includegraphics[width=0.45\textwidth]{figs/p2CNBYb.pdf}
\caption{RENP spectral distortion by the CNB.
The ordinate is the ratio of RENP rates with and without
the Pauli blocking by the CNB.
The neutrino parameters are
$m_0=0.1$ meV, NO, Majorana and $\alpha=\beta=0$.
Left: $E_{eg}=10$ meV, $E_{pg}=1$ eV, no boost (black solid),
50\% boost (blue dashed) and 90\% boost (red dotted).
Right: Yb ($E_{eg}=2.14348$ eV, $E_{pg}=2.23072$ eV),
99\% boost (black solid),
99.9\% boost (blue dashed) and 99.99\% boost (red dotted).
The black lines are offset by 0.01 for better separations.}
\label{Fig:CNB}
\end{figure}
We illustrate possible spectral distortions in Fig.~\ref{Fig:CNB}
for the cases of a hypothetical atom with a very small level
splitting, $E_{eg}=10$ meV (left) and Yb (right).
The ordinate is the ratio of RENP rates with and without the Pauli blocking
by the CNB.
The mass of the lightest neutrino is chosen to be $0.1$ meV.
Although the spectral distortion is sizable in the case of no boost
for the tiny level splitting, an appropriate boost substantially
enhances the distortion for both cases in Fig.~\ref{Fig:CNB}.
Effects of 10\% or more are expected near the endpoints.
\section{\label{Sec:Conclusion} Conclusion}
We have explored the effect of the initial spatial phase (ISP)
in the radiative emission of neutrino pairs (RENP).
The ISP is provided by the two-photon absorption with two lasers of
different frequencies in the preparation process of the coherent initial
state of a macroscopic target. The ISP factor is interpreted to give
a momentum $\bm{p}_{eg}$ to the initial state of RENP,
so that the RENP process with the ISP is called boosted RENP.
Owing to the momentum conservation dictated by the macrocoherent rate
enhancement mechanism, $\bm{p}_{eg}$ changes the kinematics of the RENP
process as if the invariant mass of the parent particle decreases.
This effective reduction of the energy scale makes the RENP process
kinematically more sensitive to the emitted neutrino masses.
We have evaluated the effect of the ISP in the RENP spectra.
It is shown that the difference between the Dirac and Majorana neutrinos
is significantly enhanced in the boosted RENP as presented in
Figs.~\ref{Fig:DMNO} and \ref{Fig:DMIO}.
The figure of merit function in Fig.~\ref{Fig:FOMDM} shows that the best
choice of the boost factor provides us with a statistical merit of $O(10)$.
In addition, the possible spectral distortion by the cosmic neutrino
background is investigated. As shown in Fig.~\ref{Fig:CNB},
the spectral distortion becomes more substantial in the boosted RENP.
For improved capability of the Dirac-Majorana distinction and
the CNB detection, it is vital
to incorporate the ISP effect (or the boost) in the design
of the RENP experiment.
The SPAN collaboration has already observed the signal of the paired
superradiance (PSR) from a parahydrogen target in the two-photon
absorption scheme~\cite{SPAN201X}.
They use two identical counter-propagating lasers at present,
so that no ISP is generated.
After establishing the preparation of the initial coherent state
by the two-photon absorption, the PSR with an ISP (boosted PSR)
becomes possible as a prototype of the boosted RENP.
\section*{Acknowledgments}
This work is supported in part by JSPS KAKENHI Grant Numbers
JP 15H02093, 15H03660, 15K13468, 16H00868, 16H03993, 17H02895 and 17H05405.
|
1,314,259,995,628 | arxiv | \subsection{The ``Teaser Figure''}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[manuscript,screen,review]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,steve@university.edu}
\email{firstname.lastname@phillips.org}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
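The exact commands and their values are supplied by the rights form for each work; purely as an illustration (the values below are placeholders and should not be copied), a typical set of commands looks like the following:
\begin{verbatim}
\setcopyright{acmlicensed}
\copyrightyear{2024}
\acmYear{2024}
\acmDOI{10.1145/XXXXXXX.XXXXXXX}
\acmConference[WOODSTOCK '24]{Woodstock Conference}{June 2024}{Woodstock, NY}
\end{verbatim}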
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
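As a rough illustration (the concept ID and significance value below are placeholders; generate your own commands from the CCS tool), the generated CCS commands and the user-defined keywords command look like this:
\begin{verbatim}
\begin{CCSXML}
<ccs2012>
 <concept>
  <concept_id>XXXXXXX.XXXXXXX</concept_id>
  <concept_desc>Computing methodologies~Machine learning</concept_desc>
  <concept_significance>500</concept_significance>
 </concept>
</ccs2012>
\end{CCSXML}
\ccsdesc[500]{Computing methodologies~Machine learning}
\keywords{neural networks, datasets, reinforcement learning}
\end{verbatim}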
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what’s in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption – their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{your-bib-file}
\end{verbatim}
\section{Intrinsic motivation and other forms of learning}\label{append:distinction}
Table \ref{tab:rlim} shows the difference between reinforcement learning and the use of IM. Reinforcement learning is an active process, since the agent learns from its interactions with the environment, unlike classification or regression, which are supervised methods. Unsupervised learning is a passive learning process, \textit{i.e.} it does not use predefined labels; in other words, it learns without feedback. Finally, substituting the feedback with an intrinsic reward makes it possible to break free from expert supervision; the difference between IM and unsupervised learning, however, is that IM is an active process which involves interactions.
\begin{table}[t]
\centering
\caption{Type of learning. \textit{feedback} here refers to an expert supervision.}\label{tab:rlim}
\begin{tabular}{|l|l|l|}
\hline
& With \textit{feedback} & Without \textit{feedback} \\
\hline
Active & Reinforcement & Intrinsic motivation \\
Passive & Supervised & Unsupervised \\
\hline
\end{tabular}
\end{table}
\section{Challenges of RL tackled with IM}\label{sect:defis}
In this section, we identify four challenges in DRL for which IM provides a suitable solution. We illustrate these challenges and explain their importance.
\subsection{Sparse rewards}
Classic RL algorithms operate in environments where the rewards are \textbf{dense}, \textit{i.e.} the agent receives a reward after almost every completed action. In this kind of environment, naive exploration policies such as $\epsilon$-greedy \cite{sutton1998reinforcement} or the addition of Gaussian noise on the action \cite{lillicrap2015continuous} are effective. More elaborate methods can also be used to promote exploration, such as Boltzmann exploration \cite{cesa2017boltzmann,mnih2015human}, exploration in the parameter space \cite{plappert2017parameter,ruckstiess2010exploring,fortunato2017noisy} or Bayesian RL \cite{ghavamzadeh2015bayesian}. In environments with \textbf{sparse} rewards, the agent receives a reward signal only after it has executed a long sequence of specific actions. The game \textit{Montezuma's revenge} \cite{bellemare15} is a benchmark illustrating a typical sparse reward function. In this game, an agent has to move between different rooms while picking up objects (keys to open doors, torches, ...). The agent receives a reward only when it finds objects or when it reaches the exit of the room. Such environments with sparse rewards are almost impossible to solve with the above-mentioned exploration policies, since the agent gets no local indication of how to improve its policy. Thus the agent never finds rewards and cannot learn a good policy with respect to the task \cite{mnih2015human}. Figure \ref{im:sparse_reward} illustrates the issue in a simple environment.
\begin{figure}
\begin{centering}
\includegraphics[width=10cm]{images/sparse_reward.png}
\caption{Illustration of the sparse reward issue in a very simple setting. The agent, represented by a circle, strives to reach the star. The reward function is one when the agent reaches the star and zero otherwise. On the left side, the agent explores with standard methods such as $\epsilon$-greedy; as a result, it stays in its surrounding area because of the temporal inconsistency of its behaviour. On the right side, we can imagine an ideal exploration strategy where the agent covers the whole state space to discover where rewards are located.}
\label{im:sparse_reward}
\end{centering}
\end{figure}
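To make the contrast concrete, the following minimal Python sketch (function names are ours) shows the two naive exploration strategies mentioned above, $\epsilon$-greedy selection and Gaussian action noise; both perturb single actions independently and therefore lack the temporally consistent behaviour needed to reach a distant, sparsely rewarded state:
\begin{verbatim}
import numpy as np

def epsilon_greedy(q_values, epsilon=0.1):
    # q_values: 1-D array of estimated action values for the current state
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))  # explore: uniform random action
    return int(np.argmax(q_values))              # exploit: greedy action

def gaussian_noise_action(policy_action, sigma=0.1):
    # policy_action: continuous action proposed by a deterministic policy
    noise = np.random.normal(0.0, sigma, size=np.shape(policy_action))
    return policy_action + noise
\end{verbatim}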
Rather than working on an exploration policy, it is common to shape an intermediate dense reward function, added to the reward associated with the task, in order to make the learning process easier for the agent \cite{su2015reward}. However, building a reward function often reveals several unexpected errors \cite{ng1999policy,amodei2016concrete} and most of the time requires expert knowledge. For example, it may be difficult to shape a local reward for navigation tasks: one has to be able to compute the shortest path between the agent and its goal, which amounts to solving the navigation problem. On the other hand, automating the shaping of the local reward (without calling on an expert) requires prohibitively high computational resources \cite{chiang2019learning}.
We will see in \S\ref{curiosity} how IM is a valuable method to encourage exploration in a sparse reward setting. In \S\ref{sec:curriculum}, we also provide details on the value of IM in the context of \textit{curriculum learning} for exploration.
\subsection{Building a good state representation}\label{sec:staterepresentation}
What is a good state representation? \citename{bohmer2015autonomous} argue that, in standard RL, this representation must be Markovian, able to represent the true value of the policy, generalize well, and be low-dimensional. Using such an adapted feature space to learn a task can considerably accelerate the learning process \cite{raffin2019decoupling,de2018integrating} and may even help with other computations such as learning a forward model. The best way to do this may be to construct a minimal feature space with \textbf{disentangled features} \cite{bengio2013representation,lesort2018state}.
In order to better understand the importance of a relevant state representation in RL, let us consider a simple navigation task where the agent has to reach a target area in an empty space. If the agent accesses pixel inputs from above, it will have to extract its own position and the target position through complex non-linear transformations to understand which direction it has to take. By contrast, if it already has access to its position, it will only have to check whether its vertical and horizontal positions are greater than, equal to, or smaller than those of the target. In standard RL, this problem is exacerbated, firstly because the only available learning process is the back-propagation of the reward signal, and secondly by the presence of noise in the raw state. As a result, if the reward is sparse, the agent will not learn anything from its interactions even though the interactions themselves are rich in information. Furthermore, a state representation learned with a reward fully depends on the task and cannot be generalized to other tasks, whereas a state representation learned independently of the task can be reused for other tasks.
Several works address the learning of a relevant state representation. Auxiliary losses can complement the reward with supervised learning losses, relying on information such as the immediate reward or other predefined functions \cite{shelhamer2016loss,jaderberg2016reinforcement}. The agent may also use some prior knowledge on transitions \cite{jonschkowski2015learning,jonschkowski2017pves} or learn inverse models \cite{zhang2018decoupling}. There is a large literature on the best way to quickly build this kind of state space; we invite the interested reader to look at \cite{lesort2018state} for a general review and recommend \cite{bengio2013representation} for an introduction to representation learning. However, it is still difficult to get an entirely disentangled representation of controllable objects, since it can require interactions with the environment.
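As a rough illustration of the inverse-model idea mentioned above (this is our own simplified sketch, not the architecture of any cited work), an encoder can be trained so that the action linking two consecutive observations is predictable from their encodings:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, n_actions, latent_dim = 16, 4, 8
encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                        nn.Linear(32, latent_dim))
inverse_model = nn.Linear(2 * latent_dim, n_actions)
params = list(encoder.parameters()) + list(inverse_model.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

# one gradient step on a dummy batch of transitions (s, a, s')
s = torch.randn(64, obs_dim)
s_next = torch.randn(64, obs_dim)
a = torch.randint(0, n_actions, (64,))

logits = inverse_model(torch.cat([encoder(s), encoder(s_next)], dim=1))
loss = F.cross_entropy(logits, a)  # predict the action from the two encodings
optimizer.zero_grad()
loss.backward()
optimizer.step()
\end{verbatim}
Note that the reward signal is never used here, so the learned features capture what the agent can affect through its actions rather than what the current task rewards.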
Although this issue has not attracted much attention, we will exhibit in Section \ref{sec:staterep} how IM can be a key component in building a state representation with such meaningful properties. We emphasize that we focus on works for which the intrinsic goal of the agent is to learn a \textit{state representation}. As a consequence, other ways to learn a \textit{state representation} are out of the scope of this section.
\subsection{Temporal abstraction of actions} \label{sec:abstraction}
Temporal abstraction of actions consists in using high-level actions, also called \textbf{options}, which can have different execution times \cite{sutton1999between}. Each option is associated with an \textbf{intra-option policy} which defines the action (low-level actions or other options) to perform in each state when the option is executed. The length of an option, which is the number of executed actions when an option is chosen, is often fixed. An \textbf{inter-option policy} can be in charge of choosing the options to accomplish. Abstract actions are a key element to accelerate the learning process, since the number of decisions to take is significantly reduced when options are used. It also eases the \textit{credit assignment problem} \cite{sutton1998reinforcement}.
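For concreteness, a minimal sketch (our own names, assuming a gym-style \texttt{env.step} interface) of how a fixed-length option is executed inside the usual interaction loop could read:
\begin{verbatim}
def execute_option(env, state, intra_option_policy, option_length):
    # Runs the intra-option policy for a fixed number of primitive steps
    # and returns the resulting state and the accumulated extrinsic reward.
    total_reward = 0.0
    for _ in range(option_length):
        action = intra_option_policy(state)
        state, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return state, total_reward, done
\end{verbatim}
The inter-option policy then only has to choose which option to launch from the returned state, which shortens the decision sequence accordingly.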
This problem refers to the fact that rewards can occur with a temporal delay and will only very weakly affect the temporally distant states that preceded them, although these states may be important to obtain the reward. Indeed, the agent must propagate the reward along the entire sequence of actions (through Equation \eqref{eq:bellman}) to reinforce the first involved state-action tuple. This process can be very slow when the action sequence is long. This problem also concerns determining which action is decisive for getting the reward.
For example, let us assume that a robot is trying to reach a cake on a table which is far from the robot. If the robot has an option \texttt{get to the table} and follows it, the robot will then only have to take the cake to be rewarded. It will then be easy to associate the acquisition of the cake (the reward) with the option \texttt{get to the table}. In contrast, if the robot has to learn to handle each of its joints (low-level or primitive actions), it will be difficult to determine which action, among all executed actions, is responsible for the acquisition of the cake.
Furthermore, using options can make exploration easier when rewards are sparse, as illustrated in Figure \ref{im:abstract_actions}. The problem of exploration becomes trivial for the agent using options, since one exploration action can lead to the reward, yet it requires an entire sequence of specific low-level actions for the other agent. This problem arises from the minimal number of actions needed to get a reward. A thorough analysis of this aspect can be found in \cite{nachum2019does}.
\begin{figure}
\begin{centering}
\includegraphics[width=7cm]{images/abstraction_action.png}
\caption{Illustration of the benefits of using \textit{options}. Agents, represented by circles, have to reach the star. The green agent can use an \textit{option} \texttt{Go to the far right}; the orange agent can only use primitive actions to reach the star.}
\label{im:abstract_actions}
\end{centering}
\end{figure}
Regarding the intra-option policy, it can be manually defined, but this requires extra expert knowledge \cite{sutton1999between}. It can also be learnt with the reward function \cite{bacon2017option,riemer2018learning}, but then options are not reusable for other tasks and are of no help for the exploration problem.
In Section \ref{gen_goal}, we investigate how IM can bring new insights into handling this issue.
\subsection{Building a curriculum}
Curriculum learning commonly takes place in the framework of multi-task reinforcement learning \cite{WilsonFRT07,LiLC09}, where one agent tries to solve several tasks. It is about defining a schedule in the learning process, and comes from the observation that learning is much easier when examples or tasks are organized in a meaningful order \cite{bengio2009curriculum}. Typically, a curriculum could organize tasks in such a way that they are increasingly complex and close to each other. For example, a helpful curriculum may be to first teach a robot how to grasp a cube and only then how to move the cube; this way, the robot can take advantage of its ability to grasp a cube in order to move it. Without any prior knowledge, a robot would probably never succeed in grasping and moving a cube, since this requires a long sequence of actions (if the robot moves its joints).
Standard methods rely on pre-specified task sequences as a curriculum \cite{karpathy2012curriculum}, or on an expert score which acts as a baseline \cite{SharmaR17}. Some other methods require strong assumptions \cite{FlorensaHWZA17}, rely on task decomposition \cite{WuZS18} or on the availability of source tasks \cite{SvetlikLSSWS17,riedmiller2018learning}. It follows that, most of the time, standard \textit{curriculum learning} methods require an expert in one way or another.
By contrast, we will show in Section \ref{sec:curriculum} that it is possible to replace expert knowledge with IM, both to speed up multi-task learning and to indirectly make exploration easier.
\section{\textit{Empowerment}} \label{empowerment}
As presented in Section \ref{sect:background_empowerment}, an agent that maximizes empowerment tries to have the most control on its environment. To maximize empowerment in RL, the agent is rewarded if it is heading towards areas where it controls its environment. The intrinsic reward function is then defined as:
\begin{align}
R_{int}(s,a,s') &= \Sigma(s') \nonumber \\
& \approx -\mathbb{E}_{\omega (a|s)} \log \omega (a|s) + \mathbb{E}_{p(s'|a,s)\omega (a|s)}\log p(a|s,s') \label{eq:entropy2}.
\end{align}
where $\omega (a|s)$ is the distribution choosing actions $a_t^n$. Ideally, $\omega (a|s)$ is the distribution maximizing Equation \eqref{eq:entropy2} in accordance with Equation \eqref{eq:meaning}.
The problem is that $p(a|s,s')$ is hard to obtain because it requires $p(s'|a,s)$ which is intractable. \\
\citename{mohamed2015variational} propose to compute the empowerment by approximating Equation \eqref{eq:entropy2}. To do this, they compute a lower bound of mutual information, used in many other works (Section \ref{miskill}):
\begin{equation}
I(a;s'|s) \geq H(a|s) + \mathbb{E}_{p(s'|a,s)\omega (a|s)}\log q_{\xi}(a|s,s'). \label{eq:vlb}
\end{equation}
The idea is to learn an approximator $q_{\xi}$ of the probability distribution $p(a|s,s')$ in a supervised way, with a maximum likelihood method, using the data the agent receives from its environment. This approach makes it possible to generalize the computation of empowerment in order to process continuous observations. In this work, experiments show that the maximization of \textit{empowerment} is particularly useful in dynamic environments, i.e. environments where the agent's state can change even if the executed action is stationary (e.g. the agent does not move). The classic example provided in \citename{mohamed2015variational} is the prey-predator environment: the prey is the learner and tries to avoid being caught, as its death would cause a loss of control over the next states. Implicitly, the prey avoids dying by maximizing its \textit{empowerment}. In contrast to a dynamic environment, a static environment has a static optimal policy (the agent stops moving when it finds the best state), making \textit{empowerment} as an intrinsic reward less useful with respect to a task. However, the experiments proposed in \citename{mohamed2015variational} use planning methods to estimate \textit{empowerment} instead of interactions with the environment to collect data, which implies the use of a forward model.\\
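In practice, a per-transition Monte-Carlo estimate of the bound in Equation \eqref{eq:vlb} can directly serve as an intrinsic reward. A minimal sketch (assuming log-probability functions for the learned approximator $q_{\xi}$ and for the action distribution $\omega$ are available) could be:
\begin{verbatim}
def empowerment_intrinsic_reward(log_q, log_omega, s, a, s_next):
    # One-sample estimate of the variational lower bound:
    # r_int = log q_xi(a | s, s') - log omega(a | s),
    # where -log omega(a | s) estimates the entropy term H(a | s)
    # for the sampled action.
    return log_q(a, s, s_next) - log_omega(a, s)
\end{verbatim}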
\textbf{VIC} \cite{gregor2016variational} tries to maximize \textit{empowerment} through interactions with the environment, using $\omega(a|s) = \pi(a|s)$. The intrinsic reward then becomes:
\begin{equation}
R_{int}(a,h) = -\log \pi(a|h) + \log \pi(a|s',h)
\end{equation}
where $h$ is the observation history (including the current observation and action). Experiments on diverse environments show that the learned trajectories lead to diverse areas and that pretraining with \textit{empowerment} helps to learn a task. However, the learned tasks remain relatively simple; the main issue may be that \textit{empowerment} is hard to compute. We found a few works related to \textit{empowerment} that do not follow this formalism while still rewarding the agent for the control it exerts.
Instead of directly using mutual information, \textbf{Mega-reward} \cite{song2019mega} cuts out the pixel space into a matrix which defines the probability of controlling the corresponding part of the image. The intrinsic reward is then the sum of the matrix entries. The authors also show that the matrix can act as a mask to hide uncontrollable features, which other intrinsic exploration methods \cite{burda2018exploration} can benefit from to reduce the white-noise problem in a long-term way (as opposed to the ICM method, which detects short-term controllable features). However, the method is inherently tied to pixel-based environments. \citename{chuck2019hypothesis} provide a specific architecture relying on multiple assumptions, such as the fact that an object cannot spontaneously change its direction or its proximity to the objects it interacts with. The agent formulates hypotheses on the \textit{controllability} of objects, which it tries to verify through a specific policy rewarded with an intrinsic verification process. Verified hypotheses can then be used directly as skills.\\
\textit{Empowerment} may also be interesting in multi-agent RL. Multi-agent RL is similar to single-agent RL except that several agents learn simultaneously to solve a task and have to coordinate with each other. \citename{jaques2019social} show that in a non-cooperative game, such as a social dilemma \cite{leibo2017multi}, an \textit{empowerment}-based intrinsic reward can stabilize the learning process: the agent acts in order to influence other agents instead of looking for extrinsically rewarded behaviors. In fact, this compensates for the decrease of individual reward caused by a policy maximizing the long-term reward of all the agents.
To sum up, \textit{empowerment} is an interesting method to avoid an extrinsic reward while keeping various complex behaviors. The main difficulty in using \textit{empowerment} in RL is its complexity. Several approaches use an environment model to compute the reward based on \textit{empowerment} \cite{mohamed2015variational,de2018unified}. However, the very essence of RL is that the agent does not know the environment dynamics or the reward function \textit{a priori}. Existing work in this context remains relatively limited and is not sufficient to demonstrate the potential of \textit{empowerment} to help the learning process. It is interesting to note that \textit{empowerment} can push an agent to learn behaviors even in \textit{a priori} static environments. Indeed, let us assume that the agent does not choose primitive actions directly, but \textit{options} instead. If it has not learned options, it will be unable to distinguish them; thus it is as if the agent had no control over the environment. On the contrary, if its options are perfectly distinguishable in the state space, the agent has control over its environment. In fact, the issue is not about choosing the states maximizing \textit{empowerment}, but about defining options which increase the overall \textit{empowerment}. We will come back to this point in Section \ref{gen_goal}.
\section{Intrinsic rewards with expert knowledge}\label{append:expert}
In this part, we first study an article highlighting the promise of the approach, but relying on strong assumptions. Then we describe some heuristics used in the literature which cannot generalize to all environments.
\paragraph{Strong assumptions:} Seminal work shows the interest of hierarchically decomposing actions. Among them, \citename{kulkarni2016hierarchical} present the \textbf{hierarchical-DQN}, in which the goal representation is expertly defined with tuples $(entity1,relation,entity2)$. An entity can be an object on the screen or an agent, and the relation notably refers to a distance. Therefore, the goal can be for the agent to reach an object. The intrinsic reward is one if the goal is reached and zero otherwise. They show that it can help the learning process, particularly when rewards are sparse as in \textit{Montezuma's revenge}. In fact, the more hierarchical the task is, the more a hierarchical policy is required \cite{complexity_exploration}. However, by avoiding learning a skill representation, \citename{kulkarni2016hierarchical} obfuscate the main problem: it is difficult to choose which features are interesting enough to be considered as goals in a large state space.
\paragraph{Particular heuristics:} Other works demonstrate the potential of the approach using auxiliary objectives specific to the task \cite{riedmiller2018learning} or more abstract ones \cite{dilokthanakul2019feature,rafati2019unsupervised}. More particularly, a heuristic regularly used to generate skills is the search for states acting as a bottleneck \cite{mcgovern2001automatic,menache2002q}. The main idea is to identify pivotal states with respect to the next visited states (e.g. a door). Recent works \cite{zhang2019scheduled,tomar2018successor} use successor representations \cite{kulkarni2016deep} to generalize the approach to continuous state spaces. Another heuristic can be the search for salient events \cite{barto2004intrinsically,chentanez2005intrinsically}, such as changes in light.
The limitation of this kind of work is that the rewards are not general enough to be applied in all environments. For example, there is no bottleneck state in an empty room, whereas interesting skills can still be learned (going to the upper left corner).
\section{Simple goal sampling}\label{append:sampling}
Until now we have focused on IM as an intrinsic reward; however, this is not a general rule. For example, one can think of some simple strategies to choose tasks, as long as the choice does not depend on an extrinsic reward. In this subsection, we study how efficient such simple strategies can be.
\citename{andrychowicz2017hindsight} take full advantage of \textbf{HER} and experimented with different ways to sample the goals on which to learn from trajectories. First, the agent randomly samples a transition; then it replaces the initial goal with another one. They propose four strategies to choose the replacing goal (a minimal sketch of these strategies is given after the list):
\begin{itemize}
\item
The final state of the episode whose transition is being replayed.
\item Random goals originating from the episode whose transition is being replayed.
\item Sampling randomly from the buffer.
\item States arising after the transition being replayed in the same episode.
\end{itemize}
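A minimal sketch of these four relabelling strategies (the buffer layout and function names are ours) could read:
\begin{verbatim}
import random

def sample_relabel_goal(episode, t, replay_buffer, strategy="future"):
    # episode: list of states s_0 ... s_T of the episode being replayed
    # t: index of the transition being relabelled
    if strategy == "final":    # final state of the same episode
        return episode[-1]
    if strategy == "episode":  # any state of the same episode
        return random.choice(episode)
    if strategy == "random":   # any state stored in the replay buffer
        return random.choice(replay_buffer)
    if strategy == "future":   # a state visited after the transition
        future = episode[t + 1:]
        return random.choice(future) if future else episode[-1]
    raise ValueError("unknown strategy: " + strategy)
\end{verbatim}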
It appears that future states and final states work best as replacing goals, and that generalization over the goal space pushes the agent towards its main goal. This is probably because these states act as a novelty bonus, helping the policy to generalize over the goal space and learn beyond its current abilities. In fact, count-based methods from Section \ref{sec:novelty} also reward the agent when it reaches a state it has never visited: both methods have similar side-effects. The advantage of sampling methods compared to other contributions (\S\ref{sec:multi-armed} and \S\ref{sec:adversarial}) is that the agent keeps trying to reach its true goal state while performing exploration in the goal space. Few works extended HER while remaining in the field of IM. \textbf{Prioritized HER} \cite{zhao2019curiosity} proposes to adapt prioritized experience replay \cite{schaul2015prioritized} by over-weighting rare states. We can see this idea as a heuristic to consolidate novelty-based sampling. It slightly improves the results over HER at the cost of maintaining a density model.
Even though these methods learn with newly sampled goals, they act based on an extrinsic goal to solve. Therefore, they require a parameterized target goal. To improve exploration without an extrinsic parameterized goal, \textbf{UNICORN} \cite{mankowitz2018unicorn} samples goals uniformly in a goal space to interact with. This strategy can be effective, since new goals and the generalization ability of the agent can push it toward the boundaries of its skills. However, it is unclear how the agent would behave in a poorly constructed goal space (such as a pixel state space).
\section{Review of tasks involving IM}\label{tasks}
We identified four fundamentally different types of tasks on which IM methods are tested. In this subsection, we emphasize their particularities and the algorithms proposed in the literature to solve them.
\subsection{Locomotion}
Locomotion tasks are mostly related to MuJoCo environments such as \textit{ant} or \textit{humanoid}, where the goal of the task is to move an agent \cite{duan2016benchmarking}. Most related works consider exploration and skill acquisition methods. Exploration methods only solve easy locomotion tasks, e.g. Half-Cheetah with a 17-dim observation space and a 6-dim action space \cite{houthooft2016vime,pmlr-v97-kim19a,fu2017ex2}. On the other hand, skill acquisition methods manage to learn to move forward (by crawling or walking) on harder morphologies, e.g. \textit{Ant} with a 111-dim observation space and an 8-dim action space \cite{achiam2018variational,eysenbach2018diversity}. Interestingly, a diversity heuristic without extrinsic reward suffices to obtain representations of different interesting skills. It suggests that a diversity heuristic could be enough to handle proprioceptive incoming data. However, currently, too many useless skills are learned, and they cannot be used while being learned.
\subsection{Manipulation}\label{sec:manipulation}
Manipulation tasks can be about moving, pushing or reaching objects with a movable robotic arm. Few exploration methods have been tested \cite{lee2019efficient,pathak2019self}, and they only manage to touch and move some objects. Manipulation is particularly interesting for skill acquisition methods \cite{hausman2018learning,nair2018visual}, but this is not actually a major focus since it lacks an object-oriented objective (as argued in \S\ref{sec:staterepr}). It is a standard task for \textit{curriculum learning} algorithms \cite{colas2019curious,santucci2019autonomous} since, for example, an agent has to learn to reach an item before moving it. \textit{Curriculum learning} algorithms can be very efficient, but at the cost of a hand-made goal space.
\subsection{Navigation}\label{sec:navigation}
Navigation tasks are about moving an agent in a maze. This is the most broadly tested kind of task and includes every kind of method we presented. It can consist of moving a MuJoCo \textit{ant} or \textit{swimmer} in order to pick up food or to reach a target area. In the same way, Atari games generally consist in moving an agent in a rich environment, but with a simpler discrete action space. Similarly to manipulation tasks, navigation requires target-oriented behaviors and favors the use of skills as states rather than a diversity heuristic (despite a lot of progress in this direction made by \citename{sharma2019dynamics}). Exploration methods are particularly efficient in discovering new areas and make sense here, but they are brute force and could be considerably improved, as discussed in Sections \ref{sec:binding} and \ref{sec:staterepr}. Results of exploration through curriculum (\S\ref{curriculum_exploration}) also showed to be a nice alternative to standard exploration methods (\S\ref{sec:adversarial}) because of the capacity of \textit{curriculum learning} to capture different reward modes (\S\ref{im:detachment}).
\subsection{First-person view navigation}\label{sec:first_view}
First-person view navigation tasks are particularly challenging, since the agent only receives a partial first-person visual view of its state and must infer its true state (e.g. its position). There are few works addressing these environments, mostly for exploration \cite{pathak2017curiosity,savinov2018episodic,fu2017ex2}, but they manage to explore the environment efficiently \cite{savinov2018episodic}. Count-based methods have not been applied in this setting, so it remains unclear whether partial observability hinders them. To the best of our knowledge, there is no work that tackles these environments with skill learning methods. It suggests a large need for a low-cost way to build the true state of the agent from partial observations. Yet, this is also not tackled in state representation learning methods.
Nevertheless, standard RL methods could take advantage of breaking down the partial observability into a long-term one at the higher level of the hierarchy, and into a short-term one at a lower level of the hierarchy. It could make the training of a recurrent neural network easier by reducing the gap between a notable event and the moment one needs to retrieve it in memory to get a reward. For example, in a 3D maze where the agent tries to reach an exit, a long-term memory could memorize large areas the agent went into whereas the short-term memory could focus on short time coherent behaviors.
\section{Notations}\label{app:notations}
\begin{table*}[h]
\centering
\begin{tabular}{|c|c|}
\hline
$\propto$ & Proportional to \\
\hline
$x \sim p(\cdot)$ & $x \sim p(x)$ \\
\hline
$|| x ||_2$ & Euclidean norm of $x$ \\
\hline
$t$ & timestep \\
\hline
$Const$ & arbitrary constant \\
\hline
$A$ & set of possible actions \\
\hline
$S$ & set of possible states \\
\hline
$a \in A$ & action \\
\hline
$s \in S$ & state \\
\hline
$s_0 \in S$ & first state of a trajectory \\
\hline
$s_f \in S$ & final state of a trajectory \\
\hline
$s' \in S$ & state following a tuple $(s,a)$ \\
\hline
$h$ & history of interactions $(s_0,a_0,s_1,\dots)$\\
\hline
$\hat{s}$ & predicted states \\
\hline
$g \in G$ & goal \\
\hline
$s_g \in S$ & state used as a goal \\
\hline
$S_b$ & set of states contained in $b$ \\
\hline
$\tau \in \mathcal{T}$ & trajectory \\
\hline
$u(\tau)$ & function that extracts parts of the trajectory $\tau$ \\
\hline
$R(s,a,s')$ & reward function \\
\hline
$d^{\pi}_t(s)$ & t-steps state distribution \\
\hline
$d^{\pi}_{0:T}(S)$ & stationary state-visitation distribution of $\pi$ over a horizon T \\
& $\frac{1}{T} \sum_{t=1}^T d^{\pi}_t(S)$\\
\hline
$f$ & representation function \\
\hline
$z$ & compressed latent variable, $z=f(s)$ \\
\hline
$\rho \in \mathrm{P}$ & density model \\
\hline
$\phi \in \Phi$ & forward model \\
\hline
$\phi_T \in \Phi_T$ & true forward model \\
\hline
$q_{\omega} $ & parameterized discriminator \\
\hline
$\pi$ & policy \\
\hline
$\pi^g$ & policy conditioned on a goal $g$ \\
\hline
$nn_k(S,s')$ & k-th closest state to $s'$ in $S$ \\
\hline
$D_{KL}(p(x) || p'(x))$ & Kullback–Leibler divergence \\
& $\mathbb{E}_{x \sim p(\cdot)} \log \frac{p(x)}{p'(x)}$ \\
\hline
$H(X)$ & $-\int_{X} p(x)\log p(x) \, dx$ \\
\hline
$H(X|S)$ & $-\int_{S} p(s)\int_{X} p(x|s)\log p(x|s) dx ds$ \\
\hline
$I(X;Y)$ & $H(X) - H(X|Y)$\\
\hline
$I(X;Y|S)$ & $H(X|S) - H(X|Y,S)$ \\
\hline
$IG(h,A,S',S,\Phi)$ & Information gain \\
& $I(S';\Phi|h,A,S)$ \\
\hline
\end{tabular}
\caption{Notations used in the paper.}\label{tab:notations}
\end{table*}
\section{Introduction}
In reinforcement learning (RL), an agent learns by trial-and-error to maximize the expected rewards gathered as a result of its actions performed in the environment \cite{sutton1998reinforcement}. Traditionally, an agent maximizes a reward defined according to the task to perform: it may be a score when the agent learns to solve a game or a distance function when the agent learns to reach a goal. The reward is then considered as extrinsic (or as feedback) because the reward function is provided expertly and specifically for the task. With an extrinsic reward, many spectacular results have been obtained on Atari games \cite{bellemare15} with the Deep Q-network (DQN) \cite{mnih2015human} through the integration of deep learning into RL, leading to deep reinforcement learning (DRL).
However, despite the recent improvements of DRL approaches, they turn out to be unsuccessful most of the time when the rewards are scattered sparsely in the environment, as the agent is then unable to learn the desired behavior for the targeted task \citep{franccois2018introduction}. Moreover, the behaviors learned by the agent are hardly reusable, both within the same task and across many different tasks \citep{franccois2018introduction}. It is difficult for an agent to generalize the learnt skills to make high-level decisions in the environment. For example, such a skill could be \textit{go to the door} using primitive actions consisting in moving in the four cardinal directions, or \textit{move forward} controlling different joints of a humanoid robot, as in the robotic simulator MuJoCo \citep{todorov2012mujoco}.
On another side, unlike RL, developmental learning \cite{piaget1952origins,cangelosi2018babies,oudeyer2016evolution} is based on the observation that babies, or more broadly organisms, acquire new skills while spontaneously exploring their environment \cite{gopnik1999scientist,barto2013intrinsic}. This is commonly called intrinsic motivation (IM), which can be derived from an intrinsic reward. This kind of motivation makes it possible to autonomously gain new knowledge and skills, which then makes the learning process of new tasks easier \cite{baldassarre2013intrinsically}. For several years now, IM has been increasingly used in RL, fostered by important results and the emergence of deep learning. This paradigm offers greater learning flexibility, through the use of a more general reward function, and makes it possible to tackle the issues raised above when only an extrinsic reward is used. Typically, IM improves the agent's ability to explore its environment, to incrementally learn skills independently of its main task, to choose an adequate skill to be improved, and even to create a representation of its state with meaningful properties. In addition, as a consequence of its definition, IM does not require additional expert supervision, making it easily generalizable across environments.
\paragraph{Scope of our review.}
In this paper, we study and group together methods through a novel taxonomy based on information theoretic objectives. This way, \textbf{we revisit the notions of surprise, novelty and skill learning and show that they can encompass numerous works.} Each class is characterized by a computational objective that fits its eventual psychological definition. This allows us to situate/relate a large body of works and to highlight important directions of research. To sum up, this paper investigates the use of IM in the framework of DRL and considers the following aspects:
\begin{itemize}
\item The role of IM in addressing the challenges of DRL.
\item Classifying current heterogeneous works through few information theoretic objectives.
\item Important outlooks of IM in RL within and across each category.
\end{itemize}
\paragraph{Related works.} The overall literature on IM is huge \citep{barto2013intrinsic} and we only consider its application to DRL and IMs related to information theory. Therefore, our study of IMs is not meant to be exhaustive. Intrinsic motivation currently attracts a lot of attention and several works have made a restricted study of the approaches. \citet{colas2020intrinsically} and \citet{amin2021survey} respectively focus on the different aspects of skill learning and exploration; \citet{baldassarre2019intrinsic} studies intrinsic motivation through the lens of psychology, biology and robotics; \citet{pateria2021hierarchical} review hierarchical reinforcement learning as a whole, including extrinsic and intrinsic motivations; \citet{linke2020adapting} experimentally compare different goal selection mechanisms. In contrast with these approaches, we study a large set of objectives, all based on intrinsic motivation, through the lens of information theory. We consider that our work is in line with the work of \citet{schmidhuber2008driven}, which postulates that organisms are guided by the desire to compress the information they receive. However, by reviewing the more recent advances in the domain, we formalize the idea of compression with the tools of information theory.
This paper is organized as follows. As a first step, we discuss RL, define intrinsic motivation and explain how it fits the RL framework (\secref{sec:defs}). Then, we highlight the main current challenges of RL and identify the need for an additional outcome (\secref{sec:defis}). Thereafter, we briefly explain our classification (\secref{sec:classify}), namely surprise, novelty and skill learning and we detail how current works fit it (respectively \secref{sec:infogain}, \secref{sec:novelty} and \secref{sec:skilllearning}). Finally, we highlight some important outlooks of the domain (\secref{sec:outlooks}).
\section{Definitions and Background}\label{sec:defs}
In this section, we review the background of the RL field, explain the concept of IM, and describe how it fits into the RL framework through goal-parameterized RL, hierarchical RL and information theory.
\subsection{Markov decision process}\label{sec:mdp}
The goal of a Markov Decision Process (MDP) is to maximize the expectation of cumulative rewards received through a sequence of interactions \citep{puterman2014markov}. It is defined by: $S$ the set of possible states; $A$ the set of possible actions; $T$ the transition function $T : S \times A \times S \rightarrow p(s'|s,a)$; $R$ the reward function $R : S \times A \times S \rightarrow \mathbb{R}$; $d_0 : S \rightarrow \mathbb{R}$ the initial distribution of states. An agent starts in a state $s_0$ given by $d_0$. At each time step $t$, the agent is in a state $s_t$ and performs an action $a_t$; then it waits for the feedback from the environment composed of a state $s_{t+1}$ sampled from the transition function $T$, and a reward $r_t$ given by the reward function $R$. The agent repeats this interaction loop until the end of an episode. In reinforcement learning the goal can be to maximize the expected discounted reward defined by $\sum_{t=0}^{\infty} \gamma^t r_t$ where $\gamma \in[0,1]$ is the discount factor. When the agent does not access the whole state, the MDP can be extended to a Partially Observable Markov Decision Process (POMDP) \citep{kaelbling1998planning}. In comparison with a MDP, it adds a set of possible observations $O$ which defines what the agent can perceive and an observation function $\Omega: S \times O \rightarrow \mathbb{R}$ that defines the probability of observing $o \in O$ when the agent is in the state $s$, \textit{i.e} $\Omega(s,o) = p(o|s)$.
A reinforcement learning algorithm aims to associate actions $a$ to states $s$ through a policy $\pi$. This policy induces a t-steps state distribution that can be recursively defined as:
\begin{equation}
d^{\pi}_t(s) = \int_S d^{\pi}_{t-1}(s_{t-1}) \int_A p(s_t|s_{t-1},a)\pi(a|s_{t-1}) da\, ds_{t-1}\label{eq:dpi}
\end{equation}
with $d^{\pi}_0(s)=d_0$. The goal of the agent is then to find the optimal policy $\pi^*$ maximizing the reward:
\begin{equation}
\pi^* = \argmax{\pi} \mathbb{E}_{\substack{s_0\sim d_0(s)\\
a_t \sim \pi(\cdot|s{_t})\\
s_{t+1}\sim p(\cdot|s_t,a_t)}}
\left[\sum_{t=0}^{\infty} \gamma{^t} R(s{_t},a_t,s_{t+1})\right] .
\end{equation}
In order to find the action maximizing the long-term reward in a state $s$, it is common to maximize the expected discounted gain following a policy $\pi$ from a state, noted $V_{\pi}(s)$, or from a state-action tuple, noted $Q_{\pi}(s,a)$ (cf. \eqref{eq:espeQ}). This measures the impact of the state-action tuple on obtaining the
cumulative reward \cite{sutton1998reinforcement}.
\begin{equation}
Q_{\pi}(s,a) = \mathbb{E}_{\substack{a{_t}\sim\pi(\cdot|s{_t})\\
s_{t+1}\sim p(\cdot|s_t,a_t)}}
\left[\sum_{t=0}^{\infty} \gamma{^t} R(s{_t},a{_t},s_{t+1})|_{s_0=s,a_0=a} \right]. \label{eq:espeQ}
\end{equation}
To compute these values, one can take advantage of the Bellman equation verified by the optimal Q-function:
\begin{equation}
\label{eq:bellman}
Q^*(s_t,a_t) = \mathbb{E}_{s_{t+1}\sim p(\cdot|s_t,a_t)} \big[ R(s_t,a_t,s_{t+1}) + \gamma \: \max_a Q^*(s_{t+1},a) \big].
\end{equation}
$Q$ and/or $\pi$ are often approximated with neural networks when the state space is continuous or very large \cite{mnih2016asynchronous,lillicrap2015continuous}.
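As an illustration of how Equation \eqref{eq:bellman} is used in practice, a minimal tabular Q-learning update (a stochastic approximation of the Bellman backup, assuming discrete states and actions indexed by integers) can be written as:
\begin{verbatim}
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Q: array of shape (n_states, n_actions)
    # Moves Q(s, a) toward the sampled target r + gamma * max_a' Q(s', a').
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
\end{verbatim}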
\subsection{Definition of intrinsic motivation}\label{sec:defint}
Simply stated, intrinsic motivation is about doing something for its inherent satisfaction rather than to get a positive feedback from the environment \cite{ryan2000intrinsic}. Looking at this definition, one can notice that intrinsic motivation is defined by contrast with extrinsic motivation; it highlights the difference between the two paradigms. Intrinsic motivation assumes the agent learns on its own, while extrinsic motivation assumes there exists an expert/need that supervises the learning process.
According to \citet{singh2010intrinsically}, evolution provides a general intrinsic motivation (IM) function that maximizes a fitness function based on the survival of an individual. Curiosity, for instance, does not immediately produce selective advantages but enables the acquisition of skills providing by themselves some selective advantages. More widely, the use of intrinsic motivation makes it possible to obtain intelligent behaviors which may later serve goals more efficiently than with standard reinforcement alone \cite{baldassarre2013intrinsically,baldassarre2011intrinsic,lehman2008exploiting}. Typically, a student doing his mathematical homework because he/she thinks it is interesting is intrinsically motivated, whereas his/her classmate doing it to get a good grade is extrinsically motivated \cite{ryan2000intrinsic}. In the future, the intrinsically motivated student may be more successful in math than the other one. This questions the relevance of using only standard reinforcement methods.
More rigorously, \citet{oudeyer2008can} explain that an activity \textit{is intrinsically motivating for an autonomous entity if its interest depends primarily on the collation or comparison of information from different stimuli and independently of their semantics}. By contrast, an extrinsic reward results from an unknown static environment function which does not depend on the previous experience of the agent in the considered environment. The main point is that the agent must not have any \textit{a priori} on the semantics of the observations it receives. Here the term \textit{stimuli} does not refer to sensory inputs, but more generally to the output of a system which may be internal or external to the independent entity, thereby including \textit{homeostatic} body variables (temperature, hunger, thirst, attraction to sexual activities \dots) \cite{baldassarre2011intrinsic,berlyne1965structure}. Broadly speaking, the motivation of an agent can be internal (\textit{source of motivation}) while still being extrinsic (\textit{why} of the actions). For instance, when an agent is looking for food because of hunger, hunger is a stimulus coming to the cognitive system of the agent, such that it is an internal but extrinsic motivation. As another example, a child may do his/her homework because he/she thinks it will be crucial to later get a job. While the source of the motivation is internal, the true outcome comes from the environment.
Now that we have clarified the notion of intrinsic motivation, we study how to integrate intrinsic motivation in the RL framework.
An extensive overview of IM can be found in \citet{barto2013intrinsic}.
\subsection{A model of RL with intrinsic rewards}\label{sec:modelRL}
Reinforcement learning is derived from behaviorism \cite{skinner} and usually uses extrinsic rewards \cite{sutton1998reinforcement}. However \citet{singh2010intrinsically} and \citet{barto2004intrinsically} reformulated the RL framework to incorporate IM. We can differentiate \textit{rewards}, which are events in the environment, and \textit{reward signals}, which are stimuli internal to the agent. Thus, what is named \textit{reward} in the RL community is in fact a \textit{reward signal}. Inside the \textit{reward signal} category, there is a distinction between \textit{primary reward signals} and \textit{secondary reward signals}. The \textit{secondary reward signal} is a local \textit{reward signal} computed through expected future rewards and is related to the value function
whereas the \textit{primary reward signal} is the standard \textit{reward signal} received from the MDP.
In addition, rather than considering the MDP environment as the environment in which the agent achieves its task, it suggests that the MDP environment can be formed of two parts: the \textbf{external part}, which corresponds to the potential task and the environment of the agent; the \textbf{internal part}, which computes the MDP states and the \textit{secondary reward signal}, potentially using previous interactions. Consequently, we can consider an intrinsic reward as a \textit{reward signal} received from the MDP environment. The MDP state is no longer the external state but an internal state of the agent. However, from now on, we will follow the terminology of RL and the term \textit{reward} will refer to the \textit{primary reward signal}.
Figure \ref{im:rlintrinsic} summarizes the framework: the critic, located in the internal part of the agent, computes the intrinsic reward and deals with the credit assignment. The agent can merge intrinsic and extrinsic rewards in its internal part. The state includes sensations and any form of internal context; in this section we refer to this state as a contextual state. The decision can be a high-level decision decomposed by the internal environment into low-level actions.
\begin{figure}
\begin{centering}
\includegraphics[width=0.4\linewidth]{images/IM.drawio.pdf}
\caption{Model of RL integrating IM, taken from \protect\citet{singh2010intrinsically}. The environment is factored into an internal and external environment, with all reward coming from the former.}
\label{im:rlintrinsic}
\end{centering}
\end{figure}
This conceptual model incorporates intrinsic motivations into the formalism of MDP. Now, we will review how this model is instantiated in practice. Indeed it is possible to extend RL to incorporate the three new components that are intrinsic rewards, high-level decisions and contextual states. We separately study them in the following sections.
\subsection{Intrinsic rewards and information theory}
Throughout our definition of intrinsic motivation, one can notice that the notion of \textit{information} comes up a lot. This is no coincidence, and quantifying information proves useful to generate intrinsic rewards. In this section, we provide the basics of information theory and explain how to combine intrinsic and extrinsic rewards. However, we emphasize that intrinsic rewards are not restricted to information measures; their characterization mostly depends on whether the reward function fits the properties of an intrinsic motivation.
The Shannon entropy quantifies the mean information necessary to determine the value of a random variable. Let $X$ be a random variable with probability density $p(x)$ satisfying the normalization and positivity requirements; we define its entropy by:
\begin{equation}
H(X) = -\int_{X} p(x)\log p(x) \,\mathrm{d}x.
\end{equation}
In other words, it quantifies the disorder of a random variable. The entropy is maximal when $X$ follows a uniform distribution, and minimal when $p(X)$ is equal to zero everywhere except at one value, i.e.\ a Dirac distribution. From this, we can also define the entropy conditioned on a random variable $S$. It is similar to the classical entropy and quantifies the mean information necessary to find $X$ knowing the value of another random variable $S$:
\begin{equation}
H(X|S) = -\int_{S} p(s)\int_{X} p(x|s)\log p(x|s) \,\mathrm{d}x\,\mathrm{d}s.
\end{equation}
The mutual information quantifies the information contained in a random variable $X$ about another random variable $Y$. It can also be viewed as the decrease of disorder brought by a random variable $Y$ on a random variable $X$. The mutual information is defined by:
\begin{equation}
I(X;Y) = H(X) - H(X|Y).
\end{equation}
We can notice that the mutual information between two independent variables is zero (since $H(X|Y)=H(X)$). Similarly to the conditional entropy, the conditional mutual information quantifies the information contained in a random variable about another random variable, knowing the value of a third one. It can be written in various ways:
\begin{subequations}
\begin{align}
I(X;Y|S) &= H(X|S) - H(X|Y,S) = H(Y|S) - H(Y|X,S) \label{information2} \\
&= D_{KL} \Big[ p(X,Y|S) || p(X|S)p(Y|S)\Big] \label{kldiv}
\end{align}
\end{subequations}
We can see with \eqref{information2} that the mutual information is symmetric and that it characterizes the decrease in entropy on $X$ brought by $Y$ (and vice versa). \eqref{kldiv} defines the conditional mutual information as the Kullback-Leibler divergence \cite{cover2012elements} between the distribution $p(X,Y|S)$ and the same distribution if $Y$ and $X$ were independent variables (the case where $H(Y|X,S) = H(Y|S)$).
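To make these quantities concrete, the following sketch (our own, not from the cited references) estimates the entropy and the mutual information of discrete variables from samples, using simple histogram counts:
\begin{verbatim}
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples of integers."""
    joint, _, _ = np.histogram2d(x, y, bins=(np.ptp(x) + 1, np.ptp(y) + 1))
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(joint.ravel())

x = np.random.randint(0, 4, size=10000)
print(mutual_information(x, x))                                    # close to H(X) = log 4
print(mutual_information(x, np.random.randint(0, 4, size=10000)))  # close to 0
\end{verbatim}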
For further information on these notions, the interested reader can refer to \citet{cover2012elements}. Sections 5, 6 and 7 illustrate how information theory can be used to reward an agent. In practice, there are multiple ways to integrate an intrinsic reward into a RL framework. The main approach is to compute the agent's reward $r$ as a weighted sum of an intrinsic reward $r_{int}$ and an extrinsic reward $r_{ext}$: $r=\alpha r_{int} + \beta r_{ext}$ \cite{kakade2002dopamine,burda2018exploration}. Of course, one of the weighting coefficients $\alpha$ and $\beta$ can be set to 0.
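As a short illustration, the sketch below (ours; \texttt{env}, \texttt{agent} and \texttt{intrinsic\_reward} are placeholders rather than a specific library API) mixes both reward signals inside a standard interaction step:
\begin{verbatim}
def collect_transition(env, agent, intrinsic_reward, state, alpha=1.0, beta=1.0):
    action = agent.act(state)
    next_state, r_ext, done, _ = env.step(action)
    r_int = intrinsic_reward(state, action, next_state)
    r = alpha * r_int + beta * r_ext   # combined reward fed to the RL update
    agent.store(state, action, r, next_state, done)
    return next_state, done
\end{verbatim}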
\subsection{Decisions and hierarchical RL}\label{sec:hrl}
Hierarchical reinforcement learning (HRL) architectures are adequate candidates to model the decision hierarchy of an agent \cite{barto2003recent,dayan1993feudal,sutton1999between}. \citet{dayan1993feudal} introduced the feudal hierarchy, called \textit{Feudal reinforcement learning}. In this framework, a manager selects the goals that workers will try to achieve by selecting low-level actions. Once the worker has achieved the goal, the manager can select another goal, so that the interactions keep going. The manager rewards the RL-based worker to guide its learning process; we formalize this with intrinsic motivation in the next section. Below, \figref{im:abstract_actions} illustrates the use of a hierarchical decision in contrast with the use of low-level actions. Originally, hierarchical architectures were introduced to ease long-term credit assignment \cite{dayan1993feudal,sutton1999between}. This problem refers to the fact that rewards can occur with a temporal delay and only very weakly affect the temporally distant states that preceded them, although these states may be important to obtain the reward. Indeed, the agent must propagate the reward along the entire sequence of actions (through \eqref{eq:bellman}) to reinforce the first involved state-action tuple. This process can be very slow when the action sequence is long. The problem also includes determining which action, among all actions of the sequence, is decisive for getting the reward. In contrast, if an agent can take advantage of temporally-extended actions, a large sequence of low-level actions becomes a short sequence of temporally-extended decisions, which eases the propagation of rewards.
This goal-setting mechanism can be extended to create managers of managers, so that an agent can recursively define increasingly abstract decisions as the hierarchy of RL algorithms grows. With respect to \figref{im:rlintrinsic}, the internal environment of a RL module becomes the lower-level module. We can model these decisions as \textit{options}. An \textit{option} $op \in \mathcal{O}$ is defined through 3 components: 1- A set of starting states $\mathcal{I} \subset S$ from which an \textit{option} can be applied; 2- A policy (or worker) that is responsible for achieving the \textit{option} with lower-level actions; this is studied in the next section; 3- A completion function $\mathcal{F}$ that specifies the probability of completing the \textit{option} in each state.
Typically, the starting states can derive from $d_0$ (all \textit{options} start at the beginning of an episode) or be the full set of states $S$ (\textit{options} can start everywhere). The completion function can also assign a probability of $0$ everywhere \cite{eysenbach2018diversity}; in this case, the \textit{option} ends at the same time as the episode. Such specific cases often occur \cite{eysenbach2018diversity}. \textit{Options} were originally learnt during a pre-training phase with exclusively extrinsic rewards \cite{sutton1999between}, which was meant to take advantage of expert knowledge on the task. However, in our framework, we are interested in intrinsically motivated agents, so, in the next section, we take a closer look at how to learn the policies that achieve goals using intrinsic motivation; a minimal sketch of the \textit{option} components is given below. In particular, we will define goals, skills and explain how to build a contextual state.
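The following sketch (our own; the names and types are illustrative and not taken from a specific codebase) summarizes the three components of an \textit{option}:
\begin{verbatim}
from dataclasses import dataclass
from typing import Any, Callable

State, Action = Any, Any

@dataclass
class Option:
    initiation_set: Callable[[State], bool]   # can the option start in s?  (I)
    policy: Callable[[State], Action]         # worker achieving the option
    termination: Callable[[State], float]     # completion probability in s (F)

# Example: an option that can start everywhere and only ends with the episode.
always_on = Option(initiation_set=lambda s: True,
                   policy=lambda s: 0,
                   termination=lambda s: 0.0)
\end{verbatim}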
\subsection{Goal-parameterized RL}\label{sec:goalpam}
Usually, RL agents solve only one task and are not suited to learning multiple tasks. Thus, an agent is unable to generalize across different variants of a task. For instance, if an agent learns to grasp a circular object, it will not be able to grasp a square object. In the developmental model described in \secref{sec:modelRL}, the decisions can be hierarchically organized into several levels where an upper level takes decisions (or sets goals) that a lower level has to satisfy. This raises two questions: 1- how can a DRL algorithm make its policy dependent on the goal set by its upper-level decision module? 2- How to compute the intrinsic reward using the goal? These issues give rise to a new formalism based on developmental machine learning \cite{colas2020intrinsically}.
In this formalism, a \textbf{goal} is defined by the pair $(g,R_G)$ where $G \subset \mathbb{R}^d$ is the goal space, $R_G$ is a goal-conditioned reward function and $g \in G$ is the $d\text{-dimensional}$ goal embedding. This contrasts with the notion of task, which is proper to an extrinsic reward function assigned by an expert to the agent. With such an embedding, one can generalize DRL to multi-goal learning, or even to every available goal in the state space, with the Universal Value Function Approximator (UVFA) \cite{schaul2015universal}. UVFA integrates, by concatenation, the goal embedding $g$ with the state of the agent to create a contextual state $c = (g,s)$. Depending on the semantic meaning of a skill, we can further enhance the contextual states with other actions or states executed after the skill has started (cf. \secref{sec:skilllearning}).
We can now define the \textbf{skill} associated to each goal as the goal-conditioned policy $\pi^g(a|s)=\pi(a|g,s)$; in other words, a skill refers to the sensorimotor mapping that achieves a goal \cite{thill2013theories}. This skill may be learnt or unlearnt according to the expected intrinsic rewards it gathers. It implies that, if the goal space is well-constructed (often the ground state space, for example $G=S$), the agent can generalize its policy across the goal space, \textit{i.e} the skills corresponding to two close goals are similar. For example, let us consider an agent moving in a closed maze where every position in the maze can be a goal. We can set $G=S$ and set the intrinsic reward function to be the negative euclidean distance between the goal and the current state of the agent, $R_G: S \times G \rightarrow \mathbb{R}, (s,g) \mapsto -||s-g||_2$.
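The maze example translates into a few lines (a sketch of ours, assuming states and goals are 2D positions):
\begin{verbatim}
import numpy as np

def contextual_state(state, goal):
    """UVFA-style input: concatenate the goal embedding with the state."""
    return np.concatenate([goal, state])

def goal_reward(state, goal):
    return -np.linalg.norm(state - goal)   # R_G(s, g) = -||s - g||_2

s, g = np.array([0.0, 0.0]), np.array([3.0, 4.0])
print(contextual_state(s, g), goal_reward(s, g))   # reward is -5.0 here
\end{verbatim}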
This formalism completes the instantiation of the architectures described in \secref{sec:modelRL}. Now we will explain how, in practice, one can efficiently learn the goal-conditioned policy.
\subsection{Efficient learning with goal relabelling}\label{sec:relabeling}
When the goal space is a continuous state space, it is difficult to determine whether a goal is reached or not, since two continuous values are never exactly equal. Hindsight experience replay (HER) \cite{andrychowicz2017hindsight} tackles this issue by providing a way to learn on multiple objectives with only one interaction. With this method, the agent can use an interaction done to accomplish one goal to learn on another goal, by modifying the associated intrinsic reward. This mechanism greatly improves sample efficiency since it avoids trying every interaction for every goal.
Let us roll out an example. An agent acts in the environment and gathers a tuple $(s,s',r_g,a,g)$ where $r_g$ is the reward associated to the goal $g$. The agent can learn on this interaction, but can also use it to learn other goals; to do so, it can replace the goal with a new goal $g'$ and recompute the reward, resulting in a new interaction $(s,s',r_{g'},a,g')$. The only requirement is that the reward function $R(s,a,s',g')$ be known, which is the case with an intrinsic reward function. Typically, an agent can have a goal state and a reward function which is $1$ if it is in that state and $0$ otherwise. At every interaction, it can replace its true goal state with its current state and learn with a positive reward.
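A minimal sketch of such a relabelling (our own, in the spirit of HER, with a hypothetical sparse goal-reaching reward) is given below:
\begin{verbatim}
import numpy as np

def sparse_reward(next_state, goal, eps=1e-3):
    return 1.0 if np.linalg.norm(np.asarray(next_state) - np.asarray(goal)) < eps else 0.0

def relabel(s, a, s_next, g):
    """Reuse a transition collected for goal g by pretending the achieved state was the goal."""
    g_new = s_next
    return (s, a, s_next, g_new, sparse_reward(s_next, g_new))  # reward is 1 by construction
\end{verbatim}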
\section{Challenges of DRL}\label{sec:defis}
In this section, we detail two main challenges of current DRL methods that are partially addressed by IMs.
\subsection{Sparse rewards} \label{sec:sparse}
Classic RL algorithms operate in environments where the rewards are \textbf{dense}, \textit{i.e.} the agent receives a reward after almost every completed action. In this kind of environment, naive exploration policies such as $\epsilon$-greedy \cite{sutton1998reinforcement} or the addition of a Gaussian noise on the action \cite{lillicrap2015continuous} are effective. More elaborate methods can also be used to promote exploration, such as Boltzmann exploration \cite{cesa2017boltzmann,mnih2015human} or an exploration in the parameter space \cite{plappert2017parameter,ruckstiess2010exploring,fortunato2017noisy}. In environments with \textbf{sparse} rewards, the agent receives a reward signal only after it has executed a long sequence of specific actions. The game \textit{Montezuma's revenge} \cite{bellemare15} is a benchmark illustrating a typical sparse reward function. In this game, an agent has to move between different rooms while picking up objects (keys to open doors, torches, \dots). The agent receives a reward only when it finds objects or when it reaches the exit of the room. Such environments with sparse rewards are almost impossible to solve with the above-mentioned \textit{undirected} exploration policies \cite{thrun1992efficient} since the agent does not have local indications on how to improve its policy. Thus the agent never finds rewards and cannot learn a good policy with respect to the task \cite{mnih2015human}. Figure \ref{im:sparse_reward2} illustrates the issue on a simple environment.
This issue stresses the need for \textit{directed} exploration methods \cite{thrun1992efficient}. While intrinsic motivation can provide such a direction, the principle of ``optimism in the face of uncertainty'' \cite{audibert2007tuning} can also drive a directed exploration without intrinsic motivation \cite{thrun1992efficient}. Briefly, this principle incites agents to go to areas with high epistemic uncertainty about their Q-values \cite{ciosek2019better,pacchiano2020optimism}. Yet, it is hard to approximate the epistemic uncertainty and it only slightly improves exploration \cite{ciosek2019better}. This principle can also relate to some intrinsic motivations when we consider uncertainty about models (see \secref{sec:infogainforward}).
\begin{figure}
\begin{centering}
\includegraphics[width=10cm]{images/sparse_rewards.drawio.pdf}
\caption{\rebut{Example of a very simple sparse reward environment, explored by two different strategies}. The agent, represented by a circle, strives to reach the star. The reward function is one when the agent reaches the star and zero otherwise. (a) The agent explores with standard methods such as $\epsilon\text{-greedy}$; as a result, it stays in its surrounding area because of the temporal inconsistency of its behaviour. (b) We imagine an ideal exploration strategy where the agent covers the whole state space to discover where rewards are located. \rebut{The fundamental difference between the two policies is the volume of the state space explored in a given time.}}
\label{im:sparse_reward2}
\end{centering}
\end{figure}
Rather than working on an exploration policy, it is common to shape an intermediary dense reward function, added to the reward associated to the task, in order to make the learning process easier for the agent \cite{su2015reward}. However, building a reward function often reveals several unexpected errors \cite{ng1999policy,amodei2016concrete} and most of the time requires expert knowledge. For example, it may be difficult to shape a local reward for navigation tasks: one has to be able to compute the shortest path between the agent and its goal, which is the same as solving the navigation problem. On the other hand, automating the shaping of the local reward (without calling on an expert) requires excessive computational resources \cite{chiang2019learning}. We will see in \secref{sec:infogain}, \ref{sec:novelty} and \ref{sec:skilllearning} how IM is a valuable method to encourage exploration in a sparse reward setting.
\subsection{Temporal abstraction of actions} \label{sec:abstraction}
As argued in \secref{sec:hrl}, skills, through hierarchical RL, are a key element to speed up the learning process since the number of decisions to take is significantly reduced when skills are used. In particular, they ease \textit{credit assignment}. Skills can be manually defined, but this requires extra expert knowledge \cite{sutton1999between}. To avoid providing hand-made skills, several works have proposed to learn them with extrinsic rewards \cite{bacon2017option,subpolicy2020li}. However, if an agent rather learns skills in a \textit{bottom-up} way, \textit{i.e} with intrinsic rewards rather than extrinsic rewards, the learnt skills become independent of possible tasks. This way, skills can be reused across several tasks to improve transfer learning \cite{aubret2020elsim,heess2016learning} and an agent can learn skills even when it has no access to rewards, improving exploration when rewards are sparse \cite{machado2017laplacian}. Let us illustrate both advantages.
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\linewidth]{images/abstraction_action.drawio.pdf}
\caption{\rebut{Example of two policies in a simple environment, one uses \textit{skills} (yellow), the other one only uses primitive actions (blue)}. Agents have to reach the star.}
\label{im:abstract_actions}
\end{centering}
\end{figure}
\paragraph{Exploration when rewards are sparse.} \figref{im:abstract_actions} illustrates the benefit in terms of exploration when an agent hierarchically uses skills.
The yellow agent can use a skill \textit{Go to the far right}, to reach the rewarding star while the blue agent can only use low-level cardinal movements.
The problem of exploration becomes trivial for the agent using skills, since one exploratory action can lead to the reward. In contrast, it requires an entire sequence of specific low-level actions for the other agent to find the reward. This problem arises from the minimal number of specific actions needed to get a reward (see also \secref{sec:sparse}). A thorough analysis of this aspect can be found in \cite{nachum2019does}.
\paragraph{Reusing skills across several tasks.} Skills learnt with intrinsic rewards are not specific to a task. Assuming an agent is required to solve several tasks in a similar environment, \textit{i.e} a single MDP with a changing extrinsic reward function, it can execute its discovered skills to solve all tasks. Typically, in \figref{im:abstract_actions}, if both agents learnt to reach the star and we move the star somewhere else in the environment, the yellow agent would still be able to execute \textit{Go to the far right}, and executing this skill may bring the agent closer to the new star. In contrast, the blue agent would have to learn a whole new policy. In \secref{sec:skilllearning}, we provide insights on how an agent can discover skills in a \textit{bottom-up} way.
\section{Classification of methods}\label{sec:classify}
In order to tackle the problem of exploration, an agent may want to identify and return to \textbf{rarely visited} states or \textbf{unexpected} states, which can be quantified with current intrinsic motivations. We particularly focus on two objectives that address the challenge of exploring with sparse rewards, each with different properties: maximizing novelty and maximizing surprise. We formalize novelty and surprise through the lens of information theory (in \secref{sec:novelty} and \secref{sec:infogain} respectively) and review the works that instantiate them. Surprise and novelty are specific notions that have often been used interchangeably, and we are not aware of a currently unanimous definition of novelty \cite{barto2013novelty}. The third notion we study, skill learning, focuses on the issue of skill abstraction.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\multirow{2}{*}{\textbf{Surprise}: $I(S';\Phi_T|h,S,A)$, \secref{sec:infogain}} }\\
\multicolumn{4}{|c|}{}
\\
\hline
Formalism & Information gain & Information gain & Information gain \\
& over forward model & over the true model & over density model \\
\hline
Sections & \secref{sec:infogainforward} & \secref{sec:predictionerror} & \secref{sec:infogaindensity} \\
\hline
Rewards & $D_{KL}(p(\Phi|h,s,a,s')||p(\Phi|h))$ & $||s' - \hat{s}'||_2^2$ & $\frac{1}{\sqrt{\hat{N}(s')}}$ \\
\hline
\multicolumn{4}{|c|}{\multirow{2}{*}{\textbf{Novelty}: $I(S;Z)$, \secref{sec:novelty}}}
\\
\multicolumn{4}{|c|}{}
\\
\hline
Formalism & Parametric density & \multicolumn{2}{c|}{K-nearest neighbors} \\
\hline
Sections & \secref{sec:directdensity} & \multicolumn{2}{c|}{\secref{sec:knearest}} \\
\hline
Rewards & $- \log \rho(s')$ & \multicolumn{2}{c|}{$\log (1+ \frac{1}{K} \sum_{k=1}^{K} || g(s') - nn_k(g(S),g(s')) ||_2)$} \\
\hline
\multicolumn{4}{|c|}{\multirow{2}{*}{\textbf{Skill learning}: $I(G; f(\mathcal{T}))$, \secref{sec:skilllearning}}} \\
\multicolumn{4}{|c|}{}
\\
\hline
Formalism & Fixed goal distribution & Goal-state & Proposing diverse goals \\
& & achievement & \\
\hline
Sections & \secref{sec:predefinedG} & \secref{sec:goalstate} & \secref{eq:diversestate} \\
\hline
Rewards & $\log p(g|s')$ & $-||s_g-s'||_2^2$ & $(1+\alpha_{skew}) \log p(s_g)$ \\
& & & $\alpha_{skew} < 0$ \\
& & & (Goal selection policy) \\
\hline
\end{tabular}
\caption{Summary of our taxonomy of intrinsic motivations in DRL. The function $f$ outputs a part of the trajectories $\mathcal{T}$; $Z$ and $G$ are internal random variables respectively denoting state representations and self-assigned goals. Please refer to the corresponding sections for more details about methods and notations. Each reward function is representative of the ones used in its category.}
\label{tab:taxonomy}
\end{table}
Table \ref{tab:taxonomy} sums up our taxonomy. We classify intrinsic motivations into three categories of objectives based on information theory, which reflect the high-level concepts under study: novelty, surprise and skill learning. In practice, we mostly take advantage of the \textit{mutual information} to provide a quantity for our conceptual objectives. These objectives are compatible with each other and may be used simultaneously, as argued in \secref{sec:flatim}. Within each category of objectives, we additionally highlight several ways to maximize each objective and provide details about the underlying methods of the literature.
\subsection{Surprise}
Following the definition of \citet{itti2009bayesian}, we re-explore the notion of surprise and quantify it by $I(S';\Phi_T|h,S,A)$, where $h$ refers to a dataset of interactions and $\Phi_T$ represents the distribution over parameters of true forward/density models. Based on the works we analyze, we study surprise maximization over density models and forward models, which are two ways of measuring unexpectedness. Surprise can also be maximized using prediction error and learning progress over a forward model.
\subsection{Novelty}
Based on the analysis of \citet{barto2013novelty}, we define novelty-seeking behavior as actively maximizing the mutual information between states and a learnt representation of states $Z$, $I(S;Z)$. We divide this objective maximization into two kinds of methods: a direct maximization of a parametric entropy of embedded states, and an entropy maximization based on a k-nearest neighbors approximation.
\subsection{Skill learning}
We formalize skill learning as maximizing the mutual information between a goal representation $G$ and a part of a time-extended trajectory $f(\mathcal{T})$, $I(G; f(\mathcal{T}))$, while following $G$. We will consider two ways to achieve this: 1- fixing the goal distribution; 2- deriving the goal representation from the state space. We will see that the second point also requires maximizing the entropy of goal states.
We justify our objective within each category and study the different ways to maximize it, along with their advantages and disadvantages. In practice, surprise and novelty are currently maximized as a flat intrinsic motivation, \textit{i.e} without using hierarchical decisions. This mostly helps to improve exploration when rewards are sparse. In contrast, skill learning allows the definition of temporally-extended hierarchical skills that enjoy all the benefits discussed in \secref{sec:abstraction}.
\section{Surprise}\label{sec:infogain}
In this section, we study methods that maximize surprise. We first formalize the notion of surprise, then study three approaches for computing intrinsic rewards based on this notion.
\subsection{Definition of surprise}\label{sec:expecsurprise}
In this section, we assume the agent learns either a density model (\secref{sec:infogaindensity}) or a forward model of the environment (Sections \ref{sec:infogainforward} and \ref{sec:predictionerror}) parameterized by $\phi \in \Phi$. The density model induces a marginal distribution of states $p(S|\phi)$ and the forward model computes the next-state distribution conditioned on a state-action tuple, $p(S'|S,A,\phi)$. Typically, $\phi$ can be the parameters of a neural network. Trying to approximate the true model, the agent maintains an approximate distribution $p(\Phi|h)$ over models, where $h_t=h$ refers to the ordered history of interactions $((s_0,a_0,s_1),(s_1,a_1,s_2),\dots, (s_{t-1},a_{t-1},s_t))$. In this section, $h$ plays the role of a dataset of interactions; we use it to clarify the role of the dataset. It is important to notice that the policy fills $h$.
In this case, \textbf{surprise quantifies the mismatch between an expectation and the true experience of an agent} \cite{barto2013novelty,ekman1994nature}. In this paper, we refer to the definition of \citet{itti2009bayesian}, who define it as the discrepancy between a prior distribution of beliefs and the posterior probability distribution following an observation \cite{itti2009bayesian,storck1995reinforcement}. If an agent maximizes the surprise over a model through interactions with the environment, which is often the case \cite{barto2013novelty}, this leads to the expected information gain objective \cite{sun2011planning}. Intuitively, the agent returns to states where it experienced an unexpected transition. Using the KL-divergence to assess the discrepancy, surprise can be computed as $D_{KL}(p(\Phi|h_{t+1})||p(\Phi|h_t))$ where $\phi \in \Phi$ are parameters of a model and $t$ denotes the timestep.
In this case, the agent has a prior distribution about model parameters $p(\Phi)$ and this model can be updated using the Bayes rule:
\begin{equation}
p(\phi|h,s,a,s') = \frac{p(\phi|h)\; p(s'|h,s,a,\phi)}{p(s'|h,s,a)}.
\end{equation}
\paragraph{Information gain over agent's model.} The expected information gain \cite{sun2011planning,little2013learning} over a forward or density model parameterized by $\phi$ can be formulated as:
\begin{align}
IG(h,A,S',S,\Phi) &= I(S';\Phi|h,A,S) = \mathbb{E}_{\substack{ (s,a) \sim p(\cdot|h) \\ s' \sim p(\cdot | s,a,h)}} D_{KL}(p(\Phi|h,s,a,s')||p(\Phi|h)) \label{eq:trueexpectedinfogain} \\
%
&\approx \mathbb{E}_{\substack{ (s,a) \sim \pi \\ s' \sim p(\cdot | s,a,h,\phi_T)}} D_{KL}(p(\Phi|h,s,a,s')||p(\Phi|h)) \label{eq:expectedinfogain}
\end{align}
Actively maximizing the expected information gain amounts to reducing the uncertainty of the model. We emphasize that $p(\phi|h) = p(\phi|h,a,s)$ since only full transitions provide information about the true dynamics of the environment. In this case, $p(s'| s,a,h)$ does not refer to the probability induced by the environment, but rather to the probability induced by the current history of transitions. This is stressed by writing:
\begin{equation}
p(s'|s,a,h) = \sum_{\phi \in \Phi} p(s'|s,a,h,\phi)p(\phi|s,a,h).\label{eq:marginalphi}
\end{equation}
We highlight that the difference between \eqref{eq:trueexpectedinfogain} and \eqref{eq:expectedinfogain} is important and often a source of confusion in the literature \cite{houthooft2016vime,little2013learning,sun2011planning}: in the first equation, the agent imagines new outcomes in order to select actions that maximize the change in its internal model, whereas in \eqref{eq:expectedinfogain}, the agent acts and uses the new states to update its model.
\paragraph{Information gain over the true forward model.} In our formalism, we assume that there is a distribution of true models $p(\Phi_T)$ that underpins the transition function of the environment $T$. In contrast with $\Phi$, this is a property of the environment. One can see this distribution as a Dirac distribution if only one model exists or as a categorical distribution of several forward models. We define the expected information gain over the true models as:
\begin{align}
IG(h,A,S',S,\Phi_T) &= I(S';\Phi_T|h,A,S) = H(\Phi_T|h,A,S) - H(\Phi_T|h,A,S,S') \nonumber \\
&= \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot|s,a,\phi_T)}} \log p(s'|s,a,h,\phi_T) - \log p(s'|s,a,h) \label{eq:predicterror3}.
\end{align}
Maximizing \eqref{eq:predicterror3} amounts to looking for states that provide new information about the distribution of true models. We can see that the left-hand term of \eqref{eq:predicterror3} incites the agent to target inherently deterministic areas, \textit{i.e}, given the true forward model, the agent would know exactly where it ends up. In contrast, the right-hand term pushes the agent to go to areas that are stochastic according to its current knowledge. Overall, to improve this objective, an agent has to reach areas that are more deterministic than it thinks they are. One can see that, assuming $p(s'|s,a,h,\phi_T) \approx p(s' | s, a, \phi, h)$, one falls back on the expected information gain (see also \eqref{eq:predicterror2}). In contrast with \eqref{eq:expectedinfogain}, this objective takes advantage of the true model, which is most of the time unknown, thereby making the objective hardly tractable. As such, in this perspective, surprise results from an agent-centric approximation of the discrepancy between the agent's model and the environment model.
In the following, we will study three objectives: the expected information gain over the true forward models, the expected information gain over the forward model and the expected information gain over density models.
\subsection{Information gain over the true forward model}\label{sec:predictionerror}
To avoid the need for the true forward model, the agent can omit the left-hand term of \eqref{eq:predicterror3} by assuming the true forward model is deterministic, in which case this term is zero. In this case, we can write:
\begin{subequations}
\begin{align}
I(S';\Phi_T|h,A,S) &\propto \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h), \phi_T \sim p(\cdot) \\ s' \sim p(\cdot|s,a,\phi_T)}} - \log p(s'|s,a,h) \label{eq:predicterror4} \\
&= \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot|s,a,\phi_T)}} - \log \sum_{\phi \in \Phi} p(s'|h,s,a,\phi)p(\phi|h) \\
%
&\geq \mathbb{E}_{\substack{\phi_T \sim p(\cdot),\, (s,a) \sim p(\cdot|h) \\ s' \sim p(\cdot|s,a,\phi_T), \phi \sim p(\cdot|h)}} - \log p(s'|h,s,a,\phi) \label{eq:predicterror5}
\end{align}
\end{subequations}
where we applied the Jensen inequality in \eqref{eq:predicterror5} and $\phi_T \sim p(\cdot)$ is fixed. One can model $p(s'|h,s,a,\phi)$ with a unit-variance Gaussian distribution in order to obtain a tractable loss. This way, we have:
\begin{subequations}
\begin{align}
\mathbb{E}_{\substack{(s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot|s,a,\phi_T),\, \phi \sim p(\cdot|h)}} - \log p(s' | \phi,h,a,s) &\approx \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h) ,\, s' \sim p(\cdot|s,a,\phi_T) \\ \phi \sim p(\cdot|h),\, \phi_T \sim p(\cdot) }} - \log \frac{1}{(2\pi)^{d/2}}e^{-0.5 (s' - \hat{s}')^T (s' - \hat{s}')} \label{eq:gaussianinfogain} \\
&\propto \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h) ,\, s' \sim p(\cdot|s,a,\phi_T) \\ \phi \sim p(\cdot|h),\, \phi_T \sim p(\cdot) }} ||s' - \hat{s}'||_2^2 + Const
\end{align}
\end{subequations}
%
where
\begin{equation}
\hat{s}' = \argmax{s'' \in S} p(s''|h,a,s,\phi)
\end{equation}
represents the mean prediction and $\phi$ parameterizes a deterministic forward model.
Following the objective, we can extract a generic intrinsic reward as:
\begin{align}
R(s,a,s')= ||f(s')- f(\hat{s}')||_2^2
\label{eq:rewpredicterror}
\end{align}
where $f$ is a generic function (e.g. the identity or a learnt one) encoding the state space into a feature space. \eqref{eq:rewpredicterror} amounts to rewarding the prediction error of $\phi$ in the representation $f$. In the following, we will see that learning a relevant function $f$ is the main challenge.
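As an illustration, the sketch below (our own; \texttt{predict\_next} is a placeholder for a learnt deterministic forward model and $f$ is taken as the identity) computes the reward of \eqref{eq:rewpredicterror}:
\begin{verbatim}
import numpy as np

def prediction_error_reward(predict_next, s, a, s_next, f=lambda x: x):
    """R(s,a,s') = ||f(s') - f(s_hat')||^2 where s_hat' is the forward-model prediction."""
    s_hat = predict_next(s, a)
    diff = f(s_next) - f(s_hat)
    return float(np.dot(diff, diff))

# Toy forward model: assumes the action is a displacement added to the state.
predict_next = lambda s, a: s + a
print(prediction_error_reward(predict_next, np.zeros(2), np.ones(2), np.array([1.0, 2.0])))
\end{verbatim}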
The first natural idea to test is whether a function $f$ is required at all. \citet{burda2019largescale} learn the forward model from the ground state space and observe that it is inefficient when the state space is large. In fact, the euclidean distance is meaningless in such a high-dimensional state space. In contrast, they show that random features extracted from a random neural network can be very competitive with other state-of-the-art methods; however, they poorly generalize to environment changes. Another model, \textit{Dynamic Auto-Encoder (Dynamic-AE)} \cite{stadie2015incentivizing}, computes the distance between the predicted and the real state in a state space compressed with an auto-encoder \cite{hinton2006reducing}; $f$ is then the encoding part of the auto-encoder. However, this approach only slightly improves the results over Boltzmann exploration on some standard Atari games. Other works also consider a dynamic-aware representation \cite{ermolov2020latent}. These methods are unable to handle the local stochasticity of the environment \cite{burda2019largescale}. For example, it turns out that adding random noise in a 3D environment attracts the agent; it passively watches the noise since it is unable to predict the next observation. \label{tele} This problem is also called the \textit{white-noise} problem \cite{pathak2017curiosity,schmidhuber2010formal}. It emerges from considering only the right-hand term of \eqref{eq:predicterror3}, which makes the agent assume that environments are deterministic. Therefore, exploration with prediction error breaks down when this assumption no longer holds.
To tackle exploration with local stochasticity, the \textit{intrinsic curiosity module (ICM)} \cite{pathak2017curiosity} learns a state representation function $f$ end-to-end with an \textit{inverse model} (i.e. a model which predicts the action done between two states). Thus, the function $f$ is constrained to represent what can be controlled by the agent during the next transitions. Secondly, the forward model used in ICM predicts, in the feature space computed by $f$, the next state given the action and the current state. The prediction error does not incorporate the white noise, which does not depend on actions, so it is not represented in the feature state space. ICM notably allows the agent to explore its environment in the games \textit{VizDoom} and \textit{Super Mario Bros}. Building similar representation spaces, \textit{Exploration with Mutual Information (EMI)} \cite{pmlr-v97-kim19a} significantly outperforms previous works on Atari, but at the cost of several complex layers. EMI transfers the complexity of learning a forward model into the learning of state and action representations through the maximization of $I([S,A];S')$ and $I([S,S'];A)$. Then, the forward model $\phi$ is constrained to be a simple linear model in the representation space. Furthermore, EMI introduces a \textit{model error} which offloads the linear model when a transition remains strongly non-linear (such as a screen change). However, one major drawback of ICM and EMI is the incapacity of their agents to keep in their representations what depends on their long-term control. For instance, in a partially observable environment, an agent may perceive the consequences of its actions several steps later.
Another way to tackle local stochasticity is to maximize the improvement of the prediction error, or learning progress, of a transition model \cite{schmidhuber1991curious,azar2019world,lopes2012exploration,oudeyer2007intrinsic,kim2020active}. One can see this as approximating the left-hand term of \eqref{eq:predicterror3} with:
\begin{align}
\log p(s'|s,a,h,\phi_T) - \log p(s'|s,a,h) &\approx \log p(s'|s,a,h') - \log p(s'|s,a,h)
\end{align}
where $h'$ concatenates $h$ with an arbitrary number of additional interactions. As $h'$ becomes large enough and the agent updates its forward model, its forward model converges to the true transition model. Formally, if one stochastic forward model can describe the transitions, we can write:
\begin{align}
\lim_{|h'|\rightarrow \infty} p(s'|s,a,h') &= \lim_{|h'|\rightarrow \infty} \sum_{\Phi} p(s'|s,a,h',\phi) p(\phi|h') \nonumber \\
&= p(s'|s,a,h',\phi_T) \label{eq:approxlearningprogress}
\end{align}
In practice, we cannot wait to discover a long sequence of new interactions, so the reward depends on a small set of interactions and on the efficiency of the gradient update of the forward model. Yet, the theoretical connection with the true expected information gain may indeed explain the robustness of learning progress to stochasticity \cite{linke2020adapting}.
\paragraph{Conclusion.} While these methods perform well in deterministic environments, they struggle to offset the determinism assumption that underpins the focus on \eqref{eq:predicterror4}; as a result, standard methods focus on the most stochastic areas. Methods that tackle stochasticity may not predict important long-term information about the environment, or they need to compute a learning progress measure, which is non-trivial.
\subsection{Information gain over forward model}\label{sec:infogainforward}
In this subsection, we study the works that maximize the expected information gain over forward models. Here, $\phi$ are parameters of a learnt forward model. Using \eqref{eq:expectedinfogain}, we can extract an intrinsic reward:
\begin{equation}
R(s,a,s') = D_{KL}(p(\Phi|h,s,a,s')||p(\Phi|h)).\label{eq:rewinfogain}
\end{equation}
This way, an agent executes actions that provide information about the dynamics of the environment. This, on the one hand, pushes the agent towards areas it does not know and, on the other hand, prevents attraction towards stochastic areas. Indeed, if an area is deterministic, environment transitions are predictable and the uncertainty about its dynamics can decrease. In contrast, if transitions are stochastic, the agent turns out to be unable to predict transitions and does not reduce uncertainty. The exploration strategy \textit{VIME} \cite{houthooft2016vime} computes this intrinsic reward by modelling $p(\phi|h)$ with Bayesian neural networks \cite{graves2011practical}. The interest of Bayesian approaches is to be able to measure the uncertainty of the learned model \cite{blundell2015weight}. This way, assuming a fully factorized Gaussian distribution over model parameters, the KL-divergence has a simple analytic form \cite{houthooft2016vime,linke2020adapting}, making it easy to compute.
However, the interest of the proposed algorithm is shown only on simple environments and the reward can be computationally expensive to compute. \citet{achiam2017surprise} propose a similar method (\textit{AKL}), with comparable results, using deterministic neural networks, which are simpler and quicker to apply. The weak performance of both models is probably due to the difficulty of retrieving the uncertainty reduction while rigorously following the mathematical formalism of the information gain.
The expected information gain can also be written:
\begin{subequations}
\begin{align}
I(S';\Phi|h,A,S) &\approx H_T(S'|h,A,S) - H_T(S'|A,\Phi,S,h) \nonumber \\
&= - \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot | s,a,h,\phi_T)}} \log p(s'|h,s,a) + \mathbb{E}_{\substack{\phi \sim p(\cdot|h,s,a,s') \\ (s,a) \sim p(\cdot|h), s' \sim p(\cdot | s,a,h,\phi_T)}} \log p(s' | s, a, \phi, h) \label{eq:predicterror} \\
&= \mathbb{E}_{\substack{\phi \sim p(\cdot|h,s,a,s'),\, \phi_T \sim p(\cdot) \\ (s,a) \sim p(\cdot|h), s' \sim p(\cdot | s,a,h,\phi_T)}} \log p(s' | s, a, \phi, h) - \log \sum_{\phi \in \Phi} p(s'|\phi,h,s,a)p(\phi|h) \label{eq:predicterror2}
\end{align}
\end{subequations}
where $H_T$ refers to the entropy with true transitions in its expectation. Using equations similar to \eqref{eq:predicterror2}, the authors of \textit{JDRX} \cite{shyam2018model} show that one can maximize the information gain by computing the Jensen-Shannon or Jensen-Rényi divergence between the distributions of next states induced by several forward models. The more the models are trained on a state-action tuple, the more they converge to the expected distribution of next states. Intuitively, the reward represents how much the different transition models disagree on the next-state distribution. Other works also maximize a similar form of disagreement \cite{pathak2019self,yao2021sample,sekar2020planning} by looking at the variance of predictions among several learnt transition models. These models can also predict in latent spaces \cite{sekar2020planning}. It appears that such methods are competitive with state-of-the-art approaches \cite{burda2019largescale}. However, the main intrinsic issue is computational, since several forward models must be trained.
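The disagreement-based variant can be sketched as follows (our own illustration, using a toy ensemble rather than learnt networks): the intrinsic reward is the variance of the next-state predictions across the ensemble of forward models.
\begin{verbatim}
import numpy as np

def disagreement_reward(models, s, a):
    """models: list of callables (s, a) -> predicted next state."""
    preds = np.stack([m(s, a) for m in models])    # shape (n_models, state_dim)
    return float(np.mean(np.var(preds, axis=0)))   # mean per-dimension variance

# Toy ensemble: members agree up to a small perturbation of their predictions.
ensemble = [lambda s, a, w=w: s + a + w for w in 0.1 * np.random.randn(5, 2)]
print(disagreement_reward(ensemble, np.zeros(2), np.ones(2)))
\end{verbatim}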
\paragraph{Conclusion.} Despite the theoretical power of the information gain for improving exploration, it remains hard to efficiently estimate it and use it in difficult tasks.
\subsection{Information gain over density model}\label{sec:infogaindensity}
Surprise can also arise by quantifying \textit{the discrepancy between its probability of occurring and the fact that it actually occurred} \cite{barto2013novelty}. To quantify this probability of occurring, in this paragraph, we assume the agent tries to learn a density model $\phi \in \Phi$ that approximates the current marginal density distribution of states $p(s')$. In this setting, we can define the expected information gain over a density model $\phi$ \cite{bellemare2016unifying}:
\begin{align}
IG(h,S,A,S',\Phi)&\approx \mathbb{E}_{\substack{ (s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot | s,a,h,\phi_T)}} D_{KL}(p(\Phi|h,s')||p(\Phi|h)).
\end{align}
We hypothesize that the adversarial training that results from this objective (active maximization of the KL-divergence and density fitting) results in an approximately uniform distribution of states (and a uniform density estimation). This may be due to the convexity of the KL-divergence in $p(\phi|h,s')$ and $p(\phi|h)$, but we leave the proof to future work. To our knowledge, no work directly optimizes this objective, but it has been shown that the information gain lower-bounds the squared inverse pseudo-count objective \cite{bellemare2016unifying}, which derives from count-based objectives; in the following, we review \textit{count} and \textit{pseudo-count} objectives.
To efficiently explore its environment, an agent can count the number of times it visits a state and return to rarely visited states. Such methods are said to be \textit{count-based} \cite{strehl2008analysis}. As the agent visits a state, the intrinsic reward associated with this state decreases. It can be formalized as:
\begin{equation}
R(s,a,s') = \frac{1}{\sqrt{N(s')}}
\end{equation}
where $N(s)$ is the number of times the state $s$ has been visited. Although this method is efficient and tractable in a tabular environment (with a discrete state space), it hardly scales when states are numerous or continuous since an agent never really returns to the same state. A first solution, proposed by \citet{tang2017exploration} and called \textit{TRPO-AE-hash}, is to hash the latent space of an auto-encoder fed with states. However, these results are only slightly better than those obtained with a classic exploration policy. Another line of work proposes to adapt counting to high-dimensional state spaces via \textit{pseudo-counts} \cite{bellemare2016unifying}. Essentially, \textit{pseudo-counts} allow the generalization of the count from a state towards neighbouring states using a learnt density model $\phi$. They are defined as:
\begin{equation}
\hat{N}(s') = \frac{p(s'|\phi)(1-p(s'|\phi'))}{p(s'|\phi')-p(s'|\phi)}
\end{equation}
where $\phi'$ is the density model after having learnt on $s'$, so that $p(s'|\phi')$ is the density of $s'$ after this update. In fact, \citet{bellemare2016unifying} show that, under some assumptions, \textit{pseudo-counts} increase linearly with the true counts. In this category, \textit{DDQN-PC} \cite{bellemare2016unifying} and
\textit{DQN-PixelCNN} \cite{ostrovski2017count} compute $\phi$ using respectively a Context-Tree Switching model (CTS) \cite{bellemare2014skip} and a Pixel-CNN density model \cite{van2016conditional}. Although the algorithms based on density models work on environments with sparse rewards, they add an important complexity layer \cite{ostrovski2017count}. One can preserve the quality of observed exploration while decreasing the computational complexity of the pseudo-count by computing it in a learnt latent space \cite{vezzani2019learning,martin2017count}.
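The count-based bonus itself is straightforward to sketch (our own illustration: a simple discretization of the state stands in for the learnt hash or density model):
\begin{verbatim}
from collections import defaultdict
import numpy as np

counts = defaultdict(int)

def count_based_reward(s_next, cell_size=0.5):
    key = tuple(np.floor(np.asarray(s_next) / cell_size).astype(int))
    counts[key] += 1
    return 1.0 / np.sqrt(counts[key])   # R(s,a,s') = 1 / sqrt(N(s'))

print(count_based_reward([0.1, 0.2]))   # 1.0 on the first visit
print(count_based_reward([0.1, 0.2]))   # ~0.71 on the second visit
\end{verbatim}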
There exist several other well-performing tractable exploration methods such as \textit{RND} \cite{burda2018exploration}, \textit{DQN+SR} \cite{machado2018count}, \textit{RIDE} \cite{ride2020roberta} or \textit{BeBold} \cite{zhang2020bebold}. These papers argue that the rewards they propose more or less relate to a count estimation.
\paragraph{Conclusion.} Maximizing the information gain over a density model may maximize the pseudo-count, which relates to count-based objectives. These provide interesting feedback for exploration, but in practice, pseudo-counts are hard to approximate since they rely on a powerful density model, require a strict online estimation of the density and assume that $p(s|\phi)$ strictly increases after each visit of $s$, $\forall s \in S$ \cite{ostrovski2017count}. In addition, they also struggle with the problem of randomness. For instance, let us assume that one (state, action) tuple can lead to two very different states with 50\% chance each. The algorithm will manage to count the number of visits of both states, although it will take twice as long before the agent stops being attracted. However, these methods do not address the white-noise problem, since next states may be randomly generated at every step; in this case, it is unclear how these methods could resist the temptation of going into such an area, since the count associated with these states will never increase.
\subsection{Conclusion} We detailed three ways to define and maximize the surprise of an agent, based on the expected information gain over a true model of the environment. In practice, the expected information gain over a forward model and the learning progress approximate well the expected information gain over the true model. It appears that they intuitively and experimentally allow the exploration of inherently stochastic environments, but are hard to implement. The expected information gain over a density model can be seen as approximating the expected information gain over the true uniform density model. This makes the agent target a uniform distribution of states, which leaves it sensitive to stochasticity. In fact, we discuss in the next section the relevance of aiming for a uniform distribution of states, through the study of novelty-based intrinsic motivations.
\section{Novelty maximization}\label{sec:novelty}
Novelty quantifies how much a stimulus contrasts with a previous set of experiences \cite{barto2013novelty,berlyne1966curiosity}. More formally, \citet{barto2013novelty} argue that \textit{an observation is novel when a representation of it is not found in memory, or, more realistically, when it is not “close enough” to any representation found in memory}. Previous experiences may be collected in a bounded memory or distilled in a learnt representation.
Several works propose to formalize novelty seeking as looking for low-density states \cite{becker2021exploration}, or similarly (cf. \secref{sec:knearest}), states that are different from others \cite{lehman2011novelty,conti2018improving}. In our case, this would result in maximizing the entropy of a state distribution. This distribution can be the t-steps state distribution (cf. \eqref{eq:dpi}) $H(d^{\pi}_t(S))$ or the entropy of the stationary state-visitation distribution over a finite horizon $T$:
\begin{align}
H(d^{\pi}(S))=H(\frac{1}{T} \sum_{t=1}^T d^{\pi}_t(S)).
\end{align}
In practice, these distributions can be approximated with a buffer. This formalization is not perfect and does not fit several intuitions about novelty \cite{barto2013novelty}. \citet{barto2013novelty} criticize such a definition by stressing that very distinct and memorable events may have low probabilities of occurring while not being novel (\textit{e.g} a wedding). They suggest that novelty may rather relate to the acquisition of a representation of the incoming sensory data. Following this definition, we propose to formalize novelty-seeking behaviors as those that \textit{actively} maximize the mutual information between states and their representation, $I(S;Z)=H(S) - H(S|Z)$, where $Z$ is a low-dimensional space ($|Z| \leq |S|$). This objective is commonly known as the \textit{infomax} principle \cite{linsker1988self,almeida2003misep,bell1995information,HjelmFLGBTB19}; in our case, it amounts to \textbf{actively} learning a representation of the environment. Most works focus on actively maximizing the entropy of the state distribution while a representation learning function minimizes $H(S|Z)$. Furthermore, if one assumes that $Z=S$, the infomax principle collapses to an entropy maximization $H(S)$.
There are several ways to maximize the state entropy; we separate them based on how they maximize the entropy. We found two kinds of methods: low-density search and k-nearest neighbors methods.
\subsection{Direct entropy maximization}\label{sec:directdensity}
The most evident way to maximize the entropy of states consists in maximizing $H(\rho(S))$ where $\rho(s)$ approximates the density $p(s)$. If we have access to this density model, it becomes straightforward to discover a policy that maximizes the entropy of the stationary state distribution \cite{hazan2019provably}. But computing $\rho(s)$ is challenging in high-dimensional state spaces. Several methods propose to estimate $\rho(s)$ using variational inference \cite{exploration2021zhang,islam2019entropy,lee2019efficient,pong2019skew} based on autoencoder architectures.
In this setting, we can use either \eqref{eq:badapprox} \cite{lee2019efficient} or \eqref{eq:unbiasedapprox} \cite{pong2019skew}, assuming $z$ is a compressed latent variable, $p(z)$ a prior distribution \cite{KingmaW13} and $q_{decoder}$ a neural network that ends with a diagonal Gaussian.
\begin{align}
\rho(s) &\approx q_{decoder}(s|z)q_{encoder}(z|s) \label{eq:badapprox}\\
&\approx \frac{1}{N} \sum_{i=1}^N \frac{p(z)}{q_{encoder}(z|s)}q_{decoder}(s|z) \label{eq:unbiasedapprox}
\end{align}
\eqref{eq:unbiasedapprox} is unbiased but more expensive to compute than \eqref{eq:badapprox} since it requires decoding several samples. Basically, this estimation allows rewarding an agent \cite{berseth2020smirl,lee2019efficient,exploration2021zhang} according to:
\begin{equation*}
R(s,a,s') = - \log \rho(s').
\label{eq:logpbs}
\end{equation*}
Within this setting, \citet{pong2019skew} and \citet{lee2019efficient} learn new skills that target these novel states (see also \secref{sec:skilllearning}). \textit{MaxRenyi} \cite{exploration2021zhang} uses the Rényi entropy, a more general version of the Shannon entropy, to give more importance to very low-density states. \citet{islam2019entropy} propose to condition the state density estimation on the policy parameters in order to directly back-propagate the gradient of the state entropy into the policy parameters. Although \textit{MaxRenyi} achieves good scores on \textit{Montezuma's revenge} with pure exploration, maximizing the ground state entropy may not be adequate since two close ground states are not necessarily neighbors in the true environment \cite{aubret2021distop}. Following this observation, \textit{GEM} \cite{guo2021geometric} rather maximizes the entropy of the estimated density of states considering the dynamic-aware proximity of states, $H(Z)$. However, they do not actively consider $H(Z|S)$.
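As an illustration of the reward $-\log \rho(s')$, the sketch below (ours) uses a kernel density estimate over a buffer of visited states, standing in for the variational estimators of \eqref{eq:badapprox} and \eqref{eq:unbiasedapprox}:
\begin{verbatim}
import numpy as np
from sklearn.neighbors import KernelDensity

buffer = np.random.randn(1000, 2)                  # previously visited states
density = KernelDensity(bandwidth=0.5).fit(buffer)

def low_density_reward(s_next):
    log_rho = density.score_samples(np.asarray(s_next).reshape(1, -1))[0]
    return -log_rho                                # rare states get a larger reward

print(low_density_reward([0.0, 0.0]))              # frequently visited region: small reward
print(low_density_reward([5.0, 5.0]))              # rarely visited region: large reward
\end{verbatim}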
\paragraph{Conclusion.} Generally speaking, these methods need an accurate density model to provide rewards. In the next paragraph, we study methods that avoid learning a density model.
\subsection{K-nearest neighbors approximation of entropy}\label{sec:knearest}
\begin{figure}
\centering
\includegraphics[width=0.2\linewidth]{knearest.pdf}
\caption{Illustration of the correlation between density and the fourth-nearest neighbor distance. Circles represent states and red dotted lines show the distance between a state and its fourth nearest neighbor. }
\label{fig:knearest}
\end{figure}
Several works propose to approximate the entropy of a distribution using samples and their k-nearest neighbors \cite{singh2003nearest,kraskov2004estimating}. In fact, such an objective has already been referred to as novelty \cite{conti2018improving}. Assuming $nn_k(S,s_i)$ is a function that outputs the k-th closest state to $s_i$ in $S$, this approximation can be written as:
\begin{equation}
H(S) \propto \frac{1}{|S|} \sum_{s_i \in S} \log ||s_i - nn_k(S,s_i)||_2 + \chi(|S|) + Const
\label{eq:knearestequation}
\end{equation}
where $\chi$ is the digamma function. This approximation assumes the uniformity of states in the ball centered on a sampled state with radius $||s_i - nn_k(S,s_i)||_2$ \cite{lombardi2016nonparametric}, but its full form is unbiased with a large number of samples \cite{singh2003nearest}. Intuitively, it means that the entropy is proportional to the average distance between states and their neighbors. \figref{fig:knearest} shows how density estimation relates to the k-nearest-neighbor distance: low-density states tend to be more distant from their nearest neighbors. Few methods \cite{mutti2020policy} provably relate to such estimations, but several approaches take advantage of the distance between states and their neighbors to generate intrinsic rewards, making them related to such an entropy maximization. For instance, \textit{APT} \cite{liu2021behavior} proposes an intrinsic reward based on the k-nearest-neighbor estimation of entropy:
\begin{align}
R(s,a,s') = \log \Big(1+ \frac{1}{K} \sum_{k=1}^K || g(s') - nn_k(g(S),g(s')) ||_2\Big)
\end{align}
where $g$ is a representation function learnt with a contrastive loss based on data augmentation \cite{srinivas2020curl} and $K$ denotes the number of k-nn estimations. By looking for distant state embeddings during an unsupervised pre-training phase, they manage to considerably speed up task-learning in the DeepMind Control Suite. The representation $g$ can also derive from a random encoder \cite{liu2021behavior} or a contrastive loss that ensures the Euclidean proximity of consecutive states \cite{tao2020novelty,yarats2021reinforcement}.
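A minimal sketch of this kind of bonus is given below; the representation function \texttt{g}, the buffer content and the value of $k$ are illustrative placeholders.
\begin{verbatim}
import numpy as np

def knn_intrinsic_reward(embedded_state, embedded_buffer, k=12):
    # Average distance from the embedded state to its k nearest
    # neighbors in the buffer of embedded states (APT-style bonus).
    dists = np.linalg.norm(embedded_buffer - embedded_state, axis=1)
    k_nearest = np.sort(dists)[:k]
    return float(np.log(1.0 + k_nearest.mean()))

# Hypothetical usage, with g any representation function (random or
# contrastive encoder) and buffer an array of previously visited states:
# reward = knn_intrinsic_reward(g(s_next), g(buffer))
\end{verbatim}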
\paragraph{Identifying different states.}
Instead of relying on the Euclidean distance, one can try to learn a similarity function. \textbf{EX$^2$} \cite{fu2017ex2} learns a discriminator to differentiate states from each other: when the discriminator does not manage to differentiate the current state from those in the buffer, it means that the agent has not visited this state enough and it will be rewarded. States are sampled from a buffer, which implies the need for a large buffer. To avoid this, some methods distill recent states into a prior distribution over latent variables \cite{kim2019curiosity,klissarovvariational}. The intrinsic reward for a state is then the KL-divergence between a fixed diagonal Gaussian prior and the posterior of the distribution of latent variables. In this case, common latent states fit the prior while novel latents diverge from the prior.
\paragraph{Intra-episode novelty.}
K-nearest neighbors intrinsic rewards have also been employed to improve intra-episode novelty \cite{stanton2018deep}. It contrasts with standard exploration since the agent looks for novel states in the current episode: typically, it can try to reach all states after every reset. This setting is possible when the policy depends on all its previous interactions, which is often the case when an agent evolves in a POMDP, since the agent has to be able to predict its value function even though it varies widely during episodes. This way, ECO \cite{savinov2018episodic} and Never Give Up \cite{badia2019never} use an episodic memory and learn to reach states that have not been visited during the current episode; a sketch of such an episodic bonus is given below.
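The encoder, the aggregation over the $k$ closest memories and the constants below are illustrative choices, not the exact mechanism of the cited methods.
\begin{verbatim}
import numpy as np

class EpisodicNoveltyBonus:
    # Rewards states by their distance to an episodic memory that is
    # cleared at every reset, so novelty is judged within the episode.
    def __init__(self, encoder, k=10):
        self.encoder, self.k, self.memory = encoder, k, []

    def reset(self):
        self.memory = []

    def reward(self, state):
        z = self.encoder(state)
        if not self.memory:
            self.memory.append(z)
            return 1.0
        dists = np.sort([np.linalg.norm(z - m) for m in self.memory])
        bonus = float(np.mean(dists[:self.k]))  # mean distance to the k closest
        self.memory.append(z)
        return bonus
\end{verbatim}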
\paragraph{Conclusion} K-nn methods turn out to be simple to experiment with, but they strongly rely on learnt dynamic-aware representations since they fully take advantage of a meaningful Euclidean proximity in the embedding; their theoretical connection to the rigorous approximation of entropy remains most of the time unclear and the approach scales badly with an increase of the memory size. We note that simple methods can tackle the issue of finding the neighbors by partitioning close states together \cite{yarats2021reinforcement}. Overall, we observe efficient exploration and the methods easily translate to intra-episode exploration.
\subsection{Conclusion}
In this section, we reviewed works that maximize novelty to improve exploration with flat policies. We formalized novelty as actively discovering a representation according to the infomax principle, even though most works only maximize the entropy of states or of representations of states. Still, some works manage to learn a representation that matches the inherent structure of the environment \cite{tao2020novelty}. It suggests that learning a good representation is most of the time enough. For instance, \citet{guo2021geometric} and \citet{tao2020novelty} compute a reward based on a learnt representation, but a badly represented state perhaps tends to be located in low-density areas. If so, active representation entropy maximization would correlate with state-conditional entropy minimization.
We are not aware of many methods that actively learn a representation maximizing $I(Z;S)$. Yet, we highlight three methods that strive to actively learn a representation of states. In \textit{CRL} \cite{du2021curious}, \textit{NOR} \cite{nachum2019near} and \textit{CuRe} \cite{aljalbout2021seeking}, the agent plays a minimax game: a module learns a representation function with a contrastive loss and the agent actively challenges the representation by looking for states with a large loss.
\section{Skill learning}\label{sec:skilllearning}
In our everyday life, nobody has to think about moving their arm muscles to grasp an object. A command to take the object is just issued. This can be done because an acquired skill can be effortlessly reused.
Skill abstraction denotes the ability of an agent to learn a representation of diverse skills. We formalize skill abstraction as maximizing the mutual information between the goal $g \in G$ and some part of the contextual states $f(\tau) \in f(\mathcal{T})$, denoted as $I(G; f(\mathcal{T}))$, where $\tau \in \mathcal{T}$ is a trajectory and $f$ a function that extracts a subpart of the trajectory (the last state, for example). The definition of $f$ depends on the wanted semantic meaning of a skill. Letting $s_0$ refer to the state at which the skill started and $s$ to a random state from the trajectory, we highlight two settings based on the literature:
\begin{itemize}
\item $f(\mathcal{T}) = S$, the agent learns skills that target a particular state of the environment \cite{eysenbach2018diversity}.
\item $f(\mathcal{T}) = \mathcal{T}$, the agent learns skills that follow a particular trajectory. This way, two different skills can end in the same state if they cross different areas \cite{co2018self}.
\end{itemize}
Most works maximize $I(G; S)$, so unless stated otherwise, we refer to this objective. In the following, we will study the different ways to maximize $I(G;S)$, which can be written in its reversed form $I(S;G) = H(G) - H(G|S)$ or forward form $ I(G;S) = H(S) - H(S|G)$ \cite{campos2020explore}. In particular, we emphasize that:
\begin{align}
- H(G | S) &= \sum_{g \in G, s \in S} p(g,s) \log p(g|s) \\
&= \mathbb{E}_{\substack{g \sim p(g) \\ s \sim \pi^g }} \log p(g|s)
\label{eq:im}
\end{align}
where, to simplify, $p(g)$ is the current distribution of goals (approximated with a buffer) and $s \sim \pi^g$ denotes the distribution of states that results from the policy that achieves $g$. Note that $p(g,s) = p(s|g)p(g) $.
In this section, we first focus on methods that assume they can learn all skills induced by a given goal space/goal distribution and that assign parts of trajectories to every goal. The second set of methods directly derives the goal space from visited states, which raises two different challenges that we treat separately: the agent has to learn to reach a selected goal and it must maximize the diversity of goals it learns to reach. We make this choice of decomposition because some contributions focus on only one part of the objective function.
\subsection{Fixing the goal distribution}\label{sec:predefinedG}
The first approach assumes the goal space is provided in advance, but not the semantic meaning of each goal. In this setting, the agent samples goals uniformly from $G$, ensuring that $H(G)$ is maximal, and it progressively assigns every possible goal to a part of the state space. To do this assignment, the agent maximizes the reward provided by \eqref{eq:im}:
\begin{equation}
R(g,s,a,s') = \log q_{\omega}(g|s')
\label{eq:vlbim}
\end{equation}
where $q_{\omega}(g|s')$ represents a learnt discriminator (often a neural network) that approximates $p(g|s')$.
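A minimal sketch of one iteration of this loop is given below, for a discrete set of skills; the environment interface, the policy, the discriminator and the rollout length are placeholders, not the implementation of any specific cited method.
\begin{verbatim}
import numpy as np

def discriminator_reward(logits, goal):
    # Intrinsic reward log q_w(g|s') from the discriminator logits.
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return float(log_probs[goal])

def collect_skill_episode(env, policy, discriminator, n_goals, horizon=200):
    goal = np.random.randint(n_goals)     # g ~ Uniform(G), keeps H(G) maximal
    s = env.reset()
    transitions = []
    for _ in range(horizon):
        a = policy(s, goal)
        s_next, _, done, _ = env.step(a)  # the extrinsic reward is ignored
        r_int = discriminator_reward(discriminator(s_next), goal)
        transitions.append((s, goal, a, r_int, s_next))
        s = s_next
        if done:
            break
    # The discriminator is then trained to predict g from the visited states,
    # and the skill policies are updated with any RL algorithm on r_int.
    return transitions
\end{verbatim}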
\begin{figure}
\centering
\subfloat[Skills are not learnt yet.]{
\includegraphics[width=0.22\linewidth]{elsim/diayn.png}
\label{fig:diayna}
}
\quad
\subfloat[The discriminator tries unsuccessfully to distinguish the skills. ]{
\includegraphics[width=0.22\linewidth]{elsim/diayn25.png}
\label{fig:diaynb}
}
\quad
\subfloat[Each skill learns to go in the area assigned to it by the discriminator.]{
\includegraphics[width=0.22\linewidth]{elsim/diayn2.png}
\label{fig:diaync}
}
\quad
\subfloat[Skills locally spread out by maximizing action entropy \protect\cite{haarnoja2018soft}.]{
\includegraphics[width=0.22\linewidth]{elsim/diayn3.png}
\label{fig:diaynd}
}
\caption{The agent (circle) starts an episode in the center of the environment, colors denote the trajectories of their corresponding skills.}
\label{fig:diaynall}
\end{figure}
At first, we focus on a discrete number of skills, where $p(g)$ represents a uniform categorical distribution. \figref{fig:diaynall} sums up the learning process with two discrete skills: 1- skills and the discriminator $q$ are randomly initialized; 2- the discriminator tries to differentiate the skills from the states $s$ of their trajectories, in order to approximate $p(g|s)$; 3- skills are rewarded with \eqref{eq:vlbim} in order to make them go in the area assigned to them by the discriminator; 4- finally, skills are clearly distinguishable and target different parts of the state space. \textit{SNN4HRL} \cite{florensa2017stochastic} and \textit{DIAYN} \cite{eysenbach2018diversity} implement this procedure by approximating $p(g|s)$ with, respectively, a partition-based normalized count and a neural network. \textit{VALOR} \cite{achiam2018variational} also uses a neural network, but discriminates trajectories rather than single states. In this setting, the agent executes one skill per episode.
\textit{HIDIO} \cite{zhang2020hierarchical} sequentially executes skills, yet it is not clear how it manages to avoid forgetting previously learnt skills. Maximizing $I(G;S|S_0)$ like \textit{VIC} \cite{gregor2016variational} or $I(G;S_0|S)$ like \textit{R-VIC} \cite{baumli2021relative} makes it hard to keep a uniform (for instance) goal distribution given $S_0$, because every skill may not be executable everywhere in the state space. Therefore, they also maximize the entropy term with another reward bonus similar to $\log p(g|s_0)$. They learn discriminable skills, but still struggle to combine them on complex benchmarks \cite{baumli2021relative}. Keeping $p(g)$ uniform, \textit{DADS} \cite{sharma2019dynamics} maximizes the forward form of the mutual information $I(S;G|S_0) = H(S|S_0) - H(S|G,S_0)$ by approximating $p(s | s_0)$ and $p(s | s_0,g)$. This method makes it possible to plan over skills and can combine several locomotion skills. However, this requires several conditional probability density estimations on the ground state space, which may scale badly to higher-dimensional environments.
These methods tend to stay close to their starting point \cite{campos2020explore} and do not learn skills that cover the whole state space. In fact, it is easier for the discriminator to overfit on a small area than to make a policy go into a novel area; this results in many policies that target a restricted part of the state space \cite{choi2021variational}. Accessing the whole set of true possible states and deriving the set of goals by encoding states can considerably improve the coverage of skills \cite{campos2020explore}.
\paragraph{Approaches for a better coverage of states.} Heterogeneous methods address the problem of overfitting of the discriminator. The naive way is to regularize the learning process of the discriminator: \textit{ELSIM} \cite{aubret2020elsim} takes advantage of L2 regularization and progressively expands the goal space $G$ to cover larger areas of the state space, and \citet{choi2021variational} propose to use spectral normalization \cite{miyato2018spectral}. More consistent dynamic-aware methods may further improve regularization; however, it remains hard to scale these methods to the large number of skills which is necessary in a large environment. In the above-mentioned methods, the number of skills greatly increases \cite{achiam2018variational,aubret2020elsim} and the discrete skill embedding does not provide information about the proximity of skills. Therefore, learning a continuous embedding may be more efficient.
\paragraph{Continuous embedding.} The prior uniform distribution $p(g)$ is far more difficult to set in a continuous embedding. One can introduce the \textit{continuous DIAYN} \cite{choi2021variational,zhang2020hierarchical} with a prior $p(G) = \mathcal{N}(0^d,I)$ where $d$ is the number of dimensions, or the \textit{continuous DADS} with a uniform distribution over $[-1; 1]$ \cite{sharma2019dynamics}, yet it remains unclear how the skills could adapt to complex environments, where the prior does not globally fit the inherent structure of the environment. \textit{VISR} \cite{visf2020ansen} seems to, at least partially, overcome this issue with a long unsupervised training phase and successor features. They uniformly sample goals on the unit sphere and compute the reward as a dot product between unit-normed goal vectors and successor features, $\log q_{\omega}(g|s) = \phi_{successor}(s)^T g$.
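A sketch of this reward, assuming a placeholder successor-feature network whose outputs are normalized to unit norm, could look like:
\begin{verbatim}
import numpy as np

def sample_unit_goal(dim):
    # Sample a goal uniformly on the unit sphere.
    g = np.random.randn(dim)
    return g / np.linalg.norm(g)

def visr_style_reward(successor_features, state, goal):
    # log q_w(g|s) modelled as the dot product phi_successor(s)^T g.
    phi = successor_features(state)
    phi = phi / np.linalg.norm(phi)
    return float(np.dot(phi, goal))
\end{verbatim}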
\paragraph{Conclusion.} This set of methods manages to learn discrete skills that can be combined, yet, despite regularization, discrete skills struggle to cover a very large state space \cite{aubret2020elsim}. Successful adaptations that scale to large state spaces currently rely on the relevance of successor features. In the next two sections, we study how to maximize the mutual information by assuming the goal space derives from the state space.
\subsection{Achieving a state-goal}\label{sec:goalstate}
In this section, we review how current methods maximize the goal-achievement part of the objective of the agent, $-H(S_g|S)$, where $S_g$ refers to the goal-relative embedding of states. We temporarily set aside $H(S_g)$ and will come back to it in the next subsection, \secref{eq:diversestate}, mainly because the two issues are tackled separately in the literature.
Obviously, maximizing $- H(S_g | S)$ can be written as:
\begin{align}
- H(S_g | S) &= \sum_{S_g,S} p(s_g,s) \log p(s_g|s) = \mathbb{E}_{\substack{s_g \sim p(s) \\ s \sim \pi^{s_g} }} \log p(s_g|s)
\end{align}
where, to simplify, $p(s)$ is the current distribution of states (approximated with a buffer) and $s \sim \pi^{s_g}$ denotes the distribution of states that results from the policy that achieves $s_g$. If $p(s_g|s')$ is modelled as an unparameterized Gaussian with a unit-diagonal covariance matrix, we have $\log p(s_g|s') \propto -||s_g-s'||_2^2 + Const$, so that we can reward an agent according to:
\begin{equation}
R(s_g,s,a,s')= -||s_g-s'||_2^2.
\label{eq:distance_reward}
\end{equation}
It means that if the goal is a state, the agent must minimize the distance between its state and the goal state. To achieve this, it can take advantage of a goal-conditioned policy $\pi^{s_g}(s)$.
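A minimal sketch of such a goal-conditioned distance reward is given below; the optional encoder anticipates the embedded variant studied in the next paragraphs and defaults to the identity (all names are illustrative).
\begin{verbatim}
import numpy as np

def goal_reward(goal_state, next_state, encoder=lambda x: np.asarray(x)):
    # R(s_g, s, a, s') = -||phi(s_g) - phi(s')||^2; with the identity
    # encoder this is the plain state-space distance reward.
    diff = encoder(goal_state) - encoder(next_state)
    return -float(np.dot(diff, diff))

# Hypothetical usage with a goal-conditioned policy pi(s, s_g):
# a = policy(s, s_g); s_next = env.step(a); r = goal_reward(s_g, s_next)
\end{verbatim}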
\paragraph{Ground state space.} This way, \textit{Hierarchical Actor-Critic (HAC)} \cite{levy2018hierarchical} directly uses the state space as a goal space to learn three levels of options (the options from the second level are selected to fulfill the chosen option from the third level). A reward is given when the distance between states and goals (the same distance as in Equation \ref{eq:distance_reward}) is below a threshold, and they take advantage of HER to avoid using the threshold directly. Similar reward functions can be found in \citet{pitis2020maximum} and \citet{zhao2019maximum}. Related to these works, \textit{HIRO} \cite{nachum2019data} uses as a goal the difference between the initial state and the state at the end of the option, $f(\mathcal{T}) = S_f - S_0$.
This approach is relatively simple and does not require extra neural networks. However, there are two problems with using the state space in the reward function. Firstly, a distance (like L2) makes little sense in a very large space such as images composed of pixels. Secondly, it is difficult for a manager policy to learn over too large an action space. Typically, an algorithm that uses images as goals implies an action space of $84\times 84\times 3$ dimensions for the goal-selection policy (in the case of an image with standard shape). Such a wide space is currently intractable, so these algorithms can only work on low-dimensional state spaces.
\paragraph{Learning a representation of goals.} To tackle this issue, an agent can learn a low-dimensional embedding of the state space $\phi_e$ and maximize the reward of \eqref{eq:distance_reward_phi} using a goal-conditioned policy $\pi^{\phi_e(s_g)}(s)$:
\begin{equation}
R(s_g,s,a,s')= -||\phi_e(s_g)-\phi_e(s')||_2^2.
\label{eq:distance_reward_phi}
\end{equation}
Similarly to \eqref{eq:distance_reward}, this amounts to maximizing $- H(S_g | S)$. \textit{RIG} \cite{nair2018visual} proposes to build the feature space independently with a variational auto-encoder (VAE); but this approach can be very sensitive to distractors (i.e. features inside states that are useless for the task or goal) and does not allow features to be correctly weighted. Similar approaches also encode parts of trajectories \cite{kim2021unsupervised,co2018self} for similar mutual information objectives. \textit{SFA-GWR-HRL} \cite{zhou2019vision} uses unsupervised methods like the \textit{slow feature analysis} \cite{wiskott2002slow} and \textit{growing when required} \cite{marsland2002self} algorithms to build a topological map. A hierarchical agent then uses the nodes of the map, representing positions in the world, as a goal space. However, the authors do not compare their contribution to previous approaches.
Other approaches learn a state embedding that captures the proximity of states with contrastive losses. For instance, \textit{DISCERN} \cite{warde2018unsupervised} learns the representation function by maximizing the mutual information between the last state representation and the state-goal representation. Similarly to works in \secref{sec:predefinedG}, the fluctuations around the objective bring states around $s_g$ closer to it in the representation. More explicitly, the representation of \textit{NOR} \cite{nachum2019near} maximizes $I(\phi_e(S_{t+k});\phi_e(S_t),A_{t:t+k})$ and those of \textit{LESSON} \cite{li2021learning} and \textit{DisTop} \cite{aubret2021distop} approximately maximize $I(\phi_e(S_{t+1});\phi_e(S_t))$. \textit{LESSON} and \textit{NOR} target a change in the representation and manage to navigate in a high-dimensional maze while learning the intrinsic Euclidean structure of the mazes. Their skills can be reused in several environments. However, experiments are made in 2-dimensional embedding spaces and it remains unclear how relevant goals defined as changes in a higher-dimensional embedding space may be: the more the number of dimensions increases, the more difficult it is to distinguish possible skills from impossible ones in a state. DisTop targets goal-states; it has more difficulty navigating large environments, but also works in non-maze environments. We discuss this issue again in the next section.
\paragraph{Conclusion.} To sum up, representation learning methods allow learning state-based skills over complex state spaces. Learning this representation function, combined with the use of the Euclidean distance as a reward function, amounts to learning a particular form of reward function in addition to providing pre-computed features to the goal-conditioned policy. In the next subsection, we study how to maximize $H(S_g)$ so as to make sure the learnt skills are diverse.
\subsection{Proposing diverse state-goals}\label{eq:diversestate}
To make sure the agent maximizes the mutual information between its goals and all visited states, it must sample a diverse set of goal-states. In other words, it has to maximize $H(S_g)$ but through goal selection rather than with an intrinsic bonus as in \secref{sec:novelty}. Similarly to works on novelty (cf. \secref{sec:novelty}), such entropy maximization along with skill acquisition (cf. \secref{sec:goalstate}) tackles the exploration challenge, but without facing catastrophic forgetting (cf. \secref{sec:detachment}) since the agent does not forget its skills.
A naive approach would be to generate random values in the goal space, but this faces a considerable problem: the set of achievable goals is often a very small subset of the entire goal space. To tackle this, a first approach can be to explicitly learn to differentiate these two sets of goals \cite{florensa2018automatic,racaniere2019automated}, using for example a Generative Adversarial Network (GAN) \cite{florensa2018automatic,goodfellow2014generative}, but this is ineffective in complex environments \cite{pong2019skew}. Other works obtain good results on imagining new goals, but using a compositional goal space, either given \cite{colas2020intrinsically} or learnt from a dataset \cite{khazatsky2021can}; results show this may be a strong candidate for object-based representations. In contrast, in the more general case, an agent can simply set a previously met state as a goal; this way, it ensures that goals are reachable, since they have already been achieved. In the rest of this section, we focus on this set of methods.%
In \textit{RIG} \cite{nair2018visual}, the agent randomly samples states as goals from its buffer, but this does not increase the diversity of states, and thus the diversity of learnt skills. \citet{pong2019skew} showed theoretically and empirically that, by sampling goals from a distribution over the support of visited states that is $\alpha$-more uniform than the ``achieved'' distribution, the distribution of states of the agent can converge to the uniform distribution. Intuitively, the agent just samples low-density goals more often, as illustrated in \figref{fig:reweight}. There are several ways to increase the importance of low-density goal-states, which we introduce in the following.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{reweight.pdf}
\caption{Illustration of the reweighting process. \textbf{Left}: probability of visited states to be selected as goals before reweighting. \textbf{Right}: probability of visited states to be selected as goals after density-reweighting. This figure simplifies the figure of \protect\citet{pong2019skew}.}
\label{fig:reweight}
\end{figure}
\paragraph{Density estimation in the ground state space.} \textit{DISCERN} \cite{warde2018unsupervised} proposes to sample uniformly over the support of visited states with a simple procedure. Every time the agent wants to add an observation to its buffer, it randomly samples another observation from its buffer and only keeps the one that is the farthest from all other states of the buffer. This way, it progressively builds a uniform distribution of states inside its buffer. However, it uses the Euclidean distance to compare images, which may not be relevant. Other approaches select the state that has the lowest density (\textit{OMEGA}) \cite{pitis2020maximum} according to a kernel density estimation, or use the rank of state densities \cite{zhao2019curiosity} estimated with a Variational Gaussian Mixture Model \cite{blei2006variational}. In contrast with them, \textit{Skew-fit} \cite{pong2019skew} provides more flexibility on how uniform one wants the distribution of states to be. \textit{Skew-fit} extends RIG and learns a parameterized generative model $q_{\phi}(S) \approx p(S)$ and skews the generative model (VAE) with the ratio:
\begin{equation}
q_{\phi}(s)^{\alpha_{skew}}.\label{eq:skewratio}
\end{equation}
where $\alpha_{skew} < 0$ determines the speed of uniformisation. This way, it gives more importance to low-density states. Then, it weights all visited states according to the density approximated by the generative model at the beginning of each epoch, which is made of a predefined number of timesteps. Skew-fit manages to explore image-based environments very efficiently. As shown in \cite{aubret2021distop}, this ratio, applied to a discrete number of skills, amounts to rewarding a Boltzmann goal-selection policy with:
\begin{equation}
R(s_g) = (1+\alpha_{skew}) \log p(s_g).
\end{equation}
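The sketch below illustrates the skewed goal-sampling step, under the illustrative assumption that a log-density model over the buffer of visited states is available; names and default values are placeholders.
\begin{verbatim}
import numpy as np

def sample_skewed_goals(buffer_states, log_density, n_goals, alpha_skew=-1.0):
    # Sample goal-states with probability proportional to q_phi(s)^alpha_skew;
    # alpha_skew < 0 favours low-density (rarely visited) states.
    log_q = np.array([log_density(s) for s in buffer_states])
    log_w = alpha_skew * log_q
    w = np.exp(log_w - np.max(log_w))            # numerically stable weights
    p = w / w.sum()
    idx = np.random.choice(len(buffer_states), size=n_goals, p=p)
    return [buffer_states[i] for i in idx]
\end{verbatim}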
\paragraph{Density reweighting by partitioning the embedding space.} With a different objective, \textit{GRIMGREP} \cite{kovavc2020grimgep} partitions the VAE embedding of Skew-fit with a Gaussian Mixture Model \cite{rasmussen1999infinite} to estimate the learning progress of each partition and avoid distractors. The density weighting can also operate in a learnt embedding. \textit{HESS} \cite{li2021efficient} partitions the embedding space of \textit{LESSON} and rewards the agent with a variant of a count-based bonus (see \secref{sec:infogain}). It improves exploration in a two-dimensional latent embedding, but the size of partitions may not scale well if the agent considers more latent dimensions. In contrast, \textit{DisTop} \cite{aubret2021distop} dynamically clusters a dynamic-aware embedding space using a variant of the Growing When Required network \cite{marsland2002self}; it estimates the density of a state according to how many states its partition contains, and skews the distribution of sampled goals similarly to Skew-fit. \textit{HESS} and \textit{DisTop} demonstrate their ability to explore and navigate with an ant inside complex mazes without extrinsic rewards.
\paragraph{Conclusion.} Entropy maximization methods improve over standard skill learning methods by learning to reach as many states as possible.
We expect further works to show the ability to scale to even more complex environments, with higher-dimensional latent structure \cite{li2021efficient}. Learning compositional representations (modeling disentangled objects and relations) remains hard, but imagination has proven useful to increase skill diversity.
\subsection{Conclusion} We found two main ways to discover skills. The first one provides a goal space and assigns goals to areas of the state space; it struggles to learn and sequentially execute skills that target different areas of the state space. The second one derives the goal space from the state space with a representation learning method and over-weights the sampling of low-density visited areas.
\section{Outlooks of the domain}\label{sec:outlooks}
In this section, we take a step back and thoroughly analyze the results of our overall review. We first study the exploration process of flat intrinsic motivation in comparison with hierarchical intrinsic motivations in \secref{sec:detachment}; then, this will motivate our focus on the challenges induced by learning a deep hierarchy of skills in \secref{sec:dev}. Finally, in \secref{sec:flatim}, we discuss how flat and hierarchical intrinsic motivations can and should cohabit in such hierarchy.
\subsection{Long-term exploration, detachment and derailment}\label{sec:detachment}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{images/detachment2.png}
\caption{Illustration of the \textit{detachment} issue. Image extracted from \protect\citet{goexplore}. Green color represents intrinsically rewarding areas, white color represents no-rewards areas and purples areas are currently being explored. (1) The agent starts to learn and has not explored the environment yet. (2) It discovers the rewarding area at the left of its starting position and explores it. (3) It consumed close intrinsic rewards on the left part, thus it prefers gathering the right-part intrinsic rewards. (4) Due to catastrophic forgetting, it forgot how to reach the intrinsically rewarding area on the left.}
\label{fig:detachment2}
\end{figure}
The most challenging benchmarks used with flat intrinsic motivations (surprise and novelty) are \textit{DMLab} and \textit{Montezuma's revenge}, yet very sparse reward games such as \textit{Pitfall!} are not currently addressed and should be investigated. In \textit{Pitfall!}, the first reward is reached only after traversing multiple rooms, each requiring a specific action sequence to get through. State-of-the-art IM methods \cite{ostrovski2017count} achieve 0 mean reward in this game. In contrast, imitation RL methods \cite{aytar2018playing,hester2018deep} are insensitive to such a specific reward structure, and thus exceed IM methods with a mean reward of 37232 on \textit{Montezuma's revenge} and 54912 on \textit{Pitfall!}. Even though these methods use expert knowledge, this performance gap exhibits their resilience to long-term rewards. Compared with flat intrinsic reward methods, which do not exceed a 10000 score on \textit{Montezuma's revenge} \cite{burda2018exploration} and hardly achieve any score on \textit{Pitfall!} \cite{ostrovski2017count}, it shows that flat IM is still far from solving the overall problem of exploration.
Furthermore, we want to emphasize that the challenge is harder when the intrinsic reward itself is sparse \cite{burda2018exploration}. In \textit{Montezuma's revenge}, it is about avoiding using a key too quickly in order to be able to use it later. In everyday life, it can be about avoiding spending money too quickly. In fact, there is an exploration issue in the intrinsic reward function itself: an intrinsic reward can guide exploration only on the condition that the agent finds this intrinsic reward. There may be two reasons causing the intrinsic reward to be sparse:
\begin{enumerate}
\item The first comes from partial observability, with which most models are incompatible. Typically, if an agent has to push a button and can only see the effect of this push after a long sequence of actions, density models and predictive models may not provide meaningful intrinsic rewards. There would be too large a distance between the event ``push a button'' and the intrinsic reward.
\item \figref{fig:detachment2} illustrates the second issue, called \textit{detachment} \cite{goexplore,ecoffet2021first}. It results from a distant intrinsic reward coupled with catastrophic forgetting. Simply stated, the RL agent can forget the presence of an intrinsic reward in a distant area: it is hard to maintain the correct Q-value that derives from a distant, currently unvisited rewarding area. This is emphasized in on-policy settings.
\end{enumerate}
Pursuing such a distant intrinsic reward may be even harder due to the possible \textit{derailment} issue \cite{goexplore,ecoffet2021first}. Essentially, an agent may struggle to execute the long sequence of specific actions needed to reach a distant rewarding area because local stochasticity induces dithering all along the sequence. Detachment motivates the need for a hierarchical exploration \cite{ecoffet2021first} and derailment motivates frontier-based exploration \cite{bharadhwaj2020leaf}, which consists in deterministically reaching the area to explore before starting exploration.
\subsection{Deeper hierarchy of skills}\label{sec:dev}
According to \citet{brooks1991intelligence}, \textit{everything is grounded in primitive sensor motor patterns of activation}. This \textit{everything} may refer to the structure of the world and agent affordances. Capturing this knowledge amounts to forming concept representations and reusable skills \cite{weng2001autonomous}, using them as a basis for new skills \cite{prince2005ongoing}, exploring the environment to find new interesting skills, and autonomously self-generating goals in accordance with the level and morphology of the agent.
Most works presented in \secref{sec:skilllearning} abstract actions over a restricted number of hierarchical levels (generally one). This is necessary to understand the mechanism of abstraction well, but we want to argue that imposing deeper hierarchies could considerably enhance the semantic comprehension of the environment by an agent. Organisms are often assumed to deal with compositions of behaviors, which in turn serve as building blocks for more complex behaviors \cite{flash2005motor}. This way, using a limited vocabulary of skills makes it easier to avoid the curse of dimensionality associated with the redundancy of a whole set of ground behaviors.
Our surveyed works \cite{nachum2019near,aubret2021distop,li2021learning,guo2021geometric,ermolov2020latent} already propose to learn the representations using the slowness principle \cite{wiskott2002slow}, which assumes temporally close states should be similarly represented. By configuring the time-extension of the representation, one may focus on different semantic parts of the state space. This can be seen in \secref{sec:abstraction}: 1- the agent can learn a very low-level representation that provides skills that manipulate the torques of a creature \cite{aubret2021distop}; 2- skills can also orientate an agent in a maze by extracting (x,y) coordinates from a complex state representation \cite{li2021efficient}. While these works do not try to combine and learn several representations at the same time, further works could consider separating different parts of states (\textit{e.g.} agent positions and object positions \cite{mutual2021zhao}) or learning these representations at different time scales. In practice, data-augmentation methods already allow learning object-oriented representations \cite{mitrovic2020representation,grill2020bootstrap,mussa2004neural}. Most augmentations could also be derived with contrast over time by considering, for instance, an embodied agent moving its eyes/head (crops/cuts), turning its head (rotation), controlling vergence (resizing, blur) or, without interventions, color and brightness changes \cite{chen2020simple}. Overall, this stresses the potential of time-contrastive representations for disentangling the whole state space and providing semantically different skills; new works in this area may unlock new kinds of skills.
\textit{Skill focus.}
In a developmental process, multi-level hierarchical RL questions the ability of the agent to learn all policies of the hierarchy simultaneously. This obviously relates to the ability of organisms to continually learn throughout their lifetime; but in a more practical way, it may allow focusing the learning process on the skills that are interesting for higher-level skills. This focus avoids learning everything in the environment \cite{aubret2021distop}, which is hard and obviously not done by biological organisms. For instance, most people cannot do a somersault.
\textit{Critical periods and lifelong learning.}
Considering a goal representation that changes over time introduces new issues for the agent. In this case, the goal-conditioned policy may be perturbed by the changes of inputs and may no longer be able to reach the goal \cite{li2021efficient}. Current methods consider 1- developmental periods (unsupervised pre-training \cite{metzen2013incremental}); 2- modifying the representation every $k$ steps/epochs \cite{pong2019skew}; 3- imposing slow changes of the representation \cite{li2021efficient}. Further works may thoroughly investigate the relation and transitions between these methods since they can relate to the concept of critical periods \cite{hensch2004critical,konczak2004neural}. Critical periods assume that the brain is more plastic at some periods of development in order to acquire specific knowledge. Despite this mechanism, the brain slowly keeps learning throughout the lifetime. In the hierarchy of skills, the introduction of a new level may first result in a quick/plastic learning process, followed by slower changes.
\subsection{The role of flat intrinsic motivations}\label{sec:flatim}
In \secref{sec:detachment}, we essentially criticized the limited role that flat intrinsic motivations like surprise or novelty can play in favor of exploration, and we hypothesized in \secref{sec:dev} that deeper hierarchies could make an understanding of more complex affordances emerge. Then, what could be the roles of surprise and novelty?
\textit{Novelty.} We saw in \secref{sec:novelty} that novelty-seeking behaviors allow learning a correct representation of the whole environment; this can be a basis for learning diverse skills. While some methods consider a goal as a state and manage to avoid using novelty bonuses \cite{pong2019skew}, this is harder to do when skills have a different semantic (like a change in the state space). \citet{nachum2019near} provide a meaningful example of this: the agent acts to simultaneously discover a representation of the environment and achieve upper-level goals.
\textit{Surprise.} We leave aside the interest of surprise for learning a forward model that could be used for planning \cite{hafner2019learning} and rather focus on the learning process. Surprise amounts to looking for the learning progress of forward models so that, in a hierarchy of skills, it quantifies whether skills can currently be better learnt or not. This links surprise to curriculum learning \cite{bengio2009curriculum}, \textit{i.e.} can we find a natural order in which to efficiently learn skills? For example, assuming an agent wants to learn to reach state-goals in a maze, it would be smarter to start learning skills that target goals close to its starting position and to progressively extend its goal selection while learning other skills. Several strategies have been proposed to hierarchically select goals in a smart way \cite{colas2019curious,linke2020adapting}, yet they often do not consider intrinsic skills \cite{colas2019curious}.
To sum up, we propose that the role of surprise and novelty may rather be to support the learning of skills. Novelty seeking helps to learn the representation required by the skill learning module and surprise speeds up the maximization of the skill learning objective. They may interact as a loop: first, the agent learns a new representation, then it evaluates surprise to select which skill to improve and the skill learning process starts. Considering this, there would be several surprises and novelties: an agent can experience a novel or surprising interaction at one level of decision (damaging a toy while walking), yet it does not mean other levels would be surprised (it is still on the same road). This emphasizes the multi-dimensionality and relativity of the notions of surprise and novelty \cite{berlyne1960conflict}: only a part of the incoming stimuli may arouse the agent.
\section{Conclusion}
In this survey, we have presented the current challenges faced by DRL: namely 1- learning with \textit{sparse rewards} through exploration; 2- \textit{building a hierarchy of skills} in order to ease credit assignment, exploration with \textit{sparse rewards} and \textit{transfer learning}.
We identified several types of IM to tackle these issues, which we classified into three categories based on the maximized information-theoretic objective: \textit{surprise}, \textit{novelty} and \textit{skill learning}. Surprise-based and novelty-based intrinsic motivations implicitly improve flat exploration, while skill learning allows creating a hierarchy of reusable skills that also improves exploration.
\textbf{Surprise} results from maximizing the mutual information between the true model parameters and the next state, knowing the previous state, the action and the history of interactions. We have shown that it can be maximized through three sets of works: information gain over predictive models, information gain over density models, or prediction errors/learning progress. In practice, we found that the information gain over density models is ill-defined for purely stochastic areas and that the determinism assumption underpinning prediction error methods complicates their application. The next challenges may be to make good approximations of surprise tractable.
\textbf{Novelty} seeking can be assimilated to learning a representation of the environment, through the maximization of the mutual information between states and their representations. The most important term to actively maximize appears to be the entropy of states or representations, which can be approximated in two ways: 1- one can reward an agent according to the parametric density of its next state, but this density is complicated to estimate; 2- one can also reward an agent according to the distance between a state and already visited states, making the approach tractable, in particular when the agent learns a dynamic-aware representation. We expect future works to benefit from directly looking for good representations rather than uniformity of states.
Finally, using the \textbf{skill learning} objective, which amounts to maximizing the mutual information between a goal and a part of the trajectories of the corresponding skill, an agent can learn hierarchies of temporally-extended skills. Skills can be directly learnt by attributing parts of a fixed goal space to areas of the state space, but it remains to clarify how well goals can be embedded in a continuous way and whether these approaches are robust when skills are sequentially executed. The second approach derives the goal space from the state space, often through a time-contrastive loss, and expands the skill set by targeting low-density areas. It remains to be demonstrated how one could create larger hierarchies of skills.
The three objectives are compatible and we have discussed how they could interact to provide a robust exploration with respect to the \textit{detachment} issue, along with reusable hierarchical skills, a quick and focused skill acquisition and multi-semantic representations.
\section{Introduction}
In reinforcement learning (RL), an agent learns by trial-and-error to maximize the expected rewards gathered as a result of its actions performed in the environment \cite{sutton1998reinforcement}. Traditionally, an agent maximizes a reward defined according to the task to perform: it may be a score when the agent learns to solve a game or a distance function when the agent learns to reach a goal. The reward is then considered as extrinsic (or as a feedback) because the reward function is provided expertly and specifically for the task. With an extrinsic reward, many spectacular results have been obtained on Atari games \cite{bellemare15} with the Deep Q-network (DQN) \cite{mnih2015human} through the integration of deep learning into RL, leading to deep reinforcement learning (DRL).
However, despite the recent improvements of DRL approaches, they turn out to be unsuccessful most of the time when the rewards are sparsely scattered in the environment, as the agent is then unable to learn the desired behavior for the targeted task \citep{franccois2018introduction}. Moreover, the behaviors learned by the agent are hardly reusable, both within the same task and across many different tasks \citep{franccois2018introduction}. It is difficult for an agent to generalize the learnt skills to make high-level decisions in the environment. For example, such a skill could be \textit{go to the door} using primitive actions consisting in moving in the four cardinal directions, or \textit{move forward} controlling different joints of a humanoid robot like in the robotic simulator MuJoCo \citep{todorov2012mujoco}.
On another side, unlike RL, developmental learning \cite{piaget1952origins,cangelosi2018babies,oudeyer2016evolution} is based on the observation that babies, or more broadly organisms, acquire new skills while spontaneously exploring their environment \cite{gopnik1999scientist,barto2013intrinsic}. This is commonly called an intrinsic motivation (IM), which can be derived from an intrinsic reward. This kind of motivation allows an agent to autonomously gain new knowledge and skills, which then makes the learning process of new tasks easier \cite{baldassarre2013intrinsically}. For several years now, IM has been increasingly used in RL, fostered by important results and the emergence of deep learning. This paradigm offers a greater learning flexibility, through the use of a more general reward function, making it possible to tackle the issues raised above when only an extrinsic reward is used. Typically, IM improves the agent's ability to explore its environment, to incrementally learn skills independently of its main task, to choose an adequate skill to be improved and even to create a representation of its state with meaningful properties. In addition, as a consequence of its definition, IM does not require additional expert supervision, making it easily generalizable across environments.
\paragraph{Scope of our review.}
In this paper, we study and group together methods through a novel taxonomy based on information theoretic objectives. This way, \textbf{we revisit the notions of surprise, novelty and skill learning and show that they can encompass numerous works.} Each class is characterized by a computational objective that fits its eventual psychological definition. This allows us to situate/relate a large body of works and to highlight important directions of research. To sum up, this paper investigates the use of IM in the framework of DRL and considers the following aspects:
\begin{itemize}
\item The role of IM in addressing the challenges of DRL.
\item Classifying current heterogeneous works through few information theoretic objectives.
\item Important outlooks of IM in RL within and across each category.
\end{itemize}
\paragraph{Related works.} The overall literature on IM is huge \citep{barto2013intrinsic} and we only consider its application to DRL and IMs related to information theory. Therefore, our study of IMs is not meant to be exhaustive. Intrinsic motivation currently attracts a lot of attention and several works have made a restricted study of the approaches. \citet{colas2020intrinsically} and \citet{amin2021survey} respectively focus on the different aspects of skill learning and exploration; \citet{baldassarre2019intrinsic} studies intrinsic motivation through the lens of psychology, biology and robotics; \citet{pateria2021hierarchical} review hierarchical reinforcement learning as a whole, including extrinsic and intrinsic motivations; \citet{linke2020adapting} experimentally compare different goal selection mechanisms. In contrast with these approaches, we study a large set of objectives, all based on intrinsic motivation, through the lens of information theory. We assume that our work is in line with the work of \citet{schmidhuber2008driven}, which postulates that organisms are guided by the desire to compress the information they receive. However, by reviewing the more recent advances in the domain, we formalize the idea of compression with the tools of information theory.
\rebut{\paragraph{Structure of the paper.}} This paper is organized as follows. As a first step, we discuss RL, define intrinsic motivation and explain how it fits the RL framework (\secref{sec:defs}). Then, we highlight the main current challenges of RL and identify the need for an additional outcome (\secref{sec:defis}). Thereafter, we briefly explain our classification (\secref{sec:classify}), namely surprise, novelty and skill learning and we detail how current works fit it (respectively \secref{sec:infogain}, \secref{sec:novelty} and \secref{sec:skilllearning}). Finally, we highlight some important outlooks of the domain (\secref{sec:outlooks}).
\section{Definitions and Background}\label{sec:defs}
In this section, we review the background of the RL field, explain the concept of IM and describe how to integrate IM in the RL framework through goal-parameterized RL, hierarchical RL and information theory. \rebut{We sum up the notations used in the paper in \tabref{tab:notations} in \appref{app:notations}.}
\subsection{Markov decision process}\label{sec:mdp}
The goal of a Markov Decision Process (MDP) is to maximize the expectation of cumulative rewards received through a sequence of interactions \citep{puterman2014markov}. It is defined by: $S$ the set of possible states; $A$ the set of possible actions; $T$ the transition function $T : S \times A \times S \rightarrow [0,1]$, $T(s,a,s') = p(s'|s,a)$; $R$ the reward function $R : S \times A \times S \rightarrow \mathbb{R}$; $d_0 : S \rightarrow \mathbb{R}$ the initial distribution of states. An agent starts in a state $s_0$ given by $d_0$. At each time step $t$, the agent is in a state $s_t$ and performs an action $a_t$; then it waits for the feedback from the environment composed of a state $s_{t+1}$ sampled from the transition function $T$, and a reward $r_t$ given by the reward function $R$. The agent repeats this interaction loop until the end of an episode. In reinforcement learning, the goal can be to maximize the expected discounted return defined by $\sum_{t=0}^{\infty} \gamma^t r_t$, where $\gamma \in[0,1]$ is the discount factor. When the agent does not access the whole state, the MDP can be extended to a Partially Observable Markov Decision Process (POMDP) \citep{kaelbling1998planning}. In comparison with an MDP, it adds a set of possible observations $O$, which defines what the agent can perceive, and an observation function $\Omega: S \times O \rightarrow \mathbb{R}$ that defines the probability of observing $o \in O$ when the agent is in the state $s$, \textit{i.e} $\Omega(s,o) = p(o|s)$.
A reinforcement learning algorithm aims to associate actions $a$ with states $s$ through a policy $\pi$. This policy induces a $t$-step state distribution that can be recursively defined as:
\begin{equation}
d^{\pi}_t(S) = \int_S d^{\pi}_{t-1}(s_{t-1}) \int_A p(s_t|s_{t-1},a)\pi(a|s_{t-1}) da\, ds_{t-1}\label{eq:dpi}
\end{equation}
with $d^{\pi}_0(S)=d_0$. The goal of the agent is then to find the optimal policy $\pi^*$ maximizing the reward:
\begin{equation}
\pi^* = \argmax{\pi} \mathbb{E}_{\substack{s_0\sim d_0(S)\\
a_t \sim \pi(\cdot|s{_t})\\
s_{t+1}\sim p(\cdot|s_t,a_t)}}
\left[\sum_{t=0}^{\infty} \gamma{^t} R(s{_t},a_t,s_{t+1})\right] .
\end{equation}
In order to find the action maximizing the long-term reward in a state $s$, it is common to maximize the expected discounted gain following a policy $\pi$ from a state, noted $V_{\pi}(s)$, or from a state-action tuple, noted $Q_{\pi}(s,a)$ (cf. \eqref{eq:espeQ}). It enables measuring the impact of the state-action tuple on the
cumulative reward \cite{sutton1998reinforcement}.
\begin{equation}
Q_{\pi}(s,a) = \mathbb{E}_{\substack{a{_t}\sim\pi(\cdot|s{_t})\\
s_{t+1}\sim p(\cdot|s_t,a_t)}}
\left[\sum_{t=0}^{\infty} \gamma{^t} R(s{_t},a{_t},s_{t+1})|_{s_0=s,a_0=a} \right]. \label{eq:espeQ}
\end{equation}
To compute these values, one can take advantage of the Bellman equation verified by the optimal Q-function:
\begin{equation}
\label{eq:bellman}
Q^*(s_t,a_t) = \mathbb{E}_{s_{t+1}\sim p(\cdot|s_t,a_t)} \big[ R(s_t,a_t,s_{t+1}) + \gamma \: \max_a Q^*(s_{t+1},a) \big].
\end{equation}
$Q$ and/or $\pi$ are often approximated with neural networks when the state space is continuous or very large \cite{mnih2016asynchronous,lillicrap2015continuous}.
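For illustration, a minimal tabular instance of this Bellman update could read as follows; the environment is assumed to expose \texttt{reset}, \texttt{step} and \texttt{sample\_action} methods, and all hyperparameter values are arbitrary.
\begin{verbatim}
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    # Tabular Q-learning: a sample-based fixed point of the Bellman equation.
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            if np.random.rand() < epsilon:
                a = env.sample_action()
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            # move Q(s,a) toward r + gamma * max_a' Q(s', a')
            target = r + gamma * (0.0 if done else np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
\end{verbatim}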
\subsection{Definition of intrinsic motivation}\label{sec:defint}
Simply stated, intrinsic motivation is about doing something for its inherent satisfaction rather than to get a positive feedback from the environment \cite{ryan2000intrinsic}. Looking at this definition, one can notice that intrinsic motivation is defined by contrast with extrinsic motivation; it highlights the difference between the two paradigms. Intrinsic motivation assumes the agent learns on its own while extrinsic motivation assumes there exists an expert/need that supervises the learning process.
According to \citet{singh2010intrinsically}, evolution provides a general intrinsic motivation (IM) function that maximizes a fitness function based on the survival of an individual. Curiosity, for instance, does not immediately produce selective advantages but enables the acquisition of skills providing by themselves some selective advantages. More widely, the use of intrinsic motivation allows obtaining intelligent behaviors which may later serve goals more efficiently than with only a standard reinforcement \cite{baldassarre2013intrinsically,baldassarre2011intrinsic,lehman2008exploiting}. Typically, a student doing his/her mathematical homework because he/she thinks it is interesting is intrinsically motivated, whereas his/her classmate doing it to get a good grade is extrinsically motivated \cite{ryan2000intrinsic}. In the future, the intrinsically motivated student may be more successful in math than the other one. This questions the relevance of using only standard reinforcement methods.
More rigorously, \citet{oudeyer2008can} explain that an activity \textit{is intrinsically motivating for an autonomous entity if its interest depends primarily on the collation or comparison of information from different stimuli and independently of their semantics}. In contrast, an extrinsic reward results from an unknown, static environment function which does not depend on the previous experience of the agent in the considered environment. The main point is that the agent must not have any \textit{a priori} on the semantics of the observations it receives. Here the term \textit{stimuli} does not refer to sensory inputs, but more generally to the outputs of a system which may be internal or external to the independent entity, thereby including \textit{homeostatic} body variables (temperature, hunger, thirst, attraction to sexual activities \dots) \cite{baldassarre2011intrinsic,berlyne1965structure}. Broadly speaking, the motivation of an agent can be internal (\textit{source of motivation}) while still being extrinsic (\textit{why} of the actions). For instance, when an agent is looking for food because of hunger, hunger is a stimulus coming to the cognitive system of the agent, such that it is an internal but extrinsic motivation. As another example, a child may do his/her homework because he/she thinks it will be crucial to later get a job. While the source of the motivation is internal, the true outcome comes from the environment.
Now that we have clarified the notion of intrinsic motivation, we study how to integrate intrinsic motivation in the RL framework.
An extensive overview of IM can be found in \citet{barto2013intrinsic}.
\subsection{A model of RL with intrinsic rewards}\label{sec:modelRL}
Reinforcement learning is derived from behaviorism \cite{skinner} and usually uses extrinsic rewards \cite{sutton1998reinforcement}. However, \citet{singh2010intrinsically} and \citet{barto2004intrinsically} reformulated the RL framework to incorporate IM. We can differentiate \textit{rewards}, which are events in the environment, from \textit{reward signals}, which are internal stimuli to the agent. Thus, what is named \textit{reward} in the RL community is in fact a \textit{reward signal}. Inside the \textit{reward signal} category, there is a distinction between \textit{primary reward signals} and \textit{secondary reward signals}. The \textit{secondary reward signal} is a local \textit{reward signal} computed through expected future rewards and is related to the value function,
whereas the \textit{primary reward signal} is the standard \textit{reward signal} received from the MDP.
In addition, rather than considering the MDP environment as the environment in which the agent achieves its task, this model suggests that the MDP environment can be formed of two parts: the \textbf{external part}, which corresponds to the potential task and the environment of the agent; and the \textbf{internal part}, which computes the MDP states and the \textit{secondary reward signal} using potentially previous interactions. Consequently, we can consider an intrinsic reward as a \textit{reward signal} received from the MDP environment. The MDP state is no longer the external state but an internal state of the agent. However, from now on, we will follow the terminology of RL and the term \textit{reward} will refer to the \textit{primary reward signal}.
Figure \ref{im:rlintrinsic} summarizes the framework: the critic is in the internal part of the agent; it computes the intrinsic reward and deals with the credit assignment. The agent can merge intrinsic rewards and extrinsic rewards in its internal part. The state includes sensations and any form of internal context; in this section, we refer to this state as a contextual state. The decision can be a high-level decision decomposed by the internal environment into low-level actions.
\begin{figure}
\begin{centering}
\includegraphics[width=0.4\linewidth]{images/IM.drawio.pdf}
\caption{\rebut{Model of RL integrating IM}, taken from \protect\citet{singh2010intrinsically}. The environment is factored into an internal and external environment, with all rewards coming from the former.}
\label{im:rlintrinsic}
\end{centering}
\end{figure}
This conceptual model incorporates intrinsic motivations into the formalism of MDP. Now, we will review how this model is instantiated in practice. Indeed it is possible to extend RL to incorporate the three new components that are intrinsic rewards, high-level decisions and contextual states. We separately study them in the following sections.
\subsection{Intrinsic rewards and information theory}
Throughout our definition of intrinsic motivation, one can notice that the notion of \textit{information} comes up a lot. This is not a coincidence, and quantifying information proves useful to generate intrinsic rewards. In this section, we provide the basics of information theory and explain how to combine intrinsic and extrinsic rewards. However, we emphasize that intrinsic rewards are not restricted to information measures and their characterization mostly depends on whether the reward function fits the properties of an intrinsic motivation.
The Shannon entropy quantifies the mean information necessary to determine the value of a random variable. Let $X$ be a random variable with probability density $p(X)$ satisfying the normalization and positivity requirements; its entropy is defined by:
\begin{equation}
H(X) = -\int_{X} p(x)\log p(x) dx .
\end{equation}
In other words, entropy quantifies the disorder of a random variable. The entropy is maximal when $X$ follows a uniform distribution, and minimal when $p(X)$ is equal to zero everywhere except at one value, i.e. a Dirac distribution. From this, we can also define the entropy conditioned on a random variable $S$. It is similar to the classical entropy and quantifies the mean information necessary to determine $X$ knowing the value of another random variable $S$:
\begin{equation}
H(X|S) = -\int_{S} p(s)\int_{X} p(x|s)\log p(x|s) dx ds.
\end{equation}
The mutual information quantifies the information that a random variable $X$ contains about another random variable $Y$. It can also be viewed as the decrease in the disorder of $X$ brought by a random variable $Y$. The mutual information is defined by:
\begin{equation}
I(X;Y) = H(X) - H(X|Y)
\end{equation}
We can notice that the mutual information between two independent variables is zero (since $H(X|Y)=H(X)$). Similarly to the conditional entropy, the conditional mutual information quantifies the information contained in a random variable about another random variable, knowing the value of a third one. It can be written in various ways:
\begin{subequations}
\begin{align}
I(X;Y|S) &= H(X|S) - H(X|Y,S) = H(Y|S) - H(Y|X,S) \label{information2} \\
&= D_{KL} \Big[ p(X,Y|S) || p(X|S)p(Y|S)\Big] \label{kldiv}
\end{align}
\end{subequations}
We can see with \eqref{information2} that the mutual information is symmetric and that it characterizes the decrease in the entropy of $X$ brought by $Y$ (or conversely). \eqref{kldiv} defines the conditional mutual information as the Kullback-Leibler divergence \cite{cover2012elements}, \rebut{noted $D_{KL}(.||.)$}, between the distribution $p(X,Y|S)$ and the same distribution if $Y$ and $X$ were independent variables (the case where $H(Y|X,S) = H(Y|S)$).
For further information on these notions, the interested reader can refer to \citet{cover2012elements}. Sections 5, 6, 7 illustrate how we can use information theory to reward an agent. In practice, there are multiple ways to integrate an intrinsic reward into a RL framework. The main approach is to compute the agent's reward $r$ as a weighted sum of an intrinsic reward $r_{int}$ and an extrinsic reward $r_{ext}$: $r=\alpha r_{int} + \beta r_{ext}$ \cite{kakade2002dopamine,burda2018exploration}. Of course, one of the weighting coefficients $\alpha$ and $\beta$ can be set to $0$.
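As a toy illustration of these quantities, the following Python sketch (our own illustration, not taken from any of the cited works; all function names are ours) computes the entropy of a discrete distribution, the mutual information of a joint probability table and the weighted combination of intrinsic and extrinsic rewards:
\begin{verbatim}
import numpy as np

def entropy(p):
    # Shannon entropy (in nats) of a discrete distribution p.
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def mutual_information(p_xy):
    # I(X;Y) = H(X) - H(X|Y), from a joint probability table p_xy[i, j].
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
    h_x_given_y = sum(p_y[j] * entropy(p_xy[:, j] / p_y[j])
                      for j in range(p_xy.shape[1]) if p_y[j] > 0)
    return entropy(p_x) - h_x_given_y

def combined_reward(r_int, r_ext, alpha=1.0, beta=1.0):
    # r = alpha * r_int + beta * r_ext; set alpha or beta to 0 to disable a term.
    return alpha * r_int + beta * r_ext

# Two independent binary variables: I(X;Y) = 0.
print(mutual_information(np.full((2, 2), 0.25)))
\end{verbatim}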
\subsection{Decisions and hierarchical RL}\label{sec:hrl}
Hierarchical reinforcement learning (HRL) architectures are adequate candidates to model the decision hierarchy of an agent \cite{barto2003recent,dayan1993feudal,sutton1999between}. \citet{dayan1993feudal} introduced the feudal hierarchy, called \textit{Feudal reinforcement learning}. In this framework, a manager selects the goals that workers will try to achieve by selecting low-level actions. Once the worker has achieved the goal, the manager can select another goal, so that the interactions keep going. The manager rewards the RL-based worker to guide its learning process; we formalize this with intrinsic motivation in the next section. Below, \figref{im:abstract_actions} illustrates the use of a hierarchical decision in contrast with the use of low-level actions. Originally, hierarchical architectures were introduced to ease long-term credit assignment \cite{dayan1993feudal,sutton1999between}. This problem refers to the fact that rewards can occur with a temporal delay and will only very weakly affect temporally distant states that have preceded them, although these states may be important to obtain that reward. Indeed, the agent must propagate the reward along the entire sequence of actions (through \eqref{eq:bellman}) to reinforce the first involved state-action tuple. This process can be very slow when the action sequence is long. This problem also concerns determining which action, among all actions of the sequence, is decisive for getting the reward. In contrast, if an agent can take advantage of temporally-extended actions, a long sequence of low-level actions becomes a short sequence of time-extended decisions, which eases the propagation of rewards.
This goal setting mechanism can be extended to create managers of managers so that an agent can recursively define increasingly abstract decisions as the hierarchy of RL algorithms grows. With respect to \figref{im:rlintrinsic}, the internal environment of an RL module becomes the lower-level module. We can model these decisions as \textit{options}. An \textit{option} $op \in \mathcal{O}$ is defined through three components: 1- A set of starting states $\mathcal{I} \subset S$ from which an \textit{option} can be applied; 2- A policy (or worker) that is responsible for achieving the \textit{option} with lower-level actions, which is studied in the next section; 3- A completion function $\mathcal{F}$ that specifies the probability of completing the \textit{option} in each state.
Typically, the starting states can derive from $d_0$ (all \textit{options} start at the beginning of an episode) or from the full set of states $S$ (\textit{options} can start everywhere). The completion function can also assign a probability $0$ everywhere \cite{eysenbach2018diversity}; in this case, the \textit{option} ends at the same time as the episode. Such specific cases often occur \cite{eysenbach2018diversity}. \textit{Options} were originally learnt during a pre-training phase with exclusively extrinsic rewards \cite{sutton1999between}; this was meant to take advantage of expert knowledge on the task. However, in our framework, we are interested in intrinsically motivated agents, so, in the next section, we take a closer look at how to learn the policies that achieve goals using intrinsic motivation. In particular, we will define goals, skills and explain how to build a contextual state.
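For concreteness, the following Python sketch gathers the three components of an \textit{option} in a single container (a minimal illustration under our own naming conventions, not an implementation from the cited works):
\begin{verbatim}
from dataclasses import dataclass
from typing import Any, Callable

State, Action = Any, Any

@dataclass
class Option:
    # Initiation set I: predicate telling whether the option can start in a state.
    initiation: Callable[[State], bool]
    # Worker policy achieving the option with low-level actions.
    policy: Callable[[State], Action]
    # Completion function F: probability of terminating the option in a state.
    completion: Callable[[State], float]

# An option that can start everywhere and only ends with the episode
# (completion probability 0 everywhere), as in the specific cases above.
always_on = Option(initiation=lambda s: True,
                   policy=lambda s: 0,          # placeholder low-level action
                   completion=lambda s: 0.0)
\end{verbatim}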
\subsection{Goal-parameterized RL}\label{sec:goalpam}
Usually, RL agents solve only one task and are not suited to learn multiple tasks. Thus, an agent is unable to generalize across different variants of a task. For instance, if an agent learns to grasp a circular object, it will not be able to grasp a square object. In the developmental model described in \secref{sec:modelRL}, the decisions can be hierarchically organized into several levels where an upper level takes decisions (or sets goals) that a lower level has to satisfy. This raises two questions: 1- how can a DRL algorithm make its policy dependent on the goal set by its upper-level decision module? 2- how can the intrinsic reward be computed using the goal? These issues gave rise to a new formalism based on developmental machine learning \cite{colas2020intrinsically}.
In this formalism, a \textbf{goal} is defined by the pair $(g,R_G)$ where $G \subset \mathbb{R}^d$, $R_G$ is a goal-conditioned reward function and $g \in G$ is the $d\text{-dimensional}$ goal embedding. This contrasts with the notion of task, which is proper to an extrinsic reward function assigned by an expert to the agent. With such an embedding, one can generalize DRL to multi-goal learning, or even to every available goal in the state space, with the Universal Value Function Approximator (UVFA) \cite{schaul2015universal}. UVFA integrates, by concatenation, the goal embedding $g$ with the state of the agent to create a contextual state $c = (g,s)$. Depending on the semantic meaning of a skill, we can further enhance the contextual states with other actions or states executed after the skill has started (cf. \secref{sec:skilllearning}).
We can now define the \textbf{skill} associated to each goal as the goal-conditioned policy $\pi^g(a|s)=\pi(a|g,s)$; in other words, a skill refers to the sensorimotor mapping that achieves a goal \cite{thill2013theories}. This skill may be learnt or not, according to the expected intrinsic rewards it gathers. It implies that, if the goal space is well-constructed (often the ground state space, for example $G=S$), the agent can generalize its policy across the goal space, \textit{i.e} the corresponding skills of two close goals are similar. For example, let us consider an agent moving in a closed maze where every position in the maze can be a goal. We can set $G=S$ and set the intrinsic reward function to be the negative euclidean distance between the goal and the current state of the agent: $R_G: S \times G \rightarrow \mathbb{R}, (s,g) \rightarrow -||s-g||_2$.
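A minimal Python sketch of this maze example, assuming states and goals are 2-D positions (the helper names are ours), builds the UVFA contextual state by concatenation and computes the distance-based intrinsic reward:
\begin{verbatim}
import numpy as np

def contextual_state(state, goal):
    # UVFA-style contextual state c = (g, s): concatenate goal embedding and state.
    return np.concatenate([np.asarray(goal, float), np.asarray(state, float)])

def goal_reward(state, goal):
    # R_G(s, g) = -||s - g||_2 for a maze where G = S.
    return -float(np.linalg.norm(np.asarray(state, float) - np.asarray(goal, float)))

c = contextual_state(state=[0.0, 0.0], goal=[2.0, 1.0])
print(c, goal_reward([0.0, 0.0], [2.0, 1.0]))  # reward increases as the agent nears g
\end{verbatim}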
This formalism completes the instantiation of the architectures described in \secref{sec:modelRL}. Now we will explain how, in practice, one can efficiently learn the goal-conditioned policy.
\subsection{Efficient learning with goal relabelling}\label{sec:relabeling}
When the goal space is a continuous state space, it is difficult to determine whether a goal is reached or not, since two continuous values are never exactly equal. Hindsight experience replay (HER) \cite{andrychowicz2017hindsight} tackles this issue by providing a way to learn on multiple objectives with only one interaction. With this method, the agent can use an interaction done to accomplish one goal to learn on another goal, by modifying the associated intrinsic reward. This mechanism greatly improves sample efficiency since it avoids trying every interaction for every goal.
Let us roll out an example. An agent acts in the environment to gather a tuple $(s,s',r_g,a,g)$ where $r_g$ is the reward associated to the goal $g$. The agent can learn on this interaction, but it can also use this interaction to learn other goals; to do so, it can change the goal into a new goal and recompute the reward, resulting in a new interaction $(s,s',r_{g'},a,g')$. The only constraint for doing this is that the reward function $R(s,a,s',g')$ has to be known, which is the case with an intrinsic reward function. Typically, an agent can have a goal state and a reward function which is $1$ if it is in that state and $0$ otherwise. At every interaction, it can replace its true goal state by its current state and learn with a positive reward.
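The following Python sketch illustrates this relabelling step on the goal-state example (a simplified illustration with our own helper names, not the original HER implementation):
\begin{verbatim}
import numpy as np

def sparse_reward(next_state, goal, eps=1e-3):
    # 1 if the reached state is (numerically) the goal state, 0 otherwise.
    return float(np.linalg.norm(np.asarray(next_state) - np.asarray(goal)) < eps)

def relabel(s, a, s_next, g):
    # Hindsight relabelling: replace the original goal by the reached state
    # and recompute the intrinsic reward with the known reward function.
    new_g = np.asarray(s_next).copy()
    return (s, a, s_next, sparse_reward(s_next, new_g), new_g)

s, a, s_next, g = np.zeros(2), 0, np.array([0.3, 0.1]), np.array([1.0, 1.0])
original = (s, a, s_next, sparse_reward(s_next, g), g)  # reward 0: goal g was missed
print(relabel(s, a, s_next, g))                         # reward 1 for the new goal
\end{verbatim}
Both the original and the relabelled transitions can then be stored in the replay buffer.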
\section{Challenges of DRL}\label{sec:defis}
In this section, we detail two main challenges of current DRL methods that are partially addressed by IMs.
\subsection{Sparse rewards} \label{sec:sparse}
Classic RL algorithms operate in environments where the rewards are \textbf{dense}, \textit{i.e.} the agent receives a reward after almost every completed action. In this kind of environment, naive exploration policies such as $\epsilon$-greedy \cite{sutton1998reinforcement} or the addition of a Gaussian noise on the action \cite{lillicrap2015continuous} are effective. More elaborate methods can also be used to promote exploration, such as Boltzmann exploration \cite{cesa2017boltzmann,mnih2015human} or an exploration in the parameter space \cite{plappert2017parameter,ruckstiess2010exploring,fortunato2017noisy}. In environments with \textbf{sparse} rewards, the agent receives a reward signal only after it executed a large sequence of specific actions. The game \textit{Montezuma's revenge} \cite{bellemare15} is a benchmark illustrating a typical sparse reward function. In this game, an agent has to move between different rooms while picking up objects (keys to open doors, torches, etc.). The agent receives a reward only when it finds objects or when it reaches the exit of the room. Such environments with sparse rewards are almost impossible to solve with the above mentioned \textit{undirected} exploration policies \cite{thrun1992efficient} since the agent does not have local indications on the way to improve its policy. Thus the agent never finds rewards and cannot learn a good policy with respect to the task \cite{mnih2015human}. Figure \ref{im:sparse_reward2} illustrates the issue on a simple environment.
This issue stresses the need for \textit{directed} exploration methods \cite{thrun1992efficient}. While intrinsic motivation can provide such a direction, the principle of ``optimism in face of uncertainty'' \cite{audibert2007tuning} can also carry out a directed exploration without intrinsic motivation \cite{thrun1992efficient}. Briefly, this principle incites agents to go in areas with a lot of epistemic uncertainty about their Q-values \cite{ciosek2019better,pacchiano2020optimism}. Yet, the epistemic uncertainty is hard to approximate and this only slightly improves exploration \cite{ciosek2019better}. This principle can also relate to some intrinsic motivations when we consider uncertainty about models (see \secref{sec:infogainforward}).
\begin{figure}
\begin{centering}
\includegraphics[width=10cm]{images/sparse_rewards.drawio.pdf}
\caption{\rebut{Example of a very simple sparse reward environment, explored by two different strategies}. The agent, represented by a circle, strives to reach the star. The reward function is one when the agent reaches the star and zero otherwise. (a) The agent explores with standard methods such as $\epsilon\text{-greedy}$; as a result, it stays in its surrounding area because of the temporal inconsistency of its behaviour. (b) We imagine an ideal exploration strategy where the agent covers the whole state space to discover where rewards are located. \rebut{The fundamental difference between the two policies is the volume of the state space explored for a given time.}}
\label{im:sparse_reward2}
\end{centering}
\end{figure}
Rather than working on an exploration policy, it is common to shape an intermediary dense reward function, added to the reward associated with the task, in order to make the learning process easier for the agent \cite{su2015reward}. However, building a reward function often reveals several unexpected errors \cite{ng1999policy,amodei2016concrete} and most of the time requires expert knowledge. For example, it may be difficult to shape a local reward for navigation tasks. Indeed, one has to be able to compute the shortest path between the agent and its goal, which is the same as solving the navigation problem. On the other hand, automating the shaping of the local reward (without calling on an expert) requires prohibitive computational resources \cite{chiang2019learning}. We will see in \secref{sec:infogain}, \ref{sec:novelty} and \ref{sec:skilllearning} how IM is a valuable method to encourage exploration in a sparse reward setting.
\subsection{Temporal abstraction of actions} \label{sec:abstraction}
As argued in \secref{sec:hrl}, skills, through hierarchical RL, are a key element to speed up the learning process since the number of decisions to take is significantly reduced when skills are used. In particular, they ease \textit{credit assignment}. Skills can be manually defined, but this requires extra expert knowledge \cite{sutton1999between}. To avoid providing hand-made skills, several works proposed to learn them with extrinsic rewards \cite{bacon2017option,subpolicy2020li}. However, if an agent rather learns skills in a \textit{bottom-up} way, \textit{i.e} with intrinsic rewards rather than extrinsic rewards, learnt skills become independent of possible tasks. This way, skills can be reused across several tasks to improve transfer learning \cite{aubret2020elsim,heess2016learning} and an agent can learn skills even when it has no access to rewards, improving exploration when rewards are sparse \cite{machado2017laplacian}. Let us illustrate both advantages.
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\linewidth]{images/abstraction_action.drawio.pdf}
\caption{\rebut{Example of two policies in a simple environment, one uses \textit{skills} (yellow), the other one only uses primitive actions (blue)}. Agents have to reach the star.}
\label{im:abstract_actions}
\end{centering}
\end{figure}
\paragraph{Exploration when rewards are sparse.} \figref{im:abstract_actions} illustrates the benefit in terms of exploration when an agent hierarchically uses skills.
The yellow agent can use a skill \textit{Go to the far right} to reach the rewarding star, while the blue agent can only use low-level cardinal movements.
The problem of exploration becomes trivial for the agent using skills, since one exploratory action can lead to the reward. In contrast, it requires an entire sequence of specific low-level actions for the other agent to find the reward. This problem arises from the minimal number of specific actions needed to get a reward (see also \secref{sec:sparse}). A thorough analysis of this aspect can be found in \cite{nachum2019does}.
\paragraph{Reusing skills across several tasks.} Skills learnt with intrinsic rewards are not specific to a task. Assuming an agent is required to solve several tasks in a similar environment, \textit{i.e} a single MDP with a changing extrinsic reward function, an agent can execute its discovered skills to solve all tasks. Typically, in \figref{im:abstract_actions}, if both agents learnt to reach the star and we move the star somewhere else in the environment, the yellow agent would still be able to execute \textit{Go to the far right} and executing this skill may bring the agent closer to the new star. In contrast, the blue agent would have to learn a whole new policy. In \secref{sec:skilllearning}, we provide insights on how an agent can discover skills in a \textit{bottom-up} way.
\section{Classification of methods}\label{sec:classify}
In order to tackle the problem of exploration, an agent may want to identify and return to \textbf{rarely visited} states or \textbf{unexpected} states, which can be quantified with current intrinsic motivations. We will particularly focus on two objectives that address the challenge of exploring with sparse rewards, each with different properties: maximizing novelty and surprise. We formalize novelty and surprise through the lens of information theory (in respectively \secref{sec:novelty} and \secref{sec:infogain}) and the works that instantiate them. Surprise and novelty are specific notions that have often been used interchangeably and we are not aware of a currently unanimous definition of novelty \cite{barto2013novelty}. The third notion we study, skill learning, focuses on the issue of skill abstraction.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\multirow{2}{*}{\textbf{Surprise}: $I(S';\Phi_T|h,S,A)$, \secref{sec:infogain}} }\\
\multicolumn{4}{|c|}{}
\\
\hline
Formalism & Information gain & Information gain & Information gain \\
& over forward model & over the true model & over density model \\
\hline
Sections & \secref{sec:infogainforward} & \secref{sec:predictionerror} & \secref{sec:infogaindensity} \\
\hline
Rewards & \eqref{eq:sumupsur1} & \eqref{eq:sumupsur3} & \eqref{eq:sumupsur2} \\
\hline
\multicolumn{4}{|c|}{\multirow{2}{*}{\textbf{Novelty}: $I(S;Z)$, \secref{sec:novelty}}}
\\
\multicolumn{4}{|c|}{}
\\
\hline
Formalism & Parametric density & \multicolumn{2}{c|}{K-nearest neighbors} \\
\hline
Sections & \secref{sec:directdensity} & \multicolumn{2}{c|}{\secref{sec:knearest}} \\
\hline
Rewards & \eqref{eq:sumupnov1} & \multicolumn{2}{c|}{ \eqref{eq:sumupnov2} } \\
\hline
\multicolumn{4}{|c|}{\multirow{2}{*}{\textbf{Skill learning}: $I(G; u(\mathcal{T}))$, \secref{sec:skilllearning}}} \\
\multicolumn{4}{|c|}{}
\\
\hline
Formalism & Fixed goal distribution & Goal-state & Proposing diverse goals \\
& & achievement & \\
\hline
Sections & \secref{sec:predefinedG} & \secref{sec:goalstate} & \secref{eq:diversestate} \\
\hline
Rewards & \eqref{eq:sumupskill1} & \eqref{eq:sumupskill2} & \eqref{eq:sumupskill3} \\
\hline
\end{tabular}
\caption{Summary of our taxonomy of intrinsic motivations in DRL. The function $u$ outputs a part of the trajectories $\mathcal{T}$; $Z$ and $G$ are internal random variables respectively denoting state representations and self-assigned goals. Please refer to the corresponding sections for more details about methods and notations. The reward function shown is representative of the ones used in each category.}
\label{tab:taxonomy}
\end{table}
Table \ref{tab:taxonomy} sums up our taxonomy. We classify intrinsic motivations in three categories of objectives based on information theory, which reflect the high-level concepts of novelty, surprise and skill learning. In practice, we mostly take advantage of the \textit{mutual information} to provide a quantity for our conceptual objectives. These objectives are compatible with each other and may be used simultaneously, as argued in \secref{sec:flatim}. Within each category of objectives, we additionally highlight several ways to maximize each objective and provide details about the underlying methods of the literature. We sum up the methods in Tables \ref{tab:surprise}, \ref{tab:novelty} and \ref{tab:skills} and compare their respective advantages when possible.
\input{bigtableorig}
\subsection{Surprise}
Following the definition of \citet{itti2009bayesian}, we reexplore the notion of surprise and quantify it by $I(S';\Phi_T|h,S,A)$ where $h$ refers to a dataset of interactions and $\Phi_T$ represents the distribution over parameters of true forward/density models. Based on the works we analyze (cf. \tabref{tab:surprise}), we study surprise maximization over forward models:
\rebut{
\begin{equation}
R(s,a,s',h, \Phi) = D_{KL}(p(\Phi|h,s,a,s')||p(\Phi|h)) \label{eq:sumupsur1}
\end{equation}
}
\rebut{
and density models:
}
\rebut{
\begin{equation}
R(s,a,s') = \frac{1}{\sqrt{\hat{N}(s')}}\label{eq:sumupsur2}
\end{equation}
}
\rebut{
where $\hat{N}(s')$ is the (pseudo-)count of visits of $s'$ (cf. \secref{sec:infogaindensity}). Both maximizations are two ways of measuring unexpectedness. Surprise can also be maximized using prediction error (and learning progress through an approximation with weaker assumptions) over a forward model:
}\rebut{
\begin{equation}
R(s,a,s') = ||s' - \hat{s}'||_2^2\label{eq:sumupsur3}
\end{equation}
}
\rebut{
where $\hat{s}'$ is the predicted state following $(s,a)$. In \tabref{tab:surprise}, we compare surprise-based methods according to their performance on the sparse reward environment \textit{Montezuma's revenge} (cf. \figref{fig:environments}a)), and whether they handle stochastic environments (cf. \secref{sec:predictionerror}).
}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/all_envs.png}
\caption{\rebut{Different environments widely used in our reviewed papers. (a) \textit{Montezuma's revenge}, used to assess the ability of a policy to explore. (b) \textit{Ant maze} (1x scale), used to evaluate the hierarchical organization of learnt skills (low-level: manipulation of low-level torques; high-level: navigation in the maze.) (c) \textit{Ant}, used to analyse the diversity of learnt skills.}}
\label{fig:environments}
\end{figure}
\subsection{Novelty}
Based on the analysis of \citet{barto2013novelty}, we define novelty-seeking behavior as actively maximizing the mutual information between states and a learnt representation of states $Z$, $I(S;Z)$. We divide this objective maximization into two kinds of methods that encompass a large body of works. First, we consider a direct maximization of a parametric entropy of embedded states:
\rebut{
\begin{equation}
R(s,a,s') = - \log \rho(s') \label{eq:sumupnov1}
\end{equation}
}
\rebut{
where $\rho(s')$ is a density model approximating $p(s')$. Second, we study an entropy maximization based on a k-nearest neighbors approximation:
}
\rebut{
\begin{equation}
R(s,a,s') = \log \Big(1+ \frac{1}{K} \sum_{k=1}^K || f(s') - nn_k(f(S_b),f(s')) ||_2\Big)\label{eq:sumupnov2}
\end{equation}
}\rebut{
where $nn_k(S_b,s')$ is the k-th closest state to $s'$ in $S_b$ and $f$ a representation function. In \tabref{tab:novelty}, we compare novelty-based methods according to their performance on the sparse reward environment \textit{Montezuma's revenge} (cf. \figref{fig:environments}a)), and whether they handle stochastic environments (cf. \secref{sec:predictionerror}).
}
\subsection{Skill learning}
We formalize skill learning as maximizing the mutual information between a goal representation $g \in G$ and a part (extracted with $u$) of a time-extended trajectory $u(\mathcal{T})$, $I(G; u(\mathcal{T}))$ while following $G$. We will consider two ways to achieve this in the literature (cf \tabref{tab:skills}). \rebut{First, by fixing the goal distribution $p(g)$ (\textit{e.g} a uniform categorical probability distribution) and maximizing}
\rebut{
\begin{equation}
R(s,a,s',g) =\log p(g|s')\label{eq:sumupskill1}.
\end{equation}
}
\rebut{Second, by deriving the goal representation from the state space and optimizing for}
\rebut{
\begin{equation}
R(s,a,s',s_g) =-||s_g-s'||_2^2\label{eq:sumupskill2}.
\end{equation}
}
We will see that the second point also needs a goal-selection policy to maximize the entropy of goal-states, \rebut{formalized as maximizing}
\rebut{
\begin{equation}
(1+\alpha_{skew})\log p(s_g) \label{eq:sumupskill3}.
\end{equation}
}
\rebut{
where $\alpha_{skew} < 0 $ is a hyper-parameter. In \tabref{tab:skills}, we compare skill learning methods according to their performance on the widely used hierarchical task \textit{Ant maze} (cf. \figref{fig:environments}b)), and whether they need a hand-made goal space (x,y) or an implicit curriculum of objectives. For methods in the ``fixed goal distribution'' category, we did not find a common evaluation protocol/environment among works. However, as an example, several qualitative analyses emphasize the diversity of behaviors that can be learnt by the ant displayed in \figref{fig:environments}c) \cite{sharma2019dynamics,eysenbach2018diversity}.
}
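As an illustration of the first case, the reward of \eqref{eq:sumupskill1} can be sketched as follows in Python, assuming a goal discriminator that outputs a probability vector over a fixed categorical goal distribution (the discriminator shown here is an untrained stand-in; all names are ours):
\begin{verbatim}
import numpy as np

def skill_reward(discriminator, next_state, goal_index):
    # r = log p(g | s'), where p(.|s') is given by the goal discriminator.
    probs = discriminator(next_state)
    return float(np.log(max(probs[goal_index], 1e-12)))

# Stand-in discriminator: a random linear classifier over 3 goals.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
def discriminator(s):
    logits = np.asarray(s, float) @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

print(skill_reward(discriminator, np.ones(4), goal_index=2))
\end{verbatim}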
We justify our objective within each category and study the different ways to maximize this objective, along with their advantages and disadvantages. In practice, surprise and novelty are currently maximized as a flat intrinsic motivation, \textit{i.e} without using hierarchical decisions. This mostly helps to improve exploration when rewards are sparse. In contrast, skill learning allows defining time-extended hierarchical skills that enjoy all the benefits discussed in \secref{sec:abstraction}.
\section{Surprise}\label{sec:infogain}
In this section, we study methods that maximize surprise. First, we formalize the notion of surprise; then we study three approaches for computing intrinsic rewards based on this notion.
\subsection{Definition of surprise}\label{sec:expecsurprise}
In this section, we assume the agent learns either a density model (\secref{sec:infogaindensity}) or a forward model of the environment (Sections \ref{sec:infogainforward} and \ref{sec:predictionerror}) parameterized by $\phi \in \Phi$. The density model induces a marginal distribution over states $p(S|\phi)$ and a forward model computes the next-state distribution conditioned on a state-action tuple, $p(S'|S,A,\phi)$. Typically, $\phi$ can be the parameters of a neural network. Trying to approximate the true model, the agent maintains an approximate distribution $p(\Phi|h)$ of models, where $h_t=h$ refers to the ordered history of interactions $((s_0,a_0,s_1),(s_1,a_1,s_2),\dots, (s_{t-1},a_{t-1},s_t))$. In this section, $h$ plays the role of a dataset of interactions; we make it explicit to clarify the role of the data. It is important to notice that the policy feeds $h$.
In this case, \textbf{surprise quantifies the mismatch between an expectation and the true experience of an agent} \cite{barto2013novelty,ekman1994nature}. In this paper, we refer to the definition of \citet{itti2009bayesian}, which defines it as the discrepancy between a prior distribution of beliefs and the posterior probability distribution following an observation \cite{itti2009bayesian,storck1995reinforcement}. If an agent maximizes the surprise over a model through interactions with the environment, which is often the case \cite{barto2013novelty}, it leads to the expected information gain objective \cite{sun2011planning}. Intuitively, the agent returns to states where it experienced an unexpected transition. Using the KL-divergence to assess the discrepancy, surprise can be computed as $D_{KL}(p(\Phi|h_{t+1})||p(\Phi|h_t))$ where $\phi \in \Phi$ are parameters of a model and $t$ denotes the timestep.
In this case, the agent has a prior distribution about model parameters $p(\Phi)$ and this model can be updated using the Bayes rule:
\begin{equation}
p(\phi|h,s,a,s') = \frac{p(\phi|h)\; p(s'|h,s,a,\phi)}{p(s'|h,s,a)}.
\end{equation}
\paragraph{Information gain over agent's model.} The expected information gain \cite{sun2011planning,little2013learning} over a forward or density model parameterized by $\phi$ can be formulated as:
\begin{subequations}
\begin{align}
IG(h,A,S',S,\Phi) &= I(S';\Phi|h,A,S) = \mathbb{E}_{\substack{ (s,a) \sim p(\cdot|h) \\ s' \sim p(\cdot | s,a,h)}} D_{KL}(p(\Phi|h,s,a,s')||p(\Phi|h)) \label{eq:trueexpectedinfogain} \\
%
&\approx \mathbb{E}_{\substack{ (s,a) \sim \pi \\ s' \sim p(\cdot | s,a,h,\phi_T)}} D_{KL}(p(\Phi|h,s,a,s')||p(\Phi|h)) \label{eq:expectedinfogain}
\end{align}
\end{subequations}
Actively maximizing the expected information gain amounts to reducing the uncertainty of the model. We emphasize that $p(\phi|h) = p(\phi|h,a,s)$ since only full transitions provide information about the true dynamics of the environment. In this case, $p(s'| s,a,h)$ does not refer to the probability induced by the environment, but rather to the probability induced by the current history of transitions. This is stressed by writing:
\begin{equation}
p(s'|s,a,h) = \sum_{\phi \in \Phi} p(s'|s,a,h,\phi)p(\phi|s,a,h).\label{eq:marginalphi}
\end{equation}
We highlight that the difference between \eqref{eq:trueexpectedinfogain} and \eqref{eq:expectedinfogain} is important and often a source of confusion in the literature \cite{houthooft2016vime,little2013learning,sun2011planning}: in the first equation, the agent imagines new outcomes in order to select actions that maximize the change in the internal model, while in \eqref{eq:expectedinfogain}, the agent acts and uses the new states to update its model.
\paragraph{Information gain over the true forward model.} In our formalism, we assume that there is a distribution of true models $p(\Phi_T)$ that underpins the transition function of the environment $T$. In contrast with $\Phi$, this is a property of the environment. One can see this distribution as a Dirac distribution if only one model exists or as a categorical distribution of several forward models. We define the expected information gain over the true models as:
\begin{subequations}
\begin{align}
IG(h,A,S',S,\Phi_T) &= I(S';\Phi_T|h,A,S) = H(\Phi_T|h,A,S) - H(\Phi_T|h,A,S,S') \\
&= \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot|s,a,\phi_T)}} \log p(s'|s,a,h,\phi_T) - \log p(s'|s,a,h) \label{eq:predicterror3}.
\end{align}
\end{subequations}
Maximizing \eqref{eq:predicterror3} amounts to looking for states that provide new information about the true model distribution. We can see that the left-hand side of \eqref{eq:predicterror3} incites the agent to target inherently deterministic areas, \textit{i.e}, given the true forward model, the agent would exactly know where it ends up. On the opposite, the right-hand term pushes the agent to go in stochastic areas according to its current knowledge. Overall, to improve this objective, an agent has to reach areas that are more deterministic than what it thinks they are. One can see that, assuming $p(s'|s,a,h,\phi_T) \approx p(s' | s, a, \phi, h)$, one falls back on the expected information gain (see also \eqref{eq:predicterror2}). In contrast with \eqref{eq:expectedinfogain}, this objective takes advantage of the true model, which is most of the time unknown, thereby making the objective hardly tractable. As such, in this perspective, surprise results from an agent-centric approximation of the discrepancy between the agent's model and the environment model.
In the following, we will study three objectives: the expected information gain over the true forward models, the expected information gain over the forward model and the expected information gain over density models.
\subsection{Information gain over the true forward model}\label{sec:predictionerror}
To avoid the need for the true forward model, the agent can omit the left-hand term of \eqref{eq:predicterror3} by assuming the true forward model is a deterministic forward model. In this case, we can write:
\begin{subequations}
\begin{align}
I(S';\Phi_T|h,A,S) &\propto \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h), \phi_T \sim p(\cdot) \\ s' \sim p(\cdot|s,a,\phi_T)}} - \log p(s'|s,a,h) \label{eq:predicterror4} \\
&= \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot|s,a,\phi_T)}} - \log \sum_{\phi \in \Phi} p(s'|h,s,a,\phi)p(\phi|h) \\
%
&\geq \mathbb{E}_{\substack{\phi_T \sim p(\cdot),\, (s,a) \sim p(\cdot|h) \\ s' \sim p(\cdot|s,a,\phi_T), \phi \sim p(\cdot|h)}} - \log p(s'|h,s,a,\phi) \label{eq:predicterror5}
\end{align}
\end{subequations}
where we applied Jensen's inequality in \eqref{eq:predicterror5} and $\phi_T \sim p(\cdot)$ is fixed. One can model $p(s'|h,s,a,\phi)$ with a unit-variance Gaussian distribution in order to obtain a tractable loss. This way, we have:
\begin{subequations}
\begin{align}
\mathbb{E}_{\substack{(s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot|s,a,\phi_T),\, \phi \sim p(\cdot|h)}} - \log p(s' | \phi,h,a,s) &\approx \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h) ,\, s' \sim p(\cdot|s,a,\phi_T) \\ \phi \sim p(\cdot|h),\, \phi_T \sim p(\cdot) }} - \log \frac{1}{(2\pi)^{d/2}}e^{-0.5 (s' - \hat{s}')^T (s' - \hat{s}')} \label{eq:gaussianinfogain} \\
&\propto \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h) ,\, s' \sim p(\cdot|s,a,\phi_T) \\ \phi \sim p(\cdot|h),\, \phi_T \sim p(\cdot) }} ||s' - \hat{s}'||_2^2 + Const
\end{align}
\end{subequations}
%
where
\begin{equation}
\hat{s}' = \argmax{s'' \in S} p(s''|h,a,s,\phi)
\end{equation}
represents the mean prediction and $\phi$ parameterizes a deterministic forward model.
Following the objective, we can extract a generic intrinsic reward as:
\begin{align}
R(s,a,s')= ||f(s')- f(\hat{s}')||_2^2
\label{eq:rewpredicterror}
\end{align}
where $f$ is a generic function (e.g. the identity or a learnt one) encoding the state space into a feature space. \eqref{eq:rewpredicterror} amounts to rewarding the prediction error of $\phi$ in the representation $f$. In the following, we will see that learning a relevant function $f$ is the main challenge.
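A bare-bones Python sketch of this reward is given below; the encoder and forward model are passed as callables and are only stand-ins for the learnt (or random) components used in the works discussed next (all names are ours):
\begin{verbatim}
import numpy as np

def prediction_error_reward(forward_model, encoder, s, a, s_next):
    # r = ||f(s') - f_hat(s')||^2, where f is the encoder and f_hat(s') is the
    # forward model prediction from (f(s), a).
    z, z_next = encoder(s), encoder(s_next)
    return float(np.sum((z_next - forward_model(z, a)) ** 2))

# Toy usage: identity encoder and a random linear forward model.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 5))
encoder = lambda s: np.asarray(s, dtype=float)
forward_model = lambda z, a: W @ np.concatenate([z, [a]])
print(prediction_error_reward(forward_model, encoder,
                              s=np.zeros(4), a=1.0, s_next=np.ones(4)))
\end{verbatim}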
The first natural question is whether a function $f$ is required. \citet{burda2019largescale} learn the forward model from the ground state space and observe that it is inefficient when the state space is large. In fact, the euclidean distance is meaningless in such a high-dimensional state space. In contrast, they point out that random features extracted from a random neural network can be very competitive with other state-of-the-art methods. However, they poorly generalize to environment changes. Another model, \textit{Dynamic Auto-Encoder (Dynamic-AE)} \cite{stadie2015incentivizing}, computes the distance between the predicted and the real state in a state space compressed with an auto-encoder \cite{hinton2006reducing}; $f$ is then the encoding part of the auto-encoder. However, this approach only slightly improves the results over Boltzmann exploration on some standard Atari games. Other works also consider a dynamic-aware representation \cite{ermolov2020latent}. These methods are unable to handle the local stochasticity of the environment \cite{burda2019largescale}. For example, it turns out that adding random noise in a 3D environment attracts the agent; it passively watches the noise since it is unable to predict the next observation. \label{tele} This problem is also called the \textit{white-noise} problem \cite{pathak2017curiosity,schmidhuber2010formal}. It emerges from considering only the right-hand term of \eqref{eq:predicterror3}, making the agent assume that the environment is deterministic. Therefore, exploration with prediction error breaks down when this assumption no longer holds.
To tackle exploration with local stochasticity, the \textit{intrinsic curiosity module (ICM)} \cite{pathak2017curiosity} learns a state representation function $f$ end-to-end with an \textit{inverse model} (i.e. a model which predicts the action done between two states). Thus, the function $f$ is constrained to represent things that can be controlled by the agent during the next transitions. Secondly, the forward model used in ICM predicts, in the feature space computed by $f$, the next state given the action and the current state. The prediction error does not incorporate the white-noise that does not depend on actions, so it will not be represented in the feature state space. ICM notably allows the agent to explore its environment in the games \textit{VizDoom} and \textit{Super Mario Bros}. Building a similar representation space, \textit{Exploration with Mutual Information (EMI)} \cite{pmlr-v97-kim19a} significantly outperforms previous works on Atari but at the cost of several complex layers. EMI transfers the complexity of learning a forward model into the learning of state and action representations through the maximization of $I([S,A];S')$ and $I([S,S'];A)$. Then, the forward model $\phi$ is constrained to be a simple linear model in the representation space. Furthermore, EMI introduces a \textit{model error} which offloads the linear model when a transition remains strongly non-linear (such as a screen change). However, one major drawback of ICM and EMI is the inability of the agent to keep in its representation what depends on its long-term control. For instance, in a partially observable environment, an agent may perceive the consequences of its actions several steps later.
Another way to tackle local stochasticity is to maximize the improvement of the prediction error, or learning progress, of a transition model \cite{schmidhuber1991curious,azar2019world,lopes2012exploration,oudeyer2007intrinsic,kim2020active}. One can see this as approximating the left-hand side of \eqref{eq:predicterror3} with:
\begin{align}
\log p(s'|s,a,h,\phi_T) - \log p(s'|s,a,h) &\approx \log p(s'|s,a,h') - \log p(s'|s,a,h)
\end{align}
where $h'$ concatenates $h$ with an arbitrary number of additional interactions. As $h'$ becomes large enough and the agent updates its forward model, its forward model converges to the true transition model. Formally, if one stochastic forward model can describe the transitions, we can write:
\begin{subequations}
\begin{align}
\lim_{|h'|\rightarrow \infty} p(s'|s,a,h') &= \lim_{|h'|\rightarrow \infty} \sum_{\Phi} p(s'|s,a,h',\phi) p(\phi|h') \nonumber \\
&= p(s'|s,a,h',\phi_T) \label{eq:approxlearningprogress}
\end{align}
\end{subequations}
In practice, we cannot wait to discover a long sequence of new interactions, so the reward depends on a small set of interactions and on the efficiency of the gradient update of the forward model. Yet, the theoretical connection with the true expected information gain may indeed explain the robustness of learning progress to stochasticity \cite{linke2020adapting}.
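A crude Python sketch of such a learning-progress reward is shown below, under the unit-variance Gaussian assumption of \eqref{eq:gaussianinfogain}, so that the negative log-likelihood reduces to a squared prediction error; the snapshots of the forward model before and after an update are assumptions of this illustration:
\begin{verbatim}
import numpy as np

def squared_error(model, s, a, s_next):
    # Prediction loss of a forward model; proportional to -log p(s'|s,a,phi)
    # under the unit-variance Gaussian assumption.
    return float(np.sum((np.asarray(s_next, float) - model(s, a)) ** 2))

def learning_progress_reward(model_before, model_after, s, a, s_next):
    # Decrease of the prediction error after one model update: a crude proxy
    # for log p(s'|s,a,h') - log p(s'|s,a,h).
    return squared_error(model_before, s, a, s_next) \
         - squared_error(model_after, s, a, s_next)

# Toy usage with two linear models (the "after" model being closer to the truth).
before = lambda s, a: 0.5 * np.asarray(s, float)
after = lambda s, a: 0.9 * np.asarray(s, float)
print(learning_progress_reward(before, after, s=np.ones(3), a=0, s_next=np.ones(3)))
\end{verbatim}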
\paragraph{Conclusion.} While these methods perform well in deterministic environments, they struggle to offset the determinism assumption that underpins the focus on \eqref{eq:predicterror4}; as a result, standard methods focus on the most stochastic areas. Methods that tackle stochasticity may not predict important long-term information about the environment or they need to compute a learning progress measure, which is non-trivial.
\subsection{Information gain over forward model}\label{sec:infogainforward}
In this subsection, we study the works that maximize the expected information gain over forward models. Here, $\phi$ are parameters of a learnt forward model. Using \eqref{eq:expectedinfogain}, we can extract an intrinsic reward:
\begin{equation}
R(s,a,s') = D_{KL}(p(\Phi|h,s,a,s')||p(\Phi|h)).\label{eq:rewinfogain}
\end{equation}
This way, an agent executes actions that provide information about the dynamics of the environment. This allows, on the one hand, pushing the agent towards areas it does not know and, on the other hand, preventing attraction towards stochastic areas. Indeed, if the area is deterministic, environment transitions are predictable and the uncertainty about its dynamics can decrease. Conversely, if transitions are stochastic, the agent turns out to be unable to predict transitions and does not reduce uncertainty. The exploration strategy \textit{VIME} \cite{houthooft2016vime} computes this intrinsic reward by modelling $p(\phi|h)$ with Bayesian neural networks \cite{graves2011practical}. The interest of Bayesian approaches is to be able to measure the uncertainty of the learned model \cite{blundell2015weight}. This way, assuming a fully factorized Gaussian distribution over model parameters, the KL-divergence has a simple analytic form \cite{houthooft2016vime,linke2020adapting}, making it easy to compute.
However, the interest of the proposed algorithm is shown only on simple environments and the reward can be computationally expensive to compute. \citet{achiam2017surprise} propose a similar method (\textit{AKL}), with comparable results, using deterministic neural networks, which are simpler and quicker to apply. The weak performance of both models is probably due to the difficulty of retrieving the uncertainty reduction while rigorously following the mathematical formalism of information gain.
The expected information gain can also be written:
\begin{subequations}
\begin{align}
I(S';\Phi|h,A,S) &= H(S'|h,A,S) - H(S'|A,\Phi,S,h) \nonumber \\
&\approx - \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot | s,a,h,\phi_T)}} \log p(s'|h,s,a) + \mathbb{E}_{\substack{\phi \sim p(\cdot|h,s,a,s') \\ (s,a) \sim p(\cdot|h), s' \sim p(\cdot | s,a,h,\phi_T)}} \log p(s' | s, a, \phi, h) \label{eq:predicterror} \\
&= \mathbb{E}_{\substack{\phi \sim p(\cdot|h,s,a,s'),\, \phi_T \sim p(\cdot) \\ (s,a) \sim p(\cdot|h), s' \sim p(\cdot | s,a,h,\phi_T)}} - \log \sum_{\phi \in \Phi} p(s'|\phi,h,s,a)p(\phi|h) + \log p(s' | s, a, \phi, h) \label{eq:predicterror2}
\end{align}
\end{subequations}
Using equations similar to \eqref{eq:predicterror2}, the authors of \textit{JDRX} \cite{shyam2018model} show that one can maximize the information gain by computing the Jensen-Shannon or Jensen-Rényi divergence between the distributions of states induced by several forward models. The more the models are trained on a state-action tuple, the more they converge to the expected distribution of next states. Intuitively, the reward represents how much the different transition models disagree on the next-state distribution. Other works also maximize a similar form of disagreement \cite{pathak2019self,yao2021sample,sekar2020planning} by looking at the variance of predictions among several learnt transition models. \rebut{While these models handle the white-noise problem, their main intrinsic issue is computational since they require multiple forward models to train.}
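A minimal Python sketch of such a disagreement reward is given below (our own illustration; the ensemble members stand in for learnt forward models):
\begin{verbatim}
import numpy as np

def disagreement_reward(models, s, a):
    # Variance of next-state predictions across an ensemble of forward models,
    # averaged over state dimensions; transitions that all models have learnt
    # to predict yield a reward close to zero.
    preds = np.stack([m(s, a) for m in models])
    return float(preds.var(axis=0).mean())

# Stand-in ensemble of 5 random linear forward models.
rng = np.random.default_rng(0)
models = [lambda s, a, W=W: W @ np.concatenate([np.asarray(s, float), [a]])
          for W in rng.normal(size=(5, 4, 5))]
print(disagreement_reward(models, np.zeros(4), 1.0))
\end{verbatim}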
\paragraph{Conclusion.} Despite the theoretical power of the information gain for improving exploration, it remains hard to efficiently estimate it and use it in difficult tasks.
\subsection{Information gain over density model}\label{sec:infogaindensity}
Surprise can also arise by quantifying \textit{the discrepancy between its probability of occurring and the fact that it actually occurred} \cite{barto2013novelty}. To quantify this probability of occurring, in this paragraph, we assume the agent tries to learn a density model $\rho$ that approximates the current marginal density distribution of states $p(s')$. In this setting, we can define the expected information gain over a density model $\rho$ \cite{bellemare2016unifying}:
\begin{align}
IG(h,S,A,S',\rho)&\approx \mathbb{E}_{\substack{ (s,a) \sim p(\cdot|h),\, \rho_T \sim p(\cdot) \\ s' \sim p(\cdot | s,a,h,\rho_T)}} D_{KL}(p(\rho|h,s')||p(\rho|h)).
\end{align}
We hypothesize that the adversarial training that results from the objective (active maximization of the KL-divergence and density fitting) results in an approximately uniform distribution of states (and a uniform density estimation). This may be due to the convexity of the KL-divergence in $p(\rho|h,s')$ and $p(\rho|h)$, but we leave the proof to future work. To our knowledge, no work directly optimizes this objective, but it has been shown that the information gain lower-bounds the squared inverse pseudo-count objective \cite{bellemare2016unifying}, which derives from count-based objectives; in the following, we review \textit{count} and \textit{pseudo-count} objectives.
To efficiently explore its environment, an agent can count the number of times it visits a state and return to rarely visited states. Such methods are said to be \textit{count-based} \cite{strehl2008analysis}. As the agent visits a state, the intrinsic reward associated with this state decreases. It can be formalized as:
\begin{equation}
R(s,a,s') = \frac{1}{\sqrt{N(s')}}
\end{equation}
where $N(s)$ is the number of times that the state $s$ has been visited. Although this method is efficient and tractable in a tabular environment (with a discrete state space), it hardly scales when states are numerous or continuous since an agent never really returns to the same state. A first solution proposed by \citet{tang2017exploration}, called \textit{TRPO-AE-hash}, is to hash the latent space of an auto-encoder fed with states. However, these results are only slightly better than those obtained with a classic exploration policy. Another line of work proposes to adapt counting to high-dimensional state spaces via \textit{pseudo-counts} \cite{bellemare2016unifying}. Essentially, \textit{pseudo-counts} allow the generalization of the count from a state towards neighbouring states using a learnt density model $\rho$. This is defined as:
\begin{equation}
\hat{N}(s') = \frac{p(s'|\rho)\big(1-p(s'|\rho')\big)}{p(s'|\rho')-p(s'|\rho)}
\end{equation}
where $\rho'(s)$ computes the density of $s$ after having learnt on $s$. In fact, \citet{bellemare2016unifying} show that, under some assumptions, \textit{pseudo-counts} increase linearly with the true counts. In this category, \textit{DDQN-PC} \cite{bellemare2016unifying} and
\textit{DQN-PixelCNN} \cite{ostrovski2017count} compute $\rho$ using respectively a Context-Tree Switching model (CTS) \cite{bellemare2014skip} and a Pixel-CNN density model \cite{van2016conditional}. Although the algorithms based on density models work on environments with sparse rewards, they add an important complexity layer \cite{ostrovski2017count}. One can preserve the quality of the observed exploration while decreasing the computational complexity of the pseudo-count by computing it in a learnt latent space \cite{martin2017count}.
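For the tabular case, the count-based bonus is straightforward to implement; the Python sketch below (our own illustration) discretizes continuous states by rounding, which crudely plays the role of the hashing or density models discussed above:
\begin{verbatim}
from collections import defaultdict
import numpy as np

class CountBasedBonus:
    # Tabular count-based intrinsic reward r = 1 / sqrt(N(s')).
    def __init__(self, precision=1):
        self.counts = defaultdict(int)
        self.precision = precision

    def _key(self, state):
        # Crude discretization; hashing a learnt latent code would replace this.
        return tuple(np.round(np.asarray(state, float), self.precision))

    def reward(self, next_state):
        k = self._key(next_state)
        self.counts[k] += 1
        return 1.0 / np.sqrt(self.counts[k])

bonus = CountBasedBonus()
print([round(bonus.reward([0.0, 0.0]), 2) for _ in range(3)])  # decreasing bonus
\end{verbatim}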
There exist several other well-performing tractable exploration methods like \textit{RND} \cite{burda2018exploration}, \textit{DQN+SR} \cite{machado2018count}, \textit{RIDE} \cite{ride2020roberta} or \textit{BeBold} \cite{zhang2020bebold}. These papers argue that the reward they propose more or less relates to a visitation count estimation.
\paragraph{Conclusion.} Maximizing the information gain over a density model may maximize the pseudo-count, which relates to count-based objectives. These methods provide interesting feedback for exploration, but in practice, pseudo-counts are hard to approximate since they rely on a powerful density model, require a strict online estimation of the density and assume that $p(s|\rho)$ strictly increases $\forall s \in S$ \cite{ostrovski2017count}. In addition, they also struggle with the problem of randomness. For instance, let us assume that one (state, action) tuple can lead to two very different states with 50\% chance each. The algorithm will manage to count the number of visits for both states, although it would take twice as long for the agent to stop being attracted. However, these methods do not address the white-noise problem since next states may be randomly generated at every step. In this case, it is unclear how these methods could resist the temptation of going into this area since the count associated with these states will never increase.
\subsection{Conclusion} We detailed three ways to define and maximize the surprise of an agent, based on the expected information gain over a true model of the environment. \rebut{\tabref{tab:surprise} sums up the relative experimental advantages of the methods}. In practice, the expected information gain over a forward model and the learning progress well approximate the expected information gain over the true model. Therefore, it appears that they intuitively and experimentally allow good exploration of inherently stochastic environments, but are hard to implement. The expected information gain over a density model can be seen as approximating the expected information gain over the true uniform density model. \rebut{This makes the agent target a uniform distribution of states: while it makes the agent sensitive to stochasticity, it performs robust exploration in deterministic environments.} In fact, we discuss in the next section the relevance of aiming for a uniform distribution of states, through the study of novelty-based intrinsic motivations.
\section{Novelty maximization}\label{sec:novelty}
Novelty quantifies how much a stimulus contrasts with a previous set of experiences \cite{barto2013novelty,berlyne1966curiosity}. More formally, \citet{barto2013novelty} defend that \textit{an observation is novel when a representation of it is not found in memory, or, more realistically, when it is not “close enough” to any representation found in memory}. Previous experiences may be collected in a bounded memory or distilled in a learnt representation.
Several works propose to formalize novelty seeking as looking for low-density states \cite{becker2021exploration}, or similarly (cf. \secref{sec:knearest}), states that are different from others \cite{lehman2011novelty,conti2018improving}. In our case, this would result in maximizing the entropy of a state distribution. This distribution can be the $t$-step state distribution (cf. \eqref{eq:dpi}), $H(d^{\pi}_t(S))$, or the entropy of the stationary state-visitation distribution over a horizon $T$:
\begin{align}
H(d^{\pi}_{0:T}(S))=H(\frac{1}{T} \sum_{t=1}^T d^{\pi}_t(S)).
\end{align}
In practice, these distributions can be approximated with a buffer. This formalization is not perfect and does not fit several intuitions about novelty \cite{barto2013novelty}. \citet{barto2013novelty} criticize such a definition by stressing that very distinct and memorable events may have low probabilities of occurring while not being novel (\textit{e.g} a wedding). They suggest that novelty may rather relate to the acquisition of a representation of the incoming sensory data. Following this definition, we propose to formalize novelty-seeking behaviors as those that \textit{actively} maximize the mutual information between states and their representation, $I(S;Z)=H(S) - H(S|Z)$, where $Z$ is a low-dimensional space ($|Z| \leq |S|$). This objective is commonly known as the \textit{infomax} principle \cite{linsker1988self,almeida2003misep,bell1995information,HjelmFLGBTB19}; in our case, it amounts to \textbf{actively} learning a representation of the environment. Most works focus on actively maximizing the entropy of the state distribution while a representation learning function minimizes $H(S|Z)$. Furthermore, if one assumes that $Z=S$, the infomax principle collapses to an entropy maximization $H(S)$.
There are several ways to maximize the state entropy; we separate them based on how they maximize the entropy. We found two kinds of methods: low-density search and k-nearest neighbors methods.
\subsection{Direct entropy maximization}\label{sec:directdensity}
\rebut{The most evident way to maximize the entropy of states consists in maximizing $H(\rho(s))$ where $\rho(s)=p(s|\rho)$ approximates the stationary state-visitation distribution $d^{\pi}_{0:T}(S)$.} If we access this density model, it becomes straightforward to discover a policy that maximizes the entropy of a stationary state distribution \cite{hazan2019provably}. But computing $\rho(s)$ is challenging in high-dimensional state spaces. Several methods propose to estimate $\rho(s)$ using variational inference \cite{exploration2021zhang,islam2019entropy,lee2019efficient,pong2019skew} based on autoencoder architectures.
In this setting, we can use the VAE loss, approximated either as \eqref{eq:badapprox} \cite{vezzani2019learning,lee2019efficient} or \eqref{eq:unbiasedapprox} \cite{pong2019skew}, assuming $z$ is a compressed latent variable, $p(z)$ a prior distribution \cite{KingmaW13} and $q_{decoder}$ a neural network that ends with a diagonal Gaussian.
\rebut{
\begin{subequations}
\begin{align}
\log \rho(s') & \geq \mathbb{E}_{z \sim q_{encoder}(\cdot|s')} \log q_{decoder}(s'|z) - D_{KL}(q_{encoder}(z|s')||p(z)) \\
&\approx \log q_{decoder}(s'|z) - D_{KL}(q_{encoder}(z|s')||p(z)) \label{eq:badapprox}\\
&\approx \log \frac{1}{N} \sum_{i=1}^N \frac{p(z_i)}{q_{encoder}(z_i|s')}q_{decoder}(s'|z_i) \label{eq:unbiasedapprox}
\end{align}
\end{subequations}
}
\eqref{eq:unbiasedapprox} is more expensive to compute than \eqref{eq:badapprox} since it requires decoding several samples, but it presumably exhibits less variance. Basically, this estimation allows rewarding an agent \cite{berseth2020smirl,lee2019efficient,exploration2021zhang} according to:
\begin{equation*}
R(s,a,s') = - \log \rho(s').
\label{eq:logpbs}
\end{equation*}
\citet{lee2019efficient} maximize this bound by learning new skills that target novel states (see also \secref{sec:skilllearning}). \rebut{\citet{vezzani2019learning} approximate \eqref{eq:badapprox} with the ELBO as used by the VAE.} This is similar to \textit{MaxRenyi} \cite{exploration2021zhang}, which uses the Rényi entropy, a more general version of the Shannon entropy, to give more importance to very low-density states. \citet{islam2019entropy} propose to condition the state density estimation on policy parameters in order to directly back-propagate the gradient of the state entropy into the policy parameters. Although \textit{MaxRenyi} achieves good scores on \textit{Montezuma's revenge} with pure exploration, maximizing the ground state entropy may not be adequate since two close ground states are not necessarily neighbors in the true environment \cite{aubret2021distop}. Following this observation, \textit{GEM} \cite{guo2021geometric} rather maximizes the entropy of the estimated density of states considering the dynamic-aware proximity of states, $H(Z)$. However, they do not actively consider $H(Z|S)$.
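To give a concrete picture of the reward $-\log \rho(s')$, the Python sketch below replaces the learnt density model by a kernel density estimate over visited states (a stand-in of our own; the cited works use VAE-based estimators instead):
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def novelty_reward(visited_states, next_state):
    # r = -log rho(s'), with a KDE standing in for the learnt density model rho.
    kde = gaussian_kde(np.asarray(visited_states, float).T)
    density = max(float(kde(np.asarray(next_state, float).reshape(-1, 1))[0]), 1e-12)
    return -np.log(density)

rng = np.random.default_rng(0)
visited = rng.normal(size=(200, 2))          # 200 visited 2-D states
print(novelty_reward(visited, [4.0, 4.0]))   # far from the data: high reward
print(novelty_reward(visited, [0.0, 0.0]))   # dense region: low reward
\end{verbatim}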
\paragraph{Conclusion.} Generally speaking, these methods need an accurate density model to provide rewards. In the next paragraph, we study methods that avoid learning a density model.
\subsection{K-nearest neighbors approximation of entropy}\label{sec:knearest}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{images/knearest.drawio.pdf}
\caption{Illustration of the correlation between density and the fourth-nearest neighbor distance.}
\label{fig:knearest}
\end{figure}
Several works propose to approximate the entropy of a distribution using samples and their k-nearest neighbors \cite{singh2003nearest,kraskov2004estimating}. In fact, such an objective has already been referred to as novelty \cite{conti2018improving}. Assuming $nn_k(S_b,s_i)$ is a function that outputs the k-th closest state to $s_i$ in $S_b$, this approximation can be written as:
\begin{equation}
H(S) \propto \frac{1}{|S_b|} \sum_{s_i \in S_b} \log ||s_i - nn_k(S_b,s_i)||_2 + \chi(|S_b|) + Const
\label{eq:knearestequation}
\end{equation}
where $\chi$ is the digamma function. This approximation assumes the uniformity of states in the ball centered on a sampled state with radius $||s_i - nn_k(S_b,s_i)||_2$ \cite{lombardi2016nonparametric}, but its full form is unbiased with a large number of samples \cite{singh2003nearest}. Intuitively, it means that the entropy is proportional to the average distance between states and their neighbors. \figref{fig:knearest} shows how density estimation relates to the k-nearest neighbors distance. We clearly see that low-density states tend to be more distant from their nearest neighbors. Few methods \cite{mutti2020policy} provably relate to such estimations, but several approaches take advantage of the distance between a state and its neighbors to generate intrinsic rewards, making them related to such an entropy maximization. For instance, \textit{APT} \cite{liu2021behavior} proposes new intrinsic rewards based on the k-nearest neighbors estimation of entropy:
\begin{align}
R(s,a,s') = \log \Big(1+ \frac{1}{K} \sum_{k=1}^{K} || f(s') - nn_k(f(S_b),f(s')) ||_2\Big)
\end{align}
where $f$ is a representation function learnt with a contrastive loss based on data augmentation \cite{srinivas2020curl} and $K$ denotes the number of nearest neighbors used in the estimation. By looking for distant state embeddings during an unsupervised pre-training phase, they manage to considerably speed up task-learning in the DeepMind Control Suite. The representation $f$ can also derive from a random encoder \cite{seo2021state} or a contrastive loss that ensures the Euclidean proximity between consecutive states \cite{tao2020novelty,yarats2021reinforcement}. \rebut{Alternatively, GoCu \cite{bougie2020skill} achieves SOTA results on Montezuma's revenge by learning a representation with a VAE and rewarding the agent based on how distant, in terms of timesteps, a state is from a set of $K$ other states.}
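A minimal sketch of such a k-nearest-neighbor bonus is given below, assuming the representation $f$ has already been applied to the states; the buffer, the dimensionality and the value of $k$ are arbitrary illustrative choices, not the exact setting of \textit{APT}.
\begin{verbatim}
import numpy as np

def knn_intrinsic_reward(embedded_state, embedded_buffer, k=4):
    """APT-like bonus: log(1 + mean distance to the k nearest neighbors).

    embedded_state: array of shape (d,), the representation f(s') of the new state.
    embedded_buffer: array of shape (n, d), representations f(S_b) of stored states.
    """
    distances = np.linalg.norm(embedded_buffer - embedded_state, axis=1)
    k_nearest = np.sort(distances)[:k]   # distances to the k closest neighbors
    return np.log(1.0 + k_nearest.mean())

buffer = np.random.randn(500, 3)                       # hypothetical buffer of f(s)
print(knn_intrinsic_reward(np.zeros(3), buffer))       # dense region -> small bonus
print(knn_intrinsic_reward(np.full(3, 10.0), buffer))  # isolated state -> large bonus
\end{verbatim}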
\paragraph{Identifying different states.}
Instead of relying on euclidian distance, one can try to learn a similarity function. \textbf{EX$^2$} \cite{fu2017ex2} learns a discriminator to differentiate states from each other: when the discriminator does not manage to differentiate the current state from those in the buffer, it means that the agent has not visited this state enough and it will be rewarded. States are sampled from a buffer, implying the necessity to have a large buffer. To avoid this, some methods distill recent states in a prior distribution of latent variables \cite{kim2019curiosity,klissarovvariational}. The intrinsic reward for a state is then the KL-divergence between a fixed diagonal Gaussian prior and the posterior of the distribution of latent variables. In this case, common latent states fit the prior while novel latents diverge from the prior.
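For the latent-prior variant, the KL-divergence between a diagonal Gaussian posterior and a standard normal prior has a closed form, which a reward module could compute as in the hedged sketch below; the encoder outputs $\mu$ and $\log\sigma^2$ are assumed to be given and the dimensions are arbitrary.
\begin{verbatim}
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ) used as an intrinsic reward:
    common latent codes (close to the prior) get a small bonus, novel codes a large one."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

# Hypothetical encoder outputs for a familiar and a novel observation.
print(kl_to_standard_normal(np.zeros(8), np.zeros(8)))      # ~0: fits the prior
print(kl_to_standard_normal(np.full(8, 3.0), np.zeros(8)))  # large: novel latent
\end{verbatim}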
\paragraph{Intra-episode novelty.}
K-nearest neighbors intrinsic rewards have also been employed to improve intra-episode novelty \cite{stanton2018deep}. It contrasts with standard exploration since the agent looks for novel states within the current episode: typically, it can try to reach all states after every reset. This setting is possible when the policy depends on all its previous interactions, which is often the case when an agent evolves in a POMDP, since the agent has to be able to predict its value function even though it varies widely during episodes. This way, ECO \cite{savinov2018episodic} and \textit{Never Give Up} \cite{badia2019never} use an episodic memory and learn to reach states that have not been visited during the current episode.
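A simplified sketch of an intra-episode bonus in this spirit is given below: an episodic memory is wiped at every reset and the reward is the average distance to the $k$ closest states stored during the current episode; this is only an illustration and not the exact mechanism of ECO or \textit{Never Give Up}.
\begin{verbatim}
import numpy as np

class EpisodicNoveltyBonus:
    """Reward states that are far from everything seen in the *current* episode.
    The memory is wiped at every reset, so the bonus is purely intra-episode."""

    def __init__(self, k=3):
        self.k = k
        self.memory = []

    def reset(self):
        self.memory = []

    def reward(self, embedded_state):
        if not self.memory:
            self.memory.append(embedded_state)
            return 1.0                             # first state of the episode
        dists = np.linalg.norm(np.stack(self.memory) - embedded_state, axis=1)
        bonus = np.sort(dists)[: self.k].mean()    # distance to episodic neighbors
        self.memory.append(embedded_state)
        return float(bonus)

bonus = EpisodicNoveltyBonus()
bonus.reset()
for s in np.random.randn(5, 2):   # hypothetical embedded states of one episode
    print(bonus.reward(s))
\end{verbatim}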
\paragraph{Conclusion.} K-nn methods turn out to be simple to experiment with, but they strongly rely on learnt dynamic-aware representations since they fully take advantage of a meaningful Euclidean proximity in the embedding; their theoretical connection to the rigorous approximation of entropy remains most of the time unclear and the approach scales badly with an increase of the memory size. We note that simple methods can tackle the issue of finding the neighbors by partitioning close states together \cite{yarats2021reinforcement}. Overall, we observe efficient exploration and the methods easily translate to intra-episode exploration.
\subsection{Conclusion}
In this section, we reviewed works that maximize novelty to improve exploration with flat policies. We formalized novelty as actively discovering a representation according to the infomax principle, even though most works only maximize the entropy of states/representations of states. \rebut{As highlighted by \tabref{tab:novelty}, these methods can be more efficient than surprise-based methods. They can also be robust to stochasticity thanks to a specific learnt representation or the use of an ensemble of encoders \cite{seo2021state}.}
Some works manage to learn a representation that matches the inherent structure of the environment \cite{tao2020novelty}. It suggests that learning a good representation is most of the time enough. For instance, \citet{guo2021geometric} and \citet{tao2020novelty} compute a reward based on a learnt representation, but badly represented states may also tend to be located in low-density areas. As a result, active representation entropy maximization would correlate with state-conditional entropy minimization.
We are not aware of many methods that actively and explicitly maximize $I(Z;S)$. Yet, we stress three methods that strive to actively learn a representation of states. In \textit{CRL} \cite{du2021curious}, \textit{NOR} \cite{nachum2019near} and \textit{CuRe} \cite{aljalbout2021seeking}, the agent plays a minimax game. A module learns a representation function with a contrastive loss and the agent actively challenges the representation by looking for states with a large loss.
\section{Skill learning}\label{sec:skilllearning}
In our everyday life, nobody has to think about moving one's arm muscles to grasp an object. A command to take the object is just issued. This can be done because an acquired skill can be effortlessly reused.
Skill abstraction denotes the ability of an agent to learn a representation of diverse skills. We formalize skill abstraction as maximizing the mutual information between the goal $g \in G$ and some of the rest of the contextual states $u(\tau) \in u(\mathcal{T})$, denoted as $I(G; u(\mathcal{T}))$, where $\tau \in \mathcal{T}$ is a trajectory and $u$ a function that extracts a subpart of the trajectory (the last state for example). The definition of $u$ depends on the wanted semantic meaning of a skill. Letting $s_0$ refer to the state at which the skill started and $s$ to a random state from the trajectory, we highlight two settings based on the literature:
\begin{itemize}
\item $u(\mathcal{T}) = S$, the agent learns skills that target a particular state of the environment \cite{eysenbach2018diversity}.
\item $u(\mathcal{T}) = \mathcal{T}$, the agent learns skills that follow a particular trajectory. This way, two different skills can end in the same state if they cross different areas \cite{co2018self}.
\end{itemize}
Most works maximize $I(G; S)$, so that, unless stated otherwise, we refer to this objective. In the following, we will study the different ways to maximize $I(G;S)$, which can be written under its reversed form $I(S;G) = H(G) - H(G|S)$ or its forward form $I(G;S) = H(S) - H(S|G)$ \cite{campos2020explore}. In particular, we emphasize that:
\begin{subequations}
\begin{align}
- H(G | S) &= \sum_{g \in G, s \in S} p(g,s) \log p(g|s) \\
&= \mathbb{E}_{\substack{g \sim p(g) \\ s \sim \pi^g }} \log p(g|s)
\label{eq:im}
\end{align}
\end{subequations}
where, to simplify, $p(g)$ is the current distribution of goals (approximated with a buffer) and $s \sim \pi^g$ denotes the distribution of states that results from the policy that achieves $g$. Note that $p(g,s) = p(s|g)p(g) $.
In this section, we first focus on methods that assume they can learn all skills induced by a given goal space/goal distribution, and that assign parts of trajectories to every goal. The second set of methods directly derives the goal space from visited states, so that there are two different challenges that we treat separately: the agent has to learn to reach a selected goal and it must maximize the diversity of goals it learns to reach. We make this choice of decomposition because some contributions focus on only one part of the objective function.
\subsection{Fixing the goal distribution}\label{sec:predefinedG}
The first approach assumes the goal space is arbitrarily provided, except for the semantic meaning of a goal. In this setting, the agent samples goals uniformly from $G$, ensuring that $H(G)$ is maximal, and it progressively assigns every possible goal to a part of the state space. To do this assignment, the agent maximizes the reward derived from \eqref{eq:im}:
\begin{equation}
R(g,s,a,s') = \log q_{\omega}(g|s')
\label{eq:vlbim}
\end{equation}
where $q_{\omega}(g|s')$ represents a learnt discriminator (often a neural network) that approximates $p(g|s')$.
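As an illustration of \eqref{eq:vlbim}, the hedged sketch below computes the reward of a discrete skill from hypothetical discriminator logits; the additional $-\log p(g)$ term used by \textit{DIAYN} is shown as a constant shift corresponding to a uniform prior over skills.
\begin{verbatim}
import numpy as np

def skill_reward(discriminator_logits, goal_index, n_skills):
    """Reward the skill `goal_index` when the discriminator recognizes it from s'.
    `discriminator_logits` are the hypothetical outputs of q_omega(.|s') before softmax."""
    log_q = discriminator_logits - np.log(np.sum(np.exp(discriminator_logits)))
    # DIAYN-style variant: subtracting log p(g) for a uniform prior only adds
    # the constant log(n_skills) to the reward.
    return log_q[goal_index] + np.log(n_skills)

n_skills = 4
logits = np.array([0.1, 2.5, -1.0, 0.3])   # discriminator is confident about skill 1
print(skill_reward(logits, goal_index=1, n_skills=n_skills))  # large reward
print(skill_reward(logits, goal_index=2, n_skills=n_skills))  # small (negative) reward
\end{verbatim}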
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/alldiayn.drawio.pdf}
\caption{\rebut{Illustration of the implicit learning steps of algorithms that use a fixed goal distribution.} (a) Skills are not learnt yet. \rebut{The discriminator randomly assigns partitions of the state space to goals.} (b) The discriminator tries unsuccessfully to distinguish the skills. (c) Each skill learns to go in the area assigned to it by the discriminator. (d) Skills locally spread out by maximizing action entropy \protect\cite{haarnoja2018soft}. \rebut{The discriminator successfully partitions the areas visited by each skill.}}
\label{fig:diaynall}
\end{figure}
At first, we focus on a discrete number of skills, where $p(g)$ represents a uniform categorical distribution. \figref{fig:diaynall} sums up the learning process with two discrete skills: 1- skills and the discriminator $q_{\omega}$ are randomly initialized; 2- the discriminator tries to differentiate the skills with states $s$ from their trajectories, in order to approximate $p(g|s)$; 3- skills are rewarded with \eqref{eq:vlbim} in order to make them go in the area assigned to them by the discriminator; 4- finally, skills are clearly distinguishable and target different parts of the state space. \textit{SNN4HRL} \cite{florensa2017stochastic} and \textit{DIAYN} \cite{eysenbach2018diversity} implement this procedure by approximating $p(g|s)$ with, respectively, a partition-based normalized count and a neural network. \textit{VALOR} \cite{achiam2018variational} also uses a neural network, but discriminates whole trajectories. In this setting, the agent executes one skill per episode.
\textit{HIDIO} \cite{zhang2020hierarchical} sequentially executes skills, yet it is not clear how they manage to avoid forgetting previously learnt skills. Maximizing $I(G;S|S_0)$ like \textit{VIC} \cite{gregor2016variational} or $I(G;S_0|S)$ with \textit{R-VIC} \cite{baumli2021relative} makes it hard to use a fixed (for instance uniform) distribution in the entropy term $H(G|S_0)$, because every skill may not be executable everywhere in the state space. Therefore, they also maximize the entropy term with another reward bonus similar to $\log p(g|s_0)$. They learn discriminable skills, but still struggle to combine them on complex benchmarks \cite{baumli2021relative}. Keeping $p(g)$ uniform, \textit{DADS} \cite{sharma2019dynamics} maximizes the forward form of the mutual information $I(S;G|S_0) = H(S|S_0) - H(S|G,S_0)$ by approximating $p(s | s_0)$ and $p(s | s_0,g)$. This method makes it possible to plan over skills and can combine several locomotion skills. However, this requires several conditional probability density estimations on the ground state space, which may scale badly to higher-dimensional environments.
These methods tend to stay close to their starting point \cite{campos2020explore} and do not learn skills that cover the whole state space. In fact, it is easier for the discriminator to overfit over a small area than to make a policy go in a novel area; this results in a lot of policies that target a restricted part of the state space \cite{choi2021variational}. Accessing the whole set of true possible states and deriving the set of goals by encoding states can considerably improve the coverage of skills \cite{campos2020explore}.
\paragraph{Approaches for a better coverage of states.} Heterogeneous methods address the problem of overfitting of the discriminator. The naive way can be to regularize the learning process of the discriminator. \textit{ELSIM} \cite{aubret2020elsim} takes advantage of L2 regularization and progressively expands the goal space $G$ to cover larger areas of the state space, and \citet{choi2021variational} propose to use spectral normalization \cite{miyato2018spectral}. More consistent dynamic-aware methods may further improve regularization; however, it remains hard to scale these methods to the large number of skills required by a large environment. In the above-mentioned methods, the number of skills greatly increases \cite{achiam2018variational,aubret2020elsim} and the discrete skill embedding does not provide information about the proximity of skills. Therefore, learning a continuous embedding may be more efficient.
\paragraph{Continuous embedding.} The prior uniform distribution $p(g)$ is far more difficult to set in a continuous embedding. One can introduce the \textit{continuous DIAYN} \cite{choi2021variational,zhang2020hierarchical} with a prior $p(G) = \mathcal{N}(0^d,I)$, where $d$ is the number of dimensions, or the \textit{continuous DADS} with a uniform distribution over $[-1; 1]$ \cite{sharma2019dynamics}, yet it remains unclear how the skills could adapt to complex environments, where the prior does not globally fit the inherent structure of the environment \rebut{(\textit{e.g.} a disc-shaped environment)}. \textit{VISR} \cite{visf2020ansen} seems to, at least partially, overcome this issue with a long unsupervised training phase and successor features. They uniformly sample goals on the unit sphere and compute the reward as a dot product between unit-normed goal vectors and successor features, $\log q_{\omega}(g|s) = \phi_{successor}(s)^T g$.
\paragraph{Conclusion.} This set of methods manages to learn discrete skills that can be combined, yet, despite regularization, discrete skills struggle to cover a very large state space \cite{aubret2020elsim}. Successful adaptations that scale it up to large state spaces currently rely on the relevance of successor features. In the next two sections, we study how to maximize the mutual information by assuming the goal space derives from the state space.
\subsection{Achieving a state-goal}\label{sec:goalstate}
In this section, we review how current methods maximize the goal-achievement part of the objective of the agent, $-H(S_g|S)$, where $S_g$ refers to the goal-relative embedding of states. We temporarily set aside $H(S_g)$ and we will come back to it in the next subsection, \secref{eq:diversestate}, mainly because the two issues are tackled separately in the literature.
The term $- H(S_g | S)$ can be written as:
\begin{align}
- H(S_g | S) &= \sum_{S_g,S} p(s_g,s) \log p(s_g|s) = \mathbb{E}_{\substack{s_g \sim p(s_g) \\ s \sim \pi^{s_g} }} \log p(s_g|s)
\end{align}
where, to simplify, $p(s_g)$ is the current distribution of goal-states (approximated with a buffer) and $s \sim \pi^{s_g}$ denotes the distribution of states that results from the policy that achieves $s_g$. If $p(s_g|s')$ is modelled as an unparameterized Gaussian with a unit-diagonal co-variance matrix, we have $\log p(s_g|s') = -\frac{1}{2}||s_g-s'||_2^2 + Const$, so that we can reward an agent according to:
\begin{equation}
R(s_g,s,a,s')= -||s_g-s'||_2^2.
\label{eq:distance_reward}
\end{equation}
It means that if the goal is a state, the agent must minimize the distance between its state and the goal state. To achieve this, it can take advantage of a goal-conditioned policy $\pi^{s_g}(s)$.
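As a minimal sketch, \eqref{eq:distance_reward} and a sparse thresholded variant (in the spirit of some of the methods discussed below) can be computed as follows; the goal, states and threshold are arbitrary illustrative values.
\begin{verbatim}
import numpy as np

def dense_goal_reward(goal_state, next_state):
    """Dense reward of the form -||s_g - s'||_2^2."""
    return -float(np.sum((goal_state - next_state) ** 2))

def sparse_goal_reward(goal_state, next_state, threshold=0.5):
    """Sparse variant: success when the distance falls below a threshold."""
    return 0.0 if np.linalg.norm(goal_state - next_state) < threshold else -1.0

goal = np.array([1.0, 2.0])
print(dense_goal_reward(goal, np.array([0.5, 2.0])))   # -0.25
print(sparse_goal_reward(goal, np.array([0.9, 2.1])))  # 0.0 (goal reached)
\end{verbatim}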
\paragraph{Ground state space.} This way, \textit{Hierarchical Actor-Critic (HAC)} \cite{levy2018hierarchical} directly uses the state space as a goal space to learn three levels of options (the options from the second level are selected to fulfill the chosen option from the third level). A reward is given when the distance between states and goals (the same distance as in \eqref{eq:distance_reward}) is below a threshold, and they take advantage of HER to avoid directly using the threshold. Similar reward functions can be found in \citet{pitis2020maximum} and \citet{zhao2019maximum}. Related to these works, \textit{HIRO} \cite{nachum2019data} uses as a goal the difference between the initial state and the state at the end of the option, $u(\mathcal{T}) = S_f - S_0$.
This approach is relatively simple and does not require extra neural networks. However, there are two problems with using the state space in the reward function. Firstly, a distance (like L2) makes little sense in a very large space such as images composed of pixels. Secondly, it is difficult to make a manager policy learn on an overly large action space. Typically, an algorithm using images as goals implies an action space of $84\times 84\times 3$ dimensions for the goal-selection policy (in the case of an image of standard shape). Such a wide space is currently intractable, so these algorithms can only work on low-dimensional state spaces.
\paragraph{Learning a representation of goals.} To tackle this issue, an agent can learn a low-dimensional embedding $f$ of the state space and maximize the reward of \eqref{eq:distance_reward_phi} using a goal-conditioned policy $\pi^{f(s_g)}(s)$:
\begin{equation}
R(s_g,s,a,s')= -||f(s_g)-f(s')||_2^2.
\label{eq:distance_reward_phi}
\end{equation}
Similarly to \eqref{eq:distance_reward}, this amounts to maximizing $- H(f(S_g) | f(S))$. \textit{RIG} \cite{nair2018visual} proposes to build the feature space independently with a variational auto-encoder (VAE); but this approach can be very sensitive to distractors (i.e. features inside states that are useless for the task or goal) and does not allow features to be correctly weighted. Similar approaches also encode parts of trajectories \cite{kim2021unsupervised,co2018self} for similar mutual information objectives. \textit{SFA-GWR-HRL} \cite{zhou2019vision} uses unsupervised methods such as the \textit{slow feature analysis} \cite{wiskott2002slow} and \textit{growing when required} \cite{marsland2002self} algorithms to build a topological map. A hierarchical agent then uses nodes of the map, representing positions in the world, as a goal space. However, the authors do not compare their contribution to previous approaches.
Other approaches learn a state embedding that captures the proximity of states with contrastive losses. For instance, \textit{DISCERN} learns the representation function by maximizing the mutual information between the last state representation and the state-goal representation. Similarly to works in \secref{sec:predefinedG}, the fluctuations around the objective allow states around $s_g$ to be brought closer to it in the representation. More explicitly, the representation of \textit{NOR} \cite{nachum2019near} maximizes $I(f(S_{t+k});f(S_t),A_{t:t+k})$ and the one of \textit{LESSON} \cite{li2021learning} maximizes $I(f(S_{t+1});f(S_t))$; \textit{LESSON} and \textit{NOR} target a change in the representation and manage to navigate in a high-dimensional maze while learning the intrinsic Euclidean structure of the mazes (cf. \tabref{tab:skills}). Their skills can be reused on several environments. However, experiments are made in 2-dimensional embedding spaces and it remains unclear how relevant goals defined as state changes may be in an embedding space with more dimensions. The more the number of dimensions increases, the more difficult it will be to distinguish possible skills from impossible ones in a state. \rebut{In addition, they need dense extrinsic rewards to learn to select the skills to execute. Thus, they generate tasks with binary rewards at locations uniformly distributed in the environment such that the agent learns to achieve the tasks from the simplest to the hardest. This progressive learning generates a curriculum, helping to achieve the hardest task.}
\paragraph{Conclusion.} To sum up, representation learning methods allow learning state-based skills over complex state spaces. Learning this representation function combined with the use of the Euclidean distance as a reward function amounts to learning a particular form of reward function, in addition to providing pre-computed features to the goal-conditioned policy. \rebut{As highlighted by \tabref{tab:skills}, learnt representations allow the approaches to scale to more complex goal spaces}. In the next subsection, we study how to maximize $H(S)$ so as to make sure learnt skills target different areas of the state space. \rebut{As highlighted by \tabref{tab:skills}, it will make it possible to reach very distant goals without being assisted by a curriculum of tasks.}
\subsection{Proposing diverse state-goals}\label{eq:diversestate}
To make sure the agent maximizes the mutual information between its goals and all visited states, it must sample a diverse set of goal-states. In other words, it has to maximize $H(S_g)$ but through goal selection rather than with an intrinsic bonus as in \secref{sec:novelty}. Similarly to works on novelty (cf. \secref{sec:novelty}), such entropy maximization along with skill acquisition (cf. \secref{sec:goalstate}) tackles the exploration challenge, but without facing catastrophic forgetting (cf. \secref{sec:detachment}) since the agent does not forget its skills.
A naive approach would be to generate random values in the goal space, but this faces a considerable problem: the set of achievable goals is often a very small subset of the entire goal space. To tackle this, a first approach can be to explicitly learn to differentiate these two sets of goals \cite{florensa2018automatic,racaniere2019automated}, using for example a Generative Adversarial Network (GAN) \cite{florensa2018automatic,goodfellow2014generative}, but it is ineffective in complex environments \cite{pong2019skew}. Other works obtain good results on imagining new goals, but they use a compositional goal space, either given \cite{colas2019curious} or learnt with a dataset \cite{khazatsky2021can}; results show it may be a strong candidate for object-based representations. In contrast, in a more general case, an agent can simply set a previously met state as a goal; this way, it ensures that goals are reachable, since they have already been achieved. In the rest of this section, we focus on this set of methods.%
In \textit{RIG} \cite{nair2018visual}, the agent randomly samples states as goals from its buffer, but this does not increase the diversity of states, and thus, the diversity of learnt skills. \citet{pong2019skew} showed theoretically and empirically that, by sampling goals following an $\alpha$-more uniform distribution over the support of visited states than the ``achieved'' distribution, the distribution of states of the agent can converge to the uniform distribution. Intuitively, the agent just samples low-density goals more often, as illustrated in \figref{fig:reweight}. There are several ways to increase the importance of low-density goal-states, which we introduce in the following.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{images/reweightx3.drawio.pdf}
\caption{\rebut{Illustration of the reweighting process. (a) probability of visited states to be selected as goals before reweighting; (b) probability of visited states to be selected as goals after density reweighting; (c) probability of visited states to be selected as goals after density/reward reweighting. This figure completes and simplifies the figure of \protect\citet{pong2019skew}.}}
\label{fig:reweight}
\end{figure}
\paragraph{Density estimation in the ground state space.} \textit{DISCERN} \cite{warde2018unsupervised} proposes to sample uniformly over the support of visited states with a simple procedure. Every time the agent wants to add an observation to its buffer, it randomly samples another observation from its buffer and only keeps the one that is the farthest from all other states of the buffer. This way, it progressively builds a uniform distribution of states inside its buffer. However, it uses the Euclidean distance to compare images, which may not be relevant. Other approaches select the state that has the lowest density (\textit{OMEGA}) \cite{pitis2020maximum} according to a kernel density estimation, or use the rank of state densities \cite{zhao2019curiosity} estimated with a Variational Gaussian Mixture Model \cite{blei2006variational}. In contrast with them, \textit{Skew-fit} \cite{pong2019skew} provides more flexibility on how uniform one wants the distribution of states to be. \textit{Skew-fit} extends RIG, learns a parameterized generative model $q_{\rho}(S) \approx p(S)$ and skews the generative model (VAE) with the ratio:
\begin{equation}
q_{\rho}(s)^{\alpha_{skew}}.\label{eq:skewratio}
\end{equation}
where $\alpha_{skew} < 0$ determines the speed of uniformisation. This way, it gives more importance to low-density states. Then, at the beginning of each epoch, which is made of a predefined number of timesteps, it weights all visited states according to the density approximated by the generative model. Skew-fit manages to explore image-based environments very efficiently. As highlighted in \cite{aubret2021distop}, this ratio, applied to a discrete number of skills, amounts to rewarding a Boltzmann goal-selection policy with:
\begin{equation}
R(s_g) = (1+\alpha_{skew}) \log p(s_g).
\end{equation}
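A hedged sketch of this skewed goal sampling is given below: each buffered state is weighted by $q_{\rho}(s)^{\alpha_{skew}}$ with $\alpha_{skew}<0$ and goals are drawn from the normalized weights; the states and densities are toy values and the density model is assumed to be already fitted.
\begin{verbatim}
import numpy as np

def skewed_goal_sampling(buffer_states, estimated_densities, alpha_skew=-1.0, n_goals=5):
    """Sample goal-states from the buffer with probability proportional to
    q_rho(s)^alpha_skew (alpha_skew < 0), so low-density states are over-sampled."""
    weights = estimated_densities ** alpha_skew
    probabilities = weights / weights.sum()
    idx = np.random.choice(len(buffer_states), size=n_goals, p=probabilities)
    return buffer_states[idx]

states = np.linspace(0.0, 1.0, 10).reshape(-1, 1)   # hypothetical buffered states
densities = np.array([0.3] * 8 + [0.01] * 2)        # two rarely visited states
print(skewed_goal_sampling(states, densities))       # rare states dominate the goals
\end{verbatim}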
\paragraph{Density reweighting by partitioning the embedding space.} With a different objective, \textit{GRIMGEP} \cite{kovavc2020grimgep} partitions the VAE embedding of Skew-fit with a Gaussian Mixture Model \cite{rasmussen1999infinite} to estimate the learning progress of each partition and avoid distractors. The density weighting can also operate in a learnt embedding. \textit{HESS} \cite{li2021efficient} partitions the embedding space of \textit{LESSON} and rewards with a variant of a count-based bonus (see \secref{sec:infogain}). It improves exploration in a two-dimensional latent embedding, but the size of the partitions may not scale well if the agent considers more latent dimensions. In contrast, \textit{DisTop} \cite{aubret2021distop} dynamically clusters a dynamic-aware embedding space using a variant of the Growing When Required algorithm \cite{marsland2002self}; it estimates the density of a state according to how many states its partition contains and skews the distribution of sampled goal-states similarly to Skew-fit. \textit{HESS} and \textit{DisTop} demonstrate their ability to explore and navigate with an ant inside complex mazes without extrinsic rewards. \rebut{As shown in \cite{aubret2021distop} (illustration in \figref{fig:reweight}c), it is also possible to use extrinsic rewards to weight the distribution of sampled state-goals.}
\paragraph{Conclusion.} Entropy maximization methods improve over standard skill learning methods by learning to reach as many states as possible.
We expect further works to show the ability to scale to even more complex environments, with a higher-dimensional latent structure. For example, learning compositional representations (modeling disentangled objects and relations) remains hard: \rebut{SOTA methods only manipulate a few objects \cite{pong2019skew}.}
\subsection{Conclusion} We found two main ways to discover skills. The first one provides a goal space and assigns goals to areas of the state space. There is empirical evidence emphasizing that it struggles to learn and sequentially execute skills that target different areas of the state space. The second method derives the goal space from the state space with a representation learning method and over-weights the sampling of low-density visited areas. \rebut{This set of works showed the ability to hierarchically navigate in simple environments using moderately morphologically complex agents.}
\section{Outlooks of the domain}\label{sec:outlooks}
In this section, we take a step back and thoroughly analyze the results of our overall review. We first study the exploration process of flat intrinsic motivation in comparison with hierarchical intrinsic motivations in \secref{sec:detachment}; then, this will motivate our focus on the challenges induced by learning a deep hierarchy of skills in \secref{sec:dev}. Finally, in \secref{sec:flatim}, we discuss how flat and hierarchical intrinsic motivations can and should cohabit in such hierarchy.
\subsection{Long-term exploration, detachment and derailment}\label{sec:detachment}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{images/detachment4.png}
\caption{Illustration of the \textit{detachment} issue. Image extracted from \protect\citet{goexplore}. Green color represents intrinsically rewarding areas, white color represents no-reward areas and purple areas are currently being explored. (a) The agent has not explored the environment yet. (b) It discovers the rewarding area at the left of its starting position and explores it. (c) It consumed the close intrinsic rewards on the left part; thus, it prefers gathering the intrinsic rewards of the right part. (d) Due to catastrophic forgetting, it forgot how to reach the intrinsically rewarding area on the left.}
\label{fig:detachment2}
\end{figure}
The most challenging benchmarks used for flat intrinsic motivations (surprise and novelty) are \textit{DMLab} and \textit{Montezuma's revenge}, yet very sparse reward games such as \textit{Pitfall!} are not currently addressed and should be investigated. In \textit{Pitfall!}, the first reward is reached only after multiple rooms, where specific action sequences are required to go through each room. State-of-the-art IM methods \cite{ostrovski2017count} achieve 0 mean reward in this game. In contrast, imitation RL methods \cite{aytar2018playing,hester2018deep} are insensitive to such a specific reward, and thus exceed IM methods with a mean reward of 37232 on \textit{Montezuma's revenge} and 54912 on \textit{Pitfall!}. Even though these methods use expert knowledge, this performance gap exhibits their resilience to long-term rewards. Compared with flat intrinsic reward methods, which do not exceed a 10000 score on \textit{Montezuma's revenge} \cite{burda2018exploration} and hardly achieve a score on \textit{Pitfall!} \cite{ostrovski2017count}, it shows that flat IMs are still far from solving the overall problem of exploration.
Furthermore, we want to emphasize that the challenge is harder when the intrinsic reward itself is sparse \cite{burda2018exploration}. In \textit{Montezuma's revenge}, it is about avoiding using a key too quickly in order to be able to use it later. In everyday life, it can be about avoiding spending money too quickly. In fact, it looks like there is an exploration issue in the intrinsic reward function itself: the intrinsic reward can guide exploration only on the condition that the agent finds this intrinsic reward. There may be two reasons causing the intrinsic reward to be sparse:
\begin{enumerate}
\item The first comes from partial observability, with which most models are incompatible. Typically, if an agent has to push a button and can only see the effect of this pushing after a long sequence of actions, density models and predictive models may not provide meaningful intrinsic rewards. There would be a too large distance between the event ``push a button'' and the intrinsic reward.
\item \figref{fig:detachment2} illustrates the second issue, called \textit{detachment} \cite{goexplore,ecoffet2021first}. It results from a distant intrinsic reward coupled with catastrophic forgetting. Simply stated, the RL agent can forget the presence of an intrinsic reward in a distant area: it is hard to maintain the correct Q-value that derives from a distant, currently unvisited rewarding area. This is emphasized in on-policy settings.
\end{enumerate}
Pursuing such a distant intrinsic reward may be even harder due to the possible \textit{derailment} issue \cite{goexplore,ecoffet2021first}. Essentially, an agent may struggle to execute the long sequence of specific actions needed to reach a distant rewarding area because the local stochasticity incites local dithering all along the sequence. Detachment motivates the need for a hierarchical exploration \cite{ecoffet2021first} and derailment motivates frontier-based exploration \cite{bharadhwaj2020leaf}, which consists in deterministically reaching the area to explore before starting exploration.
\subsection{Deeper hierarchy of skills}\label{sec:dev}
According to \citet{brooks1991intelligence}, \textit{everything is grounded in primitive sensor motor patterns of activation}. This \textit{everything} may refer to the structure of the world and agent affordances. Capturing this knowledge amounts to forming concept representations and reusable skills \cite{weng2001autonomous}, using them as a basis for new skills \cite{prince2005ongoing}, exploring the environment to find new interesting skills, and autonomously self-generating goals in accordance with the level and morphology of the agent.
Most works presented in \secref{sec:skilllearning} abstract actions over a restricted number of hierarchical levels (generally one). This is necessary to well understand the mechanism of abstraction, but we want to argue that imposing deeper hierarchies could considerably enhance the agent's semantic comprehension of its environment. Organisms are often assumed to deal with compositions of behaviors, which in turn serve as building blocks for more complex behaviors \cite{flash2005motor}. This way, using a limited vocabulary of skills makes it easier to avoid the curse of dimensionality associated with the redundancy of a whole set of ground behaviors.
Our surveyed works \cite{nachum2019near,aubret2021distop,li2021learning,guo2021geometric,ermolov2020latent} already propose to learn the representations using the slowness principle \cite{wiskott2002slow}, which assumes temporally close states should be similarly represented. By configuring the time-extension of the representation, one may focus on different semantic parts of the state space. This can be seen in \secref{sec:abstraction}: 1- the agent can learn a very low-level representation that provides skills that manipulate the torques of a creature \cite{aubret2021distop}; 2- skills can also orient an agent in a maze by extracting $(x,y)$ coordinates from a complex state representation \cite{li2021efficient}. While they do not try to combine and learn several representations at the same time, further works could consider separating different parts of states (\textit{e.g.} agent positions and object positions \cite{mutual2021zhao}) or learning these representations at different time scales. In practice, data-augmentation methods already allow learning object-oriented representations \cite{mitrovic2020representation,grill2020bootstrap,mussa2004neural}. Most augmentations could also be derived with contrast over time by considering, for instance, an embodied agent moving its eyes/head (crops), turning its head (rotation), controlling vergence (blur) or, without interventions, color and brightness changes \cite{chen2020simple}. Overall, it stresses the potential of time-contrastive representations for disentangling the whole state space and providing semantically different skills; new works in this area may unlock new kinds of skills.
\textit{Skill focus.}
In a developmental process, multi-level hierarchical RL questions the ability of the agent to learn all policies of the hierarchy simultaneously. This obviously relates to the ability of organisms to continually learn throughout their lifetime; but in a more practical way, it may allow focusing the learning process on the skills that are interesting for higher-level skills. This focus avoids learning everything in the environment \cite{aubret2021distop}, which is hard and obviously not done by biological organisms. For instance, most people cannot do a somersault.
\textit{Critical periods and lifelong learning.}
Considering a goal representation that changes over time introduces new issues for the agent. In this case, the goal-conditioned policy may be perturbed by the changes of inputs and may no longer be able to reach the goal \cite{li2021efficient}. Current methods consider 1- developmental periods (unsupervised pre-training \cite{metzen2013incremental}); 2- modifying the representation every $k$-step epoch \cite{pong2019skew}; 3- imposing slow changes of the representation \cite{li2021efficient}. Further works may thoroughly investigate the relation and transitions between these methods since they can relate to the concept of critical periods \cite{hensch2004critical,konczak2004neural}. Critical periods assume that the brain is more plastic at some periods of development in order to acquire specific knowledge. Despite this mechanism, the brain slowly keeps learning throughout the lifetime. In the hierarchy of skills, the introduction of a new level may first result in a quick/plastic learning process, followed by slower changes.
\subsection{The role of flat intrinsic motivations}\label{sec:flatim}
In \secref{sec:detachment}, we essentially criticized the limited role that flat intrinsic motivations like surprise or novelty can play in favor of exploration, and we hypothesized in \secref{sec:dev} that deeper hierarchies could make an understanding of more complex affordances emerge. Then, what could be the roles of surprise and novelty?
\textit{Novelty.} We saw in \secref{sec:novelty} that novelty-seeking behaviors allow learning a correct representation of the whole environment; this can be a basis for learning diverse skills. While some methods consider a goal as a state and manage to avoid using novelty bonuses \cite{pong2019skew}, this is harder to do when skills have a different semantic (like a change in the state space). \citet{nachum2019near} provide a meaningful example of this: the agent acts to simultaneously discover a representation of the environment and achieve upper-level goals.
\textit{Surprise.} We leave aside the interest of surprise for learning a forward model that could be used for planning \cite{hafner2019learning} and rather focus on the learning process. Surprise amounts to looking for the learning progress of forward models so that, in a hierarchy of skills, it quantifies whether skills can currently be better learnt or not. This links surprise to curriculum learning \cite{bengio2009curriculum}, \textit{i.e.} can we find a natural order to efficiently learn skills? For example, assuming an agent wants to learn to reach state-goals in a maze, it would be smarter to start learning skills that target goals close to its starting position and to progressively extend its goal selection while learning other skills. Several strategies have been proposed to smartly and hierarchically select goals \cite{colas2019curious,linke2020adapting}, yet they often do not consider intrinsic skills \cite{colas2019curious}.
To sum up, we propose that the role of surprise and novelty may rather be to support the learning of skills. Novelty seeking helps to learn the representation required by the skill learning module and surprise speeds up the maximization of the skill learning objective. They may interact as a loop: first, the agent learns a new representation, then it evaluates surprise to select which skill to improve, and the skill learning process starts. Considering this, there would be several surprises and novelties: an agent can experience a novel or surprising interaction at one level of decision (injuring its toy while walking), yet it does not mean other levels would be surprised (it is still on the same road). This emphasizes the multi-dimensionality and relativity of the notions of surprise and novelty \cite{berlyne1960conflict}: only a part of the incoming stimuli may arouse the agent.
\section{Conclusion}
In this survey, we have presented the current challenges faced by DRL: namely 1- learning with \textit{sparse rewards} through exploration; 2- \textit{building a hierarchy of skills} in order to make credit assignment easier, as well as exploration with \textit{sparse rewards} and \textit{transfer learning}.
We identified several types of IM to tackle these issues, which we classified into three categories based on the maximized information-theoretic objective: \textit{surprise}, \textit{novelty} and \textit{skill learning}. Surprise- and novelty-based intrinsic motivations implicitly improve flat exploration while skill learning allows creating a hierarchy of reusable skills that also improves exploration.
\textbf{Surprise} results from maximizing the mutual information between the true model parameters and the next state, knowing the previous state, the action and the history of interactions. We have shown that it can be maximized through three sets of works: information gain over predictive models, information gain over density models, or prediction errors/learning progress. In practice, we found that the information gain over density models is ill-defined for purely stochastic areas and that the determinism assumption underpinning prediction error methods complicates their application. The next challenges may be to make good approximations of surprise tractable.
\textbf{Novelty} seeking can be assimilated to learning a representation of the environment, through the maximization of the mutual information between states and their representations. The most important term to actively maximize appears to be the entropy of states or representations, which can be approximated in two ways: 1- one can reward the agent according to the parametric density of its next state, but this density is complicated to estimate; 2- one can also reward an agent according to the distance between a state and the already visited states, making the approach tractable, in particular when the agent learns a dynamic-aware representation. We expect future works to benefit from directly looking for good representations rather than uniformity of states.
Finally, using \textbf{skill learning} objectives, which amount to maximizing the mutual information between a goal and a part of the trajectories of the corresponding skill, an agent can learn hierarchies of temporally-extended skills. Skills can be directly learnt by attributing parts of a fixed goal space to areas of the state space, but it remains to clarify how well goals can be embedded in a continuous way and whether the approaches are robust when skills are sequentially executed. The second approach derives the goal space from the state space, often through a time-contrastive loss, and expands the skill set by targeting low-density areas. It remains to be demonstrated how one could create larger hierarchies of skills.
The three objectives are compatible and we have discussed how they could interact to provide a robust exploration with respect to the \textit{detachment} issue, along with reusable hierarchical skills, a quick and focused skill acquisition and multi-semantic representations.
\section{Introduction}
In reinforcement learning (RL), an agent learns by trial-and-error to maximize the expected rewards gathered as a result of its actions performed in the environment \cite{sutton1998reinforcement}. Traditionally, an agent maximizes a reward defined according to the task to perform: it may be a score when the agent learns to solve a game or a distance function when the agent learns to reach a goal. The reward is then considered as extrinsic (or as a feedback) because the reward function is provided expertly and specifically for the task. With an extrinsic reward, many spectacular results have been obtained on Atari games \cite{bellemare15} with the Deep Q-network (DQN) \cite{mnih2015human} through the integration of deep learning into RL, leading to deep reinforcement learning (DRL).
However, despite the recent improvements of DRL approaches, they turn out to be unsuccessful most of the time when the rewards are scattered in the environment, as the agent is then unable to learn the desired behavior for the targeted task \citep{franccois2018introduction}. Moreover, the behaviors learned by the agent are hardly reusable, both within the same task and across many different tasks \citep{franccois2018introduction}. It is difficult for an agent to generalize the learnt skills to make high-level decisions in the environment. For example, such a skill could be \textit{go to the door} using primitive actions consisting in moving in the four cardinal directions, or even \textit{move forward} by controlling different joints of a humanoid robot as in the robotic simulator MuJoCo \citep{todorov2012mujoco}.
On another side, unlike RL, developmental learning \cite{piaget1952origins,cangelosi2018babies,oudeyer2016evolution} is based on the observation that babies, or more broadly organisms, acquire new skills while spontaneously exploring their environment \cite{gopnik1999scientist,barto2013intrinsic}. This is commonly called an intrinsic motivation (IM), which can be derived from an intrinsic reward. This kind of motivation allows the agent to autonomously gain new knowledge and skills, which then makes the learning process of new tasks easier \cite{baldassarre2013intrinsically}. For several years now, IM has been increasingly used in RL, fostered by important results and the emergence of deep learning. This paradigm offers a greater learning flexibility, through the use of a more general reward function, making it possible to tackle the issues raised above when only an extrinsic reward is used. Typically, IM improves the agent's ability to explore its environment, to incrementally learn skills independently of its main task, to choose an adequate skill to be improved and even to create a representation of its state with meaningful properties. In addition, as a consequence of its definition, IM does not require additional expert supervision, making it easily generalizable across environments.
\paragraph{Scope of our review.}
In this paper, we study and group together methods through a novel taxonomy based on information theoretic objectives. This way, \textbf{we revisit the notions of surprise, novelty and skill learning and show that they can encompass numerous works.} Each class is characterized by a computational objective that fits its eventual psychological definition. This allows us to situate/relate a large body of works and to highlight important directions of research. To sum up, this paper investigates the use of IM in the framework of DRL and considers the following aspects:
\begin{itemize}
\item The role of IM in addressing the challenges of DRL.
\item Classifying current heterogeneous works through few information theoretic objectives.
\item \rebut{Exhibiting the advantages of each class of methods.}
\item Important outlooks of IM in RL within and across each category.
\end{itemize}
\paragraph{Related works.} The overall literature on IM is huge \citep{barto2013intrinsic} and we only consider its application to DRL and IMs related to information theory. Therefore, our study of IMs is not meant to be exhaustive. Intrinsic motivation currently attracts a lot of attention and several works made a restricted study of the approaches. \citet{colas2020intrinsically} and \citet{amin2021survey} respectively focus on the different aspects of skill learning and exploration; \citet{baldassarre2019intrinsic} studies intrinsic motivation through the lens of psychology, biology and robotics; \citet{pateria2021hierarchical} review hierarchical reinforcement learning as a whole, including extrinsic and intrinsic motivations; \citet{linke2020adapting} experimentally compare different goal selection mechanisms. In contrast with these approaches, we study a large set of objectives, all based on intrinsic motivation, through the lens of information theory. We assume that our work is in line with the work of \citet{schmidhuber2008driven}, which postulates that organisms are guided by the desire to compress the information they receive. However, by reviewing the more recent advances in the domain, we formalize the idea of compression with the tools of information theory.
\rebut{\paragraph{Structure of the paper.}} This paper is organized as follows. As a first step, we discuss RL, define intrinsic motivation and explain how it fits the RL framework (\secref{sec:defs}). Then, we highlight the main current challenges of RL and identify the need for an additional outcome (\secref{sec:defis}). Thereafter, we briefly explain our classification (\secref{sec:classify}), namely surprise, novelty and skill learning and we detail how current works fit it (respectively \secref{sec:infogain}, \secref{sec:novelty} and \secref{sec:skilllearning}). Finally, we highlight some important outlooks of the domain (\secref{sec:outlooks}).
\section{Definitions and Background}\label{sec:defs}
In this section, we review the background of the RL field, explain the concept of IM and describe how to integrate IM into the RL framework through goal-parameterized RL, hierarchical RL and information theory. \rebut{We sum up the notations used in the paper in \tabref{tab:notations} in \appref{app:notations}.}
\subsection{Markov decision process}\label{sec:mdp}
The goal of an agent evolving in a Markov Decision Process (MDP) is to maximize the expectation of the cumulative rewards received through a sequence of interactions \citep{puterman2014markov}. An MDP is defined by: $S$ the set of possible states; $A$ the set of possible actions; $T$ the transition function $T : S \times A \times S \rightarrow [0,1]$, with $T(s,a,s') = p(s'|s,a)$; $R$ the reward function $R : S \times A \times S \rightarrow \mathbb{R}$; $d_0 : S \rightarrow \mathbb{R}$ the initial distribution of states. An agent starts in a state $s_0$ given by $d_0$. At each time step $t$, the agent is in a state $s_t$ and performs an action $a_t$; then it waits for the feedback from the environment composed of a state $s_{t+1}$ sampled from the transition function $T$, and a reward $r_t$ given by the reward function $R$. The agent repeats this interaction loop until the end of an episode. In reinforcement learning, the goal can be to maximize the expected discounted reward defined by $\sum_{t=0}^{\infty} \gamma^t r_t$ where $\gamma \in[0,1]$ is the discount factor. When the agent does not access the whole state, the MDP can be extended to a Partially Observable Markov Decision Process (POMDP) \citep{kaelbling1998planning}. In comparison with an MDP, it adds a set of possible observations $O$, which defines what the agent can perceive, and an observation function $\Omega: S \times O \rightarrow \mathbb{R}$ that defines the probability of observing $o \in O$ when the agent is in state $s$, \textit{i.e.} $\Omega(s,o) = p(o|s)$.
A reinforcement learning algorithm aims to associate actions $a$ to states $s$ through a policy $\pi$. This policy induces a $t$-step state distribution that can be recursively defined as:
\begin{equation}
d^{\pi}_t(s_t) = \int_S d^{\pi}_{t-1}(s_{t-1}) \int_A p(s_t|s_{t-1},a)\pi(a|s_{t-1}) da\, ds_{t-1}\label{eq:dpi}
\end{equation}
with $d^{\pi}_0 = d_0$. The goal of the agent is then to find the optimal policy $\pi^*$ maximizing the reward:
\begin{equation}
\pi^* = \argmax{\pi} \mathbb{E}_{\substack{s_0\sim d_0(S)\\
a_t \sim \pi(\cdot|s_t)\\
s_{t+1}\sim p(\cdot|s_t,a_t)}}
\left[\sum_{t=0}^{\infty} \gamma^t R(s_t,a_t,s_{t+1})\right]
\end{equation}
\rebut{where} $x \sim p(\cdot)$ \rebut{is equivalent to} $x \sim p(x)$.
In order to find the action maximizing the long-term reward in a state $s$, it is common to estimate the expected discounted gain following a policy $\pi$ from a state, noted $V_{\pi}(s)$, or from a state-action tuple, noted $Q_{\pi}(s,a)$ (cf. \eqref{eq:espeQ}). It enables measuring the impact of the state-action tuple on the expected
cumulative reward \cite{sutton1998reinforcement}.
\begin{equation}
Q_{\pi}(s,a) = \mathbb{E}_{\substack{a_t\sim\pi(\cdot|s_t)\\
s_{t+1}\sim p(\cdot|s_t,a_t)}}
\left[\sum_{t=0}^{\infty} \gamma^t R(s_t,a_t,s_{t+1})\,\Big|\,s_0=s,a_0=a \right]. \label{eq:espeQ}
\end{equation}
To compute these values, one can take advantage of the Bellman equation verified by the optimal Q-function:
\begin{equation}
\label{eq:bellman}
Q^*(s_t,a_t) = \mathbb{E}_{s_{t+1}\sim p(\cdot|s_t,a_t)} \big[ R(s_t,a_t,s_{t+1}) + \gamma \: \max_a Q^*(s_{t+1},a) \big].
\end{equation}
$Q$ and/or $\pi$ are often approximated with neural networks when the state space is continuous or very large \cite{mnih2016asynchronous,lillicrap2015continuous}.
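As a toy illustration of \eqref{eq:bellman} (not tied to any of the cited algorithms), the sketch below performs tabular Q-learning updates towards the Bellman target on a few hypothetical transitions; the environment sizes, learning rate and transitions are arbitrary.
\begin{verbatim}
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
gamma, lr = 0.9, 0.1

def q_learning_update(s, a, r, s_next):
    """One-step update towards the Bellman target r + gamma * max_a' Q(s', a')."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += lr * (target - Q[s, a])

# Hypothetical transitions (s, a, r, s') on a toy chain environment.
for s, a, r, s_next in [(0, 1, 0.0, 1), (1, 1, 0.0, 2), (2, 1, 1.0, 3)]:
    q_learning_update(s, a, r, s_next)
print(Q)
\end{verbatim}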
\subsection{Definition of intrinsic motivation}\label{sec:defint}
Simply stated, intrinsic motivation is about doing something for its inherent satisfaction rather than to get a positive feedback from the environment \cite{ryan2000intrinsic}. Looking at this definition, one can notice that intrinsic motivation is defined by contrast with extrinsic motivation; it highlights the difference between the two paradigms. Intrinsic motivation assumes the agent learns on its own while extrinsic motivation assumes there exists an expert/need that supervises the learning process.
According to \citet{singh2010intrinsically}, evolution provides a general intrinsic motivation (IM) function that maximizes a fitness function based on the survival of an individual. Curiosity, for instance, does not immediately produce selective advantages but enables the acquisition of skills that provide by themselves some selective advantages. More widely, the use of intrinsic motivation makes it possible to obtain intelligent behaviors which may later serve goals more efficiently than with only a standard reinforcement \cite{baldassarre2013intrinsically,baldassarre2011intrinsic,lehman2008exploiting}. Typically, a student doing his/her mathematical homework because he/she thinks it is interesting is intrinsically motivated, whereas his/her classmate doing it to get a good grade is extrinsically motivated \cite{ryan2000intrinsic}. In the future, the intrinsically motivated student may be more successful in math than the other one. This questions the relevance of using only standard reinforcement methods.
More rigorously, \citet{oudeyer2008can} explain that an activity \textit{is intrinsically motivating for an autonomous entity if its interest depends primarily on the collation or comparison of information from different stimuli and independently of their semantics}. In contrast, an extrinsic reward results from a static, unknown environment function which does not depend on the previous experience of the agent in the considered environment. The main point is that the agent must not have any \textit{a priori} on the semantics of the observations it receives. Here the term \textit{stimuli} does not refer to sensory inputs, but more generally to the output of a system which may be internal or external to the independent entity, thereby including \textit{homeostatic} body variables (temperature, hunger, thirst, attraction to sexual activities \dots) \cite{baldassarre2011intrinsic,berlyne1965structure}. Broadly speaking, the motivation of an agent can be internal (\textit{source of motivation}) while still being extrinsic (\textit{why} of the actions). For instance, when an agent is looking for food because of hunger, hunger is a stimulus coming into the cognitive system of the agent, so that it is an internal but extrinsic motivation. As another example, a child may do his/her homework because he/she thinks it will be crucial to later get a job. While the source of the motivation is internal, the true outcome comes from the environment.
Now that we have clarified the notion of intrinsic motivation, we study how to integrate it in the RL framework.
An extensive overview of IM can be found in \citet{barto2013intrinsic}.
\subsection{A model of RL with intrinsic rewards}\label{sec:modelRL}
Reinforcement learning is derived from behaviorism \cite{skinner} and usually uses extrinsic rewards \cite{sutton1998reinforcement}. However, \citet{singh2010intrinsically} and \citet{barto2004intrinsically} reformulated the RL framework to incorporate IM. We can differentiate \textit{rewards}, which are events in the environment, and \textit{reward signals}, which are internal stimuli to the agent. Thus, what is named \textit{reward} in the RL community is in fact a \textit{reward signal}. Inside the \textit{reward signal} category, there is a distinction between \textit{primary reward signals} and \textit{secondary reward signals}. The \textit{secondary reward signal} is a local \textit{reward signal} computed through expected future rewards and is related to the value function,
whereas the \textit{primary reward signal} is the standard \textit{reward signal} received from the MDP.
\begin{wrapfigure}{r}{0.3\linewidth}
\begin{centering}
\includegraphics[width=1\linewidth]{images/IM.drawio.pdf}
\caption{\rebut{Model of RL integrating IM}, taken from \protect\citet{singh2010intrinsically}. The environment is factored into an internal and an external environment, with all rewards coming from the former.}
\label{im:rlintrinsic}
\end{centering}
\end{wrapfigure}
In addition, rather than considering the MDP environment as the environment in which the agent achieves its task, this model suggests that the MDP environment can be formed of two parts: the \textbf{external part}, which corresponds to the potential task and the environment of the agent; the \textbf{internal part}, which computes the MDP states and the \textit{secondary reward signal} using, potentially, previous interactions. Consequently, we can consider an intrinsic reward as a \textit{reward signal} received from the MDP environment. The MDP state is no longer the external state but an internal state of the agent. However, from now on, we will follow the terminology of RL and the term \textit{reward} will refer to the \textit{primary reward signal}.
Figure \ref{im:rlintrinsic} summarizes the framework: the critic is in the internal part of the agent; it computes the intrinsic reward and deals with the credit assignment. The agent can merge intrinsic and extrinsic rewards in its internal part. The state includes sensations and any form of internal context; in this section we refer to this state as a contextual state. The decision can be a high-level decision decomposed by the internal environment into low-level actions.
This conceptual model incorporates intrinsic motivations into the MDP formalism. Now, we review how this model is instantiated in practice. Indeed, it is possible to extend RL to incorporate the three new components that are intrinsic rewards, high-level decisions and contextual states. We study them separately in the following sections.
\subsection{Intrinsic rewards and information theory}
Throughout our definition of intrinsic motivation, one can notice that the notion of \textit{information} comes up frequently. This is not coincidental, and quantifying information proves useful to generate intrinsic rewards. In this section, we provide the basics of information theory and explain how to combine intrinsic and extrinsic rewards. However, we emphasize that intrinsic rewards are not restricted to information measures; their characterization mostly depends on whether the reward function fits the properties of an intrinsic motivation.
The Shannon entropy quantifies the mean information necessary to determine the value of a random variable. Let $X$ be a random variable with probability density $p(X)$ satisfying the normalization and positivity requirements; we define its entropy by:
\begin{equation}
H(X) = -\int_{X} p(x)\log p(x) dx .
\end{equation}
In other words, it quantifies the disorder of a random variable. The entropy is maximal when $X$ follows a uniform distribution, and minimal when $p(X)$ is equal to zero everywhere except at one value, i.e. a Dirac distribution. From this, we can also define the entropy conditioned on a random variable $S$. It is similar to the classical entropy and quantifies the mean information necessary to determine $X$ knowing the value of another random variable $S$:
\begin{equation}
H(X|S) = -\int_{S} p(s)\int_{X} p(x|s)\log p(x|s) dx ds.
\end{equation}
The mutual information quantifies the information contained in a random variable $X$ about another random variable $Y$. It can also be viewed as the decrease of disorder in $X$ brought by $Y$. The mutual information is defined by:
\begin{equation}
I(X;Y) = H(X) - H(X|Y)\label{eq:MI}
\end{equation}
We can notice that the mutual information between two independent variables is zero (since $H(X|Y)=H(X)$). Similarly to the conditional entropy, the conditional mutual information quantifies the information contained in a random variable about another random variable, knowing the value of a third one. It can be written in various ways:
\begin{subequations}
\begin{align}
I(X;Y|S) &= H(X|S) - H(X|Y,S) = H(Y|S) - H(Y|X,S) \label{information2} \\
&= D_{KL} \Big[ p(X,Y|S) || p(X|S)p(Y|S)\Big] \label{kldiv}
\end{align}
\end{subequations}
We can see with \eqref{information2} that the mutual information is symmetric and that it characterizes the decrease in entropy of $X$ brought by $Y$ (or inversely). \eqref{kldiv} defines the conditional mutual information as the Kullback-Leibler divergence \cite{cover2012elements}, \rebut{denoted $D_{KL}(.||.)$}, between the distribution $p(X,Y|S)$ and the same distribution if $Y$ and $X$ were independent variables (the case where $H(Y|X,S) = H(Y|S)$).
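As an illustration, the following sketch (ours; it assumes discrete random variables given as probability tables, whereas the definitions above are written for densities) computes the entropy and the mutual information numerically:
\begin{verbatim}
import numpy as np

def entropy(p):
    """Shannon entropy H(X) of a discrete distribution p (in nats)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log 0 is taken to be 0
    return -np.sum(p * np.log(p))

def mutual_information(p_xy):
    """I(X;Y) = H(X) - H(X|Y), from a joint probability table p_xy[x, y]."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1)            # marginal over Y
    p_y = p_xy.sum(axis=0)            # marginal over X
    # H(X|Y) = sum_y p(y) * H(X | Y=y)
    h_x_given_y = sum(p_y[j] * entropy(p_xy[:, j] / p_y[j])
                      for j in range(p_xy.shape[1]) if p_y[j] > 0)
    return entropy(p_x) - h_x_given_y

# Independent variables: I(X;Y) = 0.
joint_indep = np.outer([0.5, 0.5], [0.25, 0.75])
# Perfectly correlated variables: I(X;Y) = H(X) = log 2.
joint_copy = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(joint_indep))  # ~0.0
print(mutual_information(joint_copy))   # ~0.693
\end{verbatim}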
For further information on these notions, the interested reader can refer to \citet{cover2012elements}. Sections 5, 6 and 7 illustrate how we can use information theory to reward an agent. In practice, there are multiple ways to integrate an intrinsic reward into an RL framework. The main approach is to compute the agent's reward $r$ as a weighted sum of an intrinsic reward $r_{int}$ and an extrinsic reward $r_{ext}$: $r=\alpha r_{int} + \beta r_{ext}$ \cite{kakade2002dopamine,burda2018exploration}. Of course, one of the weighting coefficients $\alpha$ and $\beta$ can be set to 0.
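As a minimal illustration (ours; the default coefficients are purely illustrative), this weighted combination can be implemented as:
\begin{verbatim}
def mixed_reward(r_int, r_ext, alpha=0.5, beta=1.0):
    """Weighted combination r = alpha * r_int + beta * r_ext.

    beta = 0 yields a purely intrinsic (task-agnostic) agent,
    alpha = 0 recovers standard extrinsic RL.
    """
    return alpha * r_int + beta * r_ext
\end{verbatim}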
\subsection{Decisions and hierarchical RL}\label{sec:hrl}
Hierarchical reinforcement learning (HRL) architectures are adequate candidates to model the decision hierarchy of an agent \cite{barto2003recent,dayan1993feudal,sutton1999between}. \citet{dayan1993feudal} introduced the feudal hierarchy, called \textit{Feudal reinforcement learning}. In this framework, a manager selects the goals that workers will try to achieve by selecting low-level actions. Once the worker has achieved the goal, the manager can select another goal, so that the interaction keeps going. The manager rewards the RL-based worker to guide its learning process; we formalize this with intrinsic motivation in the next section. Below, \figureautorefname~\ref{im:abstract_actions} illustrates the use of a hierarchical decision in contrast with the use of low-level actions. Hierarchical architectures were originally introduced to ease long-term credit assignment \cite{dayan1993feudal,sutton1999between}. This problem refers to the fact that rewards can occur with a temporal delay and only very weakly affect temporally distant preceding states, although these states may be important to obtain the reward. Indeed, the agent must propagate the reward along the entire sequence of actions (through \eqref{eq:bellman}) to reinforce the first involved state-action tuple. This process can be very slow when the action sequence is long. This problem also concerns determining which action is decisive for getting the reward, among all actions of the sequence. In contrast, if an agent can take advantage of temporally-extended actions, a large sequence of low-level actions becomes a short sequence of time-extended decisions, which eases the propagation of rewards.
This goal-setting mechanism can be extended to create managers of managers, so that an agent can recursively define increasingly abstract decisions as the hierarchy of RL algorithms grows. Relative to \figref{im:rlintrinsic}, the internal environment of an RL module becomes the lower-level module. We can model these decisions as \textit{options}. An \textit{option} $op \in \mathcal{O}$ is defined through three components: 1- a set of starting states $\mathcal{I} \subset S$ from which the \textit{option} can be applied; 2- a policy (or worker) responsible for achieving the \textit{option} with lower-level actions (this is studied in the next section); 3- a completion function $\mathcal{F}$ that specifies the probability of completing the \textit{option} in each state.
Typically, the starting states can derive from $d_0$ (all \textit{options} start at the beginning of an episode) or be the full set of states $S$ (\textit{options} can start everywhere). The completion function can also assign probability $0$ everywhere \cite{eysenbach2018diversity}; in this case, the \textit{option} ends at the same time as the episode. Such specific cases often occur \cite{eysenbach2018diversity}. \textit{Options} were originally learnt during a pre-training phase with exclusively extrinsic rewards \cite{sutton1999between}; this was meant to take advantage of expert knowledge about the task. However, in our framework, we are interested in intrinsically motivated agents, so, in the next section, we take a closer look at how to learn the policies that achieve goals using intrinsic motivation. In particular, we will define goals and skills, and explain how to build a contextual state.
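The following sketch (ours; the names and the toy one-dimensional chain are illustrative assumptions) shows one possible data structure for an \textit{option} with its three components, and how an upper level could execute it until completion:
\begin{verbatim}
import random
from dataclasses import dataclass
from typing import Any, Callable

State = Any
Action = Any

@dataclass
class Option:
    """Sketch of an option: initiation set I, worker policy, completion function F."""
    initiation: Callable[[State], bool]     # can the option start in this state?
    policy: Callable[[State], Action]       # worker achieving the option
    completion: Callable[[State], float]    # probability of terminating in a state

def run_option(env_step, option, state, rng, max_steps=100):
    """Execute the option until it terminates; return the visited states."""
    assert option.initiation(state), "option not available in this state"
    trajectory = [state]
    for _ in range(max_steps):
        state = env_step(state, option.policy(state))
        trajectory.append(state)
        if rng.random() < option.completion(state):
            break
    return trajectory

# Toy one-dimensional chain: the option "go right until reaching position 5".
go_right = Option(
    initiation=lambda s: s < 5,
    policy=lambda s: +1,
    completion=lambda s: 1.0 if s >= 5 else 0.0,
)
print(run_option(lambda s, a: s + a, go_right, state=0, rng=random.Random(0)))
# -> [0, 1, 2, 3, 4, 5]
\end{verbatim}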
\subsection{Goal-parameterized RL}\label{sec:goalpam}
Usually, RL agents solve only one task and are not suited to learning multiple tasks. Thus, an agent is unable to generalize across different variants of a task. For instance, if an agent learns to grasp a circular object, it will not be able to grasp a square object. In the developmental model described in \secref{sec:modelRL}, the decisions can be hierarchically organized into several levels where an upper level takes decisions (or sets goals) that a lower level has to satisfy. This raises two questions: 1- how can a DRL algorithm make its policy dependent on the goal set by its upper-level decision module? 2- How can the intrinsic reward be computed using the goal? These issues give rise to a new formalism based on developmental machine learning \cite{colas2020intrinsically}.
In this formalism, a \textbf{goal} is defined by the pair $(g,R_G)$ where $G \subset \mathbb{R}^d$ is the goal space, $R_G$ is a goal-conditioned reward function and $g \in G$ is the $d\text{-dimensional}$ goal embedding. This contrasts with the notion of task, which is proper to an extrinsic reward function assigned by an expert to the agent. With such an embedding, one can generalize DRL to multi-goal learning, or even to every available goal in the state space, with the Universal Value Function Approximator (UVFA) \cite{schaul2015universal}. UVFA integrates, by concatenation, the goal embedding $g$ with the state of the agent to create a contextual state $c = (g,s)$. Depending on the semantic meaning of a skill, we can further enhance the contextual state with other actions or states executed after the skill started (cf. \secref{sec:skilllearning}).
We can now define the \textbf{skill} associated to each goal as the goal-conditioned policy $\pi^g(a|s)=\pi(a|g,s)$; in other words, a skill refers to the sensorimotor mapping that achieves a goal \cite{thill2013theories}. This skill may be learnt or not according to the expected intrinsic rewards it gathers. It implies that, if the goal space is well constructed (often the ground state space, $G=S$), the agent can generalize its policy across the goal space, \textit{i.e.} the skills corresponding to two close goals are similar. For example, let us consider an agent moving in a closed maze where every position in the maze can be a goal. We can set $G=S$ and set the intrinsic reward function to be the negative Euclidean distance between the goal and the current state of the agent, $R_G: S \times G \rightarrow \mathbb{R}, (s,g) \rightarrow -||s-g||_2$.
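To make the maze example concrete, a minimal sketch (ours; the names are illustrative) of the UVFA-style contextual state and of the distance-based goal-conditioned reward could read:
\begin{verbatim}
import numpy as np

def contextual_state(state, goal):
    """UVFA-style contextual state c = (g, s): concatenate goal and state."""
    return np.concatenate([np.asarray(goal, float), np.asarray(state, float)])

def goal_reward(state, goal):
    """Distance-based reward for the maze example (G = S): the closer to the
    goal, the higher the (negative-distance) reward."""
    return -float(np.linalg.norm(np.asarray(state, float) - np.asarray(goal, float)))

s, g = np.array([1.0, 2.0]), np.array([3.0, 2.0])
print(contextual_state(s, g))   # [3. 2. 1. 2.]
print(goal_reward(s, g))        # -2.0
\end{verbatim}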
This formalism completes the instantiation of the architectures described in \secref{sec:modelRL}. Now we will explain how, in practice, one can efficiently learn the goal-conditioned policy.
\subsection{Efficient learning with goal relabelling}\label{sec:relabeling}
When the goal space is a continuous state space, it is difficult to determine whether a goal is reached or not, since two continuous values are never exactly equal. Hindsight experience replay (HER) \cite{andrychowicz2017hindsight} tackles this issue by providing a way to learn on multiple goals with only one interaction. With this method, the agent can use an interaction performed to accomplish one goal to learn about another goal, by modifying the associated intrinsic reward. This mechanism greatly improves sample efficiency since it avoids trying all interactions for every goal.
Let us roll out an example. An agent acts in the environment and gathers a tuple $(s,s',r_g,a,g)$ where $r_g$ is the reward associated to the goal $g$. The agent can learn on this interaction, but it can also use it to learn about other goals; to do so, it can change the goal into a new goal and recompute the reward, resulting in a new interaction $(s,s',r_{g'},a,g')$. The only constraint for doing this is that the reward function $R(s,a,s',g')$ has to be known, which is the case with an intrinsic reward function. Typically, an agent can have a goal state and a reward function that is $1$ if it reaches that state and $0$ otherwise. At every interaction, it can replace its true goal state with its current state and learn with a positive reward.
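A minimal sketch of this relabelling mechanism (ours; the sparse reward function and the tuple layout are illustrative assumptions) could be:
\begin{verbatim}
import numpy as np

def sparse_goal_reward(next_state, goal, eps=1e-3):
    """1 if the goal state is (approximately) reached, 0 otherwise."""
    return float(np.linalg.norm(np.asarray(next_state) - np.asarray(goal)) < eps)

def her_relabel(transition, new_goal, reward_fn=sparse_goal_reward):
    """Hindsight relabelling: keep (s, a, s'), swap the goal and recompute the
    reward, which is possible because the reward function is known to the agent."""
    s, s_next, _, a, _ = transition
    return (s, s_next, reward_fn(s_next, new_goal), a, new_goal)

# Original transition aimed at goal [5, 5]: the goal was not reached (reward 0).
t = ([0.0, 0.0], [1.0, 0.0], 0.0, "right", [5.0, 5.0])
# Relabelled with the reached state as the new goal: positive reward.
print(her_relabel(t, new_goal=[1.0, 0.0]))
\end{verbatim}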
\section{Challenges of DRL}\label{sec:defis}
In this section, we detail two main challenges of current DRL methods that are partially addressed by IMs.
\subsection{Sparse rewards} \label{sec:sparse}
Classic RL algorithms operate in environments where the rewards are \textbf{dense}, \textit{i.e.} the agent receives a reward after almost every completed action. In this kind of environment, naive exploration policies such as $\epsilon$-greedy \cite{sutton1998reinforcement} or the addition of Gaussian noise on the action \cite{lillicrap2015continuous} are effective. More elaborate methods can also be used to promote exploration, such as Boltzmann exploration \cite{cesa2017boltzmann,mnih2015human} or exploration in the parameter space \cite{plappert2017parameter,ruckstiess2010exploring,fortunato2017noisy}. In environments with \textbf{sparse} rewards, the agent receives a reward signal only after it has executed a long sequence of specific actions. The game \textit{Montezuma's revenge} \cite{bellemare15} is a benchmark illustrating a typical sparse reward function. In this game, an agent has to move between different rooms while picking up objects (keys to open doors, torches, ...). The agent receives a reward only when it finds objects or when it reaches the exit of the room. Such environments with sparse rewards are almost impossible to solve with the above-mentioned \textit{undirected} exploration policies \cite{thrun1992efficient} since the agent has no local indication on how to improve its policy. Thus the agent never finds rewards and cannot learn a good policy with respect to the task \cite{mnih2015human}. Figure \ref{im:sparse_reward2} illustrates the issue in a simple environment.
This issue highlights the need for \textit{directed} exploration methods \cite{thrun1992efficient}. While intrinsic motivation can provide such a direction, the principle of ``optimism in the face of uncertainty'' \cite{audibert2007tuning} can also drive a directed exploration without intrinsic motivation \cite{thrun1992efficient}. Briefly, this principle incites agents to go to areas with a lot of epistemic uncertainty about their Q-values \cite{ciosek2019better,pacchiano2020optimism}. Yet, it is hard to approximate the epistemic uncertainty and it only slightly improves exploration \cite{ciosek2019better}. This principle can also relate to some intrinsic motivations when we consider uncertainty about models (see \secref{sec:infogainforward}).
\begin{figure}
\begin{centering}
\includegraphics[width=10cm]{images/sparse_rewards.drawio.pdf}
\caption{\rebut{Example of a very simple sparse reward environment, explored by two different strategies}. The agent, represented by a circle, strives to reach the star. The reward function is one when the agent reaches the star and zero otherwise. (a) The agent explores with standard methods such as $\epsilon\text{-greedy}$; as a result, it stays in its surrounding area because of the temporal inconsistency of its behaviour. (b) We imagine an ideal exploration strategy where the agent covers the whole state space to discover where rewards are located. \rebut{The fundamental difference between the two policies is the volume of the state space explored for a given time.}}
\label{im:sparse_reward2}
\end{centering}
\end{figure}
Rather than working on an exploration policy, it is common to shape an intermediary dense reward function, added to the reward associated to the task, in order to make the learning process easier for the agent \cite{su2015reward}. However, building a reward function often introduces unexpected errors \cite{ng1999policy,amodei2016concrete} and most of the time requires expert knowledge. For example, it may be difficult to shape a local reward for navigation tasks. Indeed, one has to be able to compute the shortest path between the agent and its goal, which is the same as solving the navigation problem. On the other hand, automating the shaping of the local reward (without calling on an expert) requires prohibitive computational resources \cite{chiang2019learning}. We will see in \secref{sec:infogain}, \ref{sec:novelty} and \ref{sec:skilllearning} how IM is a valuable method to encourage exploration in a sparse reward setting.
\subsection{Temporal abstraction of actions} \label{sec:abstraction}
As argued in \secref{sec:hrl}, skills, through hierarchical RL, are a key element to speed up the learning process since the number of decisions to take is significantly reduced when skills are used. In particular, they ease \textit{credit assignment}. Skills can be manually defined, but this requires extra expert knowledge \cite{sutton1999between}. To avoid providing hand-made skills, several works propose to learn them with extrinsic rewards \cite{bacon2017option,subpolicy2020li}. However, if an agent instead learns skills in a \textit{bottom-up} way, \textit{i.e.} with intrinsic rather than extrinsic rewards, learnt skills become independent from possible tasks. This way, skills can be reused across several tasks to improve transfer learning \cite{aubret2020elsim,heess2016learning} and an agent can learn skills even though it does not access rewards, improving exploration when rewards are sparse \cite{machado2017laplacian}. Let us illustrate both advantages.
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\linewidth]{images/abstraction_action.drawio.pdf}
\caption{\rebut{Example of two policies in a simple environment, one uses \textit{skills} (yellow), the other one only uses primitive actions (blue)}. Agents have to reach the star.}
\label{im:abstract_actions}
\end{centering}
\end{figure}
\paragraph{Exploration when rewards are sparse.} \figref{im:abstract_actions} illustrates the benefit in terms of exploration when an agent hierarchically uses skills.
The yellow agent can use a skill \textit{Go to the far right}, to reach the rewarding star while the blue agent can only use low-level cardinal movements.
The problem of exploration becomes trivial for the agent using skills, since one exploratory action can lead to the reward. In contrast, it requires an entire sequence of specific low-level actions for the other agent to find the reward. This problem arises from the minimal number of specific actions needed to get a reward (see also \secref{sec:sparse}). A thorough analysis of this aspect can be found in \cite{nachum2019does}.
\paragraph{Reusing skills across several tasks.} Skills learnt with intrinsic rewards are not specific to a task. Assuming an agent is required to solve several tasks in a similar environment, \textit{i.e.} a single MDP with a changing extrinsic reward function, it can execute its discovered skills to solve all tasks. Typically, in \figref{im:abstract_actions}, if both agents learnt to reach the star and we move the star somewhere else in the environment, the yellow agent would still be able to execute \textit{Go to the far right}, and executing this skill may bring it closer to the new star. In contrast, the blue agent would have to learn a whole new policy. In \secref{sec:skilllearning}, we provide insights on how an agent can discover skills in a \textit{bottom-up} way.
\section{Classification of methods}\label{sec:classify}
\rebut{In order to tackle the problem of exploration, an agent may want to identify and return to \textbf{rarely visited} states or \textbf{unexpected} states, which can be quantified with current intrinsic motivations. We will particularly focus on two objectives that address the challenge of exploring with sparse rewards, each with different properties: maximizing novelty and maximizing surprise.
Surprise and novelty are specific notions that have often been used interchangeably, and we are not aware of a currently unanimous definition of novelty \cite{barto2013novelty}. The third notion we study, skill learning, focuses on the issue of skill abstraction. In practice, surprise and novelty are currently maximized as a flat intrinsic motivation, \textit{i.e.} without using hierarchical decisions. This mostly helps to improve exploration when rewards are sparse. In contrast, skill learning allows to define time-extended hierarchical skills that enjoy all the benefits argued in \secref{sec:abstraction}.}
Table \ref{tab:taxonomy} sums up our taxonomy, based on information theory, which reflects the high-level studied concepts of novelty, surprise and skill learning. In practice, we mostly take advantage of the \textit{mutual information} to provide a quantity for our conceptual objectives. These objectives are compatible with each other and may be used simultaneously, as argued in \secref{sec:flatim}. Within each category of objectives, we additionally highlight several ways to maximize each objective and provide details about the underlying methods of the literature. We sum up the methods in Tables \ref{tab:surprise}, \ref{tab:novelty} and \ref{tab:skills} and compare their respective advantages when possible.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\multirow{2}{*}{\textbf{Surprise}: $I(S';\Phi_T|h,S,A)$, \secref{sec:infogain}} }\\
\multicolumn{4}{|c|}{}
\\
\hline
Formalism & Information gain & Information gain & Information gain \\
& over forward model & over the true model & over density model \\
\hline
Sections & \secref{sec:infogainforward} & \secref{sec:predictionerror} & \secref{sec:infogaindensity} \\
\hline
Rewards & $D_{KL}(p(\Phi|h,s,a,s')||p(\Phi|h))$ & $||s' - \hat{s}'||_2^2$ & $\frac{1}{\sqrt{\hat{N}(s')}}$ \\
\hline
Advantage & \rebut{Simplicity} & \rebut{\textbf{Stochasticity robustness}} & \rebut{Good exploration} \\
\hline
\multicolumn{4}{|c|}{\multirow{2}{*}{\textbf{Novelty}: $I(S;Z)$, \secref{sec:novelty}}}
\\
\multicolumn{4}{|c|}{}
\\
\hline
Formalism & Parametric density & \multicolumn{2}{c|}{K-nearest neighbors} \\
\hline
Sections & \secref{sec:directdensity} & \multicolumn{2}{c|}{\secref{sec:knearest}} \\
\hline
Rewards & $- \log \rho(s')$ & \multicolumn{2}{c|}{ $\log (1+ \frac{1}{K} \sum_{k=1}^K || f(s') - nn_k(f(S_b),f(s')) ||_2)$ } \\
\hline
\rebut{Advantage} & \rebut{Good exploration} & \multicolumn{2}{c|}{\rebut{\textbf{Best exploration}}}\\
\hline
\multicolumn{4}{|c|}{\multirow{2}{*}{\textbf{Skill learning}: $I(G; u(\mathcal{T}))$, \secref{sec:skilllearning}}} \\
\multicolumn{4}{|c|}{}
\\
\hline
Formalism & Fixed goal distribution & Goal-state & Proposing diverse goals \\
& & achievement & \\
\hline
Sections & \secref{sec:predefinedG} & \secref{sec:goalstate} & \secref{eq:diversestate} \\
\hline
Rewards & $\log p(g|s')$ & $-||s_g-s'||_2^2$ & $(1+\alpha_{skew})\log p(s_g)$ \\
\hline
Advantage & \rebut{Simple goal sampling} & \rebut{\textbf{High-granularity skills}} & \rebut{\textbf{More diverse skills}} \\
\hline
\end{tabular}
\caption{Summary of our taxonomy of intrinsic motivations in DRL. The function $u$ outputs a part of the trajectories $\mathcal{T}$; $Z$ and $G$ are internal random variables denoting state representations and self-assigned goals, respectively. Please refer to the corresponding sections for more details about methods and notations. The reward function shown is representative of those used in each category.}
\label{tab:taxonomy}
\end{table}
\section{Surprise}\label{sec:infogain}
In this section, we study methods that maximize surprise. First, we formalize the notion of surprise; then we study three approaches for computing intrinsic rewards based on this notion.
\subsection{Definition of surprise}\label{sec:expecsurprise}
In this section, we assume the agent learns either a density model (\secref{sec:infogaindensity}) or a forward model of the environment (Sections \ref{sec:infogainforward} and \ref{sec:predictionerror}), parameterized by $\phi \in \Phi$. The density model induces a marginal distribution over states $p(S|\phi)$ and a forward model computes the next-state distribution conditioned on a state-action tuple, $p(S'|S,A,\phi)$. Typically, $\phi$ can be the parameters of a neural network. Trying to approximate the true model, the agent maintains an approximate distribution over models $p(\Phi|h)$, where $h_t=h$ refers to the ordered history of interactions $((s_0,a_0,s_1),(s_1,a_1,s_2),\dots, (s_{t-1},a_{t-1},s_t))$. In this section, $h$ plays the role of a dataset of interactions; we use it to clarify the role of the dataset. It is important to notice that the policy determines the content of $h$.
In this context, \textbf{surprise quantifies the mismatch between an expectation and the true experience of an agent} \cite{barto2013novelty,ekman1994nature}. In this paper, we refer to the definition of \citet{itti2009bayesian}, who define it as the discrepancy between a prior distribution of beliefs and the posterior probability distribution following an observation \cite{itti2009bayesian,storck1995reinforcement}. If an agent maximizes the surprise over a model through interactions with the environment, which is often the case \cite{barto2013novelty}, this leads to the expected information gain objective \cite{sun2011planning}. Intuitively, the agent returns to states where it experienced an unexpected transition. Using the KL-divergence to assess the discrepancy, surprise can be computed as $D_{KL}(p(\Phi|h_{t+1})||p(\Phi|h_t))$ where $\phi \in \Phi$ are parameters of a model and $t$ denotes the timestep.
In this case, the agent has a prior distribution over model parameters $p(\Phi)$, and this distribution can be updated using Bayes' rule:
\begin{equation}
p(\phi|h,s,a,s') = \frac{p(\phi|h)\; p(s'|h,s,a,\phi)}{p(s'|h,s,a)}.
\end{equation}
\paragraph{Information gain over agent's model.} The expected information gain \cite{sun2011planning,little2013learning} over a forward or density model parameterized by $\phi$ can be formulated as:
\begin{subequations}
\begin{align}
IG(h,A,S',S,\Phi) &= I(S';\Phi|h,A,S) = \mathbb{E}_{\substack{ (s,a) \sim p(\cdot|h) \\ s' \sim p(\cdot | s,a,h)}} D_{KL}(p(\Phi|h,s,a,s')||p(\Phi|h)) \label{eq:trueexpectedinfogain} \\
%
&\approx \mathbb{E}_{\substack{ (s,a) \sim \pi \\ s' \sim p(\cdot | s,a,h,\phi_T)}} D_{KL}(p(\Phi|h,s,a,s')||p(\Phi|h)) \label{eq:expectedinfogain}
\end{align}
\end{subequations}
Actively maximizing the expected information gain amounts to reducing the uncertainty of the model. We emphasize that $p(\phi|h) = p(\phi|h,a,s)$ since only full transitions provide information about the true dynamics of the environment. In this case, $p(s'| s,a,h)$ does not refer to the probability induced by the environment, but rather to the probability induced by the current history of transitions. This is made explicit by writing:
\begin{equation}
p(s'|s,a,h) = \sum_{\phi \in \Phi} p(s'|s,a,h,\phi)p(\phi|s,a,h).\label{eq:marginalphi}
\end{equation}
We highlight that the difference between \eqref{eq:trueexpectedinfogain} and \eqref{eq:expectedinfogain} is important and a source of confusion in the literature \cite{houthooft2016vime,little2013learning,sun2011planning}: in the first equation, the agent imagines new outcomes in order to select actions that maximize the change in its internal model, while in \eqref{eq:expectedinfogain}, the agent acts and uses the new states to update its model.
\paragraph{Information gain over the true forward model.} In our formalism, we assume that there is a distribution of true models $p(\Phi_T)$ that underpins the transition function of the environment $T$. In contrast with $\Phi$, this is a property of the environment. One can see this distribution as a Dirac distribution if only one true model exists, or as a categorical distribution over several forward models. We define the expected information gain over the true models as:
\begin{subequations}
\begin{align}
IG(h,A,S',S,\Phi_T) &= I(S';\Phi_T|h,A,S) = H(\Phi_T|h,A,S) - H(\Phi_T|h,A,S,S') \\
&= \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot|s,a,\phi_T)}} \log p(s'|s,a,h,\phi_T) - \log p(s'|s,a,h) \label{eq:predicterror3}.
\end{align}
\end{subequations}
Maximizing \eqref{eq:predicterror3} amounts to looking for states that provide new information about the distribution of true models. We can see that the left-hand term of \eqref{eq:predicterror3} incites the agent to target inherently deterministic areas, \textit{i.e.}, given the true forward model, the agent would know exactly where it ends up. Conversely, the right-hand term pushes the agent to go to areas that are stochastic according to its current knowledge. Overall, to improve this objective, an agent has to reach areas that are more deterministic than it thinks they are. One can see that, assuming $p(s'|s,a,h,\phi_T) \approx p(s' | s, a, \phi, h)$, one falls back on the expected information gain (see also \eqref{eq:predicterror2}). In contrast with \eqref{eq:expectedinfogain}, this objective takes advantage of the true model, which is most of the time unknown, thereby making the objective hardly tractable. As such, in this perspective, surprise results from an agent-centric approximation of the discrepancy between the agent's model and the environment model.
In the following, we will study three objectives: the expected information gain over the true forward models, the expected information gain over the forward model and the expected information gain over density models.
\subsection{Information gain over the true forward model}\label{sec:predictionerror}
To avoid the need for the true forward model, the agent can omit the left-hand term of \eqref{eq:predicterror3} by assuming the true forward model is deterministic. In this case, we can write:
\begin{subequations}
\begin{align}
I(S';\Phi_T|h,A,S) &\propto \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h), \phi_T \sim p(\cdot) \\ s' \sim p(\cdot|s,a,\phi_T)}} - \log p(s'|s,a,h) \label{eq:predicterror4} \\
&= \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot|s,a,\phi_T)}} - \log \sum_{\phi \in \Phi} p(s'|h,s,a,\phi)p(\phi|h) \\
%
&\geq \mathbb{E}_{\substack{\phi_T \sim p(\cdot),\, (s,a) \sim p(\cdot|h) \\ s' \sim p(\cdot|s,a,\phi_T), \phi \sim p(\cdot|h)}} - \log p(s'|h,s,a,\phi) \label{eq:predicterror5}
\end{align}
\end{subequations}
where we applied the Jensen inequality in \eqref{eq:predicterror5} and $\phi_T \sim p(\cdot)$ is fixed. One can model $p(s'|h,s,a,\phi)$ with a unit-variance Gaussian distribution in order to obtain a tractable loss. This way, we have:
\begin{subequations}
\begin{align}
\mathbb{E}_{\substack{(s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot|s,a,\phi_T),\, \phi \sim p(\cdot|h)}} - \log p(s' | \phi,h,a,s) &\approx \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h) ,\, s' \sim p(\cdot|s,a,\phi_T) \\ \phi \sim p(\cdot|h),\, \phi_T \sim p(\cdot) }} - \log \frac{1}{(2\pi)^{d/2}}e^{-0.5 (s' - \hat{s}')^T (s' - \hat{s}')} \label{eq:gaussianinfogain} \\
&\propto \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h) ,\, s' \sim p(\cdot|s,a,\phi_T) \\ \phi \sim p(\cdot|h),\, \phi_T \sim p(\cdot) }} ||s' - \hat{s}'||_2^2 + Const
\end{align}
\end{subequations}
%
where
\begin{equation}
\hat{s}' = \argmax{s'' \in S} p(s''|h,a,s,\phi)
\end{equation}
represents the mean prediction and $\phi$ parameterizes a deterministic forward model.
Following the objective, we can extract a generic intrinsic reward as:
\begin{align}
R(s,a,s')= ||f(s')- f(\hat{s}')||_2^2
\label{eq:rewpredicterror}
\end{align}
where $f$ is a generic function (e.g. the identity or a learnt one) encoding the state space into a feature space. \eqref{eq:rewpredicterror} amounts to rewarding the prediction error of $\phi$ in the representation $f$. In the following, we will see that learning a relevant function $f$ is the main challenge.
The first natural question is whether a function $f$ is required at all. \citet{burda2019largescale} learn the forward model from the ground state space and observe that this is inefficient when the state space is large. In fact, the Euclidean distance is meaningless in such a high-dimensional state space. In contrast, they point out that random features extracted from a random neural network can be very competitive with other state-of-the-art methods. However, random features generalize poorly to environment changes. Another model, \textit{Dynamic Auto-Encoder (Dynamic-AE)} \cite{stadie2015incentivizing}, computes the distance between the predicted and the real state in a state space compressed with an auto-encoder \cite{hinton2006reducing}; $f$ is then the encoding part of the auto-encoder. However, this approach only slightly improves the results over Boltzmann exploration on some standard Atari games. Other works also consider a dynamic-aware representation \cite{ermolov2020latent}. These methods are unable to handle the local stochasticity of the environment \cite{burda2019largescale}. For example, it turns out that adding random noise in a 3D environment attracts the agent; it passively watches the noise since it is unable to predict the next observation. \label{tele} This problem is also called the \textit{white-noise} problem \cite{pathak2017curiosity,schmidhuber2010formal}. It emerges from considering only the right-hand term of \eqref{eq:predicterror3}, which makes the agent assume that environments are deterministic. Therefore, exploration with prediction error breaks down when this assumption no longer holds.
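As an illustration of the prediction-error reward of \eqref{eq:rewpredicterror}, the following sketch (ours; the dimensions, the fixed random encoder and the linear forward model are illustrative assumptions in the spirit of the random-feature baseline discussed above) computes the intrinsic reward; in practice the forward model would be trained by regression on the encoded next states:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: raw states in R^16, random features in R^8, 4 discrete actions.
STATE_DIM, FEAT_DIM, N_ACTIONS = 16, 8, 4
W_feat = rng.normal(size=(FEAT_DIM, STATE_DIM)) / np.sqrt(STATE_DIM)  # fixed random encoder f
W_fwd = rng.normal(size=(FEAT_DIM, FEAT_DIM + N_ACTIONS)) * 0.01      # forward model (to be trained)

def f(state):
    """Fixed random-feature encoding of the state."""
    return np.tanh(W_feat @ state)

def forward(feat, action_onehot):
    """Linear forward model predicting the next feature vector."""
    return W_fwd @ np.concatenate([feat, action_onehot])

def intrinsic_reward(state, action_onehot, next_state):
    """Prediction-error reward: squared error between f(s') and its prediction."""
    pred = forward(f(state), action_onehot)
    return float(np.sum((f(next_state) - pred) ** 2))

s, s_next = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
a = np.eye(N_ACTIONS)[2]                     # one-hot encoding of action 2
print(intrinsic_reward(s, a, s_next))
\end{verbatim}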
To tackle exploration with local stochasticity, the \textit{intrinsic curiosity module (ICM)} \cite{pathak2017curiosity} learns a state representation function $f$ end-to-end with an \textit{inverse model} (i.e. a model which predicts the action performed between two states). Thus, the function $f$ is constrained to represent what can be controlled by the agent over the next transitions. Secondly, the forward model used in ICM predicts, in the feature space computed by $f$, the next state given the action and the current state. The prediction error does not incorporate the white noise that does not depend on actions, since such noise is not represented in the feature space. ICM notably allows the agent to explore its environment in the games \textit{VizDoom} and \textit{Super Mario Bros}. Also building such a representation space, \textit{Exploration with Mutual Information (EMI)} \cite{pmlr-v97-kim19a} significantly outperforms previous works on Atari, but at the cost of several complex layers. EMI transfers the complexity of learning a forward model into the learning of state and action representations through the maximization of $I([S,A];S')$ and $I([S,S'];A)$. Then, the forward model $\phi$ is constrained to be a simple linear model in the representation space. Furthermore, EMI introduces a \textit{model error} which offloads the linear model when a transition remains strongly non-linear (such as a screen change). However, one major drawback of ICM and EMI is the inability of their agents to keep in their representation what depends on their long-term control. For instance, in a partially observable environment, an agent may perceive the consequences of its actions several steps later. In addition, they remain sensitive to stochasticity when it is produced by an action \cite{burda2019largescale}.
Another way to tackle local stochasticity is to maximize the improvement of the prediction error, or learning progress, of a transition model \cite{schmidhuber1991curious,azar2019world,lopes2012exploration,oudeyer2007intrinsic,kim2020active}. One can see this as approximating the left-hand term of \eqref{eq:predicterror3} by $\log p(s'|s,a,h')$:
\begin{align}
\log p(s'|s,a,h,\phi_T) - \log p(s'|s,a,h) &\approx \log p(s'|s,a,h') - \log p(s'|s,a,h)
\end{align}
where $h'$ concatenates $h$ with an arbitrary number of additional interactions. As $h'$ becomes large enough and the agent updates its forward model, the forward model converges to the true transition model. Formally, if a single stochastic forward model can describe the transitions, we can write:
\begin{subequations}
\begin{align}
\lim_{|h'|\rightarrow \infty} p(s'|s,a,h') &= \lim_{|h'|\rightarrow \infty} \sum_{\Phi} p(s'|s,a,h',\phi) p(\phi|h') \nonumber \\
&= p(s'|s,a,h',\phi_T) \label{eq:approxlearningprogress}
\end{align}
\end{subequations}
In practice, we cannot wait to discover a long sequence of new interactions, and the reward may depend on a small set of interactions and on the efficiency of the gradient update of the forward model. Yet, the theoretical connection with the true expected information gain may explain the robustness of learning progress to stochasticity \cite{linke2020adapting}.
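A minimal sketch of a learning-progress reward (ours; the linear forward model, the learning rate and the evaluation batch are illustrative assumptions, and real implementations must carefully choose the evaluation data and the update schedule) could be:
\begin{verbatim}
import numpy as np

class LinearForwardModel:
    """Tiny linear forward model s' ~ W [s; a], trained by per-sample SGD."""
    def __init__(self, state_dim, action_dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.01, size=(state_dim, state_dim + action_dim))
        self.lr = lr

    def predict(self, s, a):
        return self.W @ np.concatenate([s, a])

    def loss(self, batch):
        """Mean squared prediction error over a batch of (s, a, s') transitions."""
        return float(np.mean([np.sum((sn - self.predict(s, a)) ** 2)
                              for s, a, sn in batch]))

    def update(self, batch):
        for s, a, sn in batch:
            x = np.concatenate([s, a])
            err = self.predict(s, a) - sn
            self.W -= self.lr * np.outer(err, x)   # gradient step on the squared error

def learning_progress_reward(model, eval_batch, new_interactions):
    """Learning progress: decrease of the evaluation loss after an update."""
    before = model.loss(eval_batch)
    model.update(new_interactions)
    after = model.loss(eval_batch)
    return before - after                          # positive when the model improved

# Toy environment whose dynamics are an (unknown) linear map.
rng = np.random.default_rng(1)
true_W = rng.normal(size=(3, 5))
def sample_batch(n):
    return [(s, a, true_W @ np.concatenate([s, a]))
            for s, a in ((rng.normal(size=3), rng.normal(size=2)) for _ in range(n))]

model = LinearForwardModel(state_dim=3, action_dim=2)
print(learning_progress_reward(model, sample_batch(32), sample_batch(32)))  # > 0
\end{verbatim}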
\paragraph{Conclusion.} While these methods perform well in deterministic environments, they struggle to offset the determinism assumption that underpins the focus on \eqref{eq:predicterror4}; as a result, standard methods focus on the most stochastic areas. Methods that tackle stochasticity may not predict important long-term information about the environment, or they need to compute a learning progress measure, which is non-trivial.
\subsection{Information gain over forward model}\label{sec:infogainforward}
In this subsection, we study the works that maximize the expected information gain over forward models. Here, $\phi$ are parameters of a learnt forward model. Using \eqref{eq:expectedinfogain}, we can extract an intrinsic reward:
\begin{equation}
R(s,a,s') = D_{KL}(p(\Phi|h,s,a,s')||p(\Phi|h)).\label{eq:rewinfogain}
\end{equation}
This way, an agent executes actions that provide information about the dynamics of the environment. This allows, on the one hand, to push the agent towards areas it does not know and, on the other hand, to prevent attraction towards stochastic areas. Indeed, if the area is deterministic, environment transitions are predictable and the uncertainty about its dynamics can decrease. Conversely, if transitions are stochastic, the agent turns out to be unable to predict transitions and does not reduce uncertainty. The exploration strategy \textit{VIME} \cite{houthooft2016vime} computes this intrinsic reward by modelling $p(\phi|h)$ with Bayesian neural networks \cite{graves2011practical}. The interest of Bayesian approaches is the ability to measure the uncertainty of the learned model \cite{blundell2015weight}. This way, assuming a fully factorized Gaussian distribution over model parameters, the KL-divergence has a simple analytic form \cite{houthooft2016vime,linke2020adapting}, making it easy to compute.
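For reference, the closed form of this KL-divergence between two fully factorized Gaussians, which can serve as the intrinsic reward of \eqref{eq:rewinfogain} when the posterior over parameters is modelled this way, is sketched below (ours; the numerical values are purely illustrative):
\begin{verbatim}
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """Closed-form KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) )."""
    mu_q, var_q, mu_p, var_p = map(np.asarray, (mu_q, var_q, mu_p, var_p))
    return 0.5 * float(np.sum(np.log(var_p / var_q)
                              + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0))

# Posterior over model parameters after observing (s, a, s') vs. the prior.
print(kl_diag_gaussians(mu_q=[0.1, -0.2], var_q=[0.9, 1.1],
                        mu_p=[0.0,  0.0], var_p=[1.0, 1.0]))
\end{verbatim}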
However, the interest of the proposed algorithm is demonstrated only on simple environments and the reward can be computationally expensive to compute. \citet{achiam2017surprise} propose a similar method (\textit{AKL}), with comparable results, using deterministic neural networks, which are simpler and quicker to apply. The weak performance of both models is probably due to the difficulty of retrieving the uncertainty reduction by rigorously following the mathematical formalism of the information gain.
The expected information gain can also be written:
\begin{subequations}
\begin{align}
I(S';\Phi|h,A,S) &= H(S'|h,A,S) - H(S'|A,\Phi,S,h) \nonumber \\
&\approx - \mathbb{E}_{\substack{(s,a) \sim p(\cdot|h),\, \phi_T \sim p(\cdot) \\ s' \sim p(\cdot | s,a,h,\phi_T)}} \log p(s'|h,s,a) + \mathbb{E}_{\substack{\phi \sim p(\cdot|h,s,a,s') \\ (s,a) \sim p(\cdot|h), s' \sim p(\cdot | s,a,h,\phi_T)}} \log p(s' | s, a, \phi, h) \label{eq:predicterror} \\
&= \mathbb{E}_{\substack{\phi \sim p(\cdot|h,s,a,s'),\, \phi_T \sim p(\cdot) \\ (s,a) \sim p(\cdot|h), s' \sim p(\cdot | s,a,h,\phi_T)}} - \log \sum_{\phi \in \Phi} p(s'|\phi,h,s,a)p(\phi|h) + \log p(s' | s, a, \phi, h) \label{eq:predicterror2}
\end{align}
\end{subequations}
Using equations similar to \eqref{eq:predicterror2}, the authors of \textit{JDRX} \cite{shyam2018model} show that one can maximize the information gain by computing the Jensen-Shannon or Jensen-Rényi divergence between the distributions of next states induced by several forward models. The more the models are trained on a state-action tuple, the more they converge to the expected distribution of next states. Intuitively, the reward represents how much the different transition models disagree on the next-state distribution. Other works also maximize a similar form of disagreement \cite{pathak2019self,yao2021sample,sekar2020planning} by looking at the variance of the predictions of several learnt transition models. \rebut{While these models handle the white-noise problem, their main issue is computational, since they require training multiple forward models.}
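A minimal sketch of such a disagreement-based reward (ours; the ensemble of random linear models is an illustrative stand-in for learnt forward models) could be:
\begin{verbatim}
import numpy as np

def disagreement_reward(models, state, action):
    """Variance of the ensemble's next-state predictions, summed over dimensions."""
    preds = np.stack([m(state, action) for m in models])   # (n_models, state_dim)
    return float(np.sum(np.var(preds, axis=0)))

def make_linear_model(W):
    """Hypothetical learnt forward model: here just a random linear map."""
    return lambda s, a: W @ np.concatenate([s, a])

rng = np.random.default_rng(0)
models = [make_linear_model(rng.normal(size=(3, 5))) for _ in range(5)]
print(disagreement_reward(models, state=np.ones(3), action=np.ones(2)))
\end{verbatim}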
\paragraph{Conclusion.} Despite the theoretical power of the information gain for improving exploration, it remains hard to efficiently estimate it and use it in difficult tasks.
\subsection{Information gain over density model}\label{sec:infogaindensity}
Surprise can also arise by quantifying \textit{the discrepancy between its probability of occurring and the fact that it actually occurred} \cite{barto2013novelty}. To quantify this probability of occurring, in this section we assume the agent tries to learn a density model $\rho$ that approximates the current marginal density of states $p(s')$. In this setting, we can define the expected information gain over a density model $\rho$ \cite{bellemare2016unifying}:
\begin{align}
IG(h,S,A,S',\mathrm{P})&\approx \mathbb{E}_{\substack{ (s,a) \sim p(\cdot|h),\, \rho_T \sim p(\cdot) \\ s' \sim p(\cdot | s,a,h,\mathrm{P}_T)}} D_{KL}(p(\rho|h,s')||p(\rho|h)).
\end{align}
We hypothesize that the adversarial training that results from this objective (active maximization of the KL-divergence and density fitting) results in an approximately uniform distribution of states (and a uniform density estimation). This may be due to the convexity of the KL-divergence in $p(\rho|h,s')$ and $p(\rho|h)$, but we leave the proof to future work. To our knowledge, no work directly optimizes this objective, but it has been shown that the information gain lower-bounds the squared inverse pseudo-count objective \cite{bellemare2016unifying}, which derives from count-based objectives; in the following, we review \textit{count} and \textit{pseudo-count} objectives.
To efficiently explore its environment, an agent can count the number of times it visits a state and return to rarely visited states. Such methods are said to be \textit{count-based} \cite{strehl2008analysis}. As the agent visits a state, the intrinsic reward associated with this state decreases. It can be formalized as:
\begin{equation}
R(s,a,s') = \frac{1}{\sqrt{N(s')}}
\end{equation}
where $N(s)$ is the number of times that the state $s$ has been visited. Although this method is efficient and tractable in a tabular environment (with a discrete state space), it hardly scales when states are numerous or continuous, since an agent rarely returns to exactly the same state. A first solution, proposed by \citet{tang2017exploration} and called \textit{TRPO-AE-hash}, is to hash the latent space of an auto-encoder fed with states. However, these results are only slightly better than those obtained with a classic exploration policy. Another line of works proposes to adapt counting to high-dimensional state spaces via \textit{pseudo-counts} \cite{bellemare2016unifying}. Essentially, \textit{pseudo-counts} allow the generalization of the count from a state towards neighbouring states using a learnt density model $\rho$. This is defined as:
\begin{equation}
\hat{N}(s') = \frac{p(s'|\rho)\big(1-p(s'|\rho')\big)}{p(s'|\rho')-p(s'|\rho)}
\end{equation}
where $\rho'$ computes the density of $s'$ after having learnt on $s'$. In fact, \citet{bellemare2016unifying} show that, under some assumptions, \textit{pseudo-counts} increase linearly with the true counts. In this category, \textit{DDQN-PC} \cite{bellemare2016unifying} and
\textit{DQN-PixelCNN} \cite{ostrovski2017count} compute $\rho$ using respectively a Context-Tree Switching model (CTS) \cite{bellemare2014skip} and a PixelCNN density model \cite{van2016conditional}. Although the algorithms based on density models work on environments with sparse rewards, they add an important complexity layer \cite{ostrovski2017count}. One can preserve the quality of the observed exploration while decreasing the computational complexity of the pseudo-count by computing it in a learnt latent space \cite{martin2017count}.
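A minimal sketch of a count-based reward made tractable by hashing the state (ours; the random-projection hash and its parameters are illustrative assumptions, in the spirit of the hashing approach of \citet{tang2017exploration}) could be:
\begin{verbatim}
import numpy as np
from collections import defaultdict

class HashCount:
    """Count-based intrinsic reward with a locality-sensitive hash of the state."""
    def __init__(self, state_dim, n_bits=16, seed=0):
        self.A = np.random.default_rng(seed).normal(size=(n_bits, state_dim))
        self.counts = defaultdict(int)

    def _hash(self, state):
        """Sign pattern of random projections: nearby states share a hash code."""
        return tuple((self.A @ np.asarray(state, float) > 0).astype(int))

    def reward(self, next_state):
        """r = 1 / sqrt(N(s')): decreases as the (hashed) state is revisited."""
        key = self._hash(next_state)
        self.counts[key] += 1
        return 1.0 / np.sqrt(self.counts[key])

counter = HashCount(state_dim=4)
s = np.array([0.2, -1.0, 0.5, 0.0])
# 1.0, then ~0.707 on a revisit, then (most likely) 1.0 again for a new hashed state.
print(counter.reward(s), counter.reward(s), counter.reward(s + 1.0))
\end{verbatim}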
There exist several other well-performing and tractable exploration methods such as \textit{RND} \cite{burda2018exploration}, \textit{DQN+SR} \cite{machado2018count}, \textit{RIDE} \cite{ride2020roberta} or \textit{BeBold} \cite{zhang2020bebold}. These papers argue that the rewards they propose more or less relate to a visitation-count estimation.
\paragraph{Conclusion.} Maximizing the information gain over a density model may maximize the pseudo-count, which relates to count-based objectives. These methods provide interesting feedback for exploration, but in practice, pseudo-counts are hard to approximate since they rely on a powerful density model, a strict online estimation of the density, and the assumption that $p(s|\rho)$ strictly increases for all $s \in S$ \cite{ostrovski2017count}. In addition, they also struggle with the problem of randomness. For instance, let us assume that one (state, action) tuple can lead to two very different states with 50\% chance each. The algorithm will manage to count the number of visits of both states, although it will take twice as long for the agent to stop being overly attracted. Moreover, these methods do not address the white-noise problem, since next states may be randomly generated at every step. In this case, it is unclear how they could resist the temptation of going into such an area, since the count associated with each of its states will never increase.
\subsection{Conclusion}
\input{bigtablesurprise}
We detailed three ways to define and maximize the surprise of an agent, all derived from the expected information gain over a model of the environment.
\rebut{\tabref{tab:surprise} sums up all the surprise-based methods reviewed in this section, where it is also specified whether each method handles stochastic environments (Stoch) (cf. \secref{sec:predictionerror}) and whether expensive models are used (Computational Cost). The relative experimental advantage of each method is also reported in the \textit{Montezuma's revenge} environment (cf. Figure \ref{fig:environments}a), a sparse-reward benchmark widely used to assess the ability of a method to explore. This gives a clue on how each method compares to the others. Methods categorized as information gain over forward model elegantly handle stochasticity from the environment, but usually apply to environments much simpler than Montezuma's revenge. We can make a similar observation for methods based on learning progress. Methods based on prediction error achieve an overall low score on Montezuma's Revenge, with stochasticity handling depending on the learnt latent space. PE with LWM \cite{ermolov2020latent} achieves good performance, presumably because the learnt representation is more appropriate. One should be cautious about the low results of the Dynamic-AE \cite{stadie2015incentivizing}, because of the very low number of timesteps. Methods based on information gain over a density model are sensitive to stochasticity, since nothing prevents them from returning to noisy states, but they achieve overall good results on Montezuma's Revenge thanks to the pseudo-count estimation. Among the best methods: the outstanding result of BeBold \cite{zhang2020bebold} has to be taken with caution, because it is not averaged over several seeds; RND \cite{burda2018exploration} is a simple method that achieves high asymptotic performance.}
In practice, the expected information gain over a forward model and the learning progress approximate well the expected information gain over the true model. Therefore, it appears that they intuitively and experimentally allow to explore well inherently stochastic environments, but they are hard to implement. The expected information gain over a density model can be seen as approximating the expected information gain over the true uniform density model. \rebut{This makes the agent target a uniform distribution of states: while it makes the agent sensitive to stochasticity, it leads to robust exploration in deterministic environments.} In fact, we discuss in the next section the relevance of aiming for a uniform distribution of states, through the study of novelty-based intrinsic motivations.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{images/all_envs.png}
\caption{\rebut{Different environments widely used in our reviewed papers. (a) \textit{Montezuma's revenge}, used to assess the ability of a policy to explore. (b) \textit{Ant maze} (1x scale), used to evaluate the hierarchical organization of learnt skills (low-level: manipulation of low-level torques; high-level: navigation in the maze.) (c) \textit{Ant}, used to analyse the diversity of learnt skills.}}
\label{fig:environments}
\end{figure}
\section{Novelty maximization}\label{sec:novelty}
Novelty quantifies how much a stimulus contrasts with a previous set of experiences \cite{barto2013novelty,berlyne1966curiosity}. More formally, \citet{barto2013novelty} defend that \textit{an observation is novel when a representation of it is not found in memory, or, more realistically, when it is not ``close enough'' to any representation found in memory}. Previous experiences may be collected in a bounded memory or distilled into a learnt representation.
Several works propose to formalize novelty seeking as looking for low-density states \cite{becker2021exploration} or, similarly (cf. \secref{sec:knearest}), states that are different from the others \cite{lehman2011novelty,conti2018improving}. In our case, this would result in maximizing the entropy of a state distribution. This distribution can be the t-step state distribution (cf. \eqref{eq:dpi}), $H(d^{\pi}_t(S))$, or the stationary state-visitation distribution over a horizon $T$:
\begin{align}
H(d^{\pi}_{0:T}(S))=H(\frac{1}{T} \sum_{t=1}^T d^{\pi}_t(S)).
\end{align}
In practice, these distributions can be approximated with a buffer. This formalization is not perfect and does not fit several intuitions about novelty \cite{barto2013novelty}. \citet{barto2013novelty} criticize such a definition by stressing that very distinct and memorable events may have low probabilities of occurring while not being novel (\textit{e.g.} a wedding). They suggest that novelty may rather relate to the acquisition of a representation of the incoming sensory data. Following this definition, we propose to formalize novelty-seeking behaviors as those that \textit{actively} maximize the mutual information between states and their representation, $I(S;Z)=H(S) - H(S|Z)$, where $Z$ is a low-dimensional space ($|Z| \leq |S|$). This objective is commonly known as the \textit{infomax} principle \cite{linsker1988self,almeida2003misep,bell1995information,HjelmFLGBTB19}; in our case, it amounts to \textbf{actively} learning a representation of the environment. Most works focus on actively maximizing the entropy of the state distribution while a representation learning function minimizes $H(S|Z)$. Furthermore, if one assumes that $Z=S$, the infomax principle collapses to entropy maximization $H(S)$.
There are several ways to maximize the state entropy; we separate them based on how they maximize the entropy. We found two kinds of methods: low-density search and k-nearest neighbors methods.
\subsection{Direct entropy maximization}\label{sec:directdensity}
\rebut{The most evident way to maximize the entropy of states consists in maximizing $H(\rho(s))$ where $\rho(s)=p(s|\rho)$ approximates the stationary state-visitation distribution $d^{\pi}_{0:T}(S)$.} If we access this density model, it becomes straightforward to discover a policy that maximizes the entropy of a stationary state distribution \cite{hazan2019provably}. But computing $\rho(s)$ is challenging in high-dimensional state spaces. Several methods propose to estimate $\rho(s)$ using variational inference \cite{exploration2021zhang,islam2019entropy,lee2019efficient,pong2019skew} based on autoencoder architectures.
In this setting, we can use the VAE loss, approximated either as \eqref{eq:badapprox} \cite{vezzani2019learning,lee2019efficient} or \eqref{eq:unbiasedapprox} \cite{pong2019skew}, assuming $z$ is a compressed latent variable, $p(z)$ a prior distribution \cite{KingmaW13} and $q_{decoder}$ a neural network that ends with a diagonal Gaussian.
\rebut{
\begin{subequations}
\begin{align}
\log \rho(s') & \geq \mathbb{E}_{\hat{s'} \sim q_{decoder}(\cdot|z)} - \log q_{decoder}(\hat{s'}|z) + D_{KL}(q_{encoder}(z|s)||p(z)) \\
&\approx - \log q_{decoder}(s'|z) + D_{KL}(q_{encoder}(z|s')||p(z)) \label{eq:badapprox}\\
&\approx \log \frac{1}{N} \sum_{i=1}^N \frac{p(z)}{q_{encoder}(z|s')}q_{decoder}(s'|z) \label{eq:unbiasedapprox}
\end{align}
\end{subequations}
}
\eqref{eq:unbiasedapprox} is more expensive to compute than \eqref{eq:badapprox} since it requires decoding several samples, but it presumably exhibits less variance. Basically, this estimation allows to reward an agent \cite{berseth2020smirl,lee2019efficient,exploration2021zhang} according to:
\begin{equation*}
R(s,a,s') = - \log \rho(s').
\label{eq:logpbs}
\end{equation*}
\citet{lee2019efficient} maximize \eqref{eq:unbiasedapprox} by learning new skills that target these novel states (see also \secref{sec:skilllearning}). \rebut{\citet{vezzani2019learning} approximate \eqref{eq:badapprox} with the ELBO, as used by the VAE.} This is similar to \textit{MaxRenyi} \cite{exploration2021zhang}, which uses the Rényi entropy, a more general version of the Shannon entropy, to give more importance to very low-density states. \citet{islam2019entropy} propose to condition the state density estimation on the policy parameters in order to directly back-propagate the gradient of the state entropy into the policy parameters. Although \textit{MaxRenyi} achieves good scores on \textit{Montezuma's revenge} with pure exploration, maximizing the ground state entropy may not be adequate since two close ground states are not necessarily neighbors in the true environment \cite{aubret2021distop}. Following this observation, \textit{GEM} \cite{guo2021geometric} rather maximizes the entropy of the estimated density of states considering the dynamic-aware proximity of states, $H(Z)$. However, they do not actively consider $H(Z|S)$.
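A minimal sketch of this reward (ours; the kernel density estimate over a buffer of visited states is an illustrative stand-in for the learnt density models discussed above) could be:
\begin{verbatim}
import numpy as np

def kde_log_density(state, buffer, bandwidth=0.5):
    """Gaussian kernel density estimate of log rho(s') from a buffer of states."""
    buffer = np.asarray(buffer, float)
    d = buffer.shape[1]
    sq_dists = np.sum((buffer - state) ** 2, axis=1)
    log_kernels = -0.5 * sq_dists / bandwidth**2 \
                  - 0.5 * d * np.log(2 * np.pi * bandwidth**2)
    return float(np.log(np.mean(np.exp(log_kernels))))   # log of the average kernel

def novelty_reward(state, buffer, bandwidth=0.5):
    """R(s, a, s') = -log rho(s'): higher in low-density regions."""
    return -kde_log_density(state, buffer, bandwidth)

rng = np.random.default_rng(0)
visited = rng.normal(size=(500, 2))                     # states gathered around the origin
print(novelty_reward(np.array([0.0, 0.0]), visited))    # low reward: dense region
print(novelty_reward(np.array([4.0, 4.0]), visited))    # high reward: rarely visited region
\end{verbatim}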
\paragraph{Conclusion.} Generally speaking, these methods need an accurate density model to provide rewards. In the next subsection, we study methods that avoid learning a density model.
\subsection{K-nearest neighbors approximation of entropy}\label{sec:knearest}
Several works propose to approximate the entropy of a distribution using samples and their k-nearest neighbors \cite{singh2003nearest,kraskov2004estimating}. In fact, such an objective has already been referred to as novelty \cite{conti2018improving}. Assuming $nn_k(S_b,s_i)$ is a function that outputs the k-th closest state to $s_i$ in $S_b$, this approximation can be written as:
\begin{equation}
H(S) \propto \frac{1}{|S_b|} \sum_{s_i \in S_b} \log ||s_i - nn_k(S_b,s_i)||_2 + \chi(|S_b|) + Const
\label{eq:knearestequation}
\end{equation}
\begin{wrapfigure}{r}{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{images/knearest2.drawio.pdf}
\caption{Illustration of the correlation between density and the fourth-nearest neighbor distance.}
\label{fig:knearest}
\end{wrapfigure}
where $\chi$ is the digamma function. This approximation assumes the uniformity of states in the ball centered on a sampled state with radius $||s_i - nn_k(S_b,s_i)||_2$ \cite{lombardi2016nonparametric}, but its full form is unbiased with a large number of samples \cite{singh2003nearest}. Intuitively, it means that the entropy is proportional to the average distance between states and their neighbors. \figref{fig:knearest} shows how density estimation relates to the k-nearest neighbor distance. We clearly see that low-density states tend to be more distant from their nearest neighbors. Few methods \cite{mutti2020policy} provably relate to such estimations, but several approaches take advantage of the distance between states and their neighbors to generate intrinsic rewards, making them related to such entropy maximization. For instance, \textit{APT} \cite{liu2021behavior} proposes new intrinsic rewards based on the k-nearest neighbors estimation of entropy:
\begin{align}
R(s,a,s') = \log (1+ \frac{1}{K} \sum_{k=1}^K || f(s') - nn_k(f(S_b),f(s')) ||_2)
\end{align}
where $f$ is a representation function learnt with a contrastive loss based on data augmentation \cite{srinivas2020curl} and $K$ denotes the number of neighbors used in the estimation. By looking for distant state embeddings during an unsupervised pre-training phase, they manage to considerably speed up task-learning in the DeepMind Control Suite. The representation $f$ can also derive from a random encoder \cite{seo2021state} or from a contrastive loss that ensures the Euclidean proximity between consecutive states \cite{tao2020novelty,yarats2021reinforcement}. \rebut{Alternatively, GoCu \cite{bougie2020skill} achieves SOTA results on Montezuma's revenge by learning a representation with a VAE and rewarding the agent based on how distant, in terms of timesteps, a state is from a set of K other states.}
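A minimal sketch of this reward (ours; the random feature buffer stands in for encoded states and the parameters are illustrative) could be:
\begin{verbatim}
import numpy as np

def knn_novelty_reward(feat_next, feat_buffer, k=5):
    """APT-style reward: log(1 + mean distance to the k nearest neighbours of
    f(s') in a buffer of encoded states); the encoder f is assumed to have
    been applied already (e.g. a random or contrastively learnt encoder)."""
    feat_buffer = np.asarray(feat_buffer, float)
    dists = np.linalg.norm(feat_buffer - feat_next, axis=1)
    knn = np.sort(dists)[:k]
    return float(np.log1p(np.mean(knn)))

rng = np.random.default_rng(0)
buffer = rng.normal(size=(1000, 8))        # features of previously visited states
print(knn_novelty_reward(rng.normal(size=8), buffer))   # typical state: small reward
print(knn_novelty_reward(10 * np.ones(8), buffer))      # far from the buffer: large reward
\end{verbatim}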
\paragraph{Identifying different states.}
Instead of relying on the Euclidean distance, one can try to learn a similarity function. \textit{EX$^2$} \cite{fu2017ex2} learns a discriminator to differentiate states from each other: when the discriminator easily distinguishes the current state from those in the buffer, it means that the agent has not visited this state enough and it gets rewarded. States are sampled from a buffer, implying the necessity to have a large buffer. To avoid this, some methods distill recent states into a prior distribution of latent variables \cite{kim2019curiosity,klissarovvariational}. The intrinsic reward for a state is then the KL-divergence between a fixed diagonal Gaussian prior and the posterior distribution of latent variables. In this case, common latent states fit the prior while novel latents diverge from it.
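As a rough sketch of the latter family of rewards (ours; the exact parameterization of the latent distribution differs across the cited works), the per-state bonus is the closed-form KL-divergence between a diagonal Gaussian posterior and a standard Gaussian prior:
\begin{verbatim}
import numpy as np

def latent_novelty_reward(mu, log_var):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ) between the posterior over latent
    # variables inferred for a state and the fixed standard Gaussian prior;
    # common states fit the prior (small KL), novel states diverge (large KL)
    return 0.5 * float(np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var))
\end{verbatim}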
\paragraph{Intra-episode novelty.}
K-nearest neighbors intrinsic rewards have also been employed to improve intra-episode novelty \cite{stanton2018deep}. It contrasts with standard exploration since the agent looks for novel states in the current episode: typically, it can try to reach all states after every reset. This setting is possible when the policy depends on all its previous interactions, which is often the case when an agent evolves in a POMDP, since the agent has to be able to predict its value function even though it varies widely during episodes. This way, ECO \cite{savinov2018episodic} and Never Give Up \cite{badia2019never} use an episodic memory and learn to reach states that have not been visited during the current episode.
\paragraph{Conclusion.} K-nn methods turn out to be simple to experiment with, but they strongly rely on learnt dynamic-aware representations since they fully take advantage of a meaningful Euclidean proximity in the embedding space; their theoretical connection to the rigorous approximation of entropy remains most of the time unclear, and the approach scales poorly with the memory size. We note that simple methods can tackle the issue of finding the neighbors by partitioning close states together \cite{yarats2021reinforcement}. Overall, we observe efficient exploration and these methods easily translate to intra-episode exploration.
\subsection{Conclusion}
\input{bigtablenovelty}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{images/alldiayn.drawio.pdf}
\caption{\rebut{Illustration of the implicit learning steps of algorithms that use a fixed goal distribution.} (a) Skills are not learnt yet. \rebut{The discriminator randomly assigns partitions of the state space to goals.} (b) The discriminator tries unsuccessfully to distinguish the skills. (c) Each skill learns to go in the area assigned to it by the discriminator. (d) Skills locally spread out by maximizing action entropy \protect\cite{haarnoja2018soft}. \rebut{The discriminator successfully partitions the areas visited by each skill.}}
\label{fig:diaynall}
\end{figure}
In this section, we reviewed works that maximize novelty to improve exploration with flat policies. We formalized novelty as actively discovering a representation according to the infomax principle, even though most works only maximize the entropy of states or of their representations.
\rebut{In \tabref{tab:novelty}, we give a summary of all the novelty-based methods reviewed in this section. These methods are also compared according to their performance on the sparse reward environment \textit{Montezuma's revenge} (cf. \figref{fig:environments}a), and whether they handle stochastic environments (cf. \secref{sec:predictionerror}). We can see that
these methods can explore better than surprise-based methods, in particular when using intra-episode novelty mechanisms \cite{badia2019never, savinov2018episodic}. They can also be robust to stochasticity thanks to a specific learnt representation or the use of an ensemble of encoders \cite{seo2021state}.}
Some works manage to learn a representation that matches the inherent structure of the environment \cite{tao2020novelty}, which suggests that learning a good representation is most of the time enough. For instance, \citet{guo2021geometric} and \citet{tao2020novelty} compute a reward based on a learnt representation, but badly represented states may tend to be located in low-density areas. In that case, active representation entropy maximization would correlate with state-conditional entropy minimization.
We are not aware of many methods that actively and explicitly maximize $I(Z;S)$ in an RL setting. Yet, we highlight methods that strive to actively learn a representation of states. In \textit{CRL} \cite{du2021curious} and \textit{CuRe} \cite{aljalbout2021seeking}, the agent plays a minimax game: a module learns a representation function with a contrastive loss and the agent actively challenges the representation by looking for states with a large loss.
\section{Skill learning}\label{sec:skilllearning}
In our everyday life, nobody has to think about moving their arm muscles to grasp an object; a command to take the object is just issued. This is possible because an acquired skill can be effortlessly reused.
Skill abstraction denotes the ability of an agent to learn a representation of diverse skills. We formalize skill abstraction as maximizing the mutual information between the goal $g \in G$ and a part of the corresponding trajectory $f(\tau) \in u(\mathcal{T})$, denoted as $I(G; u(\mathcal{T}))$, where $\tau \in \mathcal{T}$ is a trajectory and $f$ is a function that extracts a subpart of the trajectory (the last state, for example). The definition of $u$ depends on the desired semantic meaning of a skill. Letting $s_0$ refer to the state at which the skill started and $s$ to a random state from the trajectory, we highlight two settings based on the literature:
\begin{itemize}
\item $u(\mathcal{T}) = S$, the agent learns skills that target a particular state of the environment \cite{eysenbach2018diversity}.
\item $u(\mathcal{T}) = \mathcal{T}$, the agent learns skills that follow a particular trajectory. This way, two different skills can end in the same state if they cross different areas \cite{co2018self}.
\end{itemize}
Most works maximize $I(G; S)$, so that, unless stated otherwise, we refer to this objective. In the following, we will study the different ways to maximize $I(G;S)$, which can be written in its reversed form $I(S;G) = H(G) - H(G|S)$ or in its forward form $I(G;S) = H(S) - H(S|G)$ \cite{campos2020explore}. In particular, we emphasize that:
\begin{subequations}
\begin{align}
- H(G | S) &= \sum_{g \in G, s \in S} p(g,s) \log p(g|s) \\
&= \mathbb{E}_{\substack{g \sim p(g) \\ s \sim \pi^g }} \log p(g|s)
\label{eq:im}
\end{align}
\end{subequations}
where, to simplify, $p(g)$ is the current distribution of goals (approximated with a buffer) and $s \sim \pi^g$ denotes the distribution of states that results from the policy that achieves $g$. Note that $p(g,s) = p(s|g)p(g) $.
In this section, we first focus on methods that assume they can learn all skills induced by a given goal space/goal distribution and they assign parts of trajectories to every goal. The second set of methods directly derives the goal space from visited states, so that there are two different challenges that we treat separately: the agent has to learn to reach a selected goal and it must maximize the diversity of goals it learns to reach. We make this choice of decomposition because some contributions focus on only one part of the objective function.
\subsection{Fixing the goal distribution}\label{sec:predefinedG}
The first approach assumes the goal space is arbitrarily provided except for the semantic meaning of a goal. In this setting, the agent samples goals uniformly from $G$, ensuring that $H(G)$ is maximal, and it progressively assigns all possible goals to a part of the state space. To do this assignment, the agent maximizes the reward provided by \eqref{eq:im}:
\begin{equation}
R(g,s,a,s') = \log q_{\omega}(g|s')
\label{eq:vlbim}
\end{equation}
where $q_{\omega}(g|s')$ represents a learnt discriminator (often a neural network) that approximates $p(g|s')$.
At first, we focus on a discrete number of skills, where $p(g)$ represents a uniform categorical distribution. \figref{fig:diaynall} sums up the learning process with two discrete skills: 1- skills and the discriminator $q_{\omega}$ are randomly initialized; 2- the discriminator tries to differentiate the skills using states $s$ from their trajectories, in order to approximate $p(g|s)$; 3- skills are rewarded with \eqref{eq:vlbim} in order to make them go in the area assigned to them by the discriminator; 4- finally, skills are clearly distinguishable and target different parts of the state space. \textit{SNN4HRL} \cite{florensa2017stochastic} and \textit{DIAYN} \cite{eysenbach2018diversity} implement this procedure by approximating $p(g|s)$ with, respectively, a partition-based normalized count and a neural network. \textit{VALOR} \cite{achiam2018variational} also uses a neural network, but discriminates trajectories. In this setting, the agent executes one skill per episode.
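To make this alternation concrete, a minimal Python sketch (ours; network sizes, optimizer and hyperparameters are arbitrary and do not reproduce any specific cited implementation) of the discriminator update and of the intrinsic reward $\log q_{\omega}(g|s')$ is:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

n_skills, state_dim = 8, 32               # illustrative dimensions
disc = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                     nn.Linear(128, n_skills))
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)

def skill_reward(s_next, g):
    # intrinsic reward log q_w(g | s') for a batch of states and skill ids
    with torch.no_grad():
        log_q = F.log_softmax(disc(s_next), dim=-1)
    return log_q.gather(1, g.unsqueeze(1)).squeeze(1)

def update_discriminator(s_batch, g_batch):
    # train q_w(g | s) to recognize which skill produced the visited states
    loss = F.cross_entropy(disc(s_batch), g_batch)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
\end{verbatim}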
\textit{HIDIO} \cite{zhang2020hierarchical} sequentially executes skills, yet it is not clear how it manages to avoid forgetting previously learnt skills. Maximizing $I(G;S|S_0)$ like \textit{VIC} \cite{gregor2016variational} or $I(G;S_0|S)$ with \textit{R-VIC} \cite{baumli2021relative} makes it hard to fix a uniform (for instance) goal distribution in $H(G|S_0)$, because every skill may not be executable everywhere in the state space. Therefore, they also maximize the entropy term with another reward bonus similar to $\log p(g|s_0)$. They learn discriminable skills, but still struggle to combine them on complex benchmarks \cite{baumli2021relative}. Keeping $p(g)$ uniform, \textit{DADS} \cite{sharma2019dynamics} maximizes the forward form of mutual information $I(S;G|S_0) = H(S|S_0) - H(S|G,S_0)$ by approximating $p(s | s_0)$ and $p(s | s_0,g)$. This method makes it possible to plan over skills and can combine several locomotion skills. However, it requires several conditional probability density estimations on the ground state space, which may scale poorly to higher-dimensional environments.
These methods tend to stay close to their starting point \cite{campos2020explore} and do not learn skills that cover the whole state space. In fact, it is easier for the discriminator to overfit on a small area than to make a policy go to a novel area; this results in many policies that target a restricted part of the state space \cite{choi2021variational}. Accessing the whole set of true possible states and deriving the set of goals by encoding states can considerably improve the coverage of skills \cite{campos2020explore}.
\paragraph{Approaches for a better coverage of states.} Heterogeneous methods address the problem of overfitting of the discriminator. The naive way can be to regularize the learning process of the discriminator. \textit{ELSIM} \cite{aubret2020elsim} takes advantage of L2 regularization and progressively expands the goal space $G$ to cover larger areas of the state space, and \citet{choi2021variational} propose to use spectral normalization \cite{miyato2018spectral}. More consistent dynamic-aware methods may further improve regularization; however, it remains hard to scale these methods to the large number of skills necessary to cover a large environment. In the above-mentioned methods, the number of skills greatly increases \cite{achiam2018variational,aubret2020elsim} and the discrete skill embedding does not provide information about the proximity of skills. Therefore, learning a continuous embedding may be more efficient.
\paragraph{Continuous embedding.} The prior uniform distribution $p(g)$ is far more difficult to set in a continuous embedding. One can introduce the \textit{continuous DIAYN} \cite{choi2021variational,zhang2020hierarchical} with a prior $p(G) = \mathcal{N}(0^d,I)$ where $d$ is the number of dimensions, or the \textit{continuous DADS} with a uniform distribution over $[-1; 1]$ \cite{sharma2019dynamics}, yet it remains unclear how the skills could adapt to complex environments, where the prior does not globally fit the inherent structure of the environment \rebut{(\textit{e.g.} a disc-shaped environment)}. \textit{VISR} \cite{visf2020ansen} seems to, at least partially, overcome this issue with a long unsupervised training phase and successor features. It uniformly samples goals on the unit sphere and computes the reward as a dot product between unit-normed goal vectors and successor features, $\log q_{\omega}(g|s) = \phi_{successor}(s)^T g$.
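For illustration, the VISR-style reward above reduces to a dot product; a minimal sketch (ours, assuming the successor features $\phi_{successor}(s)$ are produced by a separate, already-trained network) is:
\begin{verbatim}
import numpy as np

def visr_reward(phi_s, g):
    # phi_s: successor features of the current state, shape (d,)
    # g:     goal vector, projected onto the unit sphere
    g = g / np.linalg.norm(g)
    # log q_w(g|s) = phi_successor(s)^T g, used as the intrinsic reward
    return float(phi_s @ g)
\end{verbatim}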
\paragraph{Conclusion.} This set of methods manages to learn discrete skills that can be combined, yet, despite regularization, discrete skills struggle to cover a very large state space \cite{aubret2020elsim}. Successful adaptations that scale up to large state spaces currently rely on the relevance of successor features. In the next two sections, we study how to maximize the mutual information by assuming the goal space derives from the state space.
\subsection{Achieving a state-goal}\label{sec:goalstate}
In this section, we review how current methods maximize the goal achievement part of the objective of the agent, $-H(S_g|S)$, where $S_g$ refers to the goal-relative embedding of states. We temporarily set aside $H(S_g)$ and will come back to it in the next subsection, \secref{eq:diversestate}, mainly because the two issues are tackled separately in the literature.
Obviously, $- H(S_g | S)$ can be written as:
\begin{align}
- H(S_g | S) &= \sum_{S_g,S} p(s_g,s) \log p(s_g|s) = \mathbb{E}_{\substack{s_g \sim p(s) \\ s \sim \pi^g }} \log p(s_g|s)
\end{align}
where, to simplify, $p(s)$ is the current distribution of states (approximated with a buffer) and $s \sim \pi^g$ denotes the distribution of states that results from the policy that achieves $g$. If $p(s_g|s')$ is modelled as an unparameterized Gaussian with a unit-diagonal covariance matrix, we have $\log p(s_g|s') \propto -||s_g-s'||_2^2 + Const$, so that we can reward an agent according to:
\begin{equation}
R(s_g,s,a,s')= -||s_g-s'||_2^2.
\label{eq:distance_reward}
\end{equation}
It means that if the goal is a state, the agent must minimize the distance between its state and the goal state. To achieve this, it can take advantage of a goal-conditioned policy $\pi^{s_g}(s)$.
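As a minimal sketch (ours; the threshold value is purely illustrative), the dense reward of \eqref{eq:distance_reward} and the sparse, threshold-based variant used by some of the methods discussed next can be written as:
\begin{verbatim}
import numpy as np

def dense_goal_reward(s_next, s_goal):
    # negative squared Euclidean distance to the goal state
    return -float(np.sum((s_goal - s_next) ** 2))

def sparse_goal_reward(s_next, s_goal, eps=0.05):
    # sparse variant: zero reward only when the goal is reached up to a threshold
    return 0.0 if np.linalg.norm(s_goal - s_next) <= eps else -1.0
\end{verbatim}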
\paragraph{Ground state space.} This way, \textit{Hierarchical Actor-Critic (HAC)} \cite{levy2018hierarchical} directly uses the state space as a goal space to learn three levels of options (the options from the second level are selected to fulfill the chosen option from the third level). A reward is given when the distance between states and goals (the same distance as in \eqref{eq:distance_reward}) is below a threshold, and they take advantage of HER to avoid directly using the threshold. Similar reward functions can be found in \citet{pitis2020maximum} and \citet{zhao2019maximum}. Related to these works, \textit{HIRO} \cite{nachum2019data} uses as a goal the difference between the initial state and the state at the end of the option, $f(\mathcal{T}) = S_f - S_0$.
This approach is relatively simple and does not require extra neural networks. However, there are two problems with using the state space in the reward function. Firstly, a distance (like L2) makes little sense in a very large space such as images composed of pixels. Secondly, it is difficult to make a manager policy learn on a too large action space. Typically, an algorithm using images as goals can imply an action space of $84\times 84\times 3$ dimensions for the goal-selection policy (in the case of an image with a standard shape). Such a wide space is currently intractable, so these algorithms can only work on low-dimensional state spaces.
\paragraph{Learning a representation of goals.} To tackle this issue, an agent can learn a low-dimensional embedding function $f$ of the state space and maximize the reward of \eqref{eq:distance_reward_phi} using a goal-conditioned policy $\pi^{f(s_g)}(s)$:
\begin{equation}
R(s_g,s,a,s')= -||f(s_g)-f(s')||_2^2.
\label{eq:distance_reward_phi}
\end{equation}
Similarly to \eqref{eq:distance_reward}, this amounts to maximizing $- H(f(S_g) | f(S))$. \textit{RIG} \cite{nair2018visual} proposes to build the feature space independently with a variational auto-encoder (VAE); but this approach can be very sensitive to distractors (i.e. useless features for the task or goal, inside states) and does not allow to correctly weight features. Similar approaches also encode parts of trajectories \cite{kim2021unsupervised,co2018self} for similar mutual information objectives. \textit{SFA-GWR-HRL} \cite{zhou2019vision} uses unsupervised methods like \textit{slow feature analysis} \cite{wiskott2002slow} and \textit{growing when required} \cite{marsland2002self} to build a topological map. A hierarchical agent then uses the nodes of the map, representing positions in the world, as a goal space. However, the authors do not compare their contribution to previous approaches.
Other approaches learn a state embedding that captures the proximity of states with contrastive losses. For instance, \textit{DISCERN} learns the representation function by maximizing the mutual information between the last state representation and the state-goal representation. Similarly to works in \secref{sec:predefinedG}, the fluctuations around the objective allow to bring states around $s_g$ closer to it in the representation. More explicitly, the representation of \textit{NOR} \cite{nachum2019near} maximizes $I(f(S_{t+k});f(S_t),A_{t:t+k})$ and the one of \textit{LESSON} \cite{li2021learning} maximizes $I(f(S_{t+1});f(S_t))$; \textit{LESSON} and \textit{NOR} target a change in the representation and manage to navigate in a high-dimensional maze while learning the intrinsic Euclidean structure of the mazes (cf. \tabref{tab:skills}). Their skills can be reused across several environments. However, experiments are made in 2-dimensional embedding spaces and it remains unclear how relevant goals defined as changes in higher-dimensional embedding spaces may be. The more the number of dimensions increases, the more difficult it will be to distinguish possible skills from impossible ones in a given state. \rebut{In addition, they need dense extrinsic rewards to learn to select the skills to execute. Thus, they generate tasks with binary rewards at locations uniformly distributed in the environment, such that the agent learns to achieve the tasks from the simplest to the hardest. This progressive learning generates a curriculum, helping to achieve the hardest tasks.}
\paragraph{Conclusion.} To sum up, representation learning methods allow to learn state-based skills over complex state spaces. Learning this representation function, combined with the use of the Euclidean distance as a reward function, amounts to learning a particular form of reward function in addition to providing pre-computed features to the goal-conditioned policy. \rebut{As highlighted by \tabref{tab:skills}, learnt representations allow to scale the approaches to more complex goal spaces.} In the next section, we study how to maximize $H(S)$ so as to make sure learnt skills target different areas of the state space. \rebut{As highlighted by \tabref{tab:skills}, this makes it possible to reach very distant goals without being assisted by a curriculum of tasks.}
\subsection{Proposing diverse state-goals}\label{eq:diversestate}
To make sure the agent maximizes the mutual information between its goals and all visited states, it must sample a diverse set of goal-states. In other words, it has to maximize $H(S_g)$ but through goal selection rather than with an intrinsic bonus as in \secref{sec:novelty}. Similarly to works on novelty (cf. \secref{sec:novelty}), such entropy maximization along with skill acquisition (cf. \secref{sec:goalstate}) tackles the exploration challenge, but without facing catastrophic forgetting (cf. \secref{sec:detachment}) since the agent does not forget its skills.
A naive approach would be to generate random values in the goal space, but this faces a considerable problem: the set of achievable goals is often a very small subset of the entire goal space. To tackle this, a first approach can be to explicitly learn to differentiate these two sets of goals \cite{florensa2018automatic,racaniere2019automated}, using for example a Generative Adversarial Network (GAN) \cite{florensa2018automatic,goodfellow2014generative}, but this is ineffective in complex environments \cite{pong2019skew}. Other works obtain good results on imagining new goals, but using a composable goal space, either given \cite{colas2019curious} or learnt from a dataset \cite{khazatsky2021can}; results show it may be a strong candidate for object-based representations. In contrast, in a more general case, an agent can simply set a previously met state as a goal; this way, it ensures that goals are reachable, since they have already been achieved. In the rest of this section, we focus on this set of methods.%
In \textit{RIG} \cite{nair2018visual}, the agent randomly samples states as goals from its buffer, but this does not increase the diversity of states, and thus, the diversity of learnt skills. \citet{pong2019skew} showed theoretically and empirically that, by sampling goals following an $\alpha$-more uniform distribution over the support of visited states than the ``achieved'' distribution, the distribution of states of the agent can converge to the uniform distribution. Intuitively, the agent just samples low-density goals more often, as illustrated in \figref{fig:reweight}. There are several ways to increase the importance of low-density goal-states, which we introduce in the following.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{images/reweightx3.drawio.pdf}
\caption{\rebut{Illustration of the reweighting process. (a) probability of visited states to be selected as goals before reweighting; (b) probability of visited states to be selected as goals after density reweighting; (c) probability of visited states to be selected as goals after density/reward reweighting. This figure completes and simplifies the figure of \protect\citet{pong2019skew}.}}
\label{fig:reweight}
\end{figure}
\paragraph{Density estimation in the ground state space.} \textit{DISCERN} \cite{warde2018unsupervised} proposes to sample uniformly over the support of visited states with a simple procedure. Every time the agent wants to add an observation to its buffer, it randomly samples another observation from its buffer and only keeps the one that is the farthest from all other states of the buffer. This way, it progressively builds a uniform distribution of states inside its buffer. However, it uses the Euclidean distance to compare images, which may not be relevant. Other approaches select the state that has the lowest density (\textit{OMEGA}) \cite{pitis2020maximum} according to a kernel density estimation, or use the rank of state densities \cite{zhao2019curiosity} estimated with a Variational Gaussian Mixture Model \cite{blei2006variational}. In contrast with them, \textit{Skew-fit} \cite{pong2019skew} provides more flexibility on how uniform one wants the distribution of states to be. \textit{Skew-fit} extends RIG: it learns a parameterized generative model $q_{\rho}(S) \approx p(S)$ and skews the generative model (VAE) with the ratio:
\begin{equation}
q_{\rho}(s)^{\alpha_{skew}} \label{eq:skewratio}
\end{equation}
where $\alpha_{skew} < 0$ determines the speed of uniformisation. This way, it gives more importance to low-density states. Then it weights all visited states according to the density approximated by the generative model at the beginning of each epoch, which is made of a predefined number of timesteps. Skew-fit manages to explore image-based environments very efficiently. As highlighted in \cite{aubret2021distop}, this ratio, applied to a discrete number of skills, amounts to rewarding a Boltzmann goal-selection policy with:
\begin{equation}
R(s_g) = (1+\alpha_{skew}) \log p(s_g).
\end{equation}
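The skewing step can be sketched as follows (our illustration; the log-densities $\log q_{\rho}(s)$ are assumed to be provided by the generative model and the value of $\alpha_{skew}$ is arbitrary): visited states are resampled as goals with probability proportional to $q_{\rho}(s)^{\alpha_{skew}}$, so that low-density states are over-sampled.
\begin{verbatim}
import numpy as np

def skewed_goal_probs(log_density, alpha_skew=-1.0):
    # log_density: log q_rho(s) for every state stored in the buffer, shape (N,)
    logits = alpha_skew * log_density      # log of q_rho(s)^alpha_skew
    logits -= logits.max()                 # numerical stability
    w = np.exp(logits)
    return w / w.sum()

# usage: sample a goal index from the buffer according to the skewed weights
# goal_idx = np.random.choice(len(log_q), p=skewed_goal_probs(log_q))
\end{verbatim}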
\paragraph{Density reweighting by partitioning the embedding space.} With a different objective, \textit{GRIMGREP} \cite{kovavc2020grimgep} partitions the VAE embedding of Skew-fit with a Gaussian Mixture Model \cite{rasmussen1999infinite} to estimate the learning progress of each partition and avoid distractors. The density weighting can also operate in a learnt embedding. \textit{HESS} \cite{li2021efficient} partitions the embedding space of \textit{LESSON} and rewards with a variant of a count-based bonus (see \secref{sec:infogain}). It improves exploration in a two-dimensional latent embedding, but the size of partitions may not scale well if the agent considers more latent dimensions. In contrast, \textit{DisTop} \cite{aubret2021distop} dynamically clusters a dynamic-aware embedding space using a variant of Growing When Required \cite{marsland2002self}; it estimates the density of a state according to how many states its partition contains and skews the distribution of sampled goals similarly to Skew-fit. \textit{HESS} and \textit{DisTop} demonstrate their ability to explore and navigate with an ant inside complex mazes without extrinsic rewards. \rebut{As shown in \cite{aubret2021distop} (illustrated in \figref{fig:reweight}c), it is also possible to use extrinsic rewards to weight the distribution of sampled state-goals.}
\paragraph{Conclusion.} Entropy maximization methods improve over standard skill learning methods by learning to reach as many states as possible.
We expect further works to show the ability to scale to even more complex environments, with higher-dimensional latent structure. For example, learning compositional representations (modeling disentangled objects and relations) remains hard: \rebut{SOTA methods only manipulate a few objects \cite{pong2019skew}.}
\subsection{Conclusion}
\input{bigtableskill}
We found two main ways to discover skills. The first one provides a goal space and assigns goals to areas of the state space. There is empirical evidence emphasizing that it struggles to learn and sequentially execute skills that target different areas of the state space. The second one derives the goal space from the state space with a representation learning method and over-weights the sampling of low-density visited areas. \rebut{This set of works has shown the ability to hierarchically navigate in simple environments using moderately morphologically complex agents.}
\rebut{In \tabref{tab:skills}, we synthesize the methods presented in this section. We also compare skill learning methods according to their performance on the widely used hierarchical task \textit{Ant maze} (cf. \figref{fig:environments}b), and whether they need a hand-made goal space (x,y) or an implicit curriculum of objectives. We can make two major observations: 1- methods that do not propose diverse goal-states require an implicit curriculum to learn the Ant-Maze task \cite{nachum2019data,li2021learning} (\textit{Curriculum} column); 2- contrastive representations seem crucial to avoid using a hand-defined goal space like the (x,y) coordinates (\textit{Goal space} column) \cite{nachum2019near,li2021efficient}. For methods in the ``fixing the goal distribution'' category, we did not find a representative and widely used evaluation protocol/environment among the works. However, as an example, several qualitative analyses emphasize the diversity of behaviors that can be learnt by the ant displayed in \figref{fig:environments}c) \cite{sharma2019dynamics,eysenbach2018diversity}.
}
\section{Outlooks of the domain}\label{sec:outlooks}
In this section, we take a step back and thoroughly analyze the results of our overall review. We first study the exploration process of flat intrinsic motivation in comparison with hierarchical intrinsic motivations in \secref{sec:detachment}; then, this will motivate our focus on the challenges induced by learning a deep hierarchy of skills in \secref{sec:dev}. Finally, in \secref{sec:flatim}, we discuss how flat and hierarchical intrinsic motivations can and should cohabit in such a hierarchy.
\subsection{Long-term exploration, detachment and derailment}\label{sec:detachment}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{images/detachment4.png}
\caption{Illustration of the \textit{detachment} issue. Image extracted from \protect\citet{goexplore}. Green color represents intrinsically rewarding areas, white color represents no-reward areas and purple areas are currently being explored. (a) The agent has not explored the environment yet. (b) It discovers the rewarding area at the left of its starting position and explores it. (c) It has consumed the nearby intrinsic rewards on the left part, thus it prefers gathering the right-part intrinsic rewards. (d) Due to catastrophic forgetting, it forgot how to reach the intrinsically rewarding area on the left.}
\label{fig:detachment2}
\end{figure}
The most challenging benchmarks used for flat intrinsic motivations (surprise and novelty) are \textit{DMLab} and \textit{Montezuma's revenge}, yet very sparse reward games such as \textit{Pitfall!} are not currently addressed and should be investigated. In \textit{Pitfall!}, the first reward is reached only after multiple rooms, where specific action sequences are required to go through each room. State-of-the-art IM methods \cite{ostrovski2017count} achieve 0 mean reward in this game. In contrast, imitation RL methods \cite{aytar2018playing,hester2018deep} are insensitive to such reward sparsity, and thus exceed IM methods with a mean reward of 37232 on \textit{Montezuma's revenge} and 54912 on \textit{Pitfall!}. Even though these methods use expert knowledge, this performance gap exhibits their resilience to long-term rewards. Compared with flat intrinsic reward methods, which do not exceed a score of 10000 on \textit{Montezuma's revenge} \cite{burda2018exploration} and hardly achieve any score on \textit{Pitfall!} \cite{ostrovski2017count}, it shows that flat IMs are still far from solving the overall problem of exploration.
Furthermore, we want to emphasize that the challenge is harder when the intrinsic reward itself is sparse \cite{burda2018exploration}. In \textit{Montezuma's revenge}, it is about avoiding using a key too quickly in order to be able to use it later. In everyday life, it can be about avoiding spending money too quickly. In fact, it looks like there is an exploration issue in the intrinsic reward function itself: intrinsic reward can guide exploration only on the condition that the agent finds this intrinsic reward. There may be two reasons causing the intrinsic reward to be sparse:
\begin{enumerate}
\item The first comes from partial observability, with which most models are incompatible. Typically, if an agent has to push a button and can only see the effect of this push after a long sequence of actions, density models and predictive models may not provide meaningful intrinsic rewards. There would be a too large distance between the event ``push a button'' and the intrinsic reward.
\item \figref{fig:detachment2} illustrates the second issue, called \textit{detachment} \cite{goexplore,ecoffet2021first}. It results from a distant intrinsic reward coupled with catastrophic forgetting. Simply stated, the RL agent can forget the presence of an intrinsic reward in a distant area: it is hard to maintain the correct Q-value that derives from a distant, currently unvisited rewarding area. This is emphasized in on-policy settings.
\end{enumerate}
Pursuing such a distant intrinsic reward may be even harder due to the possible \textit{derailment} issue \cite{goexplore,ecoffet2021first}. Essentially, an agent may struggle to execute the long sequence of specific actions needed to reach a distant rewarding area because local stochasticity incites local dithering all along the sequence. Detachment motivates the need for a hierarchical exploration \cite{ecoffet2021first} and derailment motivates frontier-based exploration \cite{bharadhwaj2020leaf}, which consists in deterministically reaching the area to explore before starting exploration.
\subsection{Deeper hierarchy of skills}\label{sec:dev}
According to \citet{brooks1991intelligence}, \textit{everything is grounded in primitive sensor motor patterns of activation}. This \textit{everything} may refer to the structure of the world and agent affordances. Capturing this knowledge amounts to forming concept representations and reusable skills \cite{weng2001autonomous}, using them as a basis for new skills \cite{prince2005ongoing}, exploring the environment to find new interesting skills, and autonomously self-generating goals in accordance with the level and morphology of the agent.
Most works presented in \secref{sec:skilllearning} abstract actions over a restricted number of hierarchical levels (generally one). This is necessary to understand the mechanism of abstraction well, but we want to argue that imposing deeper hierarchies could considerably enhance the semantic comprehension of the environment by an agent. Organisms are often assumed to deal with compositions of behaviors, which in turn serve as building blocks for more complex behaviors \cite{flash2005motor}. This way, using a limited vocabulary of skills makes it easier to avoid the curse of dimensionality associated with the redundancy of a whole set of ground behaviors.
Our surveyed works \cite{nachum2019near,aubret2021distop,li2021learning,guo2021geometric,ermolov2020latent} already propose to learn the representations using the slowness principle \cite{wiskott2002slow}, which assumes temporally close states should be similarly represented. By configuring the time-extension of the representation, one may focus on different semantic parts of the state space. This can be seen in \secref{sec:abstraction}: 1- the agent can learn a very low-level representation that provides skills that manipulate the torques of a creature \cite{aubret2021distop}; 2- skills can also orientate an agent in a maze by extracting (x,y) coordinates from a complex state representation \cite{li2021efficient}. While these works do not try to combine and learn several representations at the same time, further works could consider separating different parts of states (\textit{e.g.} agent positions and object positions \cite{mutual2021zhao}) or learning these representations at different time scales. In practice, data-augmentation methods already allow to learn object-oriented representations \cite{mitrovic2020representation,grill2020bootstrap,mussa2004neural}. Most augmentations could also be derived with contrast over time by considering, for instance, an embodied agent moving its eyes/head (crops), turning its head (rotation), controlling vergence (blur) or, without interventions, color and brightness changes \cite{chen2020simple}. Overall, this stresses the potential of time-contrastive representations for disentangling the whole state space and providing semantically different skills; new works in this area may unlock new kinds of skills.
\textit{Skill focus.}
In a developmental process, multi-level hierarchical RL questions the ability of the agent to learn all policies of the hierarchy simultaneously. This obviously relates to the ability of organisms to continually learn throughout their lifetime; but in a more practical way, it may allow focusing the learning process on skills that are interesting for higher-level skills. This focus avoids learning everything in the environment \cite{aubret2021distop}, which is hard and obviously not done by biological organisms. For instance, most people cannot do a somersault.
\textit{Critical periods and lifelong learning.}
Considering a goal representation that changes over time introduces new issues for the agent. In this case, the goal-conditioned policy may be perturbed by the changes of inputs and may no longer be able to reach the goal \cite{li2021efficient}. Current methods consider 1- developmental periods (unsupervised pre-training \cite{metzen2013incremental}); 2- modifying the representation every $k$-step epoch \cite{pong2019skew}; 3- imposing slow changes of the representation \cite{li2021efficient}. Further works may thoroughly investigate the relation and transitions between these methods since they relate to the concept of critical periods \cite{hensch2004critical,konczak2004neural}. Critical periods assume that the brain is more plastic at some periods of development in order to acquire specific knowledge. Despite this mechanism, the brain slowly keeps learning throughout the lifetime. In the hierarchy of skills, the introduction of a new level may first result in a quick/plastic learning process, followed by slower changes.
\subsection{The role of flat intrinsic motivations}\label{sec:flatim}
In \secref{sec:detachment}, we essentially criticized the limited role that flat intrinsic motivations like surprise or novelty can play in favor of exploration, and we hypothesized in \secref{sec:dev} that deeper hierarchies could make emerge an understanding of more complex affordances. Then, what could be the roles of surprise and novelty?
\textit{Novelty.} We saw in \secref{sec:novelty} that novelty seeking behaviors allow to learn a correct representation of the whole environment; this can be a basis for learning diverse skills. While some methods consider a goal as a state and manage to avoid using novelty bonuses \cite{pong2019skew}, this is harder to do when skills have a different semantic (like a change in the state space). \citet{nachum2019near} provide a meaningful example of this: the agent acts to simultaneously discover a representation of the environment and achieve upper-level goals.
\textit{Surprise.} We leave aside the interest of surprise for learning a forward model that could be used for planning \cite{hafner2019learning} and rather focus on the learning process. Surprise amounts to looking for the learning progress of forward models, so that, in a hierarchy of skills, it quantifies whether skills can currently be better learnt or not. This links surprise to curriculum learning \cite{bengio2009curriculum}, \textit{i.e.}, can we find a natural order to efficiently learn skills? For example, assuming an agent wants to learn to reach state-goals in a maze, it would be smarter to start learning skills that target goals close to its starting position and to progressively extend its goal selection while learning other skills. Several strategies have been proposed to smartly hierarchically select goals \cite{colas2019curious,linke2020adapting}, yet they often do not consider intrinsic skills \cite{colas2019curious}.
To sum up, we propose that the role of surprise and novelty may rather be to support the learning of skills. Novelty seeking helps to learn the representation required by the skill learning module, and surprise speeds up the maximization of the skill learning objective. They may interact as a loop: first, the agent learns a new representation, then it evaluates surprise to select which skill to improve, and the skill learning process starts. Considering this, there would be several surprises and novelties: an agent can experience a novel or surprising interaction at one level of decision (injuring its toy while walking), yet this does not mean other levels would be surprised (it is still on the same road). This emphasizes the multi-dimensionality and relativity of the notions of surprise and novelty \cite{berlyne1960conflict}: only a part of the incoming stimuli may arouse the agent.
\section{Conclusion}
In this survey, we have presented the current challenges faced by DRL: namely 1- learning with \textit{sparse rewards} through exploration; 2- \textit{building a hierarchy of skills} in order to ease credit assignment, exploration with \textit{sparse rewards} and \textit{transfer learning}.
We identified several types of IM to tackle these issues, that we classified into three categories based on a maximized information theoretic objective, which are \textit{surprise}, \textit{novelty} and \textit{skill learning}. Surprise and novelty based intrinsic motivations implicitly improve flat exploration while skill learning allows to create a hierarchy of reusable skills that also improve exploration.
\textbf{Surprise} results from maximizing the mutual information between the true model parameters and the next state, knowing the previous state, the action and the history of interactions. We have shown that it can be maximized through three sets of works: information gain over predictive models, information gain over density models, or prediction error/learning progress. In practice, we found that the information gain over a density model is ill-defined for purely stochastic areas and that the determinism assumption underpinning prediction error methods complicates their application. \rebut{Good approximations of surprise are notably useful to allow exploration in stochastic environments.} The next challenge may be to make good approximations of surprise tractable.
\textbf{Novelty} seeking can be assimilated to learning a representation of the environment through the maximization of the mutual information between states and their representations. The most important term to actively maximize appears to be the entropy of states or of their representations, which can be approximated in two ways: 1- one can reward an agent according to the parametric density of its next state, but this density is complicated to estimate; 2- one can also reward an agent according to the distance between a state and already visited states, making the approach tractable, in particular when the agent learns a dynamic-aware representation. \rebut{We found these methods to achieve state-of-the-art performance on the hard exploration task Montezuma's Revenge.} We expect future works to benefit from directly looking for good representations rather than uniformity of states.
Finally, using \textbf{skill learning} objectives, which amount to maximizing the mutual information between a goal and a part of the trajectories of the corresponding skill, an agent can learn hierarchies of temporally-extended skills. Skills can be directly learnt by attributing parts of a fixed goal space to areas of the state space, but it remains to clarify how well goals can be embedded in a continuous way and whether such approaches remain robust when skills are sequentially executed. The second approach derives the goal space from the state space, often through a time-contrastive loss, and expands the skill set by targeting low-density areas. \rebut{These methods manage to explore an environment while being able to return to previously visited areas.} It remains to be demonstrated how one could create larger hierarchies of skills.
The three objectives are compatible and we have discussed how they could interact to provide a robust exploration with respect to the \textit{detachment} issue, along with reusable hierarchical skills, a quick and focused skill acquisition and multi-semantic representations.
\section{Introduction} \label{Introduction}
With the great success of mobile Internet, fifth generation (5G) cellular networks have been standardized to meet competing demands (\textit{e.g.} extremely high data rate, low-latency and massive connectivity) and proliferation of heterogeneous devices. However, the existing ``one-size-fits-all'' 5G architecture lacks sufficient intelligence and flexibility to enable the coexistence of these demands \cite{SaadNet2020}. As we move towards 6G, the latest frontier in this endeavor is open radio access network (ORAN) by disaggregating RAN components and opening up interfaces, which is considered today the most promising approach to revolutionize the wireless technology from ``\textit{connected things}” to ``\textit{connected intelligence}” \cite{LetaiefComMag19,ORAN2020,ORANASAMSUNG2019}. ORAN is expected to fully enable programmable, intelligent, interoperable and multi-vendor RAN \cite{ORANAlliance2019}.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth,trim={0cm 0.0cm 0cm 0.0cm}]{Sytemmodel_ORAN.eps}
\caption{\small ORAN Alliance reference architecture and workflow \cite{ORAN2020}, with Non-RT and Near-RT RICs. A base station is disaggregated to CU, DU and RU.}
\label{fig:systemmodel}
\end{figure}
Fig. \ref{fig:systemmodel} illustrates the high-level ORAN Alliance reference architecture \cite{ORAN2020}, where the ``blue'' parts and interfaces are defined by the 3rd Generation Partnership Project (3GPP), while the ``orange'' parts and interfaces are defined by the ORAN Alliance. ORAN initiatives were developed to split the RAN into the radio unit (RU), distributed unit (DU) and centralized unit (CU), allowing for the interoperability of open hardware (HW), software (SW) and interfaces (\textit{e.g.} O1, A1 and E2) \cite{ORAN2020,ORANASAMSUNG2019}. The ORAN architecture typically has three main layers (or loops), including the management, control and function layers, as illustrated in Fig. \ref{fig:systemmodel}. In particular, the management layer takes place in non-real-time (Non-RT) over 1 s (second) with orchestration, automation functions and trained artificial intelligence (AI) and machine learning (ML) models. The control layer is executed in near real-time (Near-RT) between 10 ms and 1 s to provide functions like radio resource management (RRM), quality-of-service (QoS) management and interference management. Finally, the function layer provides the RAN optimization on a timescale below 10 ms, such as scheduling, power control and radio-frequency assignment, etc. Two important parts introduced in ORAN are the Non-RT RAN intelligent controller (Non-RT RIC) and the Near-RT RIC, which allow access to RRM functions. The former enables the AI/ML workflow for RAN components and RRM functions like traffic steering (TS), as well as policy-based guidance of applications in the Near-RT RIC, while the latter is embedded with the control/optimization algorithms of the RAN and radio resources \cite{BonatiComMag21,GavrilovskaWPC2020,ORANAlliance2019,WangOpenRAN2019}.
\subsection{Motivation}
Yet the existing research efforts on ORAN in the academic community are isolated, providing only tailored solutions to problems at either the physical or higher layers \cite{KumarGC2020,Pamukluicc2021,LeeGC2020,RomeroINFOCOM2021}. The understanding of how ORAN could help improve network performance by controlling data traffic and optimizing RAN functions remains rather limited in the literature. In this paper, we aim to fill this gap by conducting an in-depth analysis of the multi-layer design between the physical and higher layers and developing low-complexity algorithms for network control, scheduling and resource allocation at different time scales, as well as analyzing their impact on throughput and delay performance in the 6G ORAN context.
In light of the above discussions, this paper focuses on designing the TS control to intelligently direct the user traffic through a group of RUs, taking into account available resources and users' service requirements. To fully realize the potential performance of the TS scheme, ORAN allows customization of user-centric strategies, multi-path routing and multi-connectivity as well as proactive optimization of network parameters through RICs. However, the problem becomes more challenging in the ORAN setting due to several complicating factors: $i)$ the traffic demand of user equipments (UEs) often varies over time, and the complete information of the RAN layer is indeterminate at the time of optimization algorithm execution. Hence, the policies and control decisions at the service management and orchestration (SMO) must be adapted to the variation of data traffic; $ii)$ the total data traffic is distributed unevenly to RUs due to different downlink (DL) throughput capabilities, causing high queueing delay; and $iii)$ the strong correlation between congestion control and scheduling optimization influences the optimal choice of flow-split distribution of data traffic across all RUs. In addition, the deployment of fully automated networks is an intricate problem in ORAN that calls for intelligent, scalable
and self-organizing strategies for a holistic multi-layer optimization framework. In this regard, reinforcement learning (RL) plays an important role in achieving long-term utility optimization. \textit{To the best of our knowledge, the TS optimization problem for ORAN as outlined above has not been thoroughly addressed in the literature.}
\subsection{Main Contributions}
In this paper, we consider a practical scenario where the complete information of the RAN layer is not available at the beginning of each time-frame. Instead, we assume that only their expected values are available to approximately measure queueing delay. An interesting question naturally arises: \textit{How does the incomplete information of user traffic demands affect the optimal choices of the TS scheme?} To answer this question and address the challenges above, we introduce a holistic multi-layer optimization framework which jointly optimizes the flow-split distribution, congestion control and scheduling (called JFCS). The proposed framework effectively characterizes the complex interactions between layers (\textit{e.g.} flow-split selection, congestion control rate and power allocation). In summary, we make the following three key contributions:
\begin{itemize}
\item We propose a novel JFCS framework to efficiently and adaptively direct traffic to appropriate RUs. Our framework not only generalizes the classical queue-length-based congestion control and scheduling (QCS) method \cite{NeelyToN2008}, but also provides a synergy between RL, QCS and updated network state information, thus enabling a closed-loop control of TS in the ORAN context.
\item To ensure the practicality and scalability, we identify inherent properties of the JFCS problem and propose an intelligent resource management algorithm to solve it effectively by leveraging the stochastic optimization framework \cite{neely2010stochastic}. In particular, by exploiting the historical system information accumulated from the previous time-slots, an RL process is developed to build the smoothed best response while maximizing the long-term utility for each data-flow under arbitrary changes in traffic demands. Given the updated queue-length vector and the optimal flow-split distribution, two low-complexity algorithms are developed to effectively solve the short-term power control optimization subproblem in an iterative fashion.
\item Given a scaling factor $\varphi$ to minimize the
Lyapunov drift \cite{EryilmazJSAC2006}, the theoretical performance results are analyzed to show that the queueing network is stable. In addition, the expected divergence in queue-length and the optimality gap of congestion control rate still scale as $\mathcal{O}(\sqrt{\varphi})$ and $\mathcal{O}(1/\sqrt{\varphi})$, respectively. Thus, there always exists a scaling factor to balance utility-optimality and latency.
\end{itemize}
We numerically evaluate the performance of the proposed framework. Results show that the proposed framework can improve network resource utilization significantly while achieving fast convergence and long-term utility-optimality, compared to state-of-the-art approaches.
\subsection{Paper Organization and Mathematical Notation}
The remainder of this paper is organized as follows. The related work is discussed in Section \ref{sec_RW}. In Section \ref{PreliminariesDefinitions}, we first introduce the network model and then present the problem formulation. The proposed JFCS framework and its solutions are provided in Sections \ref{sec:ORANNUM} and \ref{sec_RAAlgo}, respectively. Section \ref{sec_PerfAnaly} presents the key theoretical performance results of the JFCS framework.
Numerical results are given in Section \ref{sec_NumericalResults}, while Section \ref{sec_Conclusion} concludes the paper.
\textit{Mathematical notation:} Throughout this paper, matrices and vectors are written as bold uppercase and lowercase letters, respectively, while scalar number is denoted in lowercase. $\mathbf{h}^{\mathsf{H}}$ is the Hermitian transpose of vector $\mathbf{h}$. The notation $x \sim \mathcal{C}\mathcal{N}(0,\sigma^2)$ implies that $x$ is circularly-symmetric complex Gaussian random variable with zero mean and variance $\sigma^2$. $\|\cdot\|$ stands for the vector's Euclidean norm. $\mathbb{C}$ and $\mathbb{R}$ denote the sets of all complex and real numbers, respectively. Finally, $\mathbb{E}\{\cdot\}$ denotes the expectation of a random variable.
\section{Related Work}\label{sec_RW}
Multi-layer (a.k.a. cross-layer) optimization for traditional cellular RAN architectures has been extensively studied in the literature (see \textit{e.g.}, \cite{HabibiACCESS19} and references therein). For example, Tang \textit{et al.} \cite{TangTWC15} studied a multi-layer resource allocation problem to minimize the overall system power consumption in a cloud-RAN (C-RAN), which jointly optimizes the service scaling, remote radio head selection, and beamforming. In \cite{LuongTWC18}, a joint design of virtual computing and radio
resource allocation was proposed. It was shown that this approach can efficiently allocate the virtual computing of the baseband unit (BBU) pool to achieve load balancing among users with significantly reduced power consumption. These problems are often solved by the difference-of-convex algorithm due to the combinatorial nature and strong coupling between optimization variables. To address this challenge, graph theory techniques were introduced in \cite{DouikTWC16} and \cite{DouikCL17} to effectively solve the joint coordinated scheduling and power optimization problem in C-RAN. Recently, multi-layer network coding was also investigated in \cite{AbiadTMC19,DouikTWC17,AbiadTMC22}, taking into account the rate heterogeneity of different users to remote radio heads. In general, these existing works only optimized radio resources, while other factors at higher layers (\textit{e.g.} congestion control and routing) were overlooked, making guaranteed multi-layer QoS for ORAN infeasible. In addition, the non-causal statistical knowledge of traffic demands is required to model queue states, which is again impractical.
So far, there have been only few attempts to study the applicability of the ORAN architecture. Kumar \textit{et al.} \cite{KumarGC2020} proposed an automatic neighbour relation (ANR) approach to manage neighbour cell relationships by leveraging ML techniques, hence improving gNodeB (gNB) handovers. The work in \cite{YangTWC22} introduced an intelligent user access control algorithm based on deep reinforcement learning, aiming to maximize the overall throughput and avoid frequent handovers.
The authors in \cite{Pamukluicc2021} developed an RL-based dynamic function splitting approach, which is shown to effectively decide the ORAN's function splits and reduce operating costs. Based on the Working Group (WG)-2 AI/ML specifications of the O-RAN Alliance, the Acumos framework and the open network automation platform were introduced in \cite{LeeGC2020} to generate AI/ML models to be deployed in RIC modules and to monitor the designed workflow, respectively. However, none of these studies reveals observable information about the RAN layer to the SMO via periodic feedback loops. Thus, the RICs in these studies are unable to monitor the RAN in a timely manner and enable management automation within ORAN.
In traditional RAN architectures, the TS solutions are typically determined by users' radio conditions of a serving cell while treating signals from neighboring cells as interference \cite{ORANWG22021}. The authors in \cite{AnwarIoT22} proposed a distributed TS scheme through edge servers, where matrix-based shortest path selection and matrix-based multipath searching algorithms are developed to dynamically decide the best paths for traffic steering. Very recently, Kavehmadavani \textit{et al.} \cite{KavehmadavaniICC22} showed that a dynamic multi-connectivity (MC)-based TS scheme can help steer traffic flows towards the most suitable cells based on user-centric conditions. However, this work does not embed AI/ML solutions in the Non-RT RIC and assumes that all network information is available at the Near-RT RIC to optimize radio resource allocation.
Different from all above works and others in the literature that focus on a single layer, we propose a fully multi-layer optimization framework that captures interplays between the physical and higher layers, enabling proactive optimization of network parameters through RICs with periodic feedback loops. This holistic multi-layer optimization framework guarantees the long-term utility-optimality with far less latency than state-of-the-art approaches, opening the door towards fully automated networks with enhanced control and flexibility.
\section{Network Model and Problem Formulation} \label{PreliminariesDefinitions}
\subsection{Network Model}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\columnwidth,trim={0cm 0.0cm 0cm 0.0cm}]{CUUEAssociationORAN.eps}
\caption{\small Illustration of the ORAN-based system model enabling TS where each DU connects to multiple RUs towards cost-effective deployment. }
\label{fig:RUUEAssociation}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth,trim={0cm 0.0cm 0cm 0.0cm}]{Timeframe.eps}
\caption{\small Illustration of frame structure with each time-frame $t$ (corresponding to one large-scale coherence time) consisting of $T_f$ time-slots.}
\label{fig:Timeframe}
\end{figure}
As shown in Fig. \ref{fig:RUUEAssociation}, we consider an ORAN architecture with one CU, $I$ DUs and $J$ RUs, where each DU connects to multiple RUs for cost-effective deployment. Let us denote by $\mathcal{I}\triangleq\{1,2,\cdots, I\}$ the set of DUs. We consider a downlink multi-user multiple-input single-output (MU-MISO) system, where $J$ RUs simultaneously serve the set $\mathcal{K}\triangleq\{1,2,\cdots,K\}$ of $K=|\mathcal{K}|$ single-antenna UEs. The $j$-th RU served by the $i$-th DU is referred to as RU $(i,j)$, which is equipped with $M_{i,j}$ antennas. The total number of RUs' antennas is thus $M_{\Sigma}=\sum_{\forall(i,j)}M_{i,j}$. The set of RUs served by DU $i$ is denoted by $\mathcal{J}_i\triangleq\{(i,1),\cdots,(i, J_i)\}$ with $|\mathcal{J}_i| = J_i$ and $\sum_{i\in\mathcal{I}}J_i=J$. The total set of RUs is denoted as $\mathcal{J}\triangleq \cup_{i\in\mathcal{I}}\mathcal{J}_i$.
We assume that the midhaul (MH) link between the CU and DU and fronthaul link between the DU and RU have sufficient capacity (\textit{i.e.}, high-speed optical ones), so that the transmission latency from CU to RUs and queueing latency at CU and DUs are negligible.
We consider that the system operates over discrete time-frames indexed by $t\in\{1,2,\cdots, T\}$, each corresponding to one large-scale coherence
time of duration $T_c$, as illustrated in Fig. \ref{fig:Timeframe}. Each frame is divided into $T_f$ time-slots of equal duration $\tau=T_c/T_f$, where the time-slot is indexed by $t_s = tT_f + s$ with $s\in\{1,2,\cdots,T_f\}$.
At CU, there exists $K$ independent data-flows, each of which is intended for one UE.
The CU splits the data-flow of UE $k$, say flow $k$, into multiple sub-flows which are possibly transmitted through the set of paths and then aggregated at this UE \cite{VuTWC2019,SinghCOMML2016}, so-called ``traffic steering''. For data-flow $k$, we denote by $\mathcal{P}_k\triangleq \{(i,j)\}_{\forall (i,j)\in\mathcal{J}}$ the set of path states, including queue states and routing tables.
To improve the system throughput, a subset of separate paths in the set $\mathcal{P}_k$ (\textit{i.e.}, via neighboring RUs indexed by ($i,j$)) should be appropriately selected. Let us denote by $\mathbf{c}_k[t]\triangleq\bigl[c_k^{i,j}[t]\bigr]_{(i,j)\in\mathcal{P}_k}$ the flow-split selection (action) vector for data-flow $k$ in time-frame $t$, \textit{i.e.}, $c_k^{i,j}[t]=1$ if path ($i,j$) $ \in\mathcal{P}_k$ (\textit{i.e.}, via RU $(i,j)$) is selected to transmit data of flow $k$; otherwise, $c_k^{i,j}[t]=0$. We let $\beta_k^{i,j}[t]\in[0,1]$ be the fraction of data-flow $k$ which is routed via path $(i,j)$ in time-frame (state) $t$ by selecting action $c_k^{i,j}[t]$, where $\sum_{(i,j)\in\mathcal{P}_k }\beta_k^{i,j}[t]=1$. The global flow-split decision is denoted by $\mathscr{B}[t]\triangleq\{\boldsymbol{\beta}_k[t], \forall k\bigl| \sum_{(i,j)\in\mathcal{P}_k }\beta_k^{i,j}[t]=1, \forall k\}$, where each column flow-split vector $\boldsymbol{\beta}_k[t]\triangleq\bigr[\beta_k^{i,j}[t]\bigl]_{(i,j)\in\mathcal{P}_k }^{\mathsf{T}}\in\mathbb{R}^{J}$ corresponds to the flow-split vector of data-flow $k$.
\subsubsection{Wireless Channel Model and Downlink Throughput}
The large-scale fading coefficients are assumed to be invariant within one frame $T_c$, while the small-scale fading components with
a low degree of mobility are assumed to be unchanged during time-slot $t_s$ with duration of $\tau$ and vary independently in the next time-slot. For example, the large-scale fading coefficients may stay invariant for a period of at least 40 small-scale fading coherence intervals for indoor scenarios \cite{NgoTWC2017}.
The channel vector between RU $(i,j)$ and UE $k\in\mathcal{K}$ in time-slot $t_s$ is denoted by $\mathbf{h}^{i,j}_{k}[t_s]\in\mathbb{C}^{M_{i,j}\times 1}$, which follows the Rician fading model with the Rician factor $\kappa^{i,j}_{k}[t]$. In particular, $\mathbf{h}_{k}^{i,j}[t_s]$ is modeled as $\mathbf{h}_{k}^{i,j}[t_s] = \sqrt{\xi_{k}^{i,j}[t]}\Bigl(\sqrt{\kappa^{i,j}_{k}[t]/(\kappa^{i,j}_{k}[t]+1)}\bar{\mathbf{h}}_{k}^{i,j}[t] + \sqrt{1/(\kappa^{i,j}_{k}[t]+1)}\tilde{\mathbf{h}}_{k}^{i,j}[t_s] \Bigr)$
where $\xi_{k}^{i,j}[t]$ represents the large-scale fading; $\bar{\mathbf{h}}_{k}^{i,j}[t]$ and $\tilde{\mathbf{h}}_{k}^{i,j}[t_s]\sim \mathcal{CN}(0,\mathbf{I})$ are the line-of-sight (LoS) and non-LoS (NLoS) components, which follow a deterministic channel model and the Rayleigh fading model, respectively. We let $\mathbf{H}[t_s]\triangleq \bigr[\mathbf{h}_{1}[t_s]\cdots\mathbf{h}_{K}[t_s]\bigr]\in\mathbb{C}^{M_{\Sigma}\times K}$ denote the channel matrix between all RUs and UEs in time-slot $t_s$, where $\mathbf{h}_{k}[t_s]\triangleq\bigl[(\mathbf{h}_{k}^{i,j}[t_s])^{\mathsf{H}}\bigr]^{\mathsf{H}}_{\forall i,j}\in\mathbb{C}^{M_{\Sigma}\times 1}$ corresponds to the channel vector between all RUs and UE $k$.
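As a concrete illustration, the following minimal Python sketch draws one realization of $\mathbf{h}_{k}^{i,j}[t_s]$ under the above Rician model, assuming a uniform linear array whose steering vector matches the LoS response used later in Section~\ref{sec_NumericalResults_A}; the numerical values and function names are purely illustrative.
\begin{verbatim}
import numpy as np

def rician_channel(M, xi, kappa, phi, rng):
    """One realization of h_k^{i,j}[t_s] for an M-antenna RU: xi is the
    large-scale fading, kappa the Rician factor, phi the LoS angle of departure."""
    m = np.arange(M)
    h_los = np.exp(1j * np.pi * m * np.sin(phi))   # deterministic LoS steering vector
    h_nlos = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # CN(0, I)
    return np.sqrt(xi) * (np.sqrt(kappa / (kappa + 1)) * h_los
                          + np.sqrt(1.0 / (kappa + 1)) * h_nlos)

rng = np.random.default_rng(0)
h = rician_channel(M=16, xi=1e-9, kappa=3.0, phi=0.3, rng=rng)
print(h.shape, np.linalg.norm(h) ** 2)
\end{verbatim}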
Let us denote by $x_{k}^{i,j}[t_s]$ and $\mathbf{w}_{k}^{i,j}[t_s]\in\mathbb{C}^{M_{i,j}\times 1}$ a unit-power data symbol and a linear beamforming vector transmitted from RU $(i,j)$ to UE $k$ in time-slot $t_s$, respectively. The received signal at UE $k$ in time-slot $t_s$ can be written as
\begin{align}
y_k[t_s] = &\sum_{(i,j)\in\mathcal{P}_k }(\mathbf{h}_{k}^{i,j}[t_s])^{\mathsf{H}}\mathbf{w}_{k}^{i,j}[t_s]x_{k}^{i,j}[t_s] \nonumber\\
& + \sum_{k'\in\mathcal{K}\setminus\{k\}}\sum_{(i,j)\in\mathcal{P}_{k'} }(\mathbf{h}_{k}^{i,j}[t_s])^{\mathsf{H}}\mathbf{w}_{{k'}}^{i,j}[t_s]x_{k'}^{i,j}[t_s] + \omega_k[t_s]
\end{align}
where $\omega_k[t_s]$ is the additive white Gaussian background noise (AWGN) with power $N_0$. The downlink achievable rate (bits/s) of UE $k$ from RU $(i,j)$ in time-slot $t_s$ can be written as $r_{k}^{i,j}(\mathbf{w}[t_s])\triangleq W\log_2\bigl(1 + \gamma_{k}^{i,j}(\mathbf{w}[t_s])\bigr)$, where $W$ is the system bandwidth and the signal-to-interference-plus-noise ratio (SINR) $\gamma_{k}^{i,j}(\mathbf{w}[t_s])$ is given by $\gamma_{k}^{i,j}(\mathbf{w}[t_s])=|(\mathbf{h}_{k}^{i,j}[t_s])^{\mathsf{H}}\mathbf{w}_{k}^{i,j}[t_s]|^2/\Phi_k^{i,j}(\mathbf{w}[t_s])$ with
\begin{align}
\Phi_{k}^{i,j}(\mathbf{w}[t_s]) \triangleq &\underbrace{\sum_{(i',j')\in\mathcal{P}_k\setminus\{(i,j)\}}|(\mathbf{h}_{k}^{i',j'}[t_s])^{\mathsf{H}}\mathbf{w}_{k}^{i',j'}[t_s]|^2}_{\text{Intra-user interference}} \nonumber\\
&\ + \underbrace{ \sum_{k'\in\mathcal{K}\setminus\{k\}}\sum_{(i,j)\in\mathcal{P}_{k'}}|(\mathbf{h}_{k}^{i,j}[t_s])^{\mathsf{H}}\mathbf{w}_{k'}^{i,j}[t_s]|^2}_{\text{Inter-user interference}} +N_0
\end{align}
\noindent and $\mathbf{w}[t_s]\triangleq\bigl[(\mathbf{w}_{k}^{i,j}[t_s])^{\mathsf{H}}\bigr]^{\mathsf{H}}_{k\in\mathcal{K},(i,j)\in\mathcal{P}_k}$ denoting the vector stacking all beamformers.
The overall effective data rate of data-flow $k$ (or UE $k$) can be computed as
$r_k(\mathbf{w}[t_s]) = \sum_{(i,j)\in\mathcal{P}_k}$ $ r_{k}^{i,j}(\mathbf{w}[t_s])$. Then, for each $\mathbf{H}[t_s]$ and a given $\boldsymbol{\beta}_k[t]$, we define the \textit{instantaneous achievable
rate region} under beamformer $\mathbf{w}[t_s]$ as
\begin{IEEEeqnarray}{cl}
\mathscr{C}_{\mathbf{H}[t_s]}\triangleq \Bigg\{r_k(\mathbf{w}[t_s]), \forall k\Biggl| {\begin{matrix}r_k(\mathbf{w}[t_s]) = \sum\limits_{(i,j)\in\mathcal{P}_k}r_{k}^{i,j }(\mathbf{w}[t_s]) \\
\sum\limits_{k\in\mathcal{K}}\|\mathbf{w}_{k}^{i,j}[t_s]\|_2^2 \leq P_{\max}^{i,j}, \forall (i,j) \end{matrix}} \Bigg\}\nonumber\\
\end{IEEEeqnarray}
where $P_{\max}^{i,j}$ denotes the transmit power budget of RU $(i,j)$. We note that the achievable rate $r_{k}^{i,j}(\mathbf{w}[t_s])$ is upper bounded by $r_{k}^{i,j}(\mathbf{w}[t_s]) \leq W\log_2\bigl(1 + P_{\max}^{i,j}\bigl\|\mathbf{h}_{k}^{i,j}[t_s]\bigr\|_2^2/N_0\bigr)$ for a limited transmit power budget $P_{\max}^{i,j}$, leading to $r_k(\mathbf{w}[t_s]) < \infty, \forall k,t$.
\subsubsection{Queueing Model}
As illustrated in Fig. \ref{fig:RUUEAssociation}, each RU maintains a separate queue for each UE. Let $A_k[t]$ (bits/s) be the total arrival rate of data destined for UE $k$ in time-frame $t$ with mean $\mathbb{E}\{A_{k}\}=\bar{A}_k$. We assume that $A_k[t]$ is upper bounded by a finite constant $A^{\max}$, such that $A_k[t] \leq A^{\max}<\infty, \forall k,t,$ and is unknown at the beginning
of time-frame $t$. As a result, the queue-length of data-flow $k$ at RU $(i,j)$ in time-slot $t_s$ evolves as follows: $q_k^{i,j}[t_{s+1}] = \Bigl[q_k^{i,j}[t_s] + \beta_k^{i,j}[t]A_k[t]\tau - r_{k}^{i,j}(\mathbf{w}[t_s])\tau \Bigr]^+$, where $[x]^+\triangleq\max\{0,x\}$. Denoting $\mathbf{q}[t_s]\triangleq\bigl[q_k^{i,j}[t_s] \bigr]^{\mathsf{T}}_{k,(i,j)}$ and following \cite{EryilmazJSAC2006}, a queueing network is \textit{stable} if the steady-state total queue-length remains finite, \textit{i.e.},
\begin{align}\label{steady-state}
\underset{t_s\to\infty}{\limsup}\ \mathbb{E}\{\|\mathbf{q}[t_s]\|_1\} < \infty.
\end{align}
\subsection{Problem Formulation}
Let $\bar{r}_k\triangleq \underset{t_s\to\infty}{\lim}\frac{1}{t_s}\sum_{\ell=1}^{t_s} r_k(\mathbf{w}[\ell])$ denote the long-term average rate of data-flow $k$. Each UE $k$ is associated with a utility function, denoted by $U_k(\bar{r}_k)$.
To facilitate the analysis presented later, we make the following assumption on the utility function \cite{VuTWC2019,EryilmazJSAC2006,KeyInforcom2007,LiuJSAC2017}.
\begin{assumption}\label{assp:1}
The utility function $U_k(\cdot)$ is assumed to satisfy the following conditions
\begin{itemize}
\item
$U_k(\cdot)$ is twice continuously differentiable, increasing, and strictly concave.
\item There exist positive constants $0 < \psi < \Psi < \infty$, such that $\psi\leq - U_{k}^{''}(\bar{r}_k) \leq \Psi, \forall \bar{r}_k\in[0,\ \bar{r}^{\max}]$, with $\bar{r}^{\max}$ being the maximum long-term average rate of any data flow.
\end{itemize}
\end{assumption}
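For concreteness, one utility satisfying these conditions (and adopted later in Section~\ref{sec_NumericalResults}) is the proportional-fairness utility $U_k(\bar{r}_k) = \log(0.001 + \bar{r}_k)$, for which $-U_k^{''}(\bar{r}_k) = (0.001+\bar{r}_k)^{-2}$ is bounded on $[0,\ \bar{r}^{\max}]$ between $\psi=(0.001+\bar{r}^{\max})^{-2}$ and $\Psi=10^{6}$.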
Our goal is to maximize the network utility function $\sum_{k\in\mathcal{K}}U_k(\bar{r}_k)$, subject to the probabilistic delay constraint, achievable rate region and queue-stability constraint. Based on the network utility maximization (NUM) framework, the joint flow-split distribution, congestion control and scheduling optimization problem (JFCS) can be mathematically formulated as
\begin{subequations} \label{eq:JFCS1}
\begin{IEEEeqnarray}{cl}
\textbf{JFCS}:\ &\underset{\boldsymbol{\beta}, \bar{\mathbf{r}}, \mathbf{w}}{\mathrm{max}} \ \sum_{k\in\mathcal{K}}U_k(\bar{r}_k)\label{eq:JFCS1a} \\
& {\mathrm{s.t.}} \ \underset{t_s\to\infty}{\limsup}\ \mathbb{E}\{\|\mathbf{q}[t_s]\|_1\} < \infty \label{eq:JFCS1b}\\
& \qquad r_k(\mathbf{w}[t_s])\in \mathscr{C}_{\mathbf{H}[t_s]}, \forall t_s, k\in\mathcal{K} \label{eq:JFCS1c}\\
&\qquad \boldsymbol{\beta}_k[t]\in \mathscr{B}[t], \forall t, k\in\mathcal{K}\label{eq:JFCS1d} \\
&\qquad \mathsf{Prob}\Bigl(\frac{q_k^{i,j}[t_s]}{\bar{A}_k} \leq \bar{d}_k\Bigr) \geq \epsilon_k,\ \forall t_s, k, (i,j) \label{eq:JFCS1e} \quad
\end{IEEEeqnarray}
\end{subequations}
where $\boldsymbol{\beta}\triangleq\bigl[\boldsymbol{\beta}_k^{\mathsf{T}}\bigr]^{\mathsf{T}}_{k\in\mathcal{K}}$ and $\bar{\mathbf{r}}\triangleq\bigr[\bar{r}_k\bigr]^{\mathsf{T}}_{k\in\mathcal{K}}$. Constraint \eqref{eq:JFCS1e} ensures different minimum outage-delay requirements for the sub-flows, where $\bar{d}_k$ and $\epsilon_k$ ($0\ll\epsilon_k \leq 1$) are the maximum allowable average delay and the required reliability level for each UE, respectively. That is, the probability that the queueing delay of each UE stays below its maximum allowable delay must be at least $\epsilon_k$.
\begin{remark}
It is clear that problem \eqref{eq:JFCS1} needs to be executed in different time scales (\textit{i.e.}, over the long-term scale $t$ at Non-RT RIC and the short-term scale $t_s$ at Near-RT RIC), as shown in Fig. \ref{fig:Timeframe}. In particular, the global flow-split vector $\boldsymbol{\beta}[t]$ is only updated once per time-frame $t$ to reduce computational complexity, information exchange and to ensure a stable queueing system. On the other hand, the beamforming vector $\mathbf{w}[t_s]$ and the instantaneous achievable rate $\mathbf{r}[t_s]$ are optimized based on the real-time effective CSI $\mathbf{H}[t_s]$ in time-slot $t_s$, adapting to dynamic environments.
\end{remark}
\section{JFCS-based Network Utility Optimization}\label{sec:ORANNUM}
\subsection{Tractable Form of the JFCS Problem \eqref{eq:JFCS1}}\label{sec:ORANNUM_A}
\textbf{Challenges of Solving JFCS Problem \eqref{eq:JFCS1}}:
We can observe that constraint \eqref{eq:JFCS1c} is nonconvex while \eqref{eq:JFCS1e} is a nonconvex probabilistic constraint, generally making problem \eqref{eq:JFCS1} NP-hard. In addition, the expectations in the constraints make the problem stochastic, so it cannot be solved directly. Classical optimization approaches, such as successive convex approximation (SCA) \cite{razaviyayn2014successive}, are often applied to problems with nonconvex but deterministic constraints. However, stochastic SCA-based algorithms can no longer guarantee a feasible and (sub)optimal solution in all subsequent transmission time intervals (TTIs) due to the dynamics of the physical layer at small timescales. The flow-split decisions mainly rely on the previous states updated by the RAN layer.
Towards practical applications, an efficient and adaptive solution to the long-term subproblem of \eqref{eq:JFCS1} is necessary to achieve high QoE for all UEs in every TTI.
Let us start by transforming problem \eqref{eq:JFCS1} into a more tractable form. Towards a safe design, we consider the replacement of constraint \eqref{eq:JFCS1e} by its deterministic constraint. From the basic property of probability, we can rewrite \eqref{eq:JFCS1e} as $\mathsf{Prob}\bigl(q_k^{i,j}[t_s] \geq \bar{A}_k\bar{d}_k\bigr) \leq 1-\epsilon_k$. It follows from the well-known Markov inequality \cite{BillingsleyProbability} that $\mathsf{Prob}\bigl(q_k^{i,j}[t_s] \geq \bar{A}_k\bar{d}_k\bigr) \leq \mathbb{E}\{q_k^{i,j}[t_s]\}/\bar{A}_k\bar{d}_k$, yielding
\begin{align}\label{eq:JFCS1e_Relaxed}
&\sum\nolimits_{\ell=1}^{t}\beta_k^{i,j}[\ell]\bar{A}_k\tau - (1-\epsilon_k)\bar{A}_k\bar{d}_k - \sum\nolimits_{\ell=1}^{t_{s-1}}r_{k}^{i,j}(\mathbf{w}[\ell])\tau\nonumber\\
& \leq r_{k}^{i,j}(\mathbf{w}[t_s])\tau,\ \forall t_s, k\in\mathcal{K}, (i,j)\in\mathcal{P}_k
\end{align}
where we have used the fact that each queue-length is always non-negative. We note that \eqref{eq:JFCS1e_Relaxed} is a conservative (safe) surrogate of \eqref{eq:JFCS1e}: any feasible point of the former is also feasible for the latter, but not vice versa, due to the Markov upper bound on the outage probability.
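For completeness, a compact (informal) sketch of this step: unrolling the queue recursion with empty initial queues, dropping the projection $[\cdot]^+$, and taking the expectation over the arrivals (with $\mathbb{E}\{A_k[\ell]\}=\bar{A}_k$, while treating the already-realized service rates as known) gives
\begin{align}
\mathbb{E}\{q_k^{i,j}[t_{s+1}]\} \leq \sum\nolimits_{\ell=1}^{t}\beta_k^{i,j}[\ell]\bar{A}_k\tau - \sum\nolimits_{\ell=1}^{t_{s-1}}r_{k}^{i,j}(\mathbf{w}[\ell])\tau - r_{k}^{i,j}(\mathbf{w}[t_s])\tau.\nonumber
\end{align}
Imposing the sufficient condition $\mathbb{E}\{q_k^{i,j}[t_{s+1}]\}\leq(1-\epsilon_k)\bar{A}_k\bar{d}_k$ obtained from the Markov bound and rearranging the terms yields \eqref{eq:JFCS1e_Relaxed}.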
To facilitate the following optimization, we introduce congestion control variables $\boldsymbol{a}[t_s]\triangleq\bigl[a_k[t_s]\bigr]_{k\in\mathcal{K}}^{\mathsf{T}}$, satisfying $\bar{a}_k-\bar{r}_k\leq 0, \forall k$, where $\bar{a}_k\triangleq \underset{t_s\to\infty}{\lim}\frac{1}{t_s}\sum_{\ell=1}^{t_s} a_k[\ell]$. Problem \eqref{eq:JFCS1} is then rewritten as
\begin{subequations} \label{eq:JFCS2}
\begin{IEEEeqnarray}{cl}
\ &\underset{\boldsymbol{\beta}, \bar{\boldsymbol{a}}, \bar{\mathbf{r}}, \mathbf{w}}{\mathrm{max}} \ \sum_{k\in\mathcal{K}}U_k(\bar{a}_k)\label{eq:JFCS2a} \\
& {\mathrm{s.t.}} \quad\ \eqref{eq:JFCS1b}, \eqref{eq:JFCS1c}, \eqref{eq:JFCS1d}, \eqref{eq:JFCS1e_Relaxed} \label{eq:JFCS2b}\\
& \qquad\quad \bar{a}_k-\bar{r}_k\leq 0, \forall k. \label{eq:JFCS2c}
\end{IEEEeqnarray}
\end{subequations}
We also introduce a new auxiliary queue-length vector $\hat{\mathbf{q}}[t_s]\triangleq\bigl[\hat{q}_k[t_s] \bigr]^{\mathsf{T}}_{k\in\mathcal{K}}$, where
$\hat{q}_k[t_{s+1}] = \bigl[\hat{q}_k[t_s] + a_k[t_s]\tau - r_k(\mathbf{w}[t_s])\tau \bigr]^+$ to associate constraint \eqref{eq:JFCS2c} with a penalty function and $a_k[t_s]\in[0,A^{\max}]$. We define the total queue backlog of all UEs in time-slot $t_s$ as $L[t_s] = \frac{1}{2}\bigl(\sum_{k\in\mathcal{K}}$ $\sum_{(i,j)\in\mathcal{P}_k} \frac{q_k^{i,j}[t_s]^2}{\tau^2} + \sum_{k\in\mathcal{K}}\frac{\hat{q}_k[t_s]^2}{\tau^2}\bigr)$, which is the quadratic Lyapunov function \cite{neely2010stochastic,TassiulasTAC92}. For given $(\mathbf{q}[t_s],\hat{\mathbf{q}}[t_s])$, the Lyapunov drift from time-slot $t_s$ to $t_{s+1}$ is given as $\Delta L[t_s] = L[t_{s+1}] - L[t_s]$. To guarantee joint network stability and penalty minimization (\textit{i.e.}, \eqref{eq:JFCS1b} and \eqref{eq:JFCS2c} hold true), we adopt the drift-plus-penalty procedure \cite{neely2010stochastic} to minimize the drift of a quadratic Lyapunov function and rewrite \eqref{eq:JFCS2} as
\begin{subequations} \label{eq:JFCS3}
\begin{IEEEeqnarray}{cl}
\ &\underset{\boldsymbol{\beta}, \bar{\boldsymbol{a}}, \bar{\mathbf{r}}, \mathbf{w}}{\mathrm{max}} \quad \varphi\sum_{k\in\mathcal{K}}\mathbb{E}\{U_k(a_k[t_s])\}
- \mathbb{E}\{\Delta L[t_s]\}
\label{eq:JFCS3a} \\
& {\mathrm{s.t.}} \quad\ \eqref{eq:JFCS1c}, \eqref{eq:JFCS1d}, \eqref{eq:JFCS1e_Relaxed} \label{eq:JFCS3b}
\end{IEEEeqnarray}
\end{subequations}
where $\varphi$ is a scaling factor to balance two objective functions.
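To make the roles of the physical and virtual queues concrete, the following minimal Python sketch implements the per-slot queue updates and evaluates the drift-plus-penalty surrogate obtained by replacing the drift with its upper bound (developed in the next subsection); the log utility and all numerical values are illustrative placeholders rather than the actual simulation setup.
\begin{verbatim}
import numpy as np

# Minimal sketch: physical queues q[k, path], virtual queues q_hat[k], and the
# drift-plus-penalty surrogate that the per-slot scheduler maximizes.
tau, phi = 1e-3, 100.0                      # slot duration and scaling factor (illustrative)
U = lambda a: np.log(0.001 + a)             # utility used later in the simulations

def update_queues(q, q_hat, beta, A, a, r_path):
    """One slot of the queue dynamics: q (K x P), q_hat (K,), beta (K x P) flow
    splits, A (K,) arrivals, a (K,) congestion-control rates, r_path (K x P)."""
    q_new = np.maximum(q + beta * A[:, None] * tau - r_path * tau, 0.0)
    q_hat_new = np.maximum(q_hat + a * tau - r_path.sum(axis=1) * tau, 0.0)
    return q_new, q_hat_new

def drift_plus_penalty_score(q, q_hat, beta, A, a, r_path):
    """phi*sum_k U(a_k) minus the queue-weighted arrival/service mismatch
    (the drift replaced by its upper bound, up to a bounded constant)."""
    physical = ((q / tau) * (beta * A[:, None] - r_path)).sum()
    virtual = (q_hat / tau) @ (a - r_path.sum(axis=1))
    return phi * U(a).sum() - physical - virtual

K, P = 3, 2
q, q_hat = np.zeros((K, P)), np.zeros(K)
beta = np.full((K, P), 0.5)
A, a, r = np.array([2e9, 1e9, 3e9]), np.full(K, 1e9), np.full((K, P), 4e8)
print(drift_plus_penalty_score(q, q_hat, beta, A, a, r))
q, q_hat = update_queues(q, q_hat, beta, A, a, r)
\end{verbatim}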
We now show that constraint \eqref{eq:JFCS2c} holds with equality at optimum by introducing the following lemma.
\begin{lemma}\label{lem_1}
For each data-flow of UE $k$, the optimal congestion control rate is equal to the optimal long-term average service rate, \textit{i.e.}, $\bar{a}_k^{*}-\bar{r}_k^{*}= 0, \forall k.$
\end{lemma}
\noindent The proof of Lemma \ref{lem_1} is straightforward by examining the Karush–Kuhn–Tucker (KKT) complementary slackness condition for the increasing and strictly concave objective function $U_k(\cdot), \forall k$.
\subsection{Overall Intelligent Resource Management Algorithm}
\begin{algorithm}[t]
\begin{algorithmic}[1]
\fontsize{11}{11}\selectfont
\protect\caption{Intelligent Resource Management Algorithm for Solving JFCS Problem \eqref{eq:JFCS1}, compliant with ORAN} \label{alg_JFCS}
\global\long\def\textbf{Initialization:}{\textbf{Initialization:}}
\REQUIRE Set $t=1$ and select a positive scaling factor $\varphi$. Initialize $\boldsymbol{\beta}_k[1] = \frac{1}{|\mathcal{P}_k|}[1,\cdots,1]$ and all queues are set to be empty: $q_k^{i,j}[1_1]=0$ and $\hat{q}_k[1_1]=0, \forall (i,j), k$.
\global\long\def\textbf{Initialization:}{\textbf{Main Loop:}}
\REQUIRE
\FOR[/*\textit{Long-term scale $t$}*/]{each frame $t=1,2,\cdots,T$}
\STATE \textbf{Flow-Split Distribution:} Given $\{\mathbf{q}[t-1],\mathbf{A}[t-1]\}$, CU splits data-flows of all UEs based on the optimal flow-split decisions $\boldsymbol{\beta}^*[t]$ by solving L-SP at Non-RT RIC:
\[ \underset{\boldsymbol{\beta}_k[t]\in \mathscr{B}[t],\forall k}{\mathrm{max}} \ \sum_{k\in\mathcal{K}}\mathscr{L}_k[t].\]
\FOR[/*\textit{Short-term scale $t_s$}*/]{each time-slot $t_s = tT_f + s$ with $s\in\{1,\cdots,T_f\}$}
\STATE \textbf{Congestion Controller:} Given the queue-length vector $\hat{\mathbf{q}}[t_s]$, solve S-SP1 \eqref{eq:JFCSSSP1} to obtain the optimal congestion control variables:
\[a_k^*[t_s] = \min\Bigl\{ U_k^{'-1}\bigl(\frac{\hat{q}_k[t_s]}{\varphi\tau} \bigr),A^{\max}\Bigr\}, \forall k.\]
\STATE \textbf{Weighted Queue-Length-Based Scheduler:} Given the queue-length vector $\hat{\mathbf{q}}[t_s]$ and the flow-split distribution $\boldsymbol{\beta}^*[t]$, each RU $(i,j)\in\mathcal{P}_{k}$ schedules the service rate $r_{k}^{i,j}(\mathbf{w}[t_s])$ for UE $k\in\mathcal{K}$ by solving S-SP2:
\[\underset{\mathbf{r}[t_s],\mathbf{w}[t_s]}{\mathrm{max}} \quad \sum_{k\in\mathcal{K}} \frac{\hat{q}_k[t_s]}{\tau}r_k(\mathbf{w}[t_s]), \ {\mathrm{s.t.}}\ \eqref{eq:JFCS1c}, \eqref{eq:JFCS1e_Relaxed}.\]
\STATE \textbf{Queue-Length Updates:} Queue-Lengths are updated as
\begin{align}
q_k^{i,j}[t_{s+1}] &= \bigl[q_k^{i,j}[t_s] + \beta_k^{i,j}[t]A_k[t]\tau \nonumber\\
&\qquad\qquad - r_{k}^{i,j}(\mathbf{w}[t_s])\tau \bigr]^+, \ \forall k, (i,j) \nonumber\\
\hat{q}_k[t_{s+1}] &= \bigl[\hat{q}_k[t_s] + a_k[t_s]\tau - r_k(\mathbf{w}[t_s])\tau \bigr]^+,\ \forall k\nonumber.
\end{align}
\STATE Set $s=s+1$
\ENDFOR
\STATE Update $\{\mathbf{q}[t],\mathbf{A}[t]\}:= \{q_k^{i,j}[t], A_k[t]\}_{k, (i,j)}$ to Non-RT RIC.
\STATE Set $t=t+1$
\ENDFOR
\end{algorithmic}
\end{algorithm}
To solve problem \eqref{eq:JFCS3} in different time scales, we now decompose it into three subproblems. To do so, we consider a worst-case design by developing an upper bound of $\Delta L[t_s]$ for given $(\mathbf{q}[t_s],\hat{\mathbf{q}}[t_s])$. From the inequality $([x]^+)^2 \leq x^2$ and $(x+y)^2-x^2 = 2xy + y^2$, we have
\begin{align}
\Delta L^{\mathtt{UB}}[t_s] \triangleq &\sum_{k\in\mathcal{K}}\sum_{(i,j)\in\mathcal{P}_k} \frac{q_k^{i,j}[t_s]}{\tau}\bigl(\beta_k^{i,j}[t]A_k[t] - r_{k}^{i,j}(\mathbf{w}[t_s]) \bigr) \nonumber\\
&\ + \sum_{k\in\mathcal{K}}\frac{\hat{q}_k[t_s]}{\tau}\bigl(a_k[t_s] - r_k(\mathbf{w}[t_s])\bigr) + B[t_s] \geq \Delta L[t_s]
\end{align}
where $B[t_s]\triangleq \frac{1}{2}\sum_{k\in\mathcal{K}}\sum_{(i,j)\in\mathcal{P}_k} \bigl(\beta_k^{i,j}[t]A_k[t] - r_{k}^{i,j}(\mathbf{w}[t_s]) \bigr)^2+\frac{1}{2}\sum_{k\in\mathcal{K}}\bigl(a_k[t_s] - r_k(\mathbf{w}[t_s])\bigr)^2$ is the summation of the second moments of the arrival and service processes. Following \cite{neely2010stochastic} and \cite{VuTWC2019}, we consider that $B[t_s]$ is finite and bounded by $\bar{B}$ for all $t_s$, \textit{i.e.}, $\mathbb{E}\{B[t_s]\big|\mathbf{q}[t_s],\hat{\mathbf{q}}[t_s]\}$ $\leq \bar{B}$. As a result, problem \eqref{eq:JFCS3} is simplified to
\begin{subequations} \label{eq:JFCS4}
\begin{IEEEeqnarray}{cl}
\ &\underset{\boldsymbol{\beta}, \bar{\boldsymbol{a}}, \bar{\mathbf{r}}, \mathbf{w}}{\mathrm{max}} \quad \varphi\sum_{k\in\mathcal{K}}\mathbb{E}\{U_k(a_k[t_s])\}
- \mathbb{E}\{\Delta L^{\mathtt{UB}}[t_s]\}
\label{eq:JFCS4a} \\
& {\mathrm{s.t.}} \quad\ \eqref{eq:JFCS1c}, \eqref{eq:JFCS1d}, \eqref{eq:JFCS1e_Relaxed}. \label{eq:JFCS4b}
\end{IEEEeqnarray}
\end{subequations}
\textbf{Long-term subproblem (L-SP):} The flow-split distribution subproblem at time-frame $t$ is given as
\begin{align}\label{eq:JFCSLSP}
\textbf{L-SP}:\ \underset{\boldsymbol{\beta}_k[t]\in \mathscr{B}[t],\forall k}{\mathrm{max}} \quad \sum_{k\in\mathcal{K}}\mathscr{L}_k[t]
\end{align}
where $\mathscr{L}_k[t] = \sum_{(i,j)\in\mathcal{P}_k} \frac{q_k^{i,j}[t_s]}{\tau}\bigl(r_{k}^{i,j}(\mathbf{w}[t_s]) - \beta_k^{i,j}[t]A_k[t]\bigr)$. Although problem \eqref{eq:JFCSLSP} is a linear program in $\boldsymbol{\beta}$, it cannot be solved directly by standard optimization techniques because $A_k[t],\forall k$ are not completely known at the beginning of time-frame $t$.
\textbf{Short-term subproblems (S-SPs):} The congestion control subproblem at time-slot $t_s$ is
\begin{align}\label{eq:JFCSSSP1}
\textbf{S-SP1}:\ \underset{\boldsymbol{a}[t_s]\geq 0}{\mathrm{max}} \quad \sum_{k\in\mathcal{K}}\bigl(\varphi U_k(a_k[t_s]) - \frac{\hat{q}_k[t_s]}{\tau}a_k[t_s]\bigr)
\end{align}
which is an unconstrained convex problem. The optimal solution of \eqref{eq:JFCSSSP1} exists and is unique, given by $a_k^*[t_s] = U_k^{'-1}\bigl(\frac{\hat{q}_k[t_s]}{\varphi\tau} \bigr), \forall k$, where $U_k^{'-1}(\cdot)$ denotes the inverse function of the first derivative of $U_k(\cdot)$. Given the optimal solution $\boldsymbol{\beta}^*[t]$,
the short-term power control optimization subproblem (\textit{i.e.}, the weighted queue-length-based scheduling) at time-slot $t_s$ is given as
\begin{align}\label{eq:JFCSSSP2}
\textbf{S-SP2}: \underset{\mathbf{r}[t_s],\mathbf{w}[t_s]}{\mathrm{max}} \quad \sum_{k\in\mathcal{K}} \frac{\hat{q}_k[t_s]}{\tau}r_k(\mathbf{w}[t_s]), \ {\mathrm{s.t.}}\ \eqref{eq:JFCS1c}, \eqref{eq:JFCS1e_Relaxed}.
\end{align}
The overall intelligent resource management algorithm for solving the JFCS problem \eqref{eq:JFCS1} is summarized in Algorithm \ref{alg_JFCS}, where the solutions of subproblems will be provided next.
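Before presenting those solutions, the following schematic Python sketch outlines the two-timescale control flow of Algorithm~\ref{alg_JFCS}; \texttt{solve\_LSP} and \texttt{solve\_SSP2} are placeholders for the routines developed in Section~\ref{sec_RAAlgo}, and the congestion controller uses the closed-form solution of S-SP1 specialized to the log utility adopted later in the simulations. All names and values are illustrative.
\begin{verbatim}
import numpy as np

# Schematic driver for Algorithm 1 (two time scales). solve_LSP and solve_SSP2
# are placeholders for the routines of Section V.
def congestion_control(q_hat, phi, tau, A_max):
    # S-SP1 closed form: a_k* = min{ (U_k')^{-1}(q_hat_k/(phi*tau)), A_max }.
    # For U(a) = log(0.001 + a): U'(a) = 1/(0.001 + a), so (U')^{-1}(y) = 1/y - 0.001.
    y = np.maximum(q_hat / (phi * tau), 1e-12)
    return np.clip(1.0 / y - 0.001, 0.0, A_max)

def run_jfcs(T, T_f, K, P, phi, tau, A_max,
             solve_LSP, solve_SSP2, draw_arrivals, draw_channels):
    beta = np.full((K, P), 1.0 / P)             # uniform initial flow split
    q = np.zeros((K, P)); q_hat = np.zeros(K)   # empty queues
    for t in range(T):                          # long-term scale (Non-RT RIC)
        if t > 0:
            beta = solve_LSP(q, A)              # RL-based flow-split update
        A = draw_arrivals(K)                    # arrivals for this frame [bits/s]
        for s in range(T_f):                    # short-term scale (Near-RT RIC)
            H = draw_channels()
            a = congestion_control(q_hat, phi, tau, A_max)
            r_path = solve_SSP2(q_hat, beta, H) # per-path service rates [bits/s]
            q = np.maximum(q + beta * A[:, None] * tau - r_path * tau, 0.0)
            q_hat = np.maximum(q_hat + a * tau - r_path.sum(axis=1) * tau, 0.0)
    return q, q_hat, beta
\end{verbatim}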
\section{Proposed Algorithms for Solving Subproblems}\label{sec_RAAlgo}
We are now in a position to solve L-SP \eqref{eq:JFCSLSP} and S-SP2 \eqref{eq:JFCSSSP2} in their different time scales. The optimality of the latter depends heavily on the optimal flow-split decisions, which often require prior knowledge of the statistical information of all possible paths at the Non-RT RIC. However, the assumption of complete information is unrealistic due to the dynamic environment and the fact that the data collected from the RAN layer are updated to the Non-RT RIC only on the long-term scale. In this work, at time-frame $t$ we aim to exploit the historical system information accumulated over previous time-frames, which can be used to build a smoothed best response that maximizes the long-term utility of each data flow.
\subsection{Reinforcement Learning Algorithm for Solving L-SP \eqref{eq:JFCSLSP}}
The flow-split decision $\boldsymbol{\beta}_k[t]$ in problem \eqref{eq:JFCSLSP} can be determined separately for each data-flow by maximizing $\mathscr{L}_k[t]$. This implies that the larger the queue-length $q_k^{i,j}[t_s]$, the lower the flow-split decision value $\beta_k^{i,j}[t]$, so as to guarantee fairness among all RUs $(i,j)\in\mathcal{P}_k$ (\textit{i.e.}, to avoid large queue-lengths $q_k^{i,j}$ at some RUs in the next time-slot $t_{s+1}$).
Let us denote by $u_k^{i,j}[t]\triangleq \frac{q_k^{i,j}[t_s]}{\tau}\bigl(r_{k}^{i,j}(\mathbf{w}[t_s]) - \beta_k^{i,j}[t]A_k[t]\bigr)$ the instantaneous utility observation of data-flow $k$ at time-frame $t$ when selecting path $(i,j)\in\mathcal{P}_k$. The total utility observation of data-flow $k$, denoted by $ u_k[t]$, is thus
\begin{equation}
u_k[t] = \sum_{(i,j)\in\mathcal{P}_k}u_k^{i,j}[t].
\end{equation}
However, a smoothed best response cannot be built directly upon $u_k^{i,j}[t]$, since it is not revealed at the beginning of time-frame $t$.
Inspired by \cite{BennisTWC2013}, we denote $\hat{u}_k^{i,j}[t]$ as the estimated utility of data-flow $k$ at time-frame $t$ when selecting path $(i,j)$. In addition, the actual utility observed by data-flow $k$ at time-frame $t$, denoted by $\bar{u}_k[t]$, is given as $\bar{u}_k[t] = u_k[t-1]$, which is based on feedback from Near-RT RIC at time $t-1$. By initializing $\hat{u}_k^{i,j}[1] = 0$, the estimated utility of data-flow $k$ is updated for action $\mathbf{c}_k[t]= c_k^{i,j}[t]$ as follows:
\begin{align}\label{eq_utilityestimate}
\hat{u}_k^{i,j}[t] = \hat{u}_k^{i,j}[t-1]+ \eta_u[t]\mathbbm{1}_{\{\mathbf{c}_k[t] = c_k^{i,j}[t]\}}\bigl(\bar{u}_k[t]
- \hat{u}_k^{i,j}[t-1]\bigr), \ \forall t>1
\end{align}
where $\eta_u[t] > 0$ is a step size (learning rate) that decreases over time. The indicator function $\mathbbm{1}_{\{x=y\}}= 1$ (resp. 0) if the condition $x=y$ is true (resp. false).
Next, we denote $\hat{\boldsymbol{\theta}}_k[t]\triangleq[\hat{\theta}_k^{i,j}[t]]_{(i,j)\in\mathcal{P}_k}$ as the estimated regret vector of data-flow $k$, where each element is updated for action $\mathbf{c}_k[t]= c_k^{i,j}[t]$ as
\begin{align}\label{eq_regretestimate}
\hat{\theta}_k^{i,j}[t] = \hat{\theta}_k^{i,j}[t-1] + \eta_{\theta}[t]\mathbbm{1}_{\{\mathbf{c}_k[t] = c_k^{i,j}[t]\}}\bigl(\bar{u}_k[t]
- \hat{u}_k^{i,j}[t]- \hat{\theta}_k^{i,j}[t-1]\bigr),\ \forall t>1
\end{align}
with $\hat{\theta}_k^{i,j}[1] = 0$ and $\eta_{\theta}[t]$ being the learning rate. In order to achieve high performance in the long term, the L-SP must balance exploration and exploitation processes.
We note that trying all possible actions to choose the best paths (\textit{i.e.}, exploration) can offer the highest payoff, but at the cost of slow convergence and possibly prohibitive computation. During the exploitation process, always playing the action associated with the highest estimated utility in \eqref{eq_utilityestimate} will likely result in a very sub-optimal solution. To make this tradeoff more efficient, let us define the best response function $\hat{\boldsymbol{\beta}}[t] = f(\hat{\boldsymbol{\theta}}[t])$ as
\begin{IEEEeqnarray}{rCl}\label{eq_bestresponse}\small
f(\hat{\boldsymbol{\theta}}[t]) := \underset{\boldsymbol{\beta}_k[t]\in \mathscr{B}[t]}{\argmin}\Bigr\{h\bigr(\boldsymbol{\beta}[t]\bigl) - \lambda\sum_{k\in\mathcal{K}}\sum_{(i,j)\in\mathcal{P}_k}\beta_k^{i,j}[t]\hat{\theta}_k^{i,j}[t]\Bigl\}.\qquad
\end{IEEEeqnarray}
Here $\lambda$ is the so-called trade-off factor (a.k.a. Boltzmann temperature) and $h\bigr(\boldsymbol{\beta}[t]\bigl)$ denotes the regularization function. We note that when $\lambda \rightarrow 0$, it leads to uniform probabilities of all actions, \textit{i.e.}, $\beta_k^{i,j}[t]=1/|\mathcal{P}_k|,\forall (i,j)\in\mathcal{P}_k$. For $\lambda \rightarrow \infty$, the second term in \eqref{eq_bestresponse} will dominate the best response function and then the actions associated with highest estimated regret will be selected \cite{BennisTWC2013}.
\textbf{Regularization function:} The regularization function allows each data-flow to learn the best paths that maximize its own performance and stabilizes the flow-split decisions. The solutions of problem \eqref{eq:JFCSLSP} lie in the unit simplex for each data-flow. Therefore, we adopt the Gibbs-Shannon entropy as the regularization function, \textit{i.e.}, $h\bigr(\boldsymbol{\beta}[t]\bigl) = \sum_{k\in\mathcal{K}}\sum_{(i,j)\in\mathcal{P}_k}\beta_k^{i,j}[t]\ln\bigl( \beta_k^{i,j}[t]\bigr)$, which is $K$-strongly convex.
Substituting $h\bigr(\boldsymbol{\beta}[t]\bigl)$ into \eqref{eq_bestresponse}, we have
\begin{align}\label{eq_bestresponse2}
f(\hat{\boldsymbol{\theta}}[t]) := \underset{\boldsymbol{\beta}_k[t]\in \mathscr{B}[t], \forall k}{\argmin}\Bigr\{\sum_{k\in\mathcal{K}}\sum_{(i,j)\in\mathcal{P}_k}\beta_k^{i,j}[t]\ln\bigl( \beta_k^{i,j}[t]\bigr)
- \lambda\sum_{k\in\mathcal{K}}\sum_{(i,j)\in\mathcal{P}_k}\beta_k^{i,j}[t]\hat{\theta}_k^{i,j}[t]\Bigl\}.
\end{align}
The function $f(\hat{\boldsymbol{\theta}}[t])$ is convex and separable for each $\beta_k^{i,j}[t]$.
By solving the following equation
\begin{equation}\nonumber
\partial f(\hat{\boldsymbol{\theta}}[t])/\partial \beta_k^{i,j}[t] = \ln\bigl( \beta_k^{i,j}[t]\bigr) + 1 - \lambda\hat{\theta}_k^{i,j}[t] = 0
\end{equation}
we have $\beta_k^{i,j}[t] = f(\hat{\theta}_{k}^{i,j}[t]) = \exp{\bigl(\lambda\hat{\theta}_k^{i,j}[t]-1}\bigr)$. To ensure $\sum_{(i,j)\in\mathcal{P}_k }\beta_k^{i,j}[t]=1, \forall k$ (\textit{i.e.} the unit simplex for data-flow $k$), we normalize $f_{k}^{i,j}(\hat{\boldsymbol{\theta}}_{k}[t])$ through the exponentiated mirror function as
\begin{align}
f_{k}^{i,j}(\hat{\boldsymbol{\theta}}_{k}[t]) &= \frac{\exp{\bigl(\bigr[\lambda\hat{\theta}_k^{i,j}[t] -1\bigl]^+}\bigr)}{\sum_{(i',j')\in\mathcal{P}_k}\exp{\bigl(\bigr[\lambda\hat{\theta}_k^{i',j'}[t]-1\bigl]^+}\bigr)} \nonumber\\
&= \frac{\exp{\bigl(\lambda\bigr[\hat{\theta}_k^{i,j}[t]\bigl]^+}\bigr)}{\sum_{(i',j')\in\mathcal{P}_k}\exp{\bigl(\lambda\bigr[\hat{\theta}_k^{i',j'}[t]\bigl]^+}\bigr)}.
\end{align}
As a result, the estimated value of each element of the flow-split vector $\boldsymbol{\beta}_k[t]$ is updated for all actions with the regret as
\begin{align}\label{eq_betaestimate}
\beta_k^{i,j}[t] =\beta_k^{i,j}[t-1] + \eta_{\beta}[t]\bigl(f_{k}^{i,j}(\hat{\boldsymbol{\theta}}_{k}[t])-\beta_k^{i,j}[t-1]\bigr)
\end{align}
for $t>1$, where $\boldsymbol{\beta}_k[1] = \frac{1}{|\mathcal{P}_k|}[1,\cdots,1]$ and $\eta_{\beta}[t]$ is the learning rate. The three-step reinforcement learning procedure consists of \eqref{eq_utilityestimate}, \eqref{eq_regretestimate} and \eqref{eq_betaestimate}, which do not require expensive computations or projections onto the feasible set.
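A minimal Python sketch of one iteration of this procedure for a single data-flow is given below; the learning-rate exponents follow the schedules used later in Section~\ref{sec_NumericalResults_A}, the feedback values are synthetic, and all names are illustrative.
\begin{verbatim}
import numpy as np

# One long-term iteration of the three-step RL procedure for a single data-flow
# with P candidate paths (utility estimate, regret estimate, flow-split update).
def rl_flow_split_step(t, chosen, u_bar, u_hat, theta_hat, beta, lam):
    """chosen: index of the path played in frame t; u_bar: realized utility fed
    back from the Near-RT RIC; u_hat, theta_hat, beta: length-P vectors."""
    eta_u, eta_th, eta_b = (t + 1) ** -0.51, (t + 1) ** -0.55, (t + 1) ** -0.6
    u_hat, theta_hat = u_hat.copy(), theta_hat.copy()
    u_hat[chosen] += eta_u * (u_bar - u_hat[chosen])                            # utility estimate
    theta_hat[chosen] += eta_th * (u_bar - u_hat[chosen] - theta_hat[chosen])   # regret estimate
    logits = lam * np.maximum(theta_hat, 0.0)        # exponentiated (mirror) best response
    f = np.exp(logits - logits.max()); f /= f.sum()
    beta = beta + eta_b * (f - beta)                 # flow-split update (stays on the simplex)
    return u_hat, theta_hat, beta

P = 4
u_hat, theta_hat, beta = np.zeros(P), np.zeros(P), np.full(P, 1.0 / P)
rng = np.random.default_rng(0)
for t in range(1, 100):
    chosen = int(rng.integers(P))
    u_bar = float(rng.uniform(0.0, 1.0))             # synthetic feedback for illustration
    u_hat, theta_hat, beta = rl_flow_split_step(t, chosen, u_bar,
                                                u_hat, theta_hat, beta, lam=0.3)
print(beta, beta.sum())
\end{verbatim}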
\textbf{Convergence properties}:
The convergence conditions for the three-step reinforcement learning procedure are given as follows:
\begin{align}\label{eq_learningcondi}
&\underset{T\to\infty}{\lim}\sum\nolimits_{t=1}^{T}\eta_u[t] = +\infty \ \&\ \underset{T\to\infty}{\lim}\sum\nolimits_{t=1}^{T}\eta_u^2[t] < +\infty\nonumber\\
&\underset{T\to\infty}{\lim}\sum\nolimits_{t=1}^{T}\eta_\theta[t] = +\infty \ \&\ \underset{T\to\infty}{\lim}\sum\nolimits_{t=1}^{T}\eta_{\theta}^2[t] < +\infty\nonumber\\
&\underset{T\to\infty}{\lim}\sum\nolimits_{t=1}^{T}\eta_{\beta}[t] = +\infty \ \&\ \underset{T\to\infty}{\lim}\sum\nolimits_{t=1}^{T}\eta_{\beta}^2[t] < +\infty\nonumber\\
&\underset{t\to\infty}{\lim}\frac{\eta_\theta[t]}{\eta_u[t]} = 0 \ \&\ \underset{t\to\infty}{\lim}\frac{\eta_{\beta}[t]}{\eta_\theta[t]} = 0.
\end{align}
This implies that the learning rates must decrease over time to guarantee the convergence of the proposed three-step RL procedure. The detailed proof for a multiple-timescale RL algorithm can be found in \cite{BennisTWC2013,leslie2003convergent}. Following the same arguments as those in \cite{BennisTWC2013} and the conditions in \eqref{eq_learningcondi}, the three-step RL procedure in \eqref{eq_utilityestimate}, \eqref{eq_regretestimate} and \eqref{eq_betaestimate} converges, for a positive trade-off factor $\lambda >0$, to the optimal solution, \textit{i.e.}, $\underset{t\to\infty}{\lim} \boldsymbol{\beta}_k[t] = \boldsymbol{\beta}_k^*,\ \forall k\in\mathcal{K}$.
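For instance, the polynomially decaying step sizes $\eta_u[t]=(t+1)^{-0.51}$, $\eta_\theta[t]=(t+1)^{-0.55}$ and $\eta_\beta[t]=(t+1)^{-0.6}$ adopted in Section~\ref{sec_NumericalResults_A} satisfy \eqref{eq_learningcondi}: each exponent lies in $(0.5,1]$, so the sums of the step sizes diverge while the sums of their squares converge, and $\eta_\theta[t]/\eta_u[t]=(t+1)^{-0.04}\to 0$ as well as $\eta_\beta[t]/\eta_\theta[t]=(t+1)^{-0.05}\to 0$.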
\subsection{Proposed Algorithm for Solving S-SP2 \eqref{eq:JFCSSSP2}}
Given the optimal flow-split distribution of data-flow $k$, $\boldsymbol{\beta}_k^*[t]$, we denote by $\mathcal{P}_k^*[t]$ the set of selected path states in time-frame $t$, which only includes $c_k^{i,j}[t]=1$ with $(i,j)\in\mathcal{P}_k$.
In this section, we present two low-complexity transmission designs for $\mathbf{w}$, \textit{namely} maximum ratio transmission (MRT) and zero-forcing beamforming (ZFBF), and then develop low-complexity iterative algorithms for their solution.
\subsubsection{MRT-Based Transmission Design}
Each RU $(i,j)$ performs MRT beamforming (a.k.a. channel-matched beamforming) using local CSI as $\mathbf{w}_{k}^{i,j}[t_s] = \frac{\sqrt{p_{k}^{i,j}[t_s]}}{\sqrt{\nu_{k}^{i,j}[t_s]}}\mathbf{h}_{k}^{i,j}[t_s],\ \forall (i,j)\in\mathcal{P}_k[t]$ and $k\in\mathcal{K}$, where $\nu_{k}^{i,j}[t_s] \triangleq \|\mathbf{h}_{k}^{i,j}[t_s]\|_2^2$ and $p_{k}^{i,j}[t_s]$ is the transmit power coefficient allocated to UE $k$ from RU $(i,j)$ in time-slot $t_s$. The corresponding SINR is rewritten as
\begin{align}
\gamma_{k}^{i,j}(\mathbf{p}[t_s])=\frac{p_{k}^{i,j}[t_s]\nu_{k}^{i,j}[t_s]}{\Phi_k^{i,j}(\mathbf{p}[t_s])}
\end{align}
where $\Phi_k^{i,j}(\mathbf{p}[t_s]) \triangleq \sum_{(i',j')\in\mathcal{P}_k[t]\setminus\{(i,j)\}}p_{k}^{i',j'}[t_s]\nu_{k}^{i',j'}[t_s] + \sum_{k'\in\mathcal{K}\setminus\{k\}}\sum_{(i,j)\in\mathcal{P}_{k'}[t]}p_{k'}^{i,j}[t_s]\frac{|(\mathbf{h}_{k}^{i,j}[t_s])^{\mathsf{H}}\mathbf{h}_{k'}^{i,j}[t_s]|^2}{\nu_{k'}^{i,j}[t_s]} +N_0$ is linear in $\mathbf{p}[t_s]\triangleq \bigl[p_{k}^{i,j}[t_s]\bigr]_{k\in\mathcal{K},(i,j)\in\mathcal{P}_k}$.
As a result, the short-term power optimization problem \eqref{eq:JFCSSSP2} with MRT reduces to the following problem:
\begin{subequations}\label{eq:JFCSSSP2_Equi}
\begin{IEEEeqnarray}{cl}
\underset{\mathbf{p}[t_s]}{\mathrm{max}}& \quad \sum_{k\in\mathcal{K}} \frac{\hat{q}_k[t_s]}{\tau}r_k(\mathbf{p}[t_s])\label{eq:JFCSSSP2_Equia}\\
{\mathrm{s.t.}}&\quad \bar{R}_k^{i,j}[t_s] \leq r_{k}^{i,j}(\mathbf{p}[t_s])\tau,\ \forall k, (i,j)\label{eq:JFCSSSP2_Equib}\\
&\quad \sum_{k\in\mathcal{K}}p_{k}^{i,j}[t_s] \leq P_{\max}^{i,j},\ \forall (i,j)\label{eq:JFCSSSP2_Equic}
\end{IEEEeqnarray}
\end{subequations}
where $r_k(\mathbf{p}[t_s]) = \sum_{(i,j)\in\mathcal{P}_k^*}r_{k}^{i,j }(\mathbf{p}[t_s])$ with $r_{k}^{i,j}(\mathbf{p}[t_s])\triangleq W\log_2\bigl(1 + \gamma_{k}^{i,j}(\mathbf{p}[t_s])\bigr)$, and $\bar{R}_k^{i,j}[t_s]\triangleq \sum_{\ell=1}^{t}\beta_k^{i,j}[\ell]\bar{A}_k\tau - (1-\epsilon_k)\bar{A}_k\bar{d}_k - \sum_{\ell=1}^{t_{s-1}}r_{k}^{i,j}(\mathbf{p}[\ell])\tau.$
Problem \eqref{eq:JFCSSSP2_Equi} is nonconvex due to the nonconcavity of $r_{k}^{i,j }(\mathbf{p}[t_s])$. We now apply the inner approximation (IA) method to effectively solve \eqref{eq:JFCSSSP2_Equi} in an iterative manner. Following from inequality \eqref{eq_IAapproConcave} in Appendix \ref{app:DerivationofInequ} with $v=p_{k}^{i,j}[t_s]\|\mathbf{h}_{k}^{i,j}[t_s]\|_2^2$ and $z=\Phi_k^{i,j}(\mathbf{p}[t_s])$, the global concave lower bound of $r_{k}^{i,j}(\mathbf{p}[t_s])$ at the feasible point $\mathbf{p}^{(n)}[t_s]$ found at iteration $n$, denoted by ${r}_{k}^{i,j(n)}(\mathbf{p}[t_s];{\mathbf{p}}^{(n)}[t_s])$, is given as
\begin{align}
r_{k}^{i,j}(\mathbf{p}[t_s]) &\geq r_{k}^{i,j}({\mathbf{p}}^{(n)}[t_s]) - W\log_2e\biggl[ \gamma_{k}^{i,j}({\mathbf{p}}^{(n)}[t_s]) \nonumber \\
&\ - 2\frac{\nu_{k}^{i,j}[t_s]\sqrt{{p}_{k}^{i,j(n)}[t_s]}\sqrt{p_{k}^{i,j}[t_s]}}{\Phi_k^{i,j}({\mathbf{p}}^{(n)}[t_s])} + \gamma_{k}^{i,j}({\mathbf{p}}^{(n)}[t_s])\nonumber\\
&\ \times\frac{p_{k}^{i,j}[t_s]\nu_{k}^{i,j}[t_s] + \Phi_k^{i,j}(\mathbf{p}[t_s]) }{{p}_{k}^{i,j(n)}[t_s]\nu_{k}^{i,j}[t_s] + \Phi_k^{i,j}({\mathbf{p}}^{(n)}[t_s])} \biggl]\nonumber\\
&:={r}_{k}^{i,j(n)}(\mathbf{p}[t_s];{\mathbf{p}}^{(n)}[t_s])\label{eq_Concaveofrate}
\end{align}
with ${r}_{k}^{i,j(n)}({\mathbf{p}}^{(n)}[t_s];{\mathbf{p}}^{(n)}[t_s]) = W\log_2\bigl(1 + \gamma_{k}^{i,j}(\mathbf{p}^{(n)}[t_s])\bigr)$. As a result, we successively solve the following inner convex approximate program of \eqref{eq:JFCSSSP2_Equi} at iteration $n$:
\begin{subequations}\label{eq:JFCSSSP2_Convex}
\begin{IEEEeqnarray}{cl}
\underset{\mathbf{p}[t_s]}{\mathrm{max}}& \ \sum_{k\in\mathcal{K}} \frac{\hat{q}_k[t_s]}{\tau}{r}_k^{(n)}(\mathbf{p}[t_s])\label{eq:JFCSSSP2_Convexa}\\
{\mathrm{s.t.}}&\ \bar{R}_k^{i,j}[t_s] \leq {r}_{k}^{i,j(n)}(\mathbf{p}[t_s];{\mathbf{p}}^{(n)}[t_s])\tau,\ \forall k\in\mathcal{K}, (i,j)\label{eq:JFCSSSP2_Convexb}\qquad\\
&\ \sum_{k\in\mathcal{K}}p_{k}^{i,j}[t_s] \leq P_{\max}^{i,j},\ \forall (i,j)\label{eq:JFCSSSP2_Convexc}
\end{IEEEeqnarray}
\end{subequations}
and update the feasible point ${\mathbf{p}}^{(n)}[t_s]$ until convergence, where ${r}_k^{(n)}(\mathbf{p}[t_s])=\sum_{(i,j)\in\mathcal{P}_k^*}r_{k}^{i,j(n)}(\mathbf{p}[t_s];$ ${\mathbf{p}}^{(n)}[t_s])$. The proposed iterative procedure to solve \eqref{eq:JFCSSSP2} is summarized in Algorithm \ref{alg_SSP2}. An initial feasible value for $\mathbf{p}^{(0)}[t_s]$ to start Algorithm \ref{alg_SSP2} is easily found by successively solving the following simple convex program:
\begin{subequations}\label{eq:JFCSSSP2_ConvexInf}
\begin{IEEEeqnarray}{cl}
\underset{\mathbf{p}[t_s]}{\mathrm{max}}& \ \varrho \triangleq \underset{\forall k,(i,j)}{\min}\bigl\{{r}_{k}^{i,j(n)}(\mathbf{p}[t_s];{\mathbf{p}}^{(n)}[t_s])\tau - \bar{R}_k^{i,j}[t_s]\bigr\}\qquad\label{eq:JFCSSSP2_ConvexInfa}\\
{\mathrm{s.t.}}&\quad \sum_{k\in\mathcal{K}}p_{k}^{i,j}[t_s] \leq P_{\max}^{i,j},\ \forall (i,j)\label{eq:JFCSSSP2_ConvexInfb}
\end{IEEEeqnarray}
\end{subequations}
until reaching $\varrho > 0$.
\begin{algorithm}[t]
\begin{algorithmic}[1]
\fontsize{10}{10}\selectfont
\protect\caption{Proposed Iterative Algorithm for Solving \eqref{eq:JFCSSSP2} with MRT-Based Transmission Design}
\label{alg_SSP2}
\global\long\def\textbf{Initialization:}{\textbf{Initialization:}}
\REQUIRE Set $n:=1$ and generate an initial feasible value for $\mathbf{p}^{(0)}[t_s]$ to constraints in \eqref{eq:JFCSSSP2_Convex}
\REPEAT
\STATE Solve \eqref{eq:JFCSSSP2_Convex} to obtain the optimal transmission power $\mathbf{p}^{*}[t_s]$
\STATE Update\ \ ${\mathbf{p}}^{(n)}[t_s] := \mathbf{p}^{*}[t_s]$
\STATE Set $n:=n+1$
\UNTIL Convergence\\
\STATE{\textbf{Output:}} $\mathbf{p}^{*}[t_s]={\mathbf{p}}^{(n)}[t_s]$ and $\mathbf{w}_{k}^{i,j,*}[t_s] = \frac{\sqrt{p_{k}^{i,j*}[t_s]}}{\sqrt{\nu_{k}^{i,j}[t_s]}}\mathbf{h}_{k}^{i,j}[t_s],\ \forall k, (i,j)$.
\end{algorithmic} \end{algorithm}
\textit{Convergence and complexity analysis:} The convergence of an IA-based algorithm was established in \cite{Beck:JGO:10}. In particular, Algorithm \ref{alg_SSP2} generates an improved solution after each iteration, which converges to at least a locally optimal solution of \eqref{eq:JFCSSSP2} as $n \rightarrow\infty$. The worst-case per-iteration complexity of Algorithm \ref{alg_SSP2} is $\mathcal{O}\bigl(\sqrt{c}\,v^3 \bigr)$ by the interior-point method \cite[Chapter 6]{Ben:2001}, where $c=KJ + J$ and $v=KJ$ are the numbers of linear constraints and scalar variables, respectively.
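To make the IA recursion concrete, the following Python sketch (using CVXPY) runs the successive approximation \eqref{eq:JFCSSSP2_Convex} for an illustrative single-RU instance in which each UE is served over a single path; the per-path rate constraints \eqref{eq:JFCSSSP2_Convexb} are omitted for brevity, and all numerical values are placeholders rather than the simulation setup of Section~\ref{sec_NumericalResults}.
\begin{verbatim}
import numpy as np
import cvxpy as cp

# Illustrative single-RU instance of the IA loop (Algorithm 2): each UE has one
# serving path, so SINR_k = p_k*nu_k / (sum_{k'!=k} p_k'*g_{k,k'} + N0).
rng = np.random.default_rng(1)
K, M, W, N0 = 4, 16, 20e6, 1e-13
P_max = 10 ** (43 / 10 - 3)                                  # 43 dBm in Watts
weights = rng.uniform(1.0, 2.0, K)                           # stands in for q_hat_k / tau
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) * 1e-5
nu = np.linalg.norm(H, axis=0) ** 2
G = np.abs(H.conj().T @ H) ** 2 / nu[None, :]                # G[k,k'] = |h_k^H h_k'|^2 / nu_k'
np.fill_diagonal(G, 0.0)

def rate_sinr_phi(p):
    Phi = G @ p + N0
    sinr = p * nu / Phi
    return W * np.log2(1.0 + sinr), sinr, Phi

p_n = np.full(K, P_max / K)                                  # feasible starting point
for it in range(15):                                         # successive convex approximation
    r_n, sinr_n, Phi_n = rate_sinr_phi(p_n)
    p = cp.Variable(K, nonneg=True)
    Phi = G @ p + N0                                         # affine in p
    c1 = 2 * W * np.log2(np.e) * nu * np.sqrt(p_n) / Phi_n
    c2 = W * np.log2(np.e) * sinr_n / (p_n * nu + Phi_n)
    r_lb = (r_n - W * np.log2(np.e) * sinr_n
            + cp.multiply(c1, cp.sqrt(p)) - cp.multiply(c2, cp.multiply(nu, p) + Phi))
    cp.Problem(cp.Maximize(weights @ r_lb), [cp.sum(p) <= P_max]).solve()
    p_n = np.maximum(p.value, 1e-12)
print("weighted sum-rate:", weights @ rate_sinr_phi(p_n)[0])
\end{verbatim}
Each surrogate objective is concave (a nonnegative combination of square roots and affine terms) and tight at $\mathbf{p}^{(n)}$, so the weighted sum-rate is non-decreasing over the iterations, as stated above.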
\subsubsection{ZFBF-Based Transmission Design}
To make ZFBF efficient and feasible, the number of antennas of each RU $(i,j)$ is required to be larger than the number of UEs, \textit{i.e.}, $M_{i,j} > K,\ \forall (i,j)\in\mathcal{J}$, so as to cancel the inter-user interference transmitted by this RU. In addition, the system bandwidth is equally allocated to each RU $(i,j)$, \textit{i.e.}, $W^{i,j}=W/J$, to completely remove the intra-user interference and the interference caused by other RUs. Under the proposed ZFBF technique, beamformer $\mathbf{w}_k^{i,j}[t_s]$ at RU $(i,j)$ is designed to satisfy $(\mathbf{h}_{k'}^{i,j}[t_s])^{\mathsf{H}}\mathbf{w}_k^{i,j}[t_s]=0, \forall k'\in\mathcal{K}\setminus\{k\}$. We denote by $\mathbf{H}_{-k}^{i,j}[t_s]\triangleq \bigr[\mathbf{h}_{1}^{i,j}[t_s]\cdots\mathbf{h}_{k-1}^{i,j}[t_s]\ \mathbf{h}_{k+1}^{i,j}[t_s]\cdots\mathbf{h}_{K}^{i,j}[t_s]\bigr]\in\mathbb{C}^{M_{i,j}\times (K-1)}$ the channel matrix from RU ($i,j$) to all UEs except UE $k$. Let $\mathbf{V}_{k}^{i,j}[t_s]\in\mathbb{C}^{M_{i,j}\times(M_{i,j}-K+1)}$ be the null space of $(\mathbf{H}_{-k}^{i,j}[t_s])^{\mathsf{H}}$. We can then write $\mathbf{w}_k^{i,j}[t_s] = \mathbf{V}_{k}^{i,j}[t_s]\tilde{\mathbf{w}}_k^{i,j}[t_s]$, where $\tilde{\mathbf{w}}_k^{i,j}[t_s]\in\mathbb{C}^{(M_{i,j}-K+1)\times 1}, \forall k,(i,j)$ are the solutions to the ZFBF-based problem. By defining $\tilde{\nu}_k^{i,j}[t_s]\triangleq\|(\tilde{\mathbf{h}}_k^{i,j}[t_s])^{\mathsf{H}}\|_2^2$ with $\tilde{\mathbf{h}}_k^{i,j}[t_s]\triangleq(\mathbf{h}_k^{i,j}[t_s])^{\mathsf{H}}\mathbf{V}_{k}^{i,j}[t_s]\in\mathbb{C}^{1\times (M_{i,j}-K+1)}$, we can equivalently express $\tilde{\mathbf{w}}_k^{i,j}[t_s]$ as $\tilde{\mathbf{w}}_k^{i,j}[t_s]=\sqrt{\tilde{p}_{k}^{i,j}[t_s]}\frac{(\tilde{\mathbf{h}}_k^{i,j}[t_s])^{\mathsf{H}}}{\sqrt{\tilde{\nu}_k^{i,j}[t_s]}}$, where $\tilde{\mathbf{p}}[t_s]\triangleq\bigl[\tilde{p}_{k}^{i,j}[t_s]\bigr]_{k,(i,j)\in\mathcal{P}_k}$ are the solutions to the following problem:
\begin{subequations}\label{eq:JFCSSSP2_ZFBF}
\begin{IEEEeqnarray}{cl}
\underset{\tilde{\mathbf{p}}[t_s]}{\mathrm{max}}& \quad \sum_{k\in\mathcal{K}} \frac{\hat{q}_k[t_s]}{\tau}{r}_k(\tilde{p}_{k}^{i,j}[t_s])\label{eq:JFCSSSP2_ZFBFa}\\
{\mathrm{s.t.}}&\quad \bar{R}_k^{i,j}[t_s] \leq r_{k}^{i,j}(\tilde{p}_{k}^{i,j}[t_s])\tau,\ \forall k, (i,j)\label{eq:JFCSSSP2_ZFBFb}\\
&\quad \sum_{k\in\mathcal{K}}\tilde{p}_{k}^{i,j}[t_s] \leq P_{\max}^{i,j},\ \forall (i,j)\label{eq:JFCSSSP2_ZFBFc}
\end{IEEEeqnarray}
\end{subequations}
where $r_{k}^{i,j}(\tilde{p}_{k}^{i,j}[t_s])\triangleq W^{i,j}\log_2\bigl(1 + \frac{\tilde{p}_{k}^{i,j}[t_s]\tilde{\nu}_k^{i,j}[t_s]}{N_0}\bigr)$. The function $r_{k}^{i,j}(\tilde{p}_{k}^{i,j}[t_s])$ is concave in $\tilde{p}_{k}^{i,j}[t_s]$, leading to the convexity of problem \eqref{eq:JFCSSSP2_ZFBF}. From \eqref{eq:JFCSSSP2_ZFBFb}, one can show that $\tilde{p}_{k}^{i,j}[t_s] \geq \tilde{p}_{k,\min}^{i,j}[t_s]:=\frac{N_0}{\tilde{\nu}_k^{i,j}[t_s]} \bigl(2^{\bar{R}_k^{i,j}[t_s]/(W^{i,j}\tau)} -1\bigr)$. We now develop an efficient method to solve \eqref{eq:JFCSSSP2_ZFBF} by formulating the partial Lagrangian as
\begin{align}
L(\tilde{\mathbf{p}}[t_s],\boldsymbol{\mu}) = \sum_{k\in\mathcal{K}} \frac{\hat{q}_k[t_s]}{\tau}{r}_k(\tilde{p}_{k}^{i,j}[t_s])
+ \sum_{(i,j)\in\mathcal{J}}\mu_{i,j}\bigl(P_{\max}^{i,j} - \sum_{k\in\mathcal{K}}\tilde{p}_{k}^{i,j}[t_s] \bigr)
\end{align}
where $\boldsymbol{\mu}\triangleq\{\mu_{i,j}\geq 0\}_{(i,j)\in\mathcal{J}}$ are the Lagrange multipliers of constraint \eqref{eq:JFCSSSP2_ZFBFc}. The dual function can be written as $g(\boldsymbol{\mu}) = \underset{\tilde{\mathbf{p}}[t_s] \geq 0}{\max}\{L(\tilde{\mathbf{p}}[t_s],\boldsymbol{\mu})|\tilde{p}_{k}^{i,j}[t_s] \geq \tilde{p}_{k,\min}^{i,j}[t_s], \forall k,$ $(i,j)\}$. We note that $L(\tilde{\mathbf{p}}[t_s],\boldsymbol{\mu})$ is separable with respect to $\tilde{p}_{k}^{i,j}[t_s]$. Thus, by solving
\begin{align}
\tilde{p}_{k}^{i,j*}[t_s] = \underset{\tilde{p}_{k}^{i,j}[t_s]\geq \tilde{p}_{k,\min}^{i,j}[t_s]}{\argmax}\Bigr\{\frac{\hat{q}_k[t_s]}{\tau} W^{i,j}\log_2\Bigl(1 + \frac{\tilde{p}_{k}^{i,j}[t_s]\tilde{\nu}_k^{i,j}[t_s]}{N_0}\Bigr)
- \mu_{i,j}\tilde{p}_{k}^{i,j}[t_s] \Bigl\}
\end{align}
for a given $\mu_{i,j}$, the optimal solution to $\tilde{p}_{k}^{i,j}[t_s] $ is given as
\begin{align}\label{eq_powerZFBF}
\tilde{p}_{k}^{i,j*}[t_s]=\max\Bigr\{\tilde{p}_{k,\min}^{i,j}[t_s], \frac{\hat{q}_k[t_s] W^{i,j}}{\tau\mu_{i,j}\ln 2} -\frac{N_0}{\tilde{\nu}_k^{i,j}[t_s]}\Bigl\}.
\end{align}
\begin{algorithm}[t]
\begin{algorithmic}[1]
\fontsize{10}{10}\selectfont
\protect\caption{Proposed Low-Complexity Algorithm for Solving \eqref{eq:JFCSSSP2} with ZFBF-Based Transmission Design}
\label{alg_SSP3}
\global\long\def\textbf{Initialization:}{\textbf{Initialization:}}
\REQUIRE Set $n:=1$ and generate initial values $\underline{\mu}_{i,j}=0$ and a sufficiently large $\overline{\mu}_{i,j}, \forall (i,j)\in\mathcal{J}$
\FOR{each RU $(i,j)\in\mathcal{J}$ \textit{in parallel}}
\REPEAT
\STATE Compute $\mu_{i,j}^{(n)}=(\underline{\mu}_{i,j}+\overline{\mu}_{i,j})/2$ and $\tilde{p}_{k}^{i,j(n)}[t_s]$ as\qquad in \eqref{eq_powerZFBF}
\IF{\ $\sum_{k\in\mathcal{K}}\tilde{p}_{k}^{i,j(n)}[t_s] - P_{\max}^{i,j} \leq 0$}
\STATE Update\ $\overline{\mu}_{i,j}:=\mu_{i,j}^{(n)}$
\ELSE
\STATE Update\ $\underline{\mu}_{i,j}:=\mu_{i,j}^{(n)}$
\ENDIF
\STATE Set $n:=n+1$
\UNTIL $\overline\mu_{i,j}-\underline\mu_{i,j} \leq \delta$\ \{/*\textit{Satisfying a given accuracy level}*/\}
\ENDFOR
\STATE{\textbf{Output:}} $\mu_{i,j}^*=\mu_{i,j}^{(n)}$, $\tilde{p}_{k}^{i,j*}[t_s]=\max\Bigr\{\tilde{p}_{k,\min}^{i,j}[t_s], \frac{\hat{q}_k[t_s]W^{i,j}}{\tau\mu_{i,j}^*\ln 2} -\frac{N_0}{\tilde{\nu}_k^{i,j}[t_s]}\Bigl\}$ and $\mathbf{w}_k^{i,j,*}[t_s]=\frac{\sqrt{\tilde{p}_{k}^{i,j*}[t_s]}}{\sqrt{\tilde{\nu}_k^{i,j}[t_s]}}\mathbf{V}_{k}^{i,j}[t_s](\tilde{\mathbf{h}}_k^{i,j}[t_s])^{\mathsf{H}},\ \forall k,(i,j)$.
\end{algorithmic} \end{algorithm}
The optimal Lagrange multiplier $\mu_{i,j}$ is efficiently found by applying a bisection search method between $\underline{\mu}_{i,j}=0$ and a sufficiently large $\overline{\mu}_{i,j}$. An efficient algorithm for solving \eqref{eq:JFCSSSP2} with ZFBF is summarized in Algorithm \ref{alg_SSP3}, which does not rely on existing convex optimization solvers.
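A minimal Python sketch of this per-RU bisection, using the closed-form powers in \eqref{eq_powerZFBF}, is given below; the input values are illustrative and infeasibility handling is omitted.
\begin{verbatim}
import numpy as np

N0 = 1e-13

# Per-RU bisection of Algorithm 3 with the closed-form water-filling-type powers.
def zfbf_powers(q_hat, nu_eff, p_min, W_ij, tau, P_max, tol=1e-9):
    """q_hat: virtual queue-lengths; nu_eff: effective ZF gains |h_tilde_k|^2;
    p_min: minimum powers enforcing the per-path rate constraints."""
    def p_of(mu):
        return np.maximum(p_min, q_hat * W_ij / (tau * mu * np.log(2)) - N0 / nu_eff)
    mu_lo, mu_hi = 1e-12, 1e12                   # a sufficiently large upper point
    while mu_hi - mu_lo > tol:
        mu = 0.5 * (mu_lo + mu_hi)
        if p_of(mu).sum() <= P_max:
            mu_hi = mu                           # budget met: the multiplier can decrease
        else:
            mu_lo = mu                           # budget violated: increase the multiplier
    return p_of(mu_hi)

p = zfbf_powers(q_hat=np.array([2.0, 1.0, 3.0]),
                nu_eff=np.array([1e-9, 2e-9, 5e-10]),
                p_min=np.array([0.1, 0.1, 0.1]),
                W_ij=20e6 / 8, tau=1e-3, P_max=19.95)
print(p, p.sum())
\end{verbatim}
Since the allocated powers are non-increasing in the multiplier, the bisection converges to the smallest multiplier for which the power budget is met, in agreement with the complementary slackness of constraint \eqref{eq:JFCSSSP2_ZFBFc}.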
\subsection{ORAN-based Implementation of Algorithm \ref{alg_JFCS}}
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth,trim={0cm 0.0cm 0cm 0.0cm}]{ORAN_Implementation.eps}
\caption{ORAN Alliance reference architecture for implementing the proposed JFCS management scheme at time-frame $t$. }
\label{fig:ORANIplementation}
\end{figure}
Fig. \ref{fig:ORANIplementation} illustrates the key steps for implementing the proposed JFCS management scheme at time-frame $t$ in the ORAN architecture.
\renewcommand{\labelenumi}{(\arabic{enumi})}
\begin{enumerate}
\item[\circled{\stepcounter{enumi}\arabic{enumi}}] At the beginning of time-frame $t>1$, the three-step RL
procedure for solving L-SP is carried out at Non-RT RIC based on the collected data in SMO. The collected data include performance/observation and resource updates from CU, DU, RU and Near-RT RIC to SMO. For $t=1$, the flow-split decisions are initialized as $\boldsymbol{\beta}_k[1] = \frac{1}{|\mathcal{P}_k^{(1)}|}[1,\cdots,1], \forall k$ where $\mathcal{P}_k^{(1)}$ is the set of RUs in the feasible communication range of UE $k$.
\item[\circled{\stepcounter{enumi}\arabic{enumi}}] The optimal flow-split decisions $\boldsymbol{\beta}^*[t]$ are sent to Near-RT RIC via the A1 interface for real deployment.
\item[\circled{\stepcounter{enumi}\arabic{enumi}}] Given $\boldsymbol{\beta}^*[t]$, xAPPs deployed in Near-RT RIC control congestion and optimize RAN resources and functions in each time-slot $t_s$ by solving S-SP1 and S-SP2 to obtain the optimal solutions of congestion control $\boldsymbol{a}^*[t_s]$ and beamformer $\mathbf{w}^*[t_s]$.
\item[\circled{\stepcounter{enumi}\arabic{enumi}}] Subsequently, the RAN Data Analytic component in Near-RT RIC updates queue-lengths as in Step 6 of Algorithm \ref{alg_JFCS}.
\item[\circled{\stepcounter{enumi}\arabic{enumi}}] Given $\boldsymbol{\beta}^*[t]$ and $\mathbf{w}^*[t_s]$, the optimal service rate $\mathbf{r}(\mathbf{w}^*[t_s])$ is scheduled and applied to CU and DUs through the E2 interface.
\item[\circled{\stepcounter{enumi}\arabic{enumi}}] After $T_f$ time-slots in the short-term scale $t_s$, performance and observations (\textit{e.g.}, $\mathbf{q}[t], \mathbf{A}[t]$) are updated to SMO through the O1 interface to re-estimate the flow-split decision $\boldsymbol{\beta}^*[t+1]$.
\end{enumerate}
\section{Performance Analysis of The JFCS Framework}\label{sec_PerfAnaly}
In this section, we analyze the main theoretical performance results of Algorithm \ref{alg_JFCS} and discuss their key insights, followed by concrete proofs of the theorems.
\begin{assumption}\label{assump_2} To facilitate the analysis, we make the following additional assumptions.
\begin{itemize}
\item Under the limited transmit power budget at RUs, the achievable rate of UE $k$ is upper bounded by $r^{\max} >0$, \textit{i.e.}, $r_k(\mathbf{w}[t_s])\leq r^{\max},\ \forall k, t_s$.
\item The congestion control variable $a_k[t_s]$ satisfies the condition $\mathbb{E}\{a_k^2[t_s]\} \leq A_1^{\max}$, where $A_1^{\max}$ is a sufficiently large positive constant \cite{LiuJSAC2017}.
\end{itemize}
\end{assumption}
\begin{theorem}[Bounding the mean divergence of the auxiliary queue-length]\label{Theo_1} For a given scaling factor $\varphi$, let $\hat{\mathbf{q}}^{\infty}_{(\varphi)}$ and $\hat{\mathbf{q}}^*_{(\varphi)}$ be the steady-state and optimal queue-lengths, respectively. From Assumptions \ref{assp:1} and \ref{assump_2}, the expected upper bound of the divergence of $\hat{\mathbf{q}}^{\infty}_{(\varphi)}$ from $\hat{\mathbf{q}}^*_{(\varphi)}$ is given as
\begin{equation}\label{eq_theo1eq}
\mathbb{E}\{\|\hat{\mathbf{q}}^{\infty}_{(\varphi)} - \hat{\mathbf{q}}^*_{(\varphi)}\|_2\} \leq \mathsf{C}_1\sqrt{\varphi} = \mathcal{O}(\sqrt{\varphi})
\end{equation}
where $\mathsf{C}_1 \triangleq \sqrt{\frac{K\tau^2\Psi}{2}\bigl(A_1^{\max} + (r^{\max})^2\bigr)}$ is a positive constant.
\end{theorem}
\noindent The proof is detailed in Appendix \ref{App_B}. Theorem \ref{Theo_1}
implies that the divergence of the steady-state queue-length is bounded by $\mathcal{O}(\sqrt{\varphi})$. In particular, the smaller the value of $\varphi$, the smaller the divergence of $\hat{\mathbf{q}}^{\infty}_{(\varphi)}$. However, a small $\varphi$ will also result in a small congestion control rate and a faster convergence. When $\varphi$ is large, a better congestion control rate is achieved but at the cost of larger steady-state queue-length divergence (\textit{i.e.}, larger delay and slower convergence). Hence, there exists an appropriate value of $\varphi$ to make this tradeoff more efficient. Theorem \ref{Theo_1} immediately leads to the following result.
\begin{corollary}[Queue-stability]\label{corollary1}
Given a scaling factor $\varphi$ and $\mathsf{C}_1$ in \eqref{eq_theo1eq}, the steady-state total queue-length remains finite and scales as $\mathcal{O}(\varphi) + \mathcal{O}(\sqrt{\varphi})$, \textit{i.e.} \begin{align}\label{steady-statebound}
\underset{t_s\to\infty}{\limsup}\ \mathbb{E}\{\|\hat{\mathbf{q}}_{(\varphi)}[t_s]\|_1\} &\leq \tau\Psi K A^{\max} \varphi + \sqrt{K}\mathsf{C}_1\sqrt{\varphi} \nonumber\\
&= \mathcal{O}(\varphi) + \mathcal{O}(\sqrt{\varphi}).
\end{align}
\end{corollary}
\begin{proof}The proof of \eqref{steady-statebound} is straightforward by noticing the fact that $\underset{t_s\to\infty}{\limsup}\ \mathbb{E}\{\|\hat{\mathbf{q}}_{(\varphi)}[t_s]\|_1\} = \mathbb{E}\{\|\hat{\mathbf{q}}_{(\varphi)}^{\infty}\|_1\}=\mathbb{E}\{\|\hat{\mathbf{q}}_{(\varphi)}^{\infty}-\hat{\mathbf{q}}_{(\varphi)}^{*}\|_1 + \|\hat{\mathbf{q}}_{(\varphi)}^{*}\|_1 \}$. Applying the inequality $\|\mathbf{x}\|_1\leq \sqrt{K}\|\mathbf{x}\|_2$ for any $\mathbf{x}\in\mathbb{R}_+^K$ yields: $\|\hat{\mathbf{q}}_{(\varphi)}^{\infty}-\hat{\mathbf{q}}_{(\varphi)}^{*}\|_1 + \|\hat{\mathbf{q}}_{(\varphi)}^{*}\|_1 \leq \sqrt{K}\|\hat{\mathbf{q}}_{(\varphi)}^{\infty}-\hat{\mathbf{q}}_{(\varphi)}^{*}\|_2 + \|\hat{\mathbf{q}}_{(\varphi)}^{*}\|_1$. From \eqref{eq_theo1eq} and Step 4 of Algorithm \ref{alg_JFCS}, it follows that
\begin{align}
&\underset{t_s\to\infty}{\limsup}\ \mathbb{E}\{\|\hat{\mathbf{q}}_{(\varphi)}[t_s]\|_1\} \nonumber\\
&\leq \sqrt{K}\mathbb{E}\{\|\hat{\mathbf{q}}_{(\varphi)}^{\infty}-\hat{\mathbf{q}}_{(\varphi)}^{*}\|_2\} + \|\hat{\mathbf{q}}_{(\varphi)}^{*}\|_1\nonumber\\
&\leq \sqrt{K}\mathsf{C}_1\sqrt{\varphi} + \tau \sum_{k\in\mathcal{K}}U_k^{'}\bigl(a_k^*\bigr) \varphi \leq \sqrt{K}\mathsf{C}_1\sqrt{\varphi} + \tau\Psi \sum_{k\in\mathcal{K}}a_k^* \varphi\ \nonumber\\
&\leq \sqrt{K}\mathsf{C}_1\sqrt{\varphi} + \tau\Psi K A^{\max} \varphi\ (\text{due to}\ a_k^* \leq A^{\max},\forall k)
\end{align}
showing \eqref{steady-statebound}.\end{proof}
Let $\boldsymbol{a}^{\infty}_{(\varphi)}\triangleq [a^{\infty}_{(\varphi),k}]^{\mathsf{T}}_{k\in\mathcal{K}}$ with $a^{\infty}_{(\varphi),k} = \mathbb{E}\bigl\{\min\{ A^{\max},$ $ U_k^{'-1}\bigl(\frac{\hat{q}_{(\varphi),k}^{\infty}}{\varphi\tau} \bigr)\}\bigl\}$ be the mean steady-state congestion control rate vector. We also denote by $U(\boldsymbol{a})\triangleq \sum_{k\in\mathcal{K}}U_k(a_k)$ the total utility function of problem \eqref{eq:JFCS2}. The utility-optimality of Algorithm \ref{alg_JFCS} is stated by the following theorem, whose proof is given in Appendix \ref{App_C}.
\begin{theorem}[Optimality]\label{Theo_2} Given a scaling factor $\varphi$, Algorithm \ref{alg_JFCS} produces the mean steady-state congestion control rate vector $\boldsymbol{a}^{\infty}_{(\varphi)}$, satisfying
\begin{align}\label{eq_theo2a}
\|\boldsymbol{a}^{\infty}_{(\varphi)} - \boldsymbol{a}^{*}\|_2 \leq \mathsf{C}_2 \frac{1}{\sqrt{\varphi}} = \mathcal{O}(1/\sqrt{\varphi})
\end{align}
where $\mathsf{C}_2 \triangleq \frac{\mathsf{C}_1}{\psi\tau} = \sqrt{\frac{K\Psi}{2\psi}\bigl(A_1^{\max} + (r^{\max})^2\bigr)}$. Therefore, the optimal network utility maximization is bounded as
\begin{align}\label{eq_theo2b}
U(\boldsymbol{a}^*) - \mathsf{C}_3\frac{1}{\varphi} = U(\boldsymbol{a}^*) - \mathcal{O}(1/\varphi)\leq U(\boldsymbol{a}^{\infty}_{(\varphi)})
\end{align}
where $\mathsf{C}_3 \triangleq \frac{\Psi\mathsf{C}_1^2}{2\psi^2\tau^2} = \frac{K\Psi^2}{4\psi}\bigl(A_1^{\max} + (r^{\max})^2\bigr)$.
\end{theorem}
The analytical results in Theorem \ref{Theo_2} show that the divergence of the steady-state congestion control rate vector $\boldsymbol{a}^{\infty}_{(\varphi)}$ from $\boldsymbol{a}^*$ scales as $\mathcal{O}(1/\sqrt{\varphi})$, which is the same as in \cite{LiuJSAC2017,EryilmazToN2007}. The utility-optimality gap can be reduced by increasing $\varphi$, but this will also lead to a larger steady-state queue-length divergence.
\section{Numerical Results}\label{sec_NumericalResults}
In this section, we first present simulation setup and parameters in Section \ref{sec_NumericalResults_A} and then provide numerical results of Algorithm \ref{alg_JFCS} in Section \ref{sec_NumericalResults_B}. The results and performance comparison over existing schemes will be provided in Section \ref{sec_NumericalResults_C}.
\subsection{Simulation Setups and Parameters}\label{sec_NumericalResults_A}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\columnwidth,trim={0cm 0.0cm 0cm 0.0cm}]{Layout_ORAN.eps}
\caption{A system topology with $J=8$ RUs and $K=12$ UEs.}
\label{fig:Layout}
\end{figure}
\begin{table}[t]
\centering
\captionof{table}{Simulation Parameters}
\label{tab:Simulationparameter}
\scalebox{0.9}{
\begin{tabular}{l|l}
\hline
Parameter & Value \\
\hline\hline
System bandwidth, $W$ & 20 MHz \\
Number of RUs, $J$ & 8\\
Number of UEs, $K$ & 12\\
Number of antennas at RUs $(i,j)$, $M_{i,j} \equiv M$ & 16\\
RUs' height & 10 m\\
UEs' antenna altitude & 1.5 m\\
Power budget at RU $(i,j)$, $P_{\max}^{i,j}\equiv P_{\max}$ & 43 dBm\\
Noise figure, $\textsf{NF}$ & 9 dB\\
Maximum average delay, $\bar{d}_k\equiv d$ & 10 ms\\
Required communication reliability, $\epsilon_k\equiv \epsilon$ & 0.95\\
Number of frames, $T$ & 10000\\
Number of time-slots per frame, $T_f$ & 10\\
Duration of one frame, $T_c$ & 10 ms\\
Duration of one time-slot, $\tau$ & 1 ms\\
Trade-off factor (Boltzmann temperature), $\lambda$ & 0.3\\
\hline
\end{tabular}
}
\end{table}
We consider the system topology given in Fig.~\ref{fig:Layout}, consisting of $8$ RUs and $12$ UEs located within a circle of 1-km radius. There are two DUs, each connected to 4 RUs. The RUs are uniformly distributed over the area, while the UEs are randomly relocated in each time-frame $t$. The large-scale fading coefficient $\xi[t] \in \{\xi_{k}^{i,j}[t] \}_{\forall (i,j),k}$ follows the three-slope path-loss model \cite{ngo2017cell}, given by $\xi[t] = \xi_0 - 35 \log_{10}(d[t]) + 20c_0\log_{10}(d/d_0)+15c_1\log_{10}(d/d_1)$
where $\xi_0 = -140.7 + \mathsf{SF}$ dB, $d_0 = 10$ m, $d_1 = 50$ m, and $d$ is the distance between an RU and a UE; here $c_i=\max\{0,\frac{d_i-d}{|d_i-d|}\}$ with $i\in\{0,1\}$ and $\mathsf{SF} \sim \mathcal{CN}(0, \sigma_{\mathsf{SF}})$ denotes the shadowing factor with $\sigma_{\mathsf{SF}}=8$ dB. The Rician factor $\kappa[t] \in \{\kappa_k^{i,j}[t]\}_{\forall (i,j),k}$ is given as $\kappa = P_{\mathsf{LoS}}(d[t])/ \big(1-P_{\mathsf{LoS}}(d[t])\big)$, where the LoS probability follows the 3GPP–UMa model as $P_{\mathsf{LoS}}(d[t]) = \min \left( \frac{18}{d[t]},1 \right) \big( 1 - \exp(-\frac{d[t]}{36}) \big) + \exp(-\frac{d[t]}{36})$ \cite{jafari2015study}. We consider uniform linear arrays with half-wavelength distance between array elements to model the LoS channels at RUs. The array response vector is generated as $\bar{\mathbf{h}}^{i,j}_{k}[t] = \textbf{\textit{a}}(\phi^{i,j}_{k}[t])$, where each element $m$ is given as $\big[\textbf{\textit{a}}(\phi^{i,j}_{k}[t])\big]_{m} = \exp\bigr(j \pi (m-1) \sin \phi^{i,j}_{k}[t]\bigr)$ with $\phi^{i,j}_{k}[t]\in [-\pi / 2, \pi / 2)$ being the angle-of-departure (AoD) at RU $(i,j)$. The noise power is modeled as $N_0 = -170 + 10 \log_{10} (W) + \mathsf{NF}$ dBm, where $\mathsf{NF}=9$ dB denotes the noise figure.
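For concreteness, the channel statistics above can be generated directly from the stated expressions; the following Python sketch is only an illustration of those formulas (it is not the simulator used for the results below), and a real-valued draw is assumed for the shadowing term $\mathsf{SF}$.
\begin{verbatim}
import numpy as np

def path_loss_dB(d, sigma_sf=8.0):
    # Three-slope path loss xi[t] in dB; d is the RU-UE distance in metres,
    # d0 = 10 m, d1 = 50 m, and c_i = max{0, (d_i - d)/|d_i - d|}.
    d0, d1 = 10.0, 50.0
    c0, c1 = float(d < d0), float(d < d1)
    xi0 = -140.7 + sigma_sf * np.random.randn()       # shadowing in dB
    return xi0 - 35*np.log10(d) + 20*c0*np.log10(d/d0) + 15*c1*np.log10(d/d1)

def rician_factor(d):
    # 3GPP-UMa LoS probability and the resulting Rician factor kappa.
    p_los = min(18.0/d, 1.0)*(1 - np.exp(-d/36.0)) + np.exp(-d/36.0)
    return np.inf if p_los >= 1.0 else p_los/(1.0 - p_los)   # pure LoS for d <= 18 m

def ula_response(phi, M=16):
    # Half-wavelength ULA steering vector: [a(phi)]_m = exp(j*pi*(m-1)*sin(phi)).
    return np.exp(1j*np.pi*np.arange(M)*np.sin(phi))

W, NF = 20e6, 9.0
N0_dBm = -170 + 10*np.log10(W) + NF                   # noise power in dBm
\end{verbatim}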
We run Algorithm \ref{alg_JFCS} over $T=10000$ frames, each consisting of $T_f = 10$ time-slots (subframes) with a duration of $T_c=10$ ms, following the 5G NR frame structure \cite{3GPP_Release_15}. In each time-frame $t$, UE $k$ is served by a subset of four RUs. To illustrate the heterogeneity of UEs, we assume that the arrival rate $A_k[t]$ is uniformly distributed in $[1,\ 3]$ Gbps. The step sizes (learning rates) are set to decrease after each frame as $\eta_u[t]=1/(t+1)^{0.51}$, $\eta_\theta[t]=1/(t+1)^{0.55}$ and $\eta_\beta[t]=1/(t+1)^{0.6}$ \cite{SamarakoonTWC13}. We adopt the proportional fairness metric to model the utility function as $U_k(r_k)=\log(0.001+r_k), \forall k$ \cite{XiaojunJSAC06}. The key parameters are summarized in Table \ref{tab:Simulationparameter} for ease of cross-referencing, following the studies in \cite{jafari2015study,SamarakoonTWC13,3GPP_Release_15,ngo2017cell,BennisTWC2013}. In the following figures, results are averaged over the last 6000 frames.
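The learning-rate schedules and the proportional-fairness utility are straightforward to reproduce; a minimal sketch (illustrative only) is given below.
\begin{verbatim}
import numpy as np

def step_sizes(t):
    # Diminishing learning rates used after each frame t.
    return 1.0/(t+1)**0.51, 1.0/(t+1)**0.55, 1.0/(t+1)**0.60  # eta_u, eta_theta, eta_beta

def utility(r_k):
    # Proportional-fairness utility U_k(r_k) = log(0.001 + r_k).
    return np.log(0.001 + r_k)

A_k = np.random.uniform(1.0, 3.0)   # arrival rate A_k[t] in Gbps
\end{verbatim}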
\textbf{Benchmark schemes:} To demonstrate the benefits of the proposed JFCS algorithm, we consider the following three benchmark schemes:
\begin{itemize}
\item ``NUM with fixed resource allocation (NUM-FRA)" \cite{StaiCOMML14}: Under Algorithm \ref{alg_JFCS}, RUs allocate power equally to UEs.
\item ``NUM with equal flow-split distribution (NUM-EFSD):" CU splits data-flows of all UEs equally among the selected paths, \textit{i.e.}, $\beta_k^{i,j}[t]=1/|\mathcal{P}_k|,\forall (i,j)\in\mathcal{P}_k$.
\item ``NUM with the nearest RU selection (NUM-NRU):" Under Algorithm \ref{alg_JFCS}, each UE $k$ selects only the nearest RU for the data transmission, \textit{i.e.} $\beta_k^{i,j}[t]=1$ if RU $(i,j)$ is the nearest RU to UE $k$.
\end{itemize}
\subsection{Numerical Results of Algorithm \ref{alg_JFCS}}\label{sec_NumericalResults_B}
\begin{figure}[!ht]%
\centering
\subfigure[Impact of $\varphi$ on congestion control rate]{%
\label{fig:Convergence-a}%
\includegraphics[width=0.91\columnwidth]{CongesRate_vs_Iter_v2.pdf}
}%
\hspace{4pt}%
\subfigure[Impact of $\lambda$ on estimated utility]{%
\label{fig:Convergence-b}%
\includegraphics[width=0.9\columnwidth]{UnilityvsIter_v2.pdf}}
\caption[]{\small Convergence behavior of Algorithm \ref{alg_JFCS} with ZFBF.}%
\label{fig:Convergence}%
\end{figure}
We first study the impacts of $\varphi$ and $\lambda$ on the convergence behavior of Algorithm \ref{alg_JFCS} in Fig. \ref{fig:Convergence}. From Fig. \ref{fig:Convergence}(a), it can be observed that the congestion control rates for different values of the scaling factor $\varphi$ converge to the same optimal solution, and $\|\boldsymbol{a}[t_s]\|$ is almost independent of $\varphi$. In addition, increasing $\varphi$ results in a smaller divergence of the steady-state congestion control rate (see Theorem \ref{Theo_2}), but also slows down the convergence of Algorithm \ref{alg_JFCS}. This is because, for a large $\varphi$, the network utility function $\sum_{k\in\mathcal{K}}U_k(a_k[t_s])$ in \eqref{eq:JFCS3a} prevails over the Lyapunov drift function $\Delta L[t_s]$, so that more iterations are required to guarantee network stability. In Fig. \ref{fig:Convergence}(b), we increase the trade-off factor $\lambda$ (\textit{i.e.}, the Boltzmann temperature) from 0.05 to 0.7. The results show that the larger the value of $\lambda$, the better the estimated utility, at the cost of a lower convergence speed of the RL process. From \eqref{eq_bestresponse2}, the paths associated with the highest estimated regret $\hat{\theta}_k^{i,j}[t]$ are selected to minimize the best-response function $f(\hat{\boldsymbol{\theta}}[t])$. Conversely, a low value of $\lambda$ speeds up convergence by allocating traffic uniformly to all paths, but leads to a highly sub-optimal solution.
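To make the role of the Boltzmann temperature concrete, the sketch below implements a generic Gibbs (softmax) flow-split over the estimated per-path regrets. The exact best-response function in \eqref{eq_bestresponse2} is defined earlier in the paper and is not repeated here, so the scaling convention assumed below (a larger $\lambda$ concentrates traffic on the highest-regret paths, a smaller $\lambda$ spreads it uniformly) is only an illustrative assumption mirroring the behavior discussed above.
\begin{verbatim}
import numpy as np

def gibbs_flow_split(theta_hat, lam):
    # Softmax over estimated regrets theta_hat (one entry per candidate path).
    # Assumed form: beta proportional to exp(lam * theta_hat); lam -> 0 gives a
    # uniform split, while a large lam concentrates on the highest-regret paths.
    z = lam * (theta_hat - np.max(theta_hat))   # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

theta = np.array([0.2, 0.9, 0.5, 0.1])
print(gibbs_flow_split(theta, lam=0.05))   # nearly uniform
print(gibbs_flow_split(theta, lam=10.0))   # concentrated on the second path
\end{verbatim}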
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth,trim={0cm 0.0cm 0cm 0.0cm}]{ComparisionofAlg1.pdf}
\caption{\small Performance of Algorithm \ref{alg_JFCS} with different transmission strategies versus the number of antennas at RUs, $M \equiv M_{i,j}, \forall (i,j)$.}
\label{fig:ComparisionofAlg1}
\end{figure}
In Fig. \ref{fig:ComparisionofAlg1}, we evaluate the performance of Algorithm \ref{alg_JFCS} with different transmission strategies, \textit{namely} MRT and ZFBF. For a fixed $\varphi = 25$, we vary the number of antennas at RUs $M \equiv M_{i,j}, \forall (i,j)$ from 16 to 128. For each transmission design, we also plot the steady-state congestion control rate $\mathbb{E}\{\|\boldsymbol{a}_{(25)}^{\infty}\|\}$ with equal flow-split distribution. It can be seen from Fig. \ref{fig:ComparisionofAlg1} that the steady-state congestion control rate of all schemes increases as $M$ increases. Unsurprisingly, Algorithm \ref{alg_JFCS} with ZFBF offers a better congestion control rate than MRT when the number of antennas at RUs is sufficiently large to cancel the inter-user interference caused by the same RU. It is obvious that the higher the effective data rate of a data-flow in the downlink, the lower the total queue-length of that data-flow (or user), resulting in a higher congestion control rate.
Since Algorithm \ref{alg_JFCS} with MRT is based on the IA method, which incurs high computational complexity and relies on off-the-shelf convex optimization solvers, we report only the performance of Algorithm \ref{alg_JFCS} with ZFBF in the following section.
\subsection{Performance Comparison}\label{sec_NumericalResults_C}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth,trim={0cm 0.0cm 0cm 0.0cm}]{CongestionratevsM_v2.pdf}
\caption{\small The steady-state congestion control rate with respect to the number of antennas at RUs, $M \equiv M_{i,j}, \forall (i,j)$.}
\label{fig:CongestionratevsMout}
\end{figure}
Next, we compare the considered schemes in terms of the steady-state congestion control rate $\mathbb{E}\{\|\boldsymbol{a}_{(25)}^{\infty}\|\}$ versus the number of antennas at RUs in Fig. \ref{fig:CongestionratevsMout}. We fix $\varphi = 25$ and vary $M$ from 16 to 128 to investigate the impact of this physical-layer factor. As $M$ increases, the downlink instantaneous achievable rates of all UEs increase significantly since more degrees of freedom are available to leverage multi-user diversity, resulting in lower queue-lengths. For a fixed value of $\varphi$, the steady-state congestion control rate thus increases monotonically with $M$. Clearly, Algorithm \ref{alg_JFCS} outperforms the benchmark schemes over the entire range of $M$, and the gap is more pronounced when $M$ is small. In addition, NUM-FRA and NUM-NRU, which respectively allocate the power budget equally and fix the path selection for each UE, provide the worst performance. These observations demonstrate the effectiveness of the proposed Algorithm \ref{alg_JFCS} in jointly optimizing the flow-split distribution, congestion control, scheduling and radio resource allocation.
Lastly, the impacts of the scaling factor $\varphi$ on the steady-state total queue-length $\mathbb{E}\{\|\hat{\mathbf{q}}_{(\varphi)}^{\infty}\|_1\}$ and the average worst-case delay (\textit{i.e.}, the delay of the slowest data-flow) are plotted in Figs. \ref{fig:queue-length} and \ref{fig:latency}, respectively. It can be seen from Fig. \ref{fig:queue-length} that the steady-state total queue-length of all schemes monotonically scales as $\mathcal{O}(\varphi) + \mathcal{O}(\sqrt{\varphi})$, which confirms our theoretical results in Corollary \ref{corollary1}. We recall from Theorem \ref{Theo_2} that the utility-optimality gap can be narrowed by increasing $\varphi$, but at the cost of higher delay, as shown in Fig. \ref{fig:latency}. When $\varphi$ is larger than 25, all the considered schemes violate the maximum allowable average delay of $\bar{d}=10$ ms, which implies that the data traffic cannot be completely delivered to the UEs within each time-frame. Nevertheless, Algorithm \ref{alg_JFCS} still provides the best performance among the schemes considered.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth,trim={0cm 0.0cm 0cm 0.0cm}]{QueuevsVarphi_v2.pdf}
\caption{\small The steady-state total queue-length with respect to $\varphi$.}
\label{fig:queue-length}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth,trim={0cm 0.0cm 0cm 0.0cm}]{Latencyvsvarphi_v2.pdf}
\caption{\small Average worst-case delay with respect to $\varphi$.}
\label{fig:latency}
\end{figure}
\section{Conclusion}\label{sec_Conclusion}
We have proposed a new holistic multi-layer optimization framework, called JFCS, to enable intelligent traffic steering in a hierarchical ORAN architecture. In particular, we have developed an intelligent resource management algorithm based on network utility maximization and stochastic optimization to efficiently and adaptively direct traffic to appropriate RUs by jointly optimizing the flow-split distribution, congestion control and scheduling. JFCS was shown to achieve fast convergence, long-term utility-optimality and significant delay reduction compared to state-of-the-art approaches. We expect the insights of this work to foster future studies in this area, especially in the design of more advanced AI/ML solutions that achieve enhanced control and flexibility in ORAN.
\appendices
\renewcommand{\thesectiondis}[2]{\Alph{section}:}
\section{Derivation of Inequality} \label{app:DerivationofInequ}
\renewcommand{\theequation}{\ref{app:DerivationofInequ}.\arabic{equation}}\setcounter{equation}{0}
We now derive a concave lower bound of $r_{k}^{i,j}[t_s]$. By \cite[Appendix A]{Dinh:TCOMM:2017}, the function $r(x,y) = -\ln(1-x^2/y)$ is convex on the domain $y>x^2$ with $x,y\in\mathbb{R}_+$. A global concave lower bound of $r(x,y)$ around any feasible point $(\bar{x}, \bar{y})$ is therefore given as
\begin{align}
r(x,y) &\geq r(\bar{x},\bar{y}) + \Bigl<\Bigr(\frac{\partial r(\bar{x},\bar{y})}{\partial \bar{x}},\frac{\partial r(\bar{x},\bar{y})}{\partial \bar{y}}\Bigl), (x-\bar{x},y-\bar{y}) \Bigl>\nonumber\\
& = r(\bar{x},\bar{y}) - \frac{\bar{x}^2}{\bar{y}-\bar{x}^2} + 2\frac{\bar{x}x}{\bar{y}-\bar{x}^2} - \frac{\bar{x}^2}{\bar{y}-\bar{x}^2}\frac{y}{\bar{y}}\label{eq_IAappro}
\end{align}
by applying the first-order Taylor approximation. By the fact that $\ln\bigl(1+\frac{x^2}{z}\bigr)=-\ln\bigl(1-\frac{x^2}{z+x^2}\bigr)$ and substituting $y=z+x^2$, $\bar{y}=\bar{z}+\bar{x}^2$, $x=\sqrt{v}$ and $\bar{x}=\sqrt{\bar{v}}$ into \eqref{eq_IAappro}, we obtain
\begin{align}
r(v,z) &\triangleq \ln\bigl(1+\frac{v}{z}\bigr) \geq r(\bar{v},\bar{z}) - \frac{\bar{v}}{\bar{z}} + 2\frac{\sqrt{\bar{v}}\sqrt{v}}{\bar{z}} - \frac{\bar{v}(z+v)}{\bar{z}(\bar{z}+\bar{v})}
\nonumber\\
&:=\bar{r}(v,z;\bar{v},\bar{z})\label{eq_IAapproConcave}
\end{align}
where $\bar{r}(v,z;\bar{v},\bar{z})$ is concave and $\bar{r}(\bar{v},\bar{z};\bar{v},\bar{z}) = r(\bar{v},\bar{z})$ whenever $(v,z)=(\bar{v},\bar{z})$.
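The lower bound \eqref{eq_IAapproConcave} can also be checked numerically; the short sketch below (illustrative only) verifies that $\bar{r}(v,z;\bar{v},\bar{z}) \leq r(v,z)$ for random positive $(v,z)$ and that equality holds at the expansion point.
\begin{verbatim}
import numpy as np

def r(v, z):
    return np.log(1.0 + v/z)

def r_bar(v, z, v_bar, z_bar):
    # Concave lower bound of log(1 + v/z) around (v_bar, z_bar).
    return (r(v_bar, z_bar) - v_bar/z_bar + 2.0*np.sqrt(v_bar*v)/z_bar
            - v_bar*(z + v)/(z_bar*(z_bar + v_bar)))

rng = np.random.default_rng(0)
v_bar, z_bar = 2.0, 3.0
for _ in range(1000):
    v, z = rng.uniform(0.1, 10.0, size=2)
    assert r(v, z) >= r_bar(v, z, v_bar, z_bar) - 1e-12
assert abs(r(v_bar, z_bar) - r_bar(v_bar, z_bar, v_bar, z_bar)) < 1e-12
\end{verbatim}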
\renewcommand{\thesectiondis}[2]{\Alph{section}:}
\section{Proof of Theorem \ref{Theo_1}} \label{App_B}
\renewcommand{\theequation}{\ref{App_B}.\arabic{equation}}\setcounter{equation}{0}
For a given $\varphi$, the quadratic Lyapunov function defined in Section \ref{sec:ORANNUM_A} is rewritten with respect to $\hat{\mathbf{q}}_{(\varphi)}[t_s]$ as:
$L(\hat{\mathbf{q}}_{(\varphi)}[t_s]) = \frac{1}{2\tau^2}\|\hat{\mathbf{q}}_{(\varphi)}[t_s] - \hat{\mathbf{q}}^*_{(\varphi)}\|_2^2$. Following \cite[Theorem 3]{LiuJSAC2017}, the mean Lyapunov drift from time-slot $t_s$ to $t_{s+1}$ is computed as
\begin{align}\label{eq_B1}
\Delta \bar{L}(\hat{\mathbf{q}}_{(\varphi)}[t_s])
&= \mathbb{E}\{\Delta L(\hat{\mathbf{q}}_{(\varphi)}[t_s])\}
= \mathbb{E}\{L(\hat{\mathbf{q}}_{(\varphi)}[t_{s+1}]) - L(\hat{\mathbf{q}}_{(\varphi)}[t_s])\} \nonumber\\
&= \frac{1}{2\tau^2}\mathbb{E}\Bigl\{\bigl(\hat{\mathbf{q}}_{(\varphi)}[t_{s+1}] +\hat{\mathbf{q}}_{(\varphi)}[t_{s}]
- 2\hat{\mathbf{q}}^*_{(\varphi)} \bigr)^{\mathsf{T}}\bigl(\hat{\mathbf{q}}_{(\varphi)}[t_{s+1}]- \hat{\mathbf{q}}_{(\varphi)}[t_{s}]\bigr)\Bigr\}\nonumber\\
&\leq \frac{1}{2\tau}\mathbb{E}\Bigl\{\bigl(2\hat{\mathbf{q}}_{(\varphi)}[t_{s}] + \bigr(\boldsymbol{a}[t_s] - \mathbf{r}(\mathbf{w}[t_s])\bigl)\tau - 2\hat{\mathbf{q}}^*_{(\varphi)} \bigr)^{\mathsf{T}}\bigl(\boldsymbol{a}[t_s] - \mathbf{r}(\mathbf{w}[t_s])\bigr)\Bigr\}\nonumber\\
&=\underbrace{\frac{1}{2}\mathbb{E}\{\|\boldsymbol{a}[t_s] - \mathbf{r}(\mathbf{w}[t_s])\|_2^2\}}_{\triangleq \mathsf{B}_1} + \underbrace{\frac{1}{\tau}\mathbb{E}\{(\hat{\mathbf{q}}_{(\varphi)}[t_{s}]-\hat{\mathbf{q}}^*_{(\varphi)})^\mathsf{T}\bigl(\boldsymbol{a}[t_s] - \mathbf{r}(\mathbf{w}[t_s])\bigr)\}}_{\triangleq \mathsf{B}_2}
\end{align}
by using the inequality $([x]^+)^2 \leq x^2$, the identity $x^2-y^2 = (x+y)(x-y)$, and the fact that $\hat{\mathbf{q}}_{(\varphi)}[t_{s+1}]- \hat{\mathbf{q}}^*_{(\varphi)}=\hat{\mathbf{q}}_{(\varphi)}[t_s] - \hat{\mathbf{q}}^*_{(\varphi)} + \bigl(\boldsymbol{a}[t_s] - \mathbf{r}(\mathbf{w}[t_s])\bigr)\tau.$
We first focus on providing the expected bound of $\mathsf{B}_1$ as
\begin{align}\label{eq_B2}
\mathsf{B}_1 &= \frac{1}{2}\mathbb{E}\{\|\boldsymbol{a}[t_s]\|_2^2 - 2\boldsymbol{a}[t_s]^{\mathsf{T}}\mathbf{r}(\mathbf{w}[t_s]) + \|\mathbf{r}(\mathbf{w}[t_s])\|_2^2\} \nonumber\\
&\leq \frac{1}{2}\mathbb{E}\{\|\boldsymbol{a}[t_s]\|_2^2 + \|\mathbf{r}(\mathbf{w}[t_s])\|_2^2\} \nonumber\\
&\leq \frac{K}{2}\bigl(A_1^{\max} + (r^{\max})^2\bigr) \triangleq \mathsf{B}_1^{\mathtt{UB}}
\end{align}
where the last inequality follows from Assumption \ref{assump_2}. To bound $\mathsf{B}_2$, we first rewrite it equivalently as
\begin{align}\label{eq_B3}
\mathsf{B}_2 = \frac{1}{\tau}(\hat{\mathbf{q}}_{(\varphi)}[t_{s}]-\hat{\mathbf{q}}^*_{(\varphi)})^\mathsf{T}\bigl(\mathbb{E}\bigl\{\boldsymbol{a}[t_s]\} - \mathbf{r}^*\bigr)
+ \frac{1}{\tau}\mathbb{E}\bigl\{(\hat{\mathbf{q}}_{(\varphi)}[t_{s}]-\hat{\mathbf{q}}^*_{(\varphi)})^\mathsf{T}\bigl(\mathbf{r}^*
- \mathbf{r}(\mathbf{w}[t_s])\bigr)\bigr\}.
\end{align}
From \eqref{eq:JFCS2}, it follows that $(\hat{\mathbf{q}}_{(\varphi)}[t_{s}]-\hat{\mathbf{q}}^*_{(\varphi)})^\mathsf{T}\bigl(\mathbb{E}\bigl\{\boldsymbol{a}[t_s]\} - \mathbf{r}^*\bigr)\leq 0$.
By applying the Cauchy–Schwarz inequality, i.e. $|\mathbf{x}^\mathsf{T}\mathbf{y}|\leq \| \mathbf{x}\|_2\| \mathbf{y}\|_2$, to the first term in \eqref{eq_B3}, we have
\begin{align}
\frac{1}{\tau}(\hat{\mathbf{q}}_{(\varphi)}[t_{s}]-\hat{\mathbf{q}}^*_{(\varphi)})^\mathsf{T}\bigl(\mathbb{E}\bigl\{\boldsymbol{a}[t_s]\} - \mathbf{r}^*\bigr)
\leq -\frac{1}{\tau}\sum_{k\in\mathcal{K}}|\hat{q}_{(\varphi),k}[t_{s}]-\hat{q}^*_{(\varphi),k}||a_k[t_s] - r^*_k|.
\end{align}
By Assumption \ref{assp:1} on $\Psi$-smooth and Step 4 of Algorithm \ref{alg_JFCS}, it is true that $a_k[t_s] - r^*_k=U_k^{'-1}\bigr(\frac{\hat{q}_{(\varphi),k}[t_{s}]}{\varphi\tau}\bigl)$ $-U_k^{'-1}\bigr(\frac{\hat{q}_{(\varphi),k}^*[t_{s}]}{\varphi\tau}\bigl)\leq 0$ and $\bigr|U_k^{'}\bigr(\frac{\hat{q}_{(\varphi),k}[t_{s}]}{\varphi\tau}\bigl)-U_k^{'}\bigr(\frac{\hat{q}_{(\varphi),k}^*[t_{s}]}{\varphi\tau}\bigl)\bigr| \leq \Psi\bigl|\frac{\hat{q}_{(\varphi),k}[t_{s}]}{\varphi\tau} - \frac{\hat{q}_{(\varphi),k}^*[t_{s}]}{\varphi\tau}\bigr|$. In addition, we have $\bigr|U_k^{'-1}\bigr(\frac{\hat{q}_{(\varphi),k}[t_{s}]}{\varphi\tau}\bigl)-U_k^{'-1}\bigr(\frac{\hat{q}_{(\varphi),k}^*[t_{s}]}{\varphi\tau}\bigl)\bigr| \geq \frac{1}{\Psi}\bigl|\frac{\hat{q}_{(\varphi),k}[t_{s}]}{\varphi\tau} - \frac{\hat{q}_{(\varphi),k}^*[t_{s}]}{\varphi\tau}\bigr|$ due to the inverse function lemma. From the fact that $(\hat{\mathbf{q}}^*_{(\varphi)})^\mathsf{T}\mathbf{r}^*
- (\hat{\mathbf{q}}^*_{(\varphi)})^\mathsf{T}\mathbf{r}(\mathbf{w}[t_s])\geq 0$, we can further bound $\mathsf{B}_2$ as
\begin{align}\label{eq_B5}
\mathsf{B}_2 \leq -\frac{1}{\tau^2\Psi\varphi}\|\hat{\mathbf{q}}_{(\varphi)}[t_{s}]-\hat{\mathbf{q}}^*_{(\varphi)}\|_2^2 + \frac{1}{\tau}\mathbb{E}\bigl\{(\hat{\mathbf{q}}_{(\varphi)}[t_{s}])^\mathsf{T}\bigl(\mathbf{r}^*
- \mathbf{r}(\mathbf{w}[t_s])\bigr)\bigr\}
\end{align}
where the term $\mathbb{E}\bigl\{(\mathbf{r}^*
- \mathbf{r}(\mathbf{w}[t_s]))\bigl\}$ is a constant with respect to $\hat{\mathbf{q}}_{(\varphi)}[t_{s}]$. Substituting \eqref{eq_B2} and \eqref{eq_B5} into \eqref{eq_B1} yields
\begin{align}\label{eq_B6}
\Delta \bar{L}(\hat{\mathbf{q}}_{(\varphi)}[t_s]) \leq -\frac{1}{\tau^2\Psi\varphi}\|\hat{\mathbf{q}}_{(\varphi)}[t_{s}]-\hat{\mathbf{q}}^*_{(\varphi)}\|_2^2 + \mathsf{B}_1^{\mathtt{UB}}
+ \frac{1}{\tau}\mathbb{E}\bigl\{(\hat{\mathbf{q}}_{(\varphi)}[t_{s}])^\mathsf{T}\bigl(\mathbf{r}^*
- \mathbf{r}(\mathbf{w}[t_s])\bigr)\bigr\}.
\end{align}
We now compute the mean Lyapunov drift over $TT_f$ time-slots as
\begin{IEEEeqnarray}{rCl}\label{eq_B7}
\Delta \bar{L}&&=\sum_{t=1}^T\sum_{s=1}^{T_f}\mathbb{E}\{L(\hat{\mathbf{q}}_{(\varphi)}[t_{s+1}]) - L(\hat{\mathbf{q}}_{(\varphi)}[t_s])|\hat{\mathbf{q}}_{(\varphi)}[1_1]\}\nonumber\\
&&=\sum_{t=1}^T\sum_{s=1}^{T_f}\sum_{\hat{\mathbf{q}}_{(\varphi)}\geq 0}\Bigl(\mathsf{Prob}\bigl( \hat{\mathbf{q}}_{(\varphi)}[t_s]=\hat{\mathbf{q}}_{(\varphi)}|\hat{\mathbf{q}}_{(\varphi)}[1_1]\bigr)\nonumber\\
&&\times\mathbb{E}\{L(\hat{\mathbf{q}}_{(\varphi)}[t_{s+1}]) - L(\hat{\mathbf{q}}_{(\varphi)}[t_s])|\hat{\mathbf{q}}_{(\varphi)}[t_s]=\hat{\mathbf{q}}_{(\varphi)}\} \Bigr).\qquad
\end{IEEEeqnarray}
Let us denote by $\rho_{\hat{\mathbf{q}}_{(\varphi)}}^{\infty}$ the stationary distribution of the Markov chain $\hat{\mathbf{q}}_{(\varphi)}[t_s]\geq 0$, i.e., $\rho_{\hat{\mathbf{q}}_{(\varphi)}}^{\infty}= \lim_{T\rightarrow\infty}\frac{1}{TT_f}\sum_{t=1}^T\sum_{s=1}^{T_f}\mathsf{Prob}\bigl( \hat{\mathbf{q}}_{(\varphi)}[t_s]=\hat{\mathbf{q}}_{(\varphi)}|\hat{\mathbf{q}}_{(\varphi)}[1_1]\bigr)$. By substituting \eqref{eq_B6} into \eqref{eq_B7}, dividing both sides by $TT_f$, and letting $T\rightarrow\infty$, we have
\begin{IEEEeqnarray}{rCl}
&&\sum_{\hat{\mathbf{q}}_{(\varphi)}\geq 0}\rho_{\hat{\mathbf{q}}_{(\varphi)}}^{\infty}\Bigl(-\frac{1}{\tau^2\Psi\varphi}\|\hat{\mathbf{q}}_{(\varphi)}[t_{s}]-\hat{\mathbf{q}}^*_{(\varphi)}\|_2^2 + \mathsf{B}_1^{\mathtt{UB}}
+\frac{1}{\tau}(\hat{\mathbf{q}}_{(\varphi)}[t_{s}])^\mathsf{T}\mathbb{E}\bigl\{\bigl(\mathbf{r}^*
- \mathbf{r}(\mathbf{w}[t_s])\bigr)\bigr\}\Bigr) \nonumber\\
&&= -\frac{1}{\tau^2\Psi\varphi}\mathbb{E}\bigl\{\|\hat{\mathbf{q}}_{(\varphi)}^{\infty}-\hat{\mathbf{q}}^*_{(\varphi)}\|_2^2\bigr\} + \mathsf{B}_1^{\mathtt{UB}} + \frac{1}{\tau}\mathbb{E}\bigl\{(\hat{\mathbf{q}}_{(\varphi)}^{\infty})^\mathsf{T}\bigl(\mathbf{r}^*
- \mathbf{r}^{\infty}\bigr)\bigr\} \geq 0
\end{IEEEeqnarray}
where $\mathbf{r}^{\infty} = \underset{r_k(\mathbf{w})\in \mathscr{C}_{\mathbf{H}[\infty]}, \forall k\in\mathcal{K}}{\argmax} \ \sum_{k\in\mathcal{K}} \hat{q}_k^{\infty}r_k(\mathbf{w})$. We note here that $(\hat{\mathbf{q}}_{(\varphi)}^{\infty})^\mathsf{T} \mathbf{r}^{\infty} = \underset{r_k(\mathbf{w})\in \mathscr{C}_{\mathbf{H}[\infty]}, \forall k\in\mathcal{K}}{\max}$ $\sum_{k\in\mathcal{K}} \hat{q}_k^{\infty}r_k(\mathbf{w}) \geq (\hat{\mathbf{q}}_{(\varphi)}^{\infty})^\mathsf{T}\mathbf{r}^*$, yielding
\begin{IEEEeqnarray}{rCl}
\frac{1}{\tau^2\Psi\varphi}\mathbb{E}\bigl\{\|\hat{\mathbf{q}}_{(\varphi)}^{\infty}-\hat{\mathbf{q}}^*_{(\varphi)}\|_2^2\bigr\} - \mathsf{B}_1^{\mathtt{UB}} \leq 0.
\end{IEEEeqnarray}
This implies that $\mathbb{E}\bigl\{\|\hat{\mathbf{q}}_{(\varphi)}^{\infty}-\hat{\mathbf{q}}^*_{(\varphi)}\|_2\bigr\} \leq \sqrt{\frac{K\tau^2\Psi}{2}\bigl(A_1^{\max} + (r^{\max})^2\bigr)}\sqrt{\varphi}$ where $\mathsf{B}_1^{\mathtt{UB}} = \frac{K}{2}\bigl(A_1^{\max} + (r^{\max})^2\bigr)$, showing the inequality \eqref{eq_theo1eq} in Theorem \ref{Theo_1}.
\renewcommand{\thesectiondis}[2]{\Alph{section}:}
\section{Proof of Theorem \ref{Theo_2}} \label{App_C}
\renewcommand{\theequation}{\ref{App_C}.\arabic{equation}}\setcounter{equation}{0}
To prove \eqref{eq_theo2a}, we first recall that $a_{(\varphi),k}^{\infty} - a^*_k=U_k^{'-1}\bigr(\frac{\hat{q}_{(\varphi),k}^{\infty}}{\varphi\tau}\bigl)$ $-U_k^{'-1}\bigr(\frac{\hat{q}_{(\varphi),k}^*}{\varphi\tau}\bigl)$ and $\bigr|U_k^{'}\bigr(\frac{\hat{q}_{(\varphi),k}^{\infty}}{\varphi\tau}\bigl)$ $-U_k^{'}\bigr(\frac{\hat{q}_{(\varphi),k}^*}{\varphi\tau}\bigl)\bigr| \geq \psi\bigl|\frac{\hat{q}_{(\varphi),k}^{\infty}}{\varphi\tau} - \frac{\hat{q}_{(\varphi),k}^*}{\varphi\tau}\bigr|$ using Assumption \ref{assp:1}. By the inverse function lemma, we have $\bigr|U_k^{'-1}\bigr(\frac{\hat{q}_{(\varphi),k}^{\infty}}{\varphi\tau}\bigl)$ $-U_k^{'-1}\bigr(\frac{\hat{q}_{(\varphi),k}^*}{\varphi\tau}\bigl)\bigr| \leq \frac{1}{\psi}\bigl|\frac{\hat{q}_{(\varphi),k}^{\infty}}{\varphi\tau} - \frac{\hat{q}_{(\varphi),k}^*}{\varphi\tau}\bigr|$, which yields
\begin{align}\label{eq_C1}
\|\boldsymbol{a}^{\infty}_{(\varphi)} - \boldsymbol{a}^{*}\|_2 \leq \frac{1}{\psi\tau\varphi}\|\hat{\mathbf{q}}^{\infty}_{(\varphi)} - \hat{\mathbf{q}}^*_{(\varphi)}\|_2 \overset{\eqref{eq_theo1eq}}{\leq} \frac{\mathsf{C}_1}{\psi\tau} \frac{1}{\sqrt{\varphi}}.
\end{align}
Next, recall that $U_k(\cdot)$ is assumed to be twice continuously differentiable, increasing, and strictly concave. Since the utility function $U(\boldsymbol{a})$ attains its maximum at $\boldsymbol{a}^*$, we have
\begin{align}
U(\boldsymbol{a}^*) - U(\boldsymbol{a}^{\infty}_{(\varphi)}) \leq \frac{\Psi}{2}\|\boldsymbol{a}^* -\boldsymbol{a}^{\infty}_{(\varphi)}\|_2^2 \leq \frac{\Psi\mathsf{C}_1^2}{2\psi^2\tau^2} \frac{1}{\varphi}
\end{align}
where the last inequality follows from \eqref{eq_C1}. The proof is thus complete.
\begingroup
\balance
\bibliographystyle{IEEEtran}
\label{sec:intro}
In this paper, we consider the problem of learning a functional mapping
from a high-dimensional input space with structured sparsity to a multivariate output space where responses are coupled (therefore making the estimator doubly structured), with an application for detecting genomic loci
affecting gene expression levels, a problem known as expression quantitative trait loci (eQTL) mapping.
In particular, we are interested in exploiting the structural information
on both the input and output space jointly to improve the accuracy for identifying
a small number of input variables relevant to the outputs
among a large number of candidates.
When the input or output variables are highly correlated among themselves,
multiple related inputs may synergistically influence
the same outputs,
and multiple related outputs may be synergistically influenced by the same inputs.
The primary motivation for our work comes from the problem of genome-wide association (GWA) mapping of eQTLs in computational genomics \cite{kendziorski2006statistical}, of which
the goal
is to detect the genetic variations, often single nucleotide polymorphisms (SNPs), across the whole genome
that perturb the expression levels of genes, given the data of SNP genotypes
and microarray gene-expression traits of a study cohort.
One of the main challenges of this problem
is that, typically the sample size is very small (e.g. $\sim 1000$), whereas there are a very large number of SNPs (e.g. $\sim$ 500,000) and expression traits (e.g., $\sim$ 10,000).
Furthermore, there have been numerous evidences that multiple genetic variations may interact with
each other (a.k.a., epistasis) \cite{segre2005modular,cho1998identification,badano2005dissection}, and the same genetic variation(s) can influence multiple genes
(a.k.a., pleiotropy) \cite{stearns2010one,ducrest2008pleiotropy}.
However, prior knowledge of such information
implying complex association
structures are difficult to exploit in standard
statistical analysis of GWA mapping \cite{wang2010analysing,moore2010bioinformatics}.
To enhance the statistical power for mapping of eQTLs,
it is desirable to incorporate biological knowledge of genome and transcriptome structures into the model to guide the search for true eQTLs.
In this article, we focus on developing a model
which can make use of structural information on both input (SNPs) and output (gene expressions) sides. In particular, we consider biological knowledge about group associations as structural information. If there exist group behaviors among the covariates in the high-dimensional input $\mathbf{X}$, for example, multiple genetically coupled SNPs
(e.g., in linkage disequilibrium) can jointly affect a single trait \cite{wang2010analysing},
such group information is called an input structure;
if multiple variables in the high-dimensional output $\mathbf{Y}$ are jointly under the influence of a similar set of input covariates, for example, a single SNP can affect multiple functionally coupled traits (e.g., genes in the same pathway or operon) \cite{kim-xing-PLoSG09},
such group information is called an output structure.
The problem of GWA mapping of eQTLs can be formulated as a model selection problem under a multitask regression model $\mathbf{Y}=\mathbf{B} \mathbf{X}$ with structured sparsity,
where the resultant non-zero elements in the regression coefficient matrix $\mathbf{B}$ expose the identities of the eQTLs and their associated traits.
Variants of this problem have been widely studied in the recent high-dimensional inference and variable selection literature, and various penalty-based or Bayesian approaches for learning a shared sparsity pattern among either multiple inputs or multiple outputs in a regression model have been proposed
\cite{tibshirani2005sparsity,negahban2011simultaneous,han2010multi,li2010bayesian}.
Depending on the type of structural constraints, different penalty functions
have been previously considered, including
mixed-norm (group-lasso) penalty for a simple grouping structure
\cite{yuan2006model,zhao2009composite},
tree-guided group-lasso
penalty for a tree structure \cite{kim2009tree},
or graph-guided fused lasso for a graph structure \cite{kim-xing-PLoSG09}.
Most previous approaches, however,
considered either only the input structural constraints
or only the output structural constraints, but not both.
There have been a few approaches that attempted to use both structural information,
including MGOMP \cite{lozanoblock} and ``group sparsity for multi-task learning''
\cite{tseng2009coordinate}.
MGOMP proposed to select the groups of regression coefficients
from a predefined set of grouped variables in a greedy fashion, and
\cite{tseng2009coordinate} proposed to find the groups of inputs that
influence the group of outputs.
However, both methods may have limits on the number or the shapes of sparsity patterns that
can be induced in $\mathbf{B}$.
For example, given a large number of input groups $|\mathcal{G}|$ and output groups $|\mathcal{H}|$
(e.g. $|\mathcal{G}|>10^5, \; |\mathcal{H}|>10^3$ for genome data)
the scalability of MGOMP can be substantially affected since it needs to select
the groups of coefficients from all possible combinations of input and output groups.
For \cite{tseng2009coordinate}, only disjoint block sparsity patterns are considered, hence
it may not capture the sparsity patterns where the grouped variables overlap.
In this paper, we address
the problem of exploiting both the input and output structures in a high-dimensional
linear regression setting practically encountered in eQTL mapping.
Furthermore, to detect epistatic (i.e., interaction) effects between SNP pairs,
we additionally expand the input space to include pairwise terms (i.e., $x_i x_j$'s) guided by biological information, which necessitates
attentions for avoiding excessive input dimension that can make the problem computationally prohibitive.
Our main contributions can be summarized as follows:
\begin{enumerate}
\item
We propose a highly general regression model
with structured input-output regularizers called
``jointly structured input-output lasso'' (SIOL)
that discovers structured associations between SNPs and expression traits (Section \ref{sec:methods}).
\item
We develop a simple and highly efficient optimization method called
``hierarchical group thresholding'' (HiGT)
for solving the proposed regression problem
under complex sparsity-inducing penalties in a very high-dimensional space (Section \ref{sec:optimization}).
\item Extending
SIOL, we propose ``structured polynomial multi-task regression'' to efficiently model non-additive SNP-SNP interactions guided by
genetic interaction networks
(Section \ref{sec:model_interaction}).
\end{enumerate}
Specifically, given knowledge of the groupings of the inputs (i.e., SNPs) and outputs (i.e., traits) in a high-dimensional multi-task regression setting, we employ an $L_1/L_2$ norm over such structures
to impose a group-level sparsity-inducing penalty simultaneously over both the columns and the rows of the regression coefficient matrix $\mathbf{B}$
(In our setting, a row corresponds to coefficients regressing all SNP (or SNP pair) inputs to a particular trait output,
thus reflecting possible epistatic effects;
and a column corresponds to coefficients regressing a particular SNP (or SNP pair) input to all trait outputs in question,
thus reflecting possible pleiotropic effects).
Given reliable input and output structures, rich structured sparsity
can increase statistical power significantly since
it makes it possible to borrow information not only within different output or input variables, but also across output and input variables.
The sparsity-inducing penalties on both the inputs and
outputs in SIOL introduce a non-differentiable
and non-separable objective in an extremely high-dimensional optimization space,
which prevents standard optimization methods such as the interior point \cite{nesterov1987interior}, the coordinate-descent \cite{friedman2007pathwise}, or even the recently invented union of supports \cite{jacob2009group} algorithms to be directly applied.
We propose a simple and efficient algorithm called ``hierarchical
group-thresholding''
to optimize our regression model with complex structured regularizers.
Our method is an iterative optimization algorithm, designed to handle complex structured regularizers
for very large scale problems.
It starts with a non-zero $\mathbf{B}$ (e.g. initialized by ridge regression \cite{hoerl1970ridge}),
and progressively discards irrelevant groups of covariates using thresholding operations.
In each iteration, we also update the coefficients of the remaining covariates.
To speed up our method, we employ
a directed acyclic graph (DAG) where nodes represent
the zero patterns encoded by our input-output structured regularizers
at different granularity, and edges indicate the inclusion relations
among them.
Guided by the DAG,
we can efficiently discard irrelevant covariates.
As our third contribution,
we consider non-additive pairwise interaction effects between the input variables,
in a way that avoids a quadratic blow-up of the input dimension.
In eQTL mapping studies, it is not uncommon that the effect of one SNP
on the expression-level of a gene is dependent on the genotype of another SNP,
and this phenomenon is known as epistasis.
To capture pairwise epistatic effects of SNPs on the trait variation, we
additionally consider non-additive interactions between the input covariates.
However, in a typical eQTL mapping, as
the input lies in a very high-dimensional space, it is computationally and statistically infeasible
to consider all possible input pairs.
For example,
for $J$ inputs (e.g. $500,000$ for a typical genome data set),
we have $O(J^2)$ candidate input pairs,
and learning with all of them will require a significantly large sample size.
Many of the previous approaches for learning the epistatic interactions relied
on pruning candidate pairs based on the observed data \cite{Rat:96} or constructing candidate pairs from
individual SNPs that were selected based on marginal effects in the previous learning phase without modeling interactions \cite{Taskar:10,devlin2003analysis}.
A main disadvantage of the latter approach is that it will miss
pairwise interactions whose constituent SNPs have little or no individual effect
on the outputs.
Instead of choosing candidate SNP pairs based on only marginal effects,
we propose to use
genetic interaction network \cite{costanzo2010genetic}
constructed from large-scale biological experiments
to consider biologically plausible candidate pairs.
The rest of this paper is organized as follows.
In Section 2, we discuss
previous works on learning a sparse
regression model with prior knowledge on either output or input structure.
In Section 3, we introduce our proposed model
``jointly structured input-output lasso'' (SIOL).
To solve our regression problem, we present an efficient optimization method
called ``hierarchical group-thresholding'' (HiGT) in Section 4.
We further extend our model to consider pairwise interactions among input
variables and propose ``structured polynomial multi-task regression'' in Section 5.
We demonstrate the accuracy of recovered structured sparsity and the speed
of our optimization method in Section 6 via simulation study, and
present eQTLs having marginal and interaction effects in yeast that we identified in Section 7.
A discussion is followed in Section 8.
\section{Background: Linear Regression with Structured Sparsity}
\label{sec:background}
In this section, we lay out the notation and then review existing sparse regression
methods that recover a structured sparsity pattern in the estimated regression coefficients
given prior knowledge on input or output structure.
\subsection{Notation for matrix operations}
Given a matrix $\mathbf{B} \in \mathbb{R}^{K \times J}$, we denote
the $k$-th row by $\bm{\beta}_k$, the $j$-th column
by $\bm{\beta}^j$, and the $(k,j)$ element
by $\beta_k^j$.
$\lVert \cdot \rVert_F$ denotes
the matrix Frobenius norm, $\lVert \cdot \rVert_1$ denotes an $L_1$ norm
(entry-wise matrix $L_1$ norm for a matrix argument), and $\lVert \cdot \rVert_2$
represents an $L_2$ norm.
Given the set of {\it column} groups $\mathcal{G}=\{{\mathbf{g}}_1, \ldots, {\mathbf{g}}_{|\mathcal{G}|}\}$
defined as a subset of the power set of $\{1, \ldots, J\}$,
$\bm{\beta}_k^{\mathbf{g}}$ represents the row vector with
elements $\{\beta_k^j: j \in \mathbf{g}, \mathbf{g} \in \mathcal{G} \}$, which is
a subvector of $\bm{\beta}_k$ due to group $\mathbf{g}$.
Similarly, for the set of {\it row} groups $\mathcal{H}=\{{\mathbf{h}}_1, \ldots, {\mathbf{h}}_{|\mathcal{H}|}\}$
over $M$ rows of matrix $\mathbf{B}$,
we denote by $\bm{\beta}_{\mathbf{h}}^{j}$ the column subvector with
elements $\{\beta_k^j: k \in \mathbf{h}, \mathbf{h} \in \mathcal{H} \}$.
We also define the submatrix of ${\mathbf{B}}_{\mathbf{h}}^{\mathbf{g}}$ as
a $|\mathbf{h}| \times |\mathbf{g}|$ matrix with elements $\{\beta_k^j: k \in \mathbf{h}, \; j \in \mathbf{g}, \;
\mathbf{h} \in \mathcal{H}, \; \mathbf{g} \in \mathcal{G} \}$.
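To make the notation concrete, the following small sketch shows how the sub-vectors $\bm{\beta}_k^{\mathbf{g}}$, $\bm{\beta}_{\mathbf{h}}^{j}$ and the sub-matrix $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}$ can be extracted in NumPy (0-based indices are used here, whereas the text uses 1-based indices; the values are arbitrary).
\begin{verbatim}
import numpy as np

K, J = 4, 6
B = np.arange(K*J, dtype=float).reshape(K, J)   # coefficient matrix B (K x J)

g = [1, 2, 4]      # one column (input) group  g in G
h = [0, 3]         # one row (output) group    h in H
k, j = 2, 4

beta_k_g = B[k, g]              # row sub-vector    beta_k^g
beta_h_j = B[h, j]              # column sub-vector beta_h^j
B_h_g    = B[np.ix_(h, g)]      # |h| x |g| sub-matrix B_h^g
\end{verbatim}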
\subsection{Sparse estimation of linear regression}
Let $\mathbf{X} \in \mathbb{R}^{J \times N}$ be the input data for $J$ inputs and $N$ individuals,
and $\mathbf{Y} \in \mathbb{R}^{K \times N}$ be the output data
for $K$ outputs. We model the functional mapping from the common
$J$-dimensional input space to the $K$-dimensional output space, using a linear
model parametrized by unknown regression coefficients $\mathbf{B} \in \mathbb{R}^{K \times J}$
as follows:
\begin{eqnarray}
\mathbf{Y} = \mathbf{B} \mathbf{X} + \mathbf{E}, \nonumber
\label{eq:reg}
\end{eqnarray}
where $\mathbf{E} \in \mathbb{R}^{K \times N}$ is a matrix of noise terms whose elements
are assumed to be identically and independently distributed as Gaussian with zero mean
and the identity covariance matrix.
Throughout the paper, we assume that $x_j^{i}$'s and $y_k^i$'s
are standardized
such that all rows of $\mathbf{X}$ and $\mathbf{Y}$ have zero mean and a constant variance,
and consider a model without an intercept.
In eQTL analysis, inputs are genotypes for $J$ loci encoded as 0, 1, or 2 in terms of
the number of minor alleles at a given locus, and output data are given as
expression levels of genes measured in a microarray experiment.
Then, the regression coefficients represent the strengths of associations
between genetic variations and gene expression levels.
Our proposed method for estimating the coefficients $\mathbf{B}$ is based on
a
group-structured multi-task regression approach that extends
existing regularized regression approaches including lasso \cite{tibshirani1996regression},
group lasso \cite{yuan2006model} and multi-task lasso \cite{obozinski2006multi},
which we briefly review below in the context of our eQTL mapping problem.
When $J \gg N$ and only a small number of inputs are expected
to influence outputs, lasso has been widely used and shown effective in
selecting the input variables relevant to outputs and setting the elements of
$\mathbf{B}$ for irrelevant inputs to zero \cite{zhang2008sparsity}.
Lasso obtains a sparse estimate of regression coefficients by optimizing
the least squared error criterion with an $L_1$ penalty over $\mathbf{B}$ as follows:
\begin{eqnarray}
\min_{\mathbf{B}} \frac{1}{2}
\lVert \mathbf{Y} - \mathbf{B} \mathbf{X} \rVert_F^2
+ \lambda \lVert \mathbf{B} \rVert_1,
\label{eq:lasso}
\end{eqnarray}
where $\lambda$ is the tuning parameter that determines the amount of
penalization.
The optimal value of $\lambda$
can be determined by cross validation or via an information criterion such as the BIC.
As in eQTL analysis it is often believed that the expression level of each gene is affected by
a relatively small number of genetic variations in the whole genome, lasso provides
an effective tool for identifying eQTLs from a large number of genetic variations.
Lasso has been previously applied to eQTL analysis \cite{brown2011application}
and more general genetic association
mapping problems \cite{wu2009genome}.
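As a toy illustration of the estimator in Eq. (\ref{eq:lasso}) (not the solver used in our experiments), one lasso problem per trait can be fitted with an off-the-shelf coordinate-descent solver; note that scikit-learn expects sample-by-feature matrices, i.e., the transposes of $\mathbf{X}$ and $\mathbf{Y}$ as defined above, and its \texttt{alpha} corresponds to $\lambda/N$ up to scaling.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, J, K = 100, 500, 20                               # samples, SNPs, traits
X = rng.integers(0, 3, size=(N, J)).astype(float)    # genotypes coded 0/1/2
B_true = np.zeros((K, J)); B_true[:5, :3] = 1.0      # a few true eQTLs
Y = X @ B_true.T + 0.1*rng.standard_normal((N, K))

# Standardize, then fit one lasso per trait (the problem decouples across rows of B).
X = (X - X.mean(0))/X.std(0)
Y = (Y - Y.mean(0))/Y.std(0)
B_hat = np.vstack([Lasso(alpha=0.1, fit_intercept=False).fit(X, Y[:, k]).coef_
                   for k in range(K)])               # shape (K, J); mostly exact zeros
\end{verbatim}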
While lasso considers the input variables independently to select relevant
inputs with non-zero regression coefficients, we may have prior knowledge
on how related input variables are grouped together and want to perform variable
selection at the group level rather than at the level of individual inputs.
Grouped variable selection approach can combine the statistical strengths across
multiple related input variables to achieve higher power for detecting
relevant inputs in the case of low signal-to-noise ratio.
Assuming the grouping structure over inputs is available as
$\mathcal{G} = \{{\mathbf{g}}_1, \ldots, {\mathbf{g}}_{|\mathcal{G}|}\}$, which is a subset
of the power set of $\{1, \ldots, J\}$, group lasso uses $L_1/L_2$ penalization
to enforce that all of the members in each group of input variables are jointly
relevant or irrelevant to each output. Group lasso obtains an estimate
of $\mathbf{B}$ by solving the following optimization problem:
\begin{eqnarray}
\min_{\mathbf{B}} \frac{1}{2} \lVert \mathbf{Y} - \mathbf{B} \mathbf{X} \rVert_F^2
+ \lambda \sum_{k=1}^K \sum_{\mathbf{g} \in \mathcal{G}} {\lVert \bm{\beta}^{\mathbf{g}}_{k} \rVert_2}
\label{eq:lasso_g},
\end{eqnarray}
where $\lambda$ is the tuning parameter. The second term in the above
equation represents an $L_1/L_2$ penalty over each row $\bm{\beta}_k$ of $\mathbf{B}$ for
the $k$-th output given $\mathcal{G}$, defined by
$\lVert \bm{\beta}_k \rVert_{L_1/L_2} = \sum_{\mathbf{g} \in \mathcal{G}} \lVert \bm{\beta}_k^{\mathbf{g}} \rVert_2$.
The $L_2$ part of the penalty plays the role of enforcing a joint selection
of inputs within each group, whereas the $L_1$ part of the penalty is applied
across different groups to encourage a group-level sparsity.
Group lasso can be applied to an eQTL mapping problem given biologically
meaningful groups of genetic variations that are functionally related.
For example, rather than individual genetic variations acting
independently to affect (or not affect) gene expressions, the variations are
often related through pathways that consist of multiple genes participating
in a common function. Thus, genetic variations can be grouped according to pathways
that contain genes carrying those genetic variations. Then, given this grouping,
group lasso can be used to select groups of genetic variations
in the same pathways as factors influencing gene expression levels \cite{silverpathway}.
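For non-overlapping groups, the $L_1/L_2$ penalty in Eq. (\ref{eq:lasso_g}) induces group-wise shrinkage through the block soft-thresholding operator; the sketch below is an illustration of this penalty and its proximal step for a single row $\bm{\beta}_k$, not the optimization method developed later in this paper.
\begin{verbatim}
import numpy as np

def l1_l2_penalty(beta_k, groups):
    # sum over g in G of ||beta_k^g||_2 for one row beta_k of B.
    return sum(np.linalg.norm(beta_k[g]) for g in groups)

def group_soft_threshold(v, lam):
    # Proximal operator of lam*||.||_2: shrink the whole group toward zero.
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= lam else (1.0 - lam/nrm)*v

groups = [np.array([0, 1, 2]), np.array([3, 4]), np.array([5])]
beta_k = np.array([0.1, -0.2, 0.05, 1.5, -1.0, 0.3])
print(l1_l2_penalty(beta_k, groups))
print(group_soft_threshold(beta_k[groups[0]], lam=0.5))  # whole group set to zero
\end{verbatim}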
Instead of having groups over inputs with outputs being independent as in group lasso,
the idea of using $L_1/L_2$ penalty for grouped variable
selection has also been applied to take advantage of the relatedness among outputs in multiple
output regression.
In multi-task regression for union support recovery \cite{obozinski2006multi},
one assumes that all the outputs share a common support of relevant input variables
and try to recover shared sparsity patterns across multiple outputs
by solving the following optimization problem:
\begin{eqnarray}
\min_{\mathbf{B}} \frac{1}{2}
\lVert \mathbf{Y} - \mathbf{B} \mathbf{X} \rVert_F^2
+ \lambda \sum_{j=1}^J \sum_{\mathbf{h} \in \mathcal{H}} \lVert \bm{\beta}_{\mathbf{h}}^j \rVert_2, \label{eq:multi_task2}
\end{eqnarray}
where
$\lambda$ can be determined by cross-validation.
In eQTL mapping, as gene expression levels are often correlated for the genes that participate
in a common function, it is reasonable to assume that those coexpressed genes
may be influenced by common genetic variations.
If gene module information is available, one can use the above model to
detect genetic variations influencing the expressions
of a subset of genes within each gene module.
This strategy corresponds to a variation of the standard group lasso, where group is defined
over outputs rather than inputs.
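When all outputs form a single group, Eq. (\ref{eq:multi_task2}) reduces to the standard multi-task lasso, for which off-the-shelf solvers exist; a minimal sketch with synthetic data (again up to scikit-learn's $1/N$ scaling of the loss) is given below.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(1)
N, J, K = 100, 200, 10
X = rng.standard_normal((N, J))
B_true = np.zeros((K, J)); B_true[:, :4] = rng.standard_normal((K, 4))
Y = X @ B_true.T + 0.1*rng.standard_normal((N, K))

mtl = MultiTaskLasso(alpha=0.1, fit_intercept=False).fit(X, Y)
support = np.where(np.linalg.norm(mtl.coef_, axis=0) > 0)[0]  # inputs shared by all traits
print(support)   # ideally recovers the first four inputs
\end{verbatim}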
Extending the idea of lasso and group lasso, we may have
group and individual level sparsity simultaneously using
combined $L_1$ and $L_1/L_2$ penalty.
In group lasso, if a group of coefficients is not jointly set to zero, all the members
in the group should have non-zero values.
However, sometimes it is desirable to set some members of the
group to zero if they are irrelevant to outputs.
Sparse group lasso \cite{friedman2010note} was proposed to address
the cases where
groups of coefficients
include both relevant and irrelevant
ones.
Using convex combination of $L_1$ and $L_1/L_2$ norms,
it solves the following convex optimization problem:
\begin{eqnarray}
\min_{\mathbf{B}} \frac{1}{2}
\lVert \mathbf{Y} - \mathbf{B} \mathbf{X} \rVert_F^2
+ \lambda_1 \lVert \mathbf{B} \rVert_1
+ \lambda_2 \sum_{k=1}^K \sum_{\mathbf{g} \in \mathcal{G}} {\lVert \bm{\beta}^{\mathbf{g}}_{k} \rVert_2},
\label{eq:sparse_group_lasso}
\end{eqnarray}
where $\lambda_1$ and $\lambda_2$ determine the individual and
group level sparsity, respectively.
The $L_1/L_2$ penalty shrinks
groups of coefficients to zero, and at the same time,
$L_1$ penalty sets irrelevant coefficients to zero individually within each group.
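For non-overlapping groups, the update implied by Eq. (\ref{eq:sparse_group_lasso}) composes the two shrinkage operators: entry-wise soft-thresholding for the $L_1$ part followed by block soft-thresholding for the $L_1/L_2$ part \cite{friedman2010note}. A minimal sketch of this composite proximal step applied to one group is given below.
\begin{verbatim}
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v)*np.maximum(np.abs(v) - lam, 0.0)

def sparse_group_prox(v, lam1, lam2):
    # Proximal operator of lam1*||.||_1 + lam2*||.||_2 for one group v.
    u = soft_threshold(v, lam1)          # within-group individual sparsity
    nrm = np.linalg.norm(u)
    return np.zeros_like(u) if nrm <= lam2 else (1.0 - lam2/nrm)*u

v = np.array([0.05, -0.8, 0.3])
print(sparse_group_prox(v, lam1=0.1, lam2=0.2))  # first entry zero, group survives
\end{verbatim}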
Our proposed model is motivated by
group lasso, multi-task lasso
and sparse group lasso, each of which can
exploit a pre-defined grouping structure of input
or output variables to
achieve better statistical power.
In the next section,
we will extend the existing models in such a way
that we can use
the groups in both input and output spaces simultaneously.
Adopting the idea of sparse group lasso,
we will also support variable selection
at the individual level.
\section{Jointly Structured Input-Output Lasso}
\label{sec:methods}
In this section, we propose SIOL
that incorporates structural constraints on both the inputs and outputs.
The model combines the mixed-norm regularizers for the groups of
inputs and outputs, which leads to the following optimization problem:
\begin{subequations}
\label{equ:reg5}
\begin{align}
\min_{\mathbf{B}} \frac{1}{2}
\lVert \mathbf{Y} - \mathbf{B} \mathbf{X} \rVert_F^2
& + \lambda_1
\lVert \mathbf{B} \rVert_1
\label{equ:reg5_b} \\
& + \lambda_2
\sum_{k=1}^K \sum_{{\mathbf{g}} \in \mathcal{G}} {\lVert \bm{\beta}^{\mathbf{g}}_{k} \rVert_2}
\label{equ:reg5_c} \\
& + \lambda_3
\sum_{j=1}^J \sum_{{\mathbf{h}} \in \mathcal{H}} \lVert \bm{\beta}_{\mathbf{h}}^j \rVert_2,
\label{equ:reg5_d}
\end{align}
\end{subequations}
where Eq. (\ref{equ:reg5_c})
incorporates the groups of inputs $\mathcal{G}=\{{\mathbf{g}}_1, \ldots, {\mathbf{g}}_{|\mathcal{G}|}\}$,
Eq. (\ref{equ:reg5_d})
incorporates the groups of the outputs
$\mathcal{H}=\{{\mathbf{h}}_1, \ldots, {\mathbf{h}}_{|\mathcal{H}|}\}$,
and Eq. (\ref{equ:reg5_b}) allows us to select individual coefficients.
Note that it is possible that there are overlaps between
$\bm{\beta}_k^{\mathbf{g}}$ and $\bm{\beta}_k^{\mathbf{g}'}$,
between $\bm{\beta}_{\mathbf{h}}^j$ and $\bm{\beta}_{\mathbf{h}'}^j$, and
between $\bm{\beta}_k^{\mathbf{g}}$ and $\bm{\beta}_{\mathbf{h}}^j$, where
$\mathbf{g} \neq \mathbf{g}', \mathbf{h} \neq \mathbf{h}'$ and
$\mathbf{g}, \mathbf{g}' \in \mathcal{G}, \; \mathbf{h}, \mathbf{h}' \in \mathcal{H}$.
The overlaps make it challenging to optimize Eq. (\ref{equ:reg5}), and
this issue will be addressed by our optimization method in Section \ref{sec:optimization}.
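Although optimizing Eq. (\ref{equ:reg5}) is non-trivial, evaluating its objective is straightforward and is useful, e.g., for monitoring the convergence of the solver developed in Section \ref{sec:optimization}. The sketch below (illustrative only; groups are given as 0-based index lists) computes the objective value.
\begin{verbatim}
import numpy as np

def siol_objective(B, X, Y, col_groups, row_groups, lam1, lam2, lam3):
    # Squared loss + L1 + input-group and output-group L1/L2 penalties.
    loss = 0.5*np.linalg.norm(Y - B @ X, 'fro')**2
    l1 = lam1*np.abs(B).sum()
    in_grp = lam2*sum(np.linalg.norm(B[k, g]) for k in range(B.shape[0])
                                              for g in col_groups)
    out_grp = lam3*sum(np.linalg.norm(B[h, j]) for j in range(B.shape[1])
                                               for h in row_groups)
    return loss + l1 + in_grp + out_grp

rng = np.random.default_rng(0)
K, J, N = 3, 4, 5
B = rng.standard_normal((K, J))
X = rng.standard_normal((J, N))
Y = rng.standard_normal((K, N))
print(siol_objective(B, X, Y, [[0, 1], [2, 3]], [[0, 1], [2]], 0.1, 0.2, 0.3))
\end{verbatim}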
Let us characterize the structural constraints imposed by the penalties in our model.
In our analysis, we investigate a block of coefficients involved in one output group $\mathbf{h}$ and
one input group $\mathbf{g}$, i.e, $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}$.
We start with the Karush-Kuhn-Tucker (KKT)
condition for Eq. (\ref{equ:reg5}):
\begin{equation}
(\mathbf{y}_k-\bm{\beta}_k \mathbf{X} )(\mathbf{x}_j)^T = \lambda_1 s_k^j + \lambda_2 c_k^j + \lambda_3 d_k^j,
\label{eq:subgradients}
\end{equation}
where $s_k^j$, $c_k^j$, and $d_k^j$ are the subgradient of
Eq. (\ref{equ:reg5_b}), Eq. (\ref{equ:reg5_c}), and Eq. (\ref{equ:reg5_d})
with respect to $\beta_k^j$, respectively.
For simple notation, we also define
${\mathbf{r}}_k^j=\mathbf{y}_k-\sum_{l\neq j} \beta_k^l \mathbf{x}_l$.
First, we consider the case where all coefficients in $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}$ become
zero simultaneously, i.e., $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}} = \bm{0}$.
Using KKT condition in Eq. (\ref{eq:subgradients}), we
can see that $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}} = \bm{0}$ if and only if
\begin{align}
\label{eq:sub1}
\sum_{k \in \mathbf{h}}\sum_{j \in \mathbf{g}} \left\{ \mathbf{r}_k^j(\mathbf{x}_j)^T - \lambda_1 s_k^j\right\}^2
\leq \sum_{k \in \mathbf{h}}\sum_{j \in \mathbf{g}} \left( \lambda_2 c_k^j + \lambda_3 d_k^j \right)^2
\leq \left( \lambda_2 \sqrt{|\mathbf{h}|} + \lambda_3 \sqrt{|\mathbf{g}|} \right)^2.
\end{align}
This condition follows from the Cauchy-Schwarz inequality and the facts that
$\sum_{j\in \mathbf{g}}(c_k^j)^2 \leq 1$ and $\sum_{k\in \mathbf{h}}(d_k^j)^2 \leq 1$.
Hence, if $\lambda_1$, $\lambda_2$ and $\lambda_3$ are large,
$\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}$ is likely to be set to zero jointly.
This structural sparsity is useful to filter out a large number of
irrelevant covariates since it considers both the group of
correlated inputs $\mathbf{g}$ and the group of correlated outputs $\mathbf{h}$ simultaneously.
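For illustration, the sufficient condition in Eq. (\ref{eq:sub1}) can be evaluated by choosing the $L_1$ subgradients $s_k^j$ that soft-threshold the partial correlations, which minimizes the left-hand side; the sketch below makes this assumption explicit and is not the full thresholding algorithm of Section \ref{sec:optimization}.
\begin{verbatim}
import numpy as np

def block_is_zero(B, X, Y, h, g, lam1, lam2, lam3):
    # Screening test for setting the whole block B_h^g to zero, with the L1
    # subgradients chosen to soft-threshold the partial correlations r_k^j (x_j)^T.
    lhs = 0.0
    for k in h:
        for j in g:
            r_kj = Y[k] - B[k] @ X + B[k, j]*X[j]   # y_k - sum_{l != j} beta_k^l x_l
            c = r_kj @ X[j]
            lhs += max(abs(c) - lam1, 0.0)**2
    rhs = (lam2*np.sqrt(len(h)) + lam3*np.sqrt(len(g)))**2
    return lhs <= rhs
\end{verbatim}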
Our model also inherits grouping effects for only input (or output) groups.
For the analysis of such grouping effects, we fix the groups of zero coefficients that overlap with, say, an input group $\bm{\beta}_k^{\mathbf{g}}$.
Formally speaking, let us define $\bm{\xi} = \{j: (\bm{\beta}_{\mathbf{h}'}^j = \bm{0}, j \in \mathbf{g}, \mathbf{h}' \in \mathcal{H})
\lor (\bm{\beta}_{k}^{\mathbf{g}'} = \bm{0}, j \in \mathbf{g}' \land \mathbf{g}) \}$,
and fix $\beta_k^j$s for all $j \in \bm{\xi}$.
Using the KKT condition in Eq. (\ref{eq:sub1}), $\bm{\beta}_k^{\mathbf{g}} = \bm{0}$ if
\begin{align}
\sum_{j \in \mathbf{g} - \bm{\xi}} \left\{ {\mathbf{r}}_k^j(\mathbf{x}_j)^T - \lambda_1 s_k^j\right\}^2
\leq \sum_{j \in \mathbf{g} - \bm{\xi}} \left( \lambda_2 c_k^j + \lambda_3 d_k^j \right)^2
\leq \lambda_2^2.
\label{eq:sub2}
\end{align}
Here, we know that $d_k^j = 0$ for $j \in \mathbf{g} - \bm{\xi}$ ($\beta_k^j = 0$ and $\bm{\beta}_{\mathbf{h}}^j \neq \bm{0}$) and
$\lambda_2 \sum_{j \in \mathbf{g}} (\beta_k^j)^2 = \lambda_2 \sum_{j \in \mathbf{g} - \bm{\xi}} (\beta_k^j)^2$,
and hence $\sum_{j \in \mathbf{g} - \bm{\xi}} \left( \lambda_2 c_k^j + \lambda_3 d_k^j\right)^2 \leq \lambda_2^2$.
This technique was previously introduced in \cite{yuanefficient} to handle overlapping group lasso.
One can see that if the size of $\bm{\xi}$ is large,
$\bm{\beta}_{k}^{\mathbf{g}}$ is more likely to be set to zero as a whole,
since a larger $\bm{\xi}$ reduces the left-hand side of Eq. (\ref{eq:sub2}).
This behavior explains the correlation effects between input and output group structures.
When a group of coefficients ($\bm{\beta}_k^{\mathbf{g}}$, $\bm{\beta}_{\mathbf{h}}^{j}$) corresponding to an input group or an output group become zero, they
affect other groups of coefficients that overlap with them; and
the overlapped coefficients are more likely to be zero.
These correlation effects between overlapping groups are desirable for inducing
appropriate structured sparsity as it allows us to share information across different
inputs and different outputs simultaneously.
We omit the analysis of the grouping effects for output groups,
as the argument is the same
except that the roles of the input and output groups are reversed.
Finally, we also have individual sparsity due to $L_1$ penalty in Eq. (\ref{equ:reg5_b}).
In this case, let us assume that
$\bm{\beta}_{k}^{\mathbf{g}} \neq \bm{0}$ and $\bm{\beta}_{\mathbf{h}}^{j} \neq \bm{0}$
since if the group of coefficients is zero, we automatically have $\beta_k^j=0$.
Using the KKT condition, $\beta_k^j=0$ if and only if
\begin{align}
|{\mathbf{r}}_k^j(\mathbf{x}_j)^T |
\leq \lambda_1.
\label{eq:sub3}
\end{align}
It is equivalent to the condition of lasso that sets a regression coefficient to zero.
Note that if $\lambda_2=\lambda_3 = 0$,
we have sparsity only at the individual levels, and our model is the same as lasso.
When a group of coefficients contains
both relevant and irrelevant ones,
we can set the irrelevant coefficients to zero using Eq. (\ref{eq:sub3}).
We briefly comment on the three tuning parameters ($\lambda_1$, $\lambda_2$, $\lambda_3$),
which can be determined by cross validation.
It is often computationally expensive to search for the optimal parameters over a 3-dimensional grid.
In practice, we instead reparameterize the group penalties as
$\lambda_2 = \lambda_2'\lambda_3'$ and
$\lambda_3 = (1-\lambda_2')\lambda_3'$.
Here $\lambda_2'$ controls the mixing proportion between
the input and output group structures, and
$\lambda_3'$ is the scaling factor that determines
the overall degree of penalization for the input and output groups.
In this setting, we still have three regularization parameters; however,
the reparameterization helps us to reduce the search space of the
tuning parameters since the range of $\lambda_2'$ is known ($0\leq \lambda_2' \leq 1$).
Let us discuss the statistical and biological benefits of
our model.
First, our model can capture rich structured sparsity in $\mathbf{B}$.
The structured sparsity patterns include zero (sub)rows, zero (sub)columns and
zero blocks of $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}$.
It is impossible to obtain such rich sparsity patterns if we
use structural information on only the input side or only the output side.
For example, group lasso \cite{yuan2006model}
or multi-task lasso \cite{obozinski2006multi} consider
structured sparsity patterns in either rows or columns in $\mathbf{B}$.
Second, our model is robust to the
groups which
contain both relevant and irrelevant coefficients.
If predefined groups of inputs and outputs
are unreliable, our model may still work
since the irrelevant coefficients can be set to zero individually via $L_1$ penalty
even when their groups are not jointly set to zero.
Third, the grouping effects induced by our model in Eqs. (\ref{eq:sub1}) and (\ref{eq:sub2})
show that we can exploit
the correlation effects between input and output groups.
When reliable input and output groups are available,
the advantage gained from the structural information is
further enhanced by these correlation effects, beyond
the sum of the individual benefits of the input and output groups.
When applied to GWA mapping of eQTLs, our model offers a number of desirable properties.
It is likely that our model can detect associated SNPs with a low signal-to-noise ratio by
taking advantage of rich structural information.
In GWA studies, one of the main challenges
is to detect SNPs having weak signals with limited sample size.
In complex diseases such as cancer and diabetes, biologists
believe that multiple SNPs are jointly responsible for diseases but not necessarily with
strong marginal effects \cite{mccarthy2008genome}.
However,
such causal SNPs are hard to detect mainly due to insufficient number of samples.
Our model can deal with this challenge by taking advantage of both input and output group
structures.
First, by grouping inputs (or SNPs), we can increase the signal-to-noise ratio.
Suppose each SNP has a small marginal signal.
If a group of coefficients is relevant,
their joint strength is increased, and it is unlikely that they will be jointly set to zero.
On the other hand, if a group of coefficients is irrelevant,
their joint strength remains small, and it is likely that they will be set to zero.
Second, taking advantage of the output groups, we can share information across
the correlated outputs,
and it decreases the sample size required for successful support recovery
\cite{negahban2011simultaneous}.
Overall, to detect causal SNPs having small effects,
our model increases signal-to-noise ratio by grouping
the SNPs, and simultaneously
decreases the required number of samples by grouping
phenotypic traits.
Unfortunately, the optimization problem resulting from Eq. (\ref{equ:reg5}) is non-trivial.
One may notice that each $\beta_k^j$ appears
in all three penalties of Eq. (\ref{equ:reg5_b} -- \ref{equ:reg5_d}).
Thus, our structured regularizer is non-separable, which makes
simple coordinate descent optimization inapplicable.
The overlaps between/within input and output groups
add another difficulty.
Furthermore,
we must induce appropriate sparsity patterns (i.e., exact zeros)
in addition to minimizing Eq. (\ref{equ:reg5});
therefore, approximate methods based on merely relaxing the shrinkage functions are not suitable.
In the following section, we propose ``hierarchical group thresholding'' method (HiGT)
that efficiently solves our optimization problem
with hierarchically organized thresholding operations.
\section{Optimization method}
\label{sec:optimization}
\begin{figure}
\centering
\subfigure[]{\includegraphics[width=0.8\textwidth]{all_zeros2.eps}}
\subfigure[]{\includegraphics[width=0.35\textwidth]{opt_strategy4.eps}}
\caption{
Sparsity patterns of $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}$ and a DAG constructed with the sparsity patterns.
The shaded area shows zero entries.
(a) All possible zero patterns of $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}$
that can be induced by Eq. (\ref{equ:reg5_b},\ref{equ:reg5_c},\ref{equ:reg5_d}) when
$\mathbf{g} = \{1,2\}$ and $\mathbf{h} = \{1,2\}$.
(b) An example of a DAG that
contains the zero patterns of $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}$.
The root node contains zero pattern for $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}=\bm{0}$,
and the internal nodes represent the zero patterns for $\bm{\beta}_{\mathbf{h}}^{j}=\bm{0}$ (one column is zero)
or $\bm{\beta}_{k}^{\mathbf{g}}=\bm{0}$ (one row is zero). The leaf nodes denote $\beta_{k}^{j}=0$.
In the DAG, the zero pattern of children nodes should be a subset of
their parent nodes' zero patterns.
}
\label{fig:all_zeros}
\end{figure}
In this section, we propose our method to optimize Eq. (\ref{equ:reg5}).
We start with a non-zero $\mathbf{B}$ initialized by other methods (e.g. ridge regression),
and always reduce the set of non-zero $\beta_k^j$s using thresholding operations
as our procedure proceeds.
Our framework is an iterative procedure consisting
of two steps. First, we set groups (or individual entries) of regression coefficients
to zero by checking optimality conditions (called thresholding)
as we walk through a predefined
directed acyclic graph (DAG).
When we walk through the nodes of the DAG,
some $\beta_k^j$s might not achieve zero.
Second, we update only these non-zero $\beta_k^j$s using any available optimization technique.
Let us first
characterize the zero patterns induced by Eq. (\ref{equ:reg5_b} -- \ref{equ:reg5_d}).
We separately consider a block of $\mathbf{B}$ which consists of
one input group $(\mathbf{g}\in \mathcal{G})$ and one output group $(\mathbf{h}\in \mathcal{H})$.
We observe that there are grouping effects
(coefficients becoming zero simultaneously) for each $\mathbf{g}$ and $\mathbf{h}$:
$\bm{\beta}_k^{\mathbf{g}}=\bm{0}$
and $\bm{\beta}_{\mathbf{h}}^{j}=\bm{0}$.
We also have $\mathbf{B}_{{\mathbf{h}}}^{{\mathbf{g}}} = \bm{0}$
when $\bm{\beta}_k^{\mathbf{g}} = \bm{0}, \; \forall k\in \mathbf{h}$
or $\bm{\beta}_{\mathbf{h}}^{j} = \bm{0}, \; \forall j \in \mathbf{g}$.
Each covariate can also be zero individually, i.e., $\beta_k^j = 0$,
due to the $\ell_1$ penalty in Eq. (\ref{equ:reg5_b}).
Figure \ref{fig:all_zeros}(a) shows
all the possible
zero patterns of $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}$ induced
by Eq. (\ref{equ:reg5_b} -- \ref{equ:reg5_d}).
Given these sparsity patterns,
to induce structured sparsity,
one might be able to check whether or not
these zero patterns satisfy optimality conditions
and discard irrelevant covariates accordingly.
However, this approach may be inefficient, as it
needs to examine a large number of
zero patterns.
Instead, to efficiently check the zero patterns,
we will construct a DAG, and exploit the inclusion relationships
between the zero patterns.
The main idea is that we want to be able to check all zero patterns
by traversing the DAG while avoiding unnecessary optimality checks.
In Figure \ref{fig:all_zeros}(b), we show an example of the DAG for
$\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}$ when $\mathbf{g} = \{1,2\}$
and $\mathbf{h} = \{1,2\}$.
We denote the set of all possible zero patterns of $\mathbf{B}$
by $\mathcal{Z} = \{Z_1,\ldots,Z_{|\mathcal{Z}|}\}$.
For example, $Z_1$ can be a zero pattern for
$\mathbf{B}_{\mathbf{h}}^{\mathbf{g}} = \bm{0}$ (the root node in Figure \ref{fig:all_zeros}(b)).
Let $\mathbf{B}(Z_t)$ denote the coefficients of $\mathbf{B}$ corresponding to $Z_t$'s zero pattern.
Then we define the DAG as follows:
A node is represented by $Z \in \mathcal{Z}$,
and there exists a directed edge from
$Z_1 \in \mathcal{Z}$ to $Z_2 \in \mathcal{Z}$ if and only if $Z_1 \supset Z_2$
and $\nexists Z \in \mathcal{Z}: Z_1 \supset Z \supset Z_2$.
For example, in Figure \ref{fig:all_zeros}(b), the zero patterns
of the nodes in the second level include the zero patterns of their children.
In general, when we have multiple input and output groups,
we can generate a DAG for each $\mathbf{B}_{{\mathbf{h}}}^{{\mathbf{g}}}$ separately
and then connect all the DAGs to the root node for $\mathbf{B} = \bm{0}$.
This graph originates from the Hasse diagram \cite{cameron1994combinatorics}
and was previously used to find a minimal set of
groups for inducing structured sparsity \cite{jenatton2009structured}.
We can readily observe that our procedure has the following properties:
\vspace{-0.2cm}
\begin{itemize}
\item Walking through the DAG, we can check all possible zero patterns
explicitly without resorting to heuristics or approximations.
\item If $\mathbf{B}(Z) = \bm{0}, \; Z \in \mathcal{Z}$, we know that
all the descendants of $Z$ are also zero due to
the inclusion relations of the DAG. Hence, we can ``skip''
checking the optimality conditions for the descendants of
$Z$.
\end{itemize}
\vspace{-0.2cm}
These properties make our optimization framework attractive for the following reasons.
First, we can achieve accurate zero patterns in $\mathbf{B}$
since we check all possible sparsity patterns when walking through the DAG.
Second, if $\mathbf{B}$ is sparse,
our framework is very efficient since
we can skip the optimality checks for many zero patterns in $\mathcal{Z}$.
In most cases, we will check only nodes
located at the upper levels of the DAG.
Third, our framework is simple to implement. All we need
is to check whether each node in the DAG attains zero and
update non-zero $\beta_k^j$s only when necessary.
Specifically, our hierarchical group-thresholding employs the following procedure:
\vspace{-0.1cm}
\begin{enumerate}
\vspace{-0.1cm}
\item Initialize a non-zero $\mathbf{B}$ using any available methods (e.g. ridge regression).
\vspace{-0.1cm}
\item Construct a DAG that contains all zero patterns of $\mathbf{B}$
that can be induced by the penalty in Eq. (\ref{equ:reg5_b}, \ref{equ:reg5_c}, \ref{equ:reg5_d}).
\vspace{-0.1cm}
\item
\label{algo:iter1}
Use depth-first-search (DFS) to traverse the DAG, and check the optimality conditions
to see if the zero patterns at each node $Z$ achieve zero.
If $\mathbf{B}(Z) = 0$
or $Z$ satisfies the optimality condition to be zero,
set $\mathbf{B}(Z) = 0$,
skip the descendants of $Z$, and visit the next node according to the DFS order.
\vspace{-0.1cm}
\item
\label{algo:iter2}
For the $\beta_k^j$'s that did not achieve zero in the previous step,
update their coefficients using
any available optimization algorithm.
\vspace{-0.1cm}
\item Iterate steps \ref{algo:iter1} and \ref{algo:iter2} until Eq. (\ref{equ:reg5}) converges.
\end{enumerate}
\vspace{-0.1cm}
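To make this procedure concrete, the following Python-style sketch outlines one thresholding sweep.
It is an illustration only: the DAG is represented as a dictionary from each node to its children,
and \texttt{check\_zero} and \texttt{update\_coef} are placeholders for the optimality checks and
the coordinate update derived below; none of these names refer to a released implementation.
\begin{verbatim}
# Toy skeleton of one HiGT sweep: traverse the DAG of zero patterns in DFS
# order, zero out whole patterns when the (placeholder) optimality check
# succeeds, skip their descendants, and update the remaining coefficients.
import numpy as np

def dfs_order(dag, root):
    """Nodes reachable from root, in depth-first (pre)order."""
    order, stack, seen = [], [root], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        stack.extend(reversed(dag.get(node, [])))
    return order

def descendants(dag, node):
    """All nodes strictly below `node` in the DAG."""
    out, stack = set(), list(dag.get(node, []))
    while stack:
        n = stack.pop()
        if n not in out:
            out.add(n)
            stack.extend(dag.get(n, []))
    return out

def higt_sweep(B, dag, root, patterns, check_zero, update_coef):
    """One pass; patterns[node] lists the (row, col) indices of that node."""
    skip = set()
    for node in dfs_order(dag, root):
        if node in skip:
            continue
        rows, cols = zip(*patterns[node])
        if np.all(B[rows, cols] == 0) or check_zero(B, patterns[node]):
            B[rows, cols] = 0.0             # threshold the whole pattern at once
            skip |= descendants(dag, node)  # children are zero by inclusion
        elif len(patterns[node]) == 1:      # leaf node: a single coefficient
            update_coef(B, patterns[node][0])
    return B
\end{verbatim}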
Below we briefly present the derivations of the three
ingredients of our optimization framework:
1) the construction of the DAG,
2) the optimality condition for each $Z \in \mathcal{Z}$ in the DAG, and
3) the rule for updating non-zero regression coefficients.
Our optimization method is summarized in Algorithm \ref{alg:hGroupThres}.
\begin{algorithm}[!ht]
\caption{Hierarchical group-thresholding method for Eq. (\ref{equ:reg5})}
{\scriptsize
\begin{algorithmic}
\label{alg:hGroupThres}
\STATE $\mathbf{B} \leftarrow \mbox{coefficients estimated by ridge regression}$
\STATE $\mathcal{G} \leftarrow \mbox{groups of inputs}$
\STATE $\mathcal{H} \leftarrow \mbox{groups of outputs}$
\STATE $D(\mathcal{Z},\mathcal{E}) \leftarrow \mbox{DAG including all zero patterns}$
\STATE $\{Z_{(1)},Z_{(2)},\ldots,Z_{(|\mathcal{Z}|)}\} \leftarrow \mbox{DFS order of $\mathcal{Z}$ in $D$}$
\REPEAT
\STATE $t \leftarrow 1$
\WHILE{$t\leq |\mathcal{Z}|$}
\IF{$Z_{(t)}$ contains $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}=\bm{0}$}
\STATE c $\leftarrow$ Eq. (\ref{equ:rule1})
\ELSIF{$Z_{(t)}$ contains $\bm{\beta}_k^{\mathbf{g}}=\bm{0}$}
\STATE c $\leftarrow$ Eq. (\ref{equ:rule2})
\ELSIF{$Z_{(t)}$ contains $\bm{\beta}_{\mathbf{h}}^{j}=\bm{0}$}
\STATE c $\leftarrow$ Eq. (\ref{equ:rule3})
\ELSIF{$Z_{(t)}$ contains $\beta_{k}^{j}=0$}
\STATE c $\leftarrow$ Eq. (\ref{equ:rule4})
\ENDIF
\IF{c holds (condition for $\mathbf{B}(Z_{(t)}) = \bm{0}$) or $\mathbf{B}(Z_{(t)}) = \bm{0}$}
\STATE $\mathbf{B}(Z_{(t)}) = \bm{0}$ (Set zero to $Z_{(t)}$'s zero pattern)
\STATE $t \leftarrow $ DFS order of $t'$ such that
$Z_{(t')}$ is not a descendant of $Z_{(t)}$, $t'>t$
and $\nexists t{''}: t'>t{''}>t$
(Skip the descendants of $Z_{(t)}$)
\ELSIF{c $=$ Eq. (\ref{equ:rule4})}
\STATE Update $\beta_k^j$ using Eq. (\ref{equ:update_rule})
(Updating non-zero regression coefficients)
\STATE $t \leftarrow t+1$
\ELSE
\STATE $t \leftarrow t+1$
\ENDIF
\ENDWHILE
\UNTIL{convergence}
\end{algorithmic}
}
\end{algorithm}
\vspace{-0.3cm}
\paragraph{Construction of the DAG}
To generate the DAG, first we define the set of nodes
$\mathcal{Z}$. For each block of $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}$,
we are interested in the four types of zero patterns as follows:
\vspace{-0.4cm}
\begin{enumerate}
\item $\mathbf{B}_{{\mathbf{h}}}^{{\mathbf{g}}}$ is zero: $\mathbf{B}_{{\mathbf{h}}}^{{\mathbf{g}}} = \bm{0}$.
\item One row in $\mathbf{B}_{{\mathbf{h}}}^{{\mathbf{g}}}$ is zero:
$\bm{\beta}_{k}^{{\mathbf{g}}} = \bm{0}$, $k\in {\mathbf{h}}$.
\item One column in $\mathbf{B}_{{\mathbf{h}}}^{{\mathbf{g}}}$ is zero:
$\bm{\beta}_{{\mathbf{h}}}^{j} = \bm{0}$, $j\in {\mathbf{g}}$.
\item One regression coefficient in $\mathbf{B}_{{\mathbf{h}}}^{{\mathbf{g}}}$ is zero:
$\beta_{k}^{j} = 0$, $k\in {\mathbf{h}}$ and $j\in {\mathbf{g}}$.
\end{enumerate}
\vspace{-0.3cm}
These zero patterns of
$\mathbf{B}$ are shown in Figure \ref{fig:all_zeros}(b)
when $|\mathbf{g}| = |\mathbf{h}| = 2$.
For example, Case 2 and 3 correspond to the nodes at the second level
of the DAG.
For all $\mathbf{g} \in \mathcal{G}$ and $\mathbf{h} \in \mathcal{H}$,
we can define nodes $Z \in \mathcal{Z}$ using the above zero patterns.
Then we need to determine the edges of the DAG by
investigating the relations of the nodes.
One can also easily see the following inclusion relationships among the zero patterns:
$\mbox{Case } 1 \supset \mbox{Case } 2, \mbox{Case } 3 \supset \mbox{Case } 4$.
Given the zero patterns and their relations,
we create a directed
edge $Z_1 \rightarrow Z_2$
if and only if $Z_1 \supset Z_2$
and $\nexists Z \in \mathcal{Z}: Z_1 \supset Z \supset Z_2$.
In Figure \ref{fig:all_zeros}(b) we show an example of the DAG.
Finally, we make a dummy root node and generate an edge from the
dummy node to the root of all DAGs for $\mathbf{B}_{{\mathbf{h}}}^{{\mathbf{g}}}= \bm{0}$.
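As an illustration of this construction, the self-contained Python sketch below enumerates the zero
patterns of a single block and connects them with Hasse-style edges by set inclusion. Representing
each pattern as a set of (row, column) index pairs is our own choice, and the group indices are toy
values.
\begin{verbatim}
# Build the DAG of zero patterns for one block B_h^g: an edge Z1 -> Z2 exists
# iff Z1 is a strict superset of Z2 and no other pattern lies strictly between.
from itertools import product

def block_zero_patterns(g, h):
    """Patterns (frozensets of (k, j) pairs): full block, rows, columns, entries."""
    full = frozenset(product(h, g))                            # B_h^g = 0
    rows = [frozenset((k, j) for j in g) for k in h]           # beta_k^g = 0
    cols = [frozenset((k, j) for k in h) for j in g]           # beta_h^j = 0
    singles = [frozenset([(k, j)]) for k in h for j in g]      # beta_k^j = 0
    return [full] + rows + cols + singles

def hasse_edges(patterns):
    """Directed edges Z1 -> Z2 with Z1 > Z2 and nothing strictly in between."""
    return [(z1, z2) for z1, z2 in product(patterns, patterns)
            if z1 > z2 and not any(z1 > z > z2 for z in patterns)]

# Example: g = {1, 2} and h = {1, 2}, as in the figure above: 9 nodes, 12 edges.
patterns = block_zero_patterns(g=[1, 2], h=[1, 2])
print(len(patterns), "nodes,", len(hasse_edges(patterns)), "edges")
\end{verbatim}
In a full implementation, one such DAG would be built per block and all of them attached to the
dummy root node described above.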
\paragraph{Optimality conditions for structured sparsity patterns}
Given a block of $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}}$,
here we show optimality conditions for the four sparsity patterns:
(1) $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}} = \bm{0}$,
(2) $\bm{\beta}_k^{\mathbf{g}} = \bm{0}$,
(3) $\bm{\beta}_{\mathbf{h}}^j = \bm{0}$, and
(4) $\beta_k^j = 0$,
($j \in \mathbf{g}, k \in \mathbf{h}, \mathbf{g} \in \mathcal{G}, \mathbf{h} \in \mathcal{H}$).
In Figure \ref{fig:all_zeros}(b), the root node
corresponds to the first case, the nodes at the second level
correspond to the second and third case, and the leaf nodes correspond to
the fourth case.
Our derivation of the following optimality conditions
uses the fact that all zero coefficients are fixed,
which makes it simple to deal with overlapping groups.
We denote the column and row indices of zero entries by
$\bm{\eta} = \{j : \beta_{k}^j = 0, \; \forall j \in \mathbf{g}, \; \forall k \in \mathbf{h}\}$ and
$\bm{\gamma} = \{k : \beta_{k}^j = 0, \; \forall j \in \mathbf{g}, \; \forall k \in \mathbf{h}\}$.
First, the optimality condition for the first case is as follows: $\mathbf{B}_{\mathbf{h}}^{\mathbf{g}} = \bm{0}$ if
\begin{align}
\label{equ:rule1}
\sum_{k \in \mathbf{h} - \bm{\gamma}} \sum_{j \in \mathbf{g} - \bm{\eta}}
\left\{ {\mathbf{r}}_k^j (\mathbf{x}_j)^T - \lambda_1 s_k^j \right\}^2
\leq \left(\lambda_2 \sqrt{|\mathbf{h}|}
+ \lambda_3 \sqrt{|\mathbf{g}|}\right)^2,
\end{align}
where
\[s_k^j = \left\{
\begin{array}{l l}
\frac{{\mathbf{r}}_k^j (\mathbf{x}_j)^T }{\lambda_1}
& \mbox{if $\left|\frac{ {\mathbf{r}}_k^j (\mathbf{x}_j)^T}{\lambda_1}\right| \leq 1$}\\
sign\left(\frac{ {\mathbf{r}}_k^j (\mathbf{x}_j)^T}{\lambda_1}\right)
& \mbox{if $\left|\frac{ {\mathbf{r}}_k^j (\mathbf{x}_j)^T}{\lambda_1}\right| > 1$}.\\
\end{array} \right.\]
It is derived using the KKT conditions in Eq. (\ref{eq:subgradients})
and the Cauchy-Schwarz inequality.
The second case of structured sparsity, i.e., $\bm{\beta}_k^{\mathbf{g}}=\bm{0}$, is achieved if
\begin{align}
\label{equ:rule2}
\sum_{j \in {\mathbf{g} - \bm{\eta}}} \left\{{\mathbf{r}}_k^j(\mathbf{x}_j)^T -\lambda_1 s_k^j \right\}^2
\leq \lambda_2^2,
\end{align}
and the optimality condition for the third case, i.e., $\bm{\beta}_{\mathbf{h}}^{j}=\bm{0}$, is
\begin{align}
\label{equ:rule3}
\sum_{k \in \mathbf{h} - \bm{\gamma}} \left\{{\mathbf{r}}_k^j(\mathbf{x}_j)^T - \lambda_1 s_k^j \right\}^2
\leq \lambda_3^2.
\end{align}
These conditions can be established using the KKT conditions in Eq. (\ref{eq:subgradients}),
fixing all the zero coefficients.
Finally, assuming that
$\bm{\beta}_{\mathbf{h}}^{j} \neq \bm{0}$ and $\bm{\beta}_{k}^{\mathbf{g}} \neq \bm{0}$,
the fourth case has the optimality condition of
\begin{align}
\label{equ:rule4}
| {\mathbf{r}}_k^j (\mathbf{x}_j)^T| \leq \lambda_1.
\end{align}
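For concreteness, these conditions can be evaluated with a few lines of code once the partial
correlations are available. In the Python sketch below, the array \texttt{c} is assumed to hold the
quantities ${\mathbf{r}}_k^j (\mathbf{x}_j)^T$ for the non-fixed rows and columns of the block
(i.e., $k \in \mathbf{h}-\bm{\gamma}$ and $j \in \mathbf{g}-\bm{\eta}$); how these residual
correlations are computed and cached is omitted.
\begin{verbatim}
# Sketch of the group-level optimality checks; c holds the partial
# correlations r_k^j (x_j)^T, n_h = |h| and n_g = |g| are the group sizes.
import numpy as np

def soft(c, lam1):
    """lambda_1 * s_k^j, i.e. c clipped to the interval [-lambda_1, lambda_1]."""
    return np.clip(c, -lam1, lam1)

def block_is_zero(c, lam1, lam2, lam3, n_h, n_g):      # Eq. (rule1): B_h^g = 0
    lhs = np.sum((c - soft(c, lam1)) ** 2)
    return lhs <= (lam2 * np.sqrt(n_h) + lam3 * np.sqrt(n_g)) ** 2

def row_is_zero(c_row, lam1, lam2):                     # Eq. (rule2): beta_k^g = 0
    return np.sum((c_row - soft(c_row, lam1)) ** 2) <= lam2 ** 2

def col_is_zero(c_col, lam1, lam3):                     # Eq. (rule3): beta_h^j = 0
    return np.sum((c_col - soft(c_col, lam1)) ** 2) <= lam3 ** 2

def entry_is_zero(c_kj, lam1):                          # Eq. (rule4): beta_k^j = 0
    return abs(c_kj) <= lam1
\end{verbatim}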
\paragraph{Update rule for nonzero coefficients}
If none of the above optimality conditions holds, we
know that $\beta_k^j\neq 0$.
In this case, the gradient of Eq. (\ref{equ:reg5}) with respect to $\beta_k^j$ exists, and
we can update $\beta_k^j$ using any coordinate descent procedures.
With a little bit of algebra, we derive the following update rule: $\beta_k^j = \hat{\beta}_{k,-}^{j} + \hat{\beta}_{k,+}^{j}$ where
{\footnotesize
\begin{align}
\label{equ:update_rule}
\hat{\beta}_{k,-}^{j} &= \min\left[0, \left( 1 +
\sum_{j\in \mathbf{g}} \frac{\lambda_2}{\left\|\bm{\beta}_k^{\mathbf{g}}\right\|_2} +
\sum_{k\in \mathbf{h}} \frac{\lambda_3}{\left\|\bm{\beta}_{\mathbf{h}}^{j}\right\|_2} \right)^{-1} \left\{\mathbf{r}_k (\mathbf{x}_j)^T+\lambda_1 \right\}\right],
\\
\hat{\beta}_{k,+}^{j} &= \max\left[0, \left( 1+
\sum_{j\in \mathbf{g}} \frac{\lambda_2}{\left\|\bm{\beta}_k^{\mathbf{g}}\right\|_2} +
\sum_{k\in \mathbf{h}} \frac{\lambda_3}{\left\|\bm{\beta}_{\mathbf{h}}^{j}\right\|_2} \right)^{-1} \left\{ \mathbf{r}_k (\mathbf{x}_j)^T-\lambda_1 \right\}\right].
\nonumber
\end{align}
}
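A minimal sketch of this update is given below. For simplicity it assumes that $\beta_k^j$ belongs
to a single input group and a single output group, so that the penalty weights reduce to
$\lambda_2/\|\bm{\beta}_k^{\mathbf{g}}\|_2$ and $\lambda_3/\|\bm{\beta}_{\mathbf{h}}^{j}\|_2$; the
variable \texttt{corr} stands for $\mathbf{r}_k (\mathbf{x}_j)^T$.
\begin{verbatim}
# Coordinate update for one nonzero beta_k^j (single input/output group assumed);
# row_norm = ||beta_k^g||_2 and col_norm = ||beta_h^j||_2 are the current norms.
def update_beta(corr, lam1, lam2, lam3, row_norm, col_norm):
    denom = 1.0 + lam2 / row_norm + lam3 / col_norm
    beta_minus = min(0.0, (corr + lam1) / denom)   # negative part of the update
    beta_plus = max(0.0, (corr - lam1) / denom)    # positive part of the update
    return beta_minus + beta_plus

# Example: a correlation well above lambda_1 gives a shrunken positive value.
print(update_beta(corr=0.8, lam1=0.1, lam2=0.1, lam3=0.1,
                  row_norm=0.5, col_norm=0.5))     # prints 0.5
\end{verbatim}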
We close this section by summarizing the desirable properties of our optimization method.
First, when $\mathbf{B}$ is sparse, our optimization procedure is very fast.
We take advantage of not only the hierarchical structure of the DAG,
but also the simple forms of the optimality conditions with residuals.
If we keep track of the residuals, we can efficiently check
the optimality conditions for each sparsity pattern.
Second, our thresholding operations check all possible sparsity patterns,
resulting in appropriate structured sparsity in $\mathbf{B}$.
It is important for eQTL mapping since
the coefficients for irrelevant SNPs can be set to exactly zero.
Third, our optimization method can deal with overlaps between/within
the coefficients for input groups ($\bm{\beta}_k^{\mathbf{g}}$'s)
and output groups ($\bm{\beta}_{\mathbf{h}}^{j}$'s).
Since input or output groups may overlap, and they must be considered simultaneously,
this property of our method is essential.
Finally, unlike some previous methods
\cite{yuan2006model,tibshirani1996regression},
we make no use of the assumption that the design matrix $\mathbf{X}$ is orthonormal ($\mathbf{X}^T\mathbf{X} = \mathbf{I}$).
This dropping of the assumption is desirable for eQTL mapping in particular as
covariates (SNPs) are highly correlated due to linkage disequilibrium.
If one uses orthonormalization as a preprocessing step to make $\mathbf{X}$ orthonormal,
there is no guarantee that the same solution for the original problem is attained \cite{friedman2010note}.
\section{Dealing with structures inducing higher-order effects}
\label{sec:model_interaction}
So far, we have been dealing with input and output structures in the context of multi-variate and multi-task linear regression where the influences from the covariates on the responses are additive. When higher interactions take place among covariates, which is known as epistasis and is prevalent
in genetic associations \cite{carlson2004mapping}, a common approach to model such
effects is polynomial regression \cite{montgomery2001introduction},
where higher-order terms of the covariates are included as additional regressors. However, in high-dimensional problems
such as the one studied in this paper, this strategy is infeasible even for 2nd-order polynomial regression because,
given say, even a standard genome dataset with $\sim 10^5$ SNPs, one is left with $\sim 10^{10}$ regressors
which is both computationally and statistically unmanageable. In this section, we briefly show how to circumvent
this difficulty using structured regularization based on prior information of covariate interactions.
This strategy is essentially a straightforward generalization
of the ideas in Section \ref{sec:methods} to a
polynomial regression setting using a special type of structure encoded by a graph.
Therefore all the algorithmic solutions developed in Section \ref{sec:optimization}
for the general optimization problem in Section \ref{sec:methods} still apply here.
Following common practice in GWA literature, here we consider only 2nd-order interactions between SNP pairs.
Instead of including all SNP pairs as regressors,
we employ a synthetic genetic interaction network \cite{costanzo2010genetic}
to define a relatively small candidate set ${\bf U}$ of interacting SNP pairs.
A synthetic genetic interaction network is derived from biological evidence of pairwise functional interactions between genes,
such as double knockout experiments~\cite{tong2004global,koh2009drygin,costanzo2010genetic,boone2007exploring}.
It contains information about pairs of genes whose mutations affect the phenotype only when mutations are present in both genes,
and this represents a set of {\it ground-truth} interaction effects.
Given such a network, we consider only those pairs of SNPs that are physically
located in the genome near the genes that interact in the network within a certain distance.
A 2nd-order regressor set ${\bf U}$ generated by this scheme is not only much smaller than an exhaustive pair-set,
but also biologically more plausible.
Note that
it is possible to include other sets of SNP pairs from other resources in our candidate set.
For example, in our experiments, we also added SNP pairs that passed two-locus epistasis test
with p-value $<10^{-5}$ into the set ${\bf U}$.
After finding the candidate SNP pairs,
we generate the groups of SNPs or interacting SNP pairs in two steps.
In the first step, we find highly interconnected subgraphs (or clusters)
in the genetic interaction network
using any graph clustering algorithm.
In our experiments, we used the MCODE algorithm \cite{bader2003automated}
for clustering the network.
In the second step, we group all the SNPs or SNP pairs that are linked to the genes
in a cluster. We link genes and SNPs based on their physical locations in the genome.
For example, if a SNP is located near a gene within a certain
distance (e.g., $<$500bp), they are linked together.
Finally, we define individual SNPs in the $m$th group as ${\mathbf{g}}_m \in \mathcal{G}$
and SNP pairs in the $m$th group as ${\mathbf{l}}_m \in \mathcal{L}$.
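The following toy Python sketch illustrates this two-step construction; the gene and SNP
coordinates, interaction edges, and clusters are made-up placeholders (real data would additionally
require chromosome-aware distances).
\begin{verbatim}
# Toy sketch: build the candidate pair set U and the SNP / SNP-pair groups
# from gene clusters; all coordinates below are invented for illustration.
gene_pos = {"G1": 1000, "G2": 5000, "G3": 9000, "G4": 15000}
snp_pos = {"S1": 800, "S2": 5200, "S3": 9100, "S4": 14900}
interactions = [("G1", "G2"), ("G3", "G4")]   # edges of the interaction network
clusters = [["G1", "G2"], ["G3", "G4"]]       # e.g. output of graph clustering
MAX_DIST = 500                                # SNP-gene linking distance (bp)

def linked_snps(gene):
    """SNPs within MAX_DIST of the gene position (same chromosome assumed)."""
    return [s for s, p in snp_pos.items() if abs(p - gene_pos[gene]) <= MAX_DIST]

# Candidate interacting SNP pairs U: SNPs linked to interacting gene pairs.
U = sorted({tuple(sorted((s1, s2)))
            for ga, gb in interactions
            for s1 in linked_snps(ga) for s2 in linked_snps(gb)})

# Groups: all SNPs (g_m) and all candidate SNP pairs (l_m) linked to a cluster.
snp_groups, pair_groups = [], []
for cluster in clusters:
    snps = sorted({s for gene in cluster for s in linked_snps(gene)})
    snp_groups.append(snps)
    pair_groups.append([p for p in U if p[0] in snps and p[1] in snps])

print(U)            # [('S1', 'S2'), ('S3', 'S4')]
print(snp_groups)   # [['S1', 'S2'], ['S3', 'S4']]
print(pair_groups)  # [[('S1', 'S2')], [('S3', 'S4')]]
\end{verbatim}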
We then look for associations between inputs/input-pairs and outputs via Eq. (\ref{equ:reg6}):
\begin{subequations}
\label{equ:reg6}
\begin{align}
\min \frac{1}{2}
\sum_{k=1}^K\sum_{i=1}^{N}
& \left( y_k^i - \sum_{j=1}^J{\beta_k^{j} x_j^i } -
\sum_{(r, s) \in {\bf U}}{\beta_k^{rs} x_r^i x_s^i }
\right)^2
\label{equ:reg6_a}
\\
& + \lambda_1 \sum_{k=1}^K \sum_{j=1}^J{|\beta_k^j|}
\label{equ:reg6_b} \\
& + \lambda_2
\sum_{k=1}^K \left(\sum_{m=1}^{|\mathcal{G}|}{\sqrt{\sum_{j\in {\mathbf{g}}_m}({\beta_k^j})^2}} +
\sum_{m=1}^{|\mathcal{L}|}{\sqrt{\sum_{(r, s) \in {\mathbf{l}}_m}{(\beta_k^{rs}})^2}}\right)
\label{equ:reg6_c} \\
& + \lambda_3 \left(\sum_{j=1}^J\sum_{m=1}^{|\mathcal{H}|}
\sqrt{\sum_{k \in {\mathbf{h}}_m}{({\beta_k^j})^2}} +
\sum_{(r, s) \in {\bf U}} \sum_{m=1}^{|\mathcal{H}|}
\sqrt{\sum_{k \in {\mathbf{h}}_m}{({\beta_k^{rs}})^2}} \right)
\label{equ:reg6_d}\\
&+ \lambda_4 \sum_{k=1}^K \sum_{\begin{subarray}{l}
(r, s) \in {\bf U}
\end{subarray}}{|\beta_k^{rs}|}.
\label{equ:reg6_e}
\end{align}
\end{subequations}
where $\mathcal{G}$ is the set of input groups for
marginal terms and $\mathcal{L}$ is the set of
input groups for pairwise interaction terms.
Here, we use
two tuning parameters for the $L_1$ penalty
depending on whether a covariate models an individual effect ($\lambda_1$)
or an interaction effect ($\lambda_4$), because
the two types of terms might need different levels of sparsity.
Note that this problem is identical to Eq. (\ref{equ:reg5}) if we
treat interaction terms $x_r^i x_s^i$ as additional covariates, and hence
our optimization method presented in Section \ref{sec:optimization} is applicable to Eq. (\ref{equ:reg6}).
However, Eq. (\ref{equ:reg6}) will be more computationally expensive than
Eq. (\ref{equ:reg5}), since Eq. (\ref{equ:reg6}) has
a larger number of covariates in $\mathbf{B}$, including both marginal and interaction terms,
and an additional tuning parameter $\lambda_4$.
\section{Simulation Study}
\label{subsec:simulstudy}
In this section we validate our proposed method using simulated genome/phenome datasets, and examine the effects of simultaneous use of input and output structures
on the detection of true non-zero regression coefficients.
We also evaluate the speed and the performance of our optimization method
for support recovery in comparison to two other alternative methods.
For the comparison of optimization methods,
we selected smoothing proximal gradient method \cite{chen2010efficient}
and the union of supports \cite{jacob2009group} since both methods
are in principle able to use input/output structures and handle overlapping groups.
The simulated datasets with $J=120, K=80$, and $N=100$ are generated as follows.
For generating $\mathbf{X}$, we first sampled 60 input covariates from a uniform distribution over $\{0,1\}$,
indicating major or minor genotypes.
We then simulated 60 pairwise interaction terms $(x_j^i \times x_{j'}^i)$ by randomly selecting input-pairs from the 60 covariates mentioned above. Pooling the 60 marginal terms and 60 pairwise interaction terms resulted in an input space of 120 dimensions.
We also defined input and output groups as follows (for ease of illustration, here our input and output groups correspond to variables to be jointly selected rather than jointly shrunk; the shrinkage penalty in our regression loss can be defined on the complements of these groups):
{\tiny
\begin{align}
& \rlap{$\overbrace{\phantom{5,\ldots,9,10}}^{\mathbf{g}_1}$}
5,\ldots, \underbrace{9,10,\ldots,15}_{\mathbf{g}_2}, \ldots,
\rlap{$\overbrace{\phantom{25,\ldots,29,30,31,32}}^{\mathbf{g}_3}$}
25,\ldots,\underbrace{29,30,31,32,\ldots, 37}_{\mathbf{g}_4}, \ldots,
\rlap{$\overbrace{\phantom{50,\ldots,54,55,56,57}}^{\mathbf{g}_5}$}
50,\ldots,\underbrace{54,55,56,57,\ldots,60}_{\mathbf{g}_6}, \ldots,
\rlap{$\overbrace{\phantom{75,\ldots,80,\ldots,87}}^{\mathbf{g}_7}$}
75,\ldots,\underbrace{80,\ldots,87,\ldots,94}_{\mathbf{g}_8}, \ldots,
\rlap{$\overbrace{\phantom{104,\ldots,109,110,111}}^{\mathbf{g}_9}$}
104,\ldots,\underbrace{109,110,111,\ldots,116}_{\mathbf{g}_{10}} \nonumber \\
& \rlap{$\overbrace{\phantom{1,\ldots,4,5}}^{\mathbf{h}_1}$}
1,\ldots,\underbrace{4,5,\ldots,10}_{\mathbf{h}_2}, \ldots,
\rlap{$\overbrace{\phantom{12,\ldots,17,18,19,20}}^{\mathbf{h}_3}$}
12,\ldots,\underbrace{17,18,19,20,\ldots, 25}_{\mathbf{h}_4}, \ldots,
\rlap{$\overbrace{\phantom{46,\ldots,56,\ldots,63}}^{\mathbf{h}_5}$}
46,\ldots,\underbrace{56,\ldots,63,\ldots, 70}_{\mathbf{h}_6}, \ldots,
\underbrace{75,\ldots,80}_{\mathbf{h}_7}, \nonumber
\end{align}
}
where the numbers within a bracket represent the indices of
inputs or outputs for an input group $\mathbf{g}_t,\; t=1,\ldots,10$,
or an output group $\mathbf{h}_o,\; o=1,\ldots,7$.
Inputs and outputs that did not belong to
any group each formed their own singleton group.
We then simulated $\mathbf{B}$, i.e., the ground truth
that we want to discover.
We selected non-zero coefficients so that $\mathbf{B}$ includes
various cases, e.g., overlap between input and output groups,
overlap within input groups, and overlap within output groups.
Figure \ref{fig:visualize_beta}(a) shows the simulated $\mathbf{B} \in \mathbb{R}^{80 \times 120}$ where
non-zero coefficients are represented by black blocks.
Given $\mathbf{X}$ and
$\mathbf{B}$, we generated $K=80$ outputs
by $\mathbf{Y}=\mathbf{B} \mathbf{X} + \mathbf{E},$
$\mathbf{E} \sim \mathcal{N}(\bm{0},\mathbf{I})$.
We generated 20 datasets and
optimized Eq. (\ref{equ:reg5})
using the three methods.
We report the average performance using precision recall curves.
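A minimal sketch of this data-generating process is shown below. It stores the design matrix as
samples $\times$ covariates and computes the outputs as $\mathbf{X}\mathbf{B}^T + \mathbf{E}$; the
placement of the non-zero blocks is illustrative rather than the exact pattern of
Figure \ref{fig:visualize_beta}(a).
\begin{verbatim}
# Simulate binary genotypes, pairwise interaction columns, a sparse coefficient
# matrix and Gaussian noise with N = 100, J = 120 (60 + 60), K = 80.
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 80
X_marg = rng.integers(0, 2, size=(N, 60)).astype(float)      # 60 binary SNPs
pairs = np.array([rng.choice(60, size=2, replace=False) for _ in range(60)])
X_int = X_marg[:, pairs[:, 0]] * X_marg[:, pairs[:, 1]]      # 60 interaction terms
X = np.hstack([X_marg, X_int])                               # N x 120 design

B = np.zeros((K, X.shape[1]))                                # true coefficients
B[0:10, 5:15] = 2.0                                          # illustrative blocks
B[17:25, 25:37] = 2.0

E = rng.standard_normal((N, K))                              # noise ~ N(0, I)
Y = X @ B.T + E                                              # outputs, N x K
\end{verbatim}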
\begin{figure}[htp]
\vspace{-0.5cm}
\centering
\hspace{-1.7cm}
\includegraphics[width=0.9\textwidth]{visualize_beta2.eps}
\vspace{-0.5cm}
\caption{An example of simulation results with
$|\beta_k^j| = 2, N=100, J=120$, and $K=80$.
(a) True regression coefficient matrix.
Estimated $\mathbf{B}$
by SIOL (b) with both input and output structures
(c) with only input structure, and
(d) with only output structure.
In
(b-d), we show the normalized values of $|\mathbf{B}|$.}
\label{fig:visualize_beta}
\vspace{-0.5cm}
\end{figure}
\subsection{Evaluation of the Effects of Using Input and Output Structures}
\label{subsec:simul_exp1}
We first investigate the effects of using both input and output structures on the performance of our model.
Here we applied our optimization method (HiGT) to the following three
models with different use of structural information:
{\footnotesize
\begin{enumerate}
\item Use of both input and output structures (Eq. (\ref{equ:reg5_b}) + Eq. (\ref{equ:reg5_c}) + Eq. (\ref{equ:reg5_d}))
\item Use of input structures (Eq. (\ref{equ:reg5_b}) + Eq. (\ref{equ:reg5_c}))
\item Use of output structures (Eq. (\ref{equ:reg5_b}) + Eq. (\ref{equ:reg5_d}))
\end{enumerate}
}
We then observed how the use of input/output structures affects the recovery of the
true non-zero coefficients and the prediction error.
In Figure \ref{fig:visualize_beta}, we visualize the examples of
estimated $\mathbf{B}$ by the three different models.
Figure \ref{fig:visualize_beta}(b) shows that the model
with input and output structure successfully recovered true
regression coefficients in Figure \ref{fig:visualize_beta}(a).
However, as shown in Figure \ref{fig:visualize_beta}(c-d),
the models with either the input or the output structure alone
were less effective at suppressing noisy signals, which resulted in
many false positives.
\begin{figure}[htp]
\hspace{-1.7cm}
\includegraphics[width=1.2\textwidth]{jasa_exp1_case1_412.eps}
\vspace{-1.5cm}
\caption{Precision recall curves on the recovery of true non-zero coefficients due to SIOL with both input and output structures (input/output struct), regression with only input structure (input struct), and with only output structure (output struct), under three different signal strengths of true regression coefficients.
(a) $\beta_k^j =0.4$, (b) $\beta_k^j =1$, and (c) $\beta_k^j =2$.
The simulated data were generated with $N=100, J=120$, and $K=80$.
}
\label{fig:comp_case1}
\vspace{-0.5cm}
\end{figure}
Figure \ref{fig:comp_case1} shows the precision recall curves
on the recovery of true non-zero coefficients by changing the threshold $\tau$ for choosing relevant covariates ($|\beta_k^j| > \tau$),
under different signal strengths of 0.4, 1 and 2.
For all signal strengths, the model with input/output structures
significantly outperformed the other models with either input or output structure.
The most interesting result is that when the signal strength was very small, such
as 0.4, our model still achieved good performance by taking advantage of both types of structural information.
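For reference, the precision-recall curves reported here can be computed with a short routine such
as the sketch below, which sweeps the threshold $\tau$ over the magnitudes of the estimated
coefficients.
\begin{verbatim}
# Precision-recall curve for support recovery: sweep tau over |beta_hat| and
# compare the selected support against the true nonzero pattern.
import numpy as np

def precision_recall_curve(B_hat, B_true, n_points=50):
    true_support = (B_true != 0)
    precision, recall = [], []
    for tau in np.linspace(0.0, np.abs(B_hat).max(), n_points):
        selected = np.abs(B_hat) > tau
        tp = np.sum(selected & true_support)
        precision.append(tp / max(selected.sum(), 1))
        recall.append(tp / max(true_support.sum(), 1))
    return np.array(precision), np.array(recall)
\end{verbatim}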
\begin{figure}[htp]
\hspace{-0.3cm}
\centering
\subfigure[]{\includegraphics[width=0.3\textwidth]{predict2_stgh4.eps}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{predict2_stgh1.eps}}
\subfigure[]{\includegraphics[width=0.3\textwidth]{predict2_stgh2.eps}}
\vspace{-0.5cm}
\caption{Comparison of the prediction error of SIOL (input/output struct)
with regression using only the input structure (input struct) and only the output structure (output struct).
(a) $\beta_k^j = 0.4$, (b) $\beta_k^j = 1$, (c) $\beta_k^j = 2$.}
\label{fig:exp1_pred}
\vspace{-0.1cm}
\end{figure}
We also compare the prediction errors on our validation data with 280 ($20 \times 14$) samples
(each dataset had 14 samples for validation).
To compute the prediction error, we first selected the
non-zero coefficients,
and then recomputed the coefficients of those selected covariates using linear regression without a shrinkage penalty.
Using the unbiased coefficients of the chosen covariates, we measured the prediction error on our
validation data.
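A minimal sketch of this refit-and-score step is given below: each output is refitted on its
selected covariates by ordinary least squares, and the mean squared error on the validation data is
averaged over the outputs.
\begin{verbatim}
# Refit the selected support without shrinkage and score on validation data.
import numpy as np

def refit_and_score(B_hat, X_tr, Y_tr, X_val, Y_val):
    err = 0.0
    for k in range(Y_tr.shape[1]):
        sel = np.flatnonzero(B_hat[k])               # covariates kept for output k
        if sel.size == 0:
            pred = np.zeros(Y_val.shape[0])
        else:
            coef, *_ = np.linalg.lstsq(X_tr[:, sel], Y_tr[:, k], rcond=None)
            pred = X_val[:, sel] @ coef
        err += np.mean((Y_val[:, k] - pred) ** 2)
    return err / Y_tr.shape[1]                        # average MSE over outputs
\end{verbatim}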
Figure \ref{fig:exp1_pred} shows the prediction error under different signal strengths ranging from 0.4 to 2.
For all signal strengths, we obtained significantly
better prediction error using both input and output structures.
When the signal strength was large such as 1 or 2, the use of both input and output
structures was especially beneficial for reducing the prediction error
since it helped the model to find most of the true covariates relevant to the outputs.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{jasa_exp1_case_23_merged2.eps}
\vspace{-0.5cm}
\caption{Precision recall curves on the recovery of true non-zero coefficients
for SIOL (input/output struct), regression with only input structure, and with only output structure,
under two different sizes of input and output groups.
(a) $|g| \in \{2,3\}$,
(b) $|g| = 5, \; \forall g \in \mathcal{G}$,
(c) $|h| = 5$, and
(d) $|h| = 40, \; \forall h \in \mathcal{H}$.
We fixed the size of output groups for (a,b) ($|h| = 10$), and fixed
the size of input groups for (c,d) ($|g| = 5$).
The simulated data were generated with $\beta_k^j =0.5$, $N=100$, and $J=120$.
}
\label{fig:comp_case23}
\vspace{-0.3cm}
\end{figure}
\paragraph{The effects of the size of input and output groups}
Figure \ref{fig:comp_case23}(a-d) shows the results on
simulated datasets with different sizes of input and output groups.
For all group sizes, our method
significantly improved the performance
by effectively taking advantage of both input and output groups.
\subsection{Comparison of HiGT to Alternative Optimization Methods}
In this section, we compare the accuracy and speed of our optimization method (HiGT)
with those of the two alternative methods including smoothing proximal gradient method (SPG)
\cite{chen2010efficient}
and union of supports \cite{jacob2009group}.
Both alternatives can handle overlapping groups.
Specifically, the smoothing proximal gradient method was developed to efficiently deal with
the overlapping group lasso penalty and the graph-guided fusion penalty using an approximation approach.
However, it may be inappropriate for our model since
the maximum gap between the approximated penalty and the exact penalty
is proportional to the total number of groups $R$, where
$R = J |\mathcal{H}| + K |\mathcal{G}|$.
Thus, when dealing with high-dimensional data (e.g., $J \sim 500{,}000$) such as genome data,
the gap will be large, and the approximation method can be severely affected.
On the other hand, ``union of supports'' finds the support of $\mathbf{B}$ from the union
support of overlapping groups.
To obtain the union of supports, input variables are duplicated to
convert the penalty with overlap into the one with disjoint groups, and
a standard optimization technique for group lasso \cite{yuan2006model} can be applied.
One disadvantage of union of supports is that the number of duplicated input variables
increases dramatically when we have a large number of overlapping groups.
In our experiment, we considered all possible combinations of overlapping input and output groups,
and used a coordinate descent algorithm for sparse group lasso \cite{friedman2010note}.
\begin{figure}[htp]
\hspace{-1.7cm}
\includegraphics[width=1.2\textwidth]{jasa_exp2_accuracy2_346.eps}
\vspace{-1.5cm}
\caption{Precision recall curves
on the recovery of true non-zero coefficients using the SIOL model via HiGT, smoothing proximal gradient method (SPG), and union of supports for optimization. Three different model sizes determined by the number of input variables were tested (due to high computational cost, results of union-of-support are only available for the smallest problem sizes tested):
(a) $J = 30$, (b) $J = 400$, and (c) $J = 600$.
The simulated data were generated with $\beta_k^j =2$, $N=100$, and $K=20$.
}
\label{fig:accuracy}
\end{figure}
Figure \ref{fig:accuracy} shows the precision recall curves
on the recovery of true non-zero coefficients under the SIOL model using the three optimization methods. The size of the problem is controlled by increasing number of input variables (from 30 to 600).
The simulated data set used here was identical to the data
in Section \ref{subsec:simul_exp1}
except that we used 20 outputs ($\mathbf{y}_{61},\ldots, \mathbf{y}_{80}$) and different number of input variables.
One can see that our method outperforms the other alternatives for all
configurations.
Our method and the smoothing proximal gradient method
showed similar performance when the number of input variables is small ($J=30$),
but as $J$ increases, our method performed significantly better than SPG.
This is consistent with our claim that the maximum
gap between the approximated penalty and the exact penalty is
related to the number of groups.
Union of supports did not work well even when the number of
input variables is small ($J=30$), since the actual number of
input variables considered was very large due to the duplicated covariates, which
severely degraded the performance.
\begin{figure}[htp!]
\vspace{-0.3cm}
\centering
\subfigure[]{\includegraphics[width=0.4\textwidth]{jasa_exp2_speed4_sample}}
\subfigure[]{\includegraphics[width=0.4\textwidth]{jasa_exp2_speed4.eps}}
\vspace{-0.5cm}
\caption{Time complexity of HiGT, SPG, and union of supports.
All three methods used both input and output groups.
(a) Computational time with different number of samples,
(b) computational time with different number of inputs.
We used the same tuning parameters for all the methods ($\lambda_1 = 0.01, \lambda_2 = \lambda_3 = 0.1$).
We did not report the times for the small number of samples and inputs
for our method and SPG since I/O latency was dominant.
}
\label{fig:speed}
\vspace{-0.3cm}
\end{figure}
We also compared the speed of our method with the two alternatives:
union of supports with all possible combinations of input and output groups,
and SPG considering both input and output groups.
Figure \ref{fig:speed}(a,b) show that our method converged faster than
the other competitors,
and was significantly more scalable than the two alternatives.
Union of supports was very slow compared to our method and SPG because
of the large number of duplicated input variables.
Our experimental results confirm that our optimization technique is not only accurate
but also fast, which
can be explained by the use of the DAG and the simple forms of the optimality checks.
\section{Analysis of Yeast eQTL Dataset}
\label{subsec:yeast_eqtl}
We apply our method
to the budding yeast ({\it Saccharomyces cerevisiae}) data
\cite{brem2005landscape} with 1,260 unique SNPs (out of 2,956 SNPs)
and the observed gene-expression levels of 5,637 genes.
As network prior knowledge,
we used the genetic interaction network reported in
\cite{costanzo2010genetic} with a stringent cutoff to
construct the set of candidate SNP pairs ${\bf U}$.
We followed the procedure in Section \ref{sec:model_interaction}
to construct ${\bf U}$, with an additional set of significant SNP pairs
with p-value $< 10^{-5}$ computed from the two-locus epistasis test.
When determining the set ${\bf U}$,
we assumed that a SNP is linked to
a gene if the distance between them is less than 500bp.
We consider this a reasonable choice for cis-effects, as
intergenic regions in {\it S. cerevisiae} are 515bp long
on average \cite{sunnerhagen2006comparative}.
As a result, we included 982 interaction terms from
the interaction network
in $\mathbf{X}$,
along with the 1,260 individual SNPs.
The number of SNP pairs from the two-locus epistasis test
varied depending on the trait.
For generating input structures, we processed the network data as follows.
We started with genetic interaction data
that include 74,984 interactions between gene pairs.
We then extracted genetic interactions with low p-values ($<$0.001).
Given the 44,056 significant interactions, we found 55
gene clusters using the MCODE clustering algorithm.
Using the gene clusters, we generated the groups of
individual SNPs and pairs of SNPs according to the scheme in
section \ref{sec:model_interaction}.
For generating output structures, we
applied hierarchical clustering to the yeast gene expression data
with cutoff 0.8, resulting in 2,233 trait clusters.
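As a sketch of this step, the snippet below clusters the traits with SciPy's hierarchical
clustering and cuts the dendrogram at a fixed height; the correlation distance and average linkage
used here are illustrative assumptions rather than a record of our exact settings.
\begin{verbatim}
# Group expression traits by hierarchical clustering with a distance cutoff.
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def trait_clusters(Y, cutoff=0.8):
    """Y: samples x traits; returns lists of trait indices, one list per group."""
    dist = pdist(Y.T, metric="correlation")       # 1 - corr between trait profiles
    labels = fcluster(linkage(dist, method="average"), t=cutoff,
                      criterion="distance")
    groups = {}
    for trait_idx, lab in enumerate(labels):
        groups.setdefault(lab, []).append(trait_idx)
    return list(groups.values())
\end{verbatim}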
\paragraph{Marginal Effects in Yeast eQTL dataset}
\begin{figure}[htp]
\centering
\includegraphics[width=0.9\textwidth]{marginal_analysis_diff.eps}
\caption{Manhattan plot for association between (YER160C and YJR029W) and SNPs on chromosome 7.
The two genes YER160C and YJR029W share the same GO category ``transposition''.
Our method detected SNPs that affect both genes in this region.
However, single-SNP analysis did not find any associated SNPs, and
lasso found SNPs associated only with YER160C in this region. Graphs were generated using the GenAMap software \cite{curtisgenamap}.
}
\label{fig:marginal_diff}
\end{figure}
We briefly demonstrate the effects of input/output
structures on the detection of eQTLs with marginal effects.
In general, the association results for marginal effects from
our method, lasso, and single-SNP analysis
(the latter two are standard methods in contemporary GWA mapping
that use no structural information,
and are hence included for comparison) showed similar patterns for strong associations.
However, we observed differences
for SNPs with small or medium sized signals.
For example, our results had fewer nonzero regression coefficients
compared to lasso. One possible explanation would be
that the grouping effects induced by our model with input/output structures
might have removed
false predictions with small or medium sized effects.
To illustrate eQTLs with marginal effects,
we show some examples of association SNPs using GenAMap \cite{curtisgenamap}.
Figure \ref{fig:marginal_diff} demonstrates a Manhattan plot on chromosome 7
for two genes including YER160C and YJR029W.
Both genes have the same GO category
``transposition''.
As both genes share the same GO category,
it is likely that they are affected by
the same SNPs if there exist any association SNPs for both genes.
In our results, we could see that the same SNPs
on chromosome 7 are associated with both genes
as shown in Figure \ref{fig:marginal_diff}.
However, single SNP analysis did not find any significant
association SNPs in the region.
Lasso detected association SNPs in the region, but
they were associated only with YER160C rather than with both genes
(the lasso plot is not shown to avoid a cluttered figure).
This observation is interesting since it suggests that our method
can effectively detect SNPs
jointly associated with the gene traits by taking advantage of
structural information.
\paragraph{Epistatic Effects in Yeast eQTL dataset}
\label{subsubsec:epi_yeast_eqtl}
\begin{figure}[htp]
\centering
\subfigure[]{\includegraphics[width=0.42\textwidth]{circos_ours05_100_new2.eps}}
\subfigure[]{\includegraphics[width=0.42\textwidth]{circos_pt5_100_new2.eps}}
\caption{Hotspots with interaction effects identified by
(a) our method and (b) two-locus epistasis test.
This figure represents the yeast genome
in a circular format. In clockwise direction, from the top
of the circles, we show 16 chromosomes,
which are separated with space and different colors.
Lines indicate interaction effects
between two connected locations in the genome.
Thickness of the lines is proportional to the number of traits
affected by the interaction effects.
Here we show interaction effects which influence
more than 100 gene traits.
The hotspots for (a) are represented in Table \ref{tab:epistatic_hotspot}.
In (b), two SNP pairs are found including
chr16:718892-chr16:890898
(affected genes are enriched with the GO category of
ribosome biogenesis with corrected p-value $1.6\times10^{-36})$,
and chr8:56246-chr9:362631
(affected genes are enriched with the
GO category of vacuolar protein catabolic process with corrected p-value $1.6\times10^{-14})$.
This figure was generated using Circos software \cite{krzywinski2009circos}.
}
\label{fig:epi_hotspots}
\vspace{-0.1cm}
\end{figure}
\begin{sidewaystable}
\caption{Hotspots of SNP pairs having epistatic effects in yeast identified by our method.
}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Hotspot & SNP1 & SNP2 & Number of & GO category of & Corrected p-value of \\
label & location &location & affected traits & affected traits & GO category\\
\hline \hline
1 & chr1:154328 & chr5:350744 & 455& ribosome biogenesis & $1.2\times10^{-36}$\\
2 &chr10:380085 & chr15:170945 & 195& ribosome biogenesis & $1.6\times10^{-12}$\\
3 &chr10:380085 & chr15:175594 & 185 & ribosome biogenesis & $4.1\times10^{-12}$\\
4 &chr5:222998 & chr15:108577 & 170& response to temperature stimulus & $2.9\times10^{-6}$ \\
5 &chr11:388373 & chr13:64970 & 155& regulation of translation & $1.8\times10^{-32}$\\
6 &chr2:499012 & chr15:519764 & 145& vacuolar protein catabolic process & $1.4\times10^{-7}$ \\
7 &chr1:41483 & chr3:64311 & 130& & \\
8 &chr7:141949 & chr9:277908 & 125& &\\
9 &chr3:64311 & chr7:312740 & 115& glycoprotein metabolic process & $1.5\times10^{-4}$\\
10 &chr12:957108 & chr15:170945 & 110 & vacuolar protein catabolic process & $7.8\times10^{-16}$\\
11 &chr4:864542 & chr13:64970 & 105& ribonucleoprotein complex biogenesis & $3.7\times10^{-6}$\\
\hline
\end{tabular}
\label{tab:epistatic_hotspot}
\end{sidewaystable}
Now we show the benefits of using the input/output structures
for detecting interaction effects among SNPs by comparing the results of our method to those of the two-locus epistasis test
performed by PLINK \cite{purcell2007plink}, which uses no structural information.
Specifically, we compare the hotspots with interaction effects (i.e., SNP pairs that affect a
large number of gene traits) identified by the two methods.
Recall that the two-locus epistasis test is the most widely used
statistical technique for
detecting interaction effects in genome-wide association studies; it
computes the significance of an interaction
by comparing the null model
with only two marginal effects against the
alternative model with two marginal effects and their
interaction effect.
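The idea of the test can be sketched as the nested-model comparison below for one quantitative
trait and one SNP pair; this mirrors the null-versus-alternative comparison described above through
a standard F-test and is not meant to reproduce PLINK's exact implementation.
\begin{verbatim}
# Two-locus epistasis test (linear-model version): compare the null model with
# two marginal effects against the alternative that adds the interaction a*b.
import numpy as np
from scipy import stats

def epistasis_test(a, b, y):
    n = len(y)
    X0 = np.column_stack([np.ones(n), a, b])           # null: intercept + marginals
    X1 = np.column_stack([X0, a * b])                  # alternative: + interaction
    rss0 = np.sum((y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]) ** 2)
    rss1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
    df2 = n - X1.shape[1]
    F = (rss0 - rss1) / (rss1 / df2)
    return stats.f.sf(F, 1, df2)                       # p-value of the interaction

# Toy usage: a trait with a genuine interaction effect gives a small p-value.
rng = np.random.default_rng(1)
a = rng.integers(0, 2, 200).astype(float)
b = rng.integers(0, 2, 200).astype(float)
y = 0.5 * a + 0.5 * b + 1.5 * a * b + rng.standard_normal(200)
print(epistasis_test(a, b, y))
\end{verbatim}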
In the following analysis, we discarded all SNP pairs
in which the correlation coefficient between the two SNPs exceeds $0.5$, to avoid trivial
interaction effects.
We first identified the most significant hotspots that
affect more than 100 gene traits.
To make sure that we include only significant interactions,
we considered only interaction terms
whose absolute regression coefficients
are $> 0.05$.
For the results of the two-locus epistasis test,
we considered all SNP pairs with p-value $< 10^{-5}$.
Figure \ref{fig:epi_hotspots}(a,b) show the hotspots
found by our method and two-locus epistasis test.
The rings in the figure represent the yeast genome
from chromosome 1 (located at the top of each circle) to 16 clockwise,
and the lines show interactions
between the two genomic locations at both ends.
One can see that our method detected 11 hotspots, whereas the
two-locus epistasis test found only two
significant hotspots with interaction effects.
This observation shows that our method can find more
significant hotspots with improved statistical power
due to the use of the input/output structures.
In Table \ref{tab:epistatic_hotspot}, we summarize the
hotspots found by our method.
It turns out that our findings are also biologically interesting
(e.g., 9 out of 11 hotspots showed GO enrichment).
Notably, hotspot 1 (epistatic interaction between
chr1:154328 and chr5:350744) affects 455 genes which are enriched with
the GO category of ribosome biogenesis with
a significant corrected p-value $< 10^{-35}$
(multiple testing correction is
performed by false discovery rate \cite{maere2005bingo}).
This SNP pair was included in our candidates from the genetic interaction network.
There is a significant genetic interaction between NUP60 and RAD51
with p-value $3 \times 10^{-7}$ \cite{costanzo2010genetic}, and
both genes are located at chr1:152257-153877 and
chr5:349975-351178, respectively.
As both SNPs are located close to NUP60 and RAD51 (within 500bp),
it is reasonable to hypothesize that the
two SNPs at chr1:154328 and chr5:350744 affect the two genes,
and that their genetic interaction in turn acts on a large number of
genes related to ribosome biogenesis.
To provide additional biological insights,
we further investigated the mechanism of this significant SNP-SNP interaction.
From a literature survey, RAD51 (RADiation sensitive) is a strand-exchange
protein involved in the DNA repair system \cite{sung1994catalysis},
and NUP60 (NUclear Pore) is a subunit of the nuclear pore complex involved in the nuclear
export system \cite{denning2001nucleoporin}.
Also, it has been reported that yeast cells
are excessively sensitive to DNA-damaging agents if
there are mutations in NUP60 \cite{nagai2008functional}.
In our results, we also found that the SNP close to NUP60
did not have significant marginal effects, whereas
the SNP in RAD51 had marginal effects.
Based on these facts, one may hypothesize the following.
When there is no mutation in RAD51,
the point mutation in NUP60 cannot affect other traits, since
the single mutation is not strong enough, and
if DNA-damaging agents are present in the environment,
the DNA repair system is able to handle them.
However, when point mutations exist in RAD51,
DNA-damaging agents would severely harm yeast cells carrying the point mutation in NUP60,
since the DNA repair system might not work properly due to the mutation in RAD51
(recall that the SNP in RAD51 had marginal effects).
As a result,
the combination of mutations in NUP60 and RAD51
could have a large impact on many gene traits.
\begin{figure}[htp]
\centering
\subfigure[]{\includegraphics[width=0.42\textwidth]{circos_ours1_10_new2.eps}}
\subfigure[]{\includegraphics[width=0.42\textwidth]{circos_pt6_10_new2.eps}}
\caption{Hotspots with interaction effects identified by
(a) our method and (b) two-locus epistasis test by PLINK.
Here we show epistatic interactions which influence
more than 10 gene traits.
This figure was generated using Circos software \cite{krzywinski2009circos}.
}
\label{fig:medium_epi_hotspots}
\vspace{-0.1cm}
\end{figure}
\begin{figure}
\vspace{-0.5cm}
\centering
\includegraphics[width=0.45\textwidth]{corr_graph.eps}
\vspace{-0.3cm}
\caption{The scatter plot for illustrating the correlation
between our epistatic hotspot 1 (chr1:154328-chr5:350744) and
significant SNP pairs close to hotspot 1
detected by two-locus epistasis test
(p-value $< 10^{-6}$ and the distance between
the pairs of SNPs and hotspot 1 is within $< 50$kb).
Each dot represents a SNP pair (SNP1, SNP2)
found by two-locus epistasis test,
and x-axis represents the correlation between SNP1 and chr1:154328
and y-axis represents the correlation between
SNP2 and chr5:350744.
Each dot was perturbed by a small amount of random noise
to avoid overlapping of the dots.
}
\label{fig:corr_graph}
\vspace{-0.5cm}
\end{figure}
Furthermore, we looked at the hotspots that affect $>10$ gene traits.
Figure \ref{fig:medium_epi_hotspots}(a,b) show the epistatic interactions
identified by our method and the two-locus epistasis test, respectively.
In this figure, we show significant interactions with a
regression-coefficient cutoff of $>0.1$ for our method, and a
p-value cutoff of $<10^{-6}$ for the two-locus epistasis test.
These cutoffs were arbitrarily chosen to make the numbers of hotspots
found by the two methods similar.
Surprisingly, the two methods revealed very different hotspots with
epistatic interactions.
Figure \ref{fig:medium_epi_hotspots}(a) is very similar to
Figure \ref{fig:epi_hotspots}(a), but
in Figure \ref{fig:medium_epi_hotspots}(b)
several hotspots emerged that were absent in
Figure \ref{fig:epi_hotspots}(b).
We analyze these hotspots in two ways.
First, we look at the hotspots with epistatic effects
that appeared in both Figure \ref{fig:medium_epi_hotspots}(a) and (b).
Then we investigate the differences between the two
results.
First, we observed that both methods
found significant epistatic effects between chromosomes 1 and 5.
Recall that this interaction was discussed
in our previous analysis of the hotspots
(see hotspot 1 in Table \ref{tab:epistatic_hotspot}).
Among all significant SNP pairs found by the two-locus epistasis test,
there was no SNP pair identical to hotspot 1,
but there were 30 SNP pairs close to it (within $<50$kb).
Also, it turns out that these 30 SNP pairs had very strong correlations
with hotspot 1. In Figure \ref{fig:corr_graph}, we show a scatter plot illustrating
the strong correlations between hotspot 1 and these 30 SNP pairs.
More interestingly, the total number of traits affected by these 30 SNP pairs
was 416, which is very close to the 455 genes affected
by hotspot 1.
Based on these facts and our previous analysis
of the mechanism of hotspot 1, it seems that hotspot 1 is
truly significant, and the two-locus epistasis test found
significant SNP pairs that are close to the true location but
failed to locate the exact position of hotspot 1.
This supports the claim that our algorithm can find such a significant
hotspot affecting $>$ 400 genes by detecting the exact SNP pairs.
In contrast, the two-locus epistasis test was unable to
locate many hotspots affecting a large number of traits
due to insufficient statistical power.
Second, we investigated the differences between the two
results in Figure \ref{fig:medium_epi_hotspots}(a,b).
As we cannot report all the results in the paper,
we focused on a SNP pair (chr10:87113-chr15:141621)
in Figure \ref{fig:medium_epi_hotspots}(a),
and another SNP pair (chr8:63314-chr9:362631)
in Figure \ref{fig:medium_epi_hotspots}(b).
Figure \ref{fig:checks}(a,b)
show the average gene expression levels for each SNP pair.
In this figure, the x-axis represents the genotype $\in \{0,1\}$,
which is the product of the two SNPs
(SNP1 $\times$ SNP2, where SNP1, SNP2 $\in \{0,1\}$), and the y-axis represents the
average gene expression levels
of individuals with the given genotype.
Each line in Figure \ref{fig:checks}(a,b) shows
how the average gene expression level varies as the genotype changes
from 0 to 1 for each trait affected by the SNP pair,
with error bars of one standard deviation.
Interestingly, in Figure \ref{fig:checks}(a),
we observe a consistent pattern:
for most gene traits,
the expression levels decreased as the genotype changed from 0 to 1.
However, as shown in Figure \ref{fig:checks}(b),
for the SNP pair found by two-locus epistasis test,
we could not find such a coherent pattern.
This demonstrates the difference between our method and the two-locus
epistasis test. As our model borrows information across input and output group structures,
we could find consistent gene expression patterns for the SNP pair.
In contrast, the two-locus pairwise test analyzes each SNP pair
separately, and each trait affected by the SNP pair showed a
different gene expression pattern with a different standard deviation.
Thus,
it seems that our method can provide interesting biological insights in terms of
gene expression patterns in addition to statistical significance.
\begin{figure}[htp]
\vspace{-0.5cm}
\centering
\subfigure[]{\includegraphics[width=0.4\textwidth]{our_chk1.eps}}
\subfigure[]{\includegraphics[width=0.4\textwidth]{pt_chk1.eps}}
\vspace{-0.5cm}
\caption{Variations of gene expression levels w.r.t. to the
genotypes of (a) a SNP pair (chr10:87113-chr15:141621)
found by our method, and
(b) a different SNP pair (chr8:63314-chr9:362631)
found by two-locus epistasis test.
Here, x-axis represents genotype doses and
y-axis shows the average expression levels of the multiple genes (denoted by multiple vertical lines) affected by the corresponding SNP pairs.
A small noise was added to the genotypes as offsets to avoid overlapping of the error bars.}
\label{fig:checks}
\vspace{-0.8cm}
\end{figure}
\section{Discussion}
In this paper, we presented the jointly structured input-output lasso, which
simultaneously takes advantage of both input and output structures.
We also presented an efficient optimization technique for solving our
multi-task regression model with structured sparsity.
Our experiments confirmed that our model is able to significantly improve
the accuracy for detecting true non-zero coefficients using both input and output
structures.
Furthermore, we demonstrated that our optimization method is faster and more accurate than
the other competitors.
In our analysis of yeast eQTL datasets, we identified
important pairs of eQTL hotspots that potentially interact with each other.
\paragraph{Prior knowledge about input and output structures}
In practice, it is important to generate reliable input and output groups to maximize
the benefits of the structural information.
In our experiments, we showed that yeast genetic interaction networks
can be used as prior knowledge to define input and output structures.
However, such reliable prior knowledge
cannot be easily attained when we deal with human eQTL datasets.
Instead, we have a variety of resources for human genomes,
including protein-protein interaction networks and pathway databases.
Generating reliable input and output structures by exploiting multiple resources
would be essential for the successful discovery of human eQTLs.
\paragraph{Comparison between HiGT and other optimization algorithms}
Recently, an active set algorithm \cite{jenatton2009structured} has been proposed for variable selection with structured sparsity, which can potentially be used for estimating the SIOL model.
We observe two key differences
between our method and the active set algorithm \cite{jenatton2009structured}.
First, the active set algorithm incrementally
grows the active set by searching available non-zero patterns; hence,
it can be seen as a ``bottom-up'' approach.
In contrast, our method adopts a ``top-down'' approach, in which
irrelevant covariates are discarded as we walk
through the DAG.
Second, our algorithm is guaranteed to search all zero patterns,
while the active set algorithm needs a heuristic to select candidate
non-zero patterns. When $\mathbf{B}$ is sparse, our
algorithm is still very fast, as it takes advantage of the structure of the DAG.
However, when $\mathbf{B}$ is not sparse,
our algorithm needs to search a large number of zero patterns and
update many non-zero coefficients, while
the active set algorithm still does not need to
update many non-zero coefficients.
Hence, in such a non-sparse case, the active set algorithm may have
an advantage over our optimization method.
Other alternative optimization methods for SIOL include
MM (majorize/minimize) algorithm \cite{lange2004springer}
and generalized stage-wise lasso \cite{zhao2007stagewise}.
However, these methods did not work well for SIOL, as
the approximated penalty of the MM algorithm
and the greedy procedure of the generalized stage-wise lasso
were incapable of efficiently inducing the complex sparsity patterns.
\paragraph{Future work}
One promising research direction would be to systematically estimate the
significance of the covariates that we found.
For example, computing p-values of our results would be helpful
to control the false discovery rate.
To handle both sparse and non-sparse $\mathbf{B}$,
it would also be interesting to develop an optimization method for our model
that can take advantage of both ``bottom-up'' and ``top-down'' strategies.
For example, we could alternate between selecting variables with a ``bottom-up''
approach and discarding irrelevant variables with a
``top-down'' approach within a single framework.
Finally, we are interested in applying our method to human disease datasets.
In that case, the extension of our work to handle case-control studies and
finding reliable structural information will be necessary.
\bibliographystyle{plain}
|
1,314,259,995,631 | arxiv | \section*{Abstract}
In scientific digital libraries, some papers from different research communities can be described by community-dependent keywords even if they share a semantically similar topic.
Articles that are not tagged with enough keyword variations are poorly indexed in any information retrieval system, which limits potentially fruitful exchanges between scientific disciplines.
In this paper, we introduce a novel, experimentally designed pipeline for multi-label semantic-based tagging developed for open-access metadata digital libraries.
The approach starts by learning from a standard scientific categorization and a sample of topic-tagged articles in order to find semantically relevant articles and enrich their metadata accordingly. Our proposed pipeline aims to enable researchers to reach articles from various disciplines that tend to use different terminologies.
It allows retrieving semantically relevant articles given a limited known variation of search terms.
In addition to achieving higher accuracy than an expanded-query-based method using a topic synonym set extracted from a semantic network, our experiments also show higher computational scalability than other comparable techniques. We created a new benchmark extracted from the open-access metadata of a scientific digital library and published it along with the experiment code to allow further research on the topic.
\smallskip
\noindent{\bf Keywords.} Semantic tagging, Digital~libraries, Topic modeling, Multi-label classification, Metadata enrichment.
\section{Introduction} \label{sec:introduction}
The activity of researchers has been transformed by ever greater access to online
scientific libraries -- in particular due to the presence of open-access
digital libraries. Typically, a researcher looking for interesting papers
queries the search engine of such a digital library
with a few keywords. The match between the keywords entered and those used to
describe the relevant scientific documents in these digital libraries
may be limited if the terms used are not the same. Every
researcher belongs to a community with which she or he shares common knowledge
and vocabulary. However, when researchers wish to extend their bibliographic
exploration beyond their community in order to gather information that
leads them to new knowledge, several scientific
and technical obstacles must be overcome, such as the size of digital libraries, the heterogeneity
of data and the complexity of natural language.
Researchers working in a multi-disciplinary and cross-disciplinary context should have the ability to discover related interesting articles regardless of the limited keyword variations they know. They are not expected to have prior knowledge of all the vocabulary sets used by all other related scientific disciplines.
Most often, semantic networks \cite{Borgida_Sowa__Principles_of_semantic_networks__1991} are a good answer to the problems of linguistic variations in non-thematic digital libraries by finding synonyms or common lexical fields.
However, in the context of scientific research, a general-language semantic network might not be sufficient when it comes to very specific scientific and technical jargon. The usage of such terms also evolves over time, and keeping a semantic network up to date with new scientific terms would be very expensive.
Another solution could be brought by the word embedding approach \cite{Mikolov_et_al__Distributed_Representations_of_Words_and_Phrases_and_their_Compositionality__2013}.
This technique makes it possible to find semantically similar terms. Nevertheless, this approach presents some problems. It is not obvious how to determine the number of terms that must be taken into account to be considered semantically close to the initial term. In addition, this technique does not work well when it comes to a concept composed of several terms rather than a single one. Another strategy is to manually enrich the digital libraries with metadata in order to facilitate the access to the semantic content of the documents. Such metadata can be other keywords, tags or topic names, but there is a lack of a standard taxonomy, and they are penalized by the subjectivity of the people involved in this manual annotation process \cite{Abrizah2013}.
In this paper we present an approach combining two different semantic information sources: the first one is provided by the synonym set of a semantic network, and the second one comes from the semantic representation of a vectorial projection of the research articles of the scientific digital library.
The latter takes advantage of learning from already tagged articles to enrich the metadata of other similar articles with relevant predicted tags.
Our experiments show that the average
F1 measure is increased by 11\% in comparison with a baseline approach that only utilizes semantic networks.
The paper is organized as follows: the next section (Section~\ref{sec:sota})
provides an overview of related work. In Section~\ref{sec:model} we introduce our pipeline of multi-label semantic-based tagging followed by a detailed evaluation in Sections~\ref{sec:expe} and \ref{sec:results}. Finally, Section~\ref{sec:conclusion} concludes the paper and gives an outlook on future work.
\section{State of the Art} \label{sec:sota}
Depending on the language, a concept can be described by a single term or by an expression composed of multiple words. Therefore, the same concept may have different representations in different natural languages, or even in the same language in the case of different disciplines. This causes an information retrieval challenge when the researcher does not know all the term variations of the scientific concept he or she is interested in.
Enriching the metadata of articles with semantically relevant keywords facilitates the access of scientific articles regardless of the search term used in the search engine. Such semantically relevant terms could be extracted thanks to lexical databases (e.g., \textit{WordNet} \cite{miller1995wordnet}) or knowledge bases (e.g., \textit{BabelNet} \cite{NavigliPonzetto:12aij},
\textit{DBpedia} \cite{LehmannIJJKMHMK15}, or \textit{YAGO}
\cite{mahdisoltani2014yago3}). Another solution is to use word embedding
techniques \cite{bojanowski2016enriching} for finding semantically similar
terminologies. Nevertheless, it is difficult in this approach to identify
precisely the closeness of the terms in the projection and then if two terms
have still close meanings.
When the set of terms is hierarchically organized, it composes a taxonomy. A
\textit{faceted} or \textit{dynamic taxonomy} is a set of taxonomies, each one
describing the domain of interest from a different point of view
\cite{Sacco_Tzitzikas__Dynamic_taxonomies_and_faceted_search__2009}. Recent
research in this area has shown that it improves the interrogation of
scientific digital libraries to find specific elements, e.g., for finding
chemical substances in pharmaceutical digital libraries
\cite{Wawrzinek_Balke__Semantic_Facettation_in_Pharmaceutical_Collections__2017}.
The use of \textit{Latent Dirichlet Allocation} (LDA) \cite{blei2003latent} for
assigning documents to topics is an interesting strategy in this problem and it
has shown that it helps the search process in scientific digital libraries by
integrating the semantics of topic-specific entities
\cite{Pinto_Balke__Demystifying_the_Semantics_of_Relevant_Objects_in_Scholarly_Collections__2015}.
For prediction problems, the unsupervised approach of LDA has been adapted to a
supervised one by adding an approximate maximum-likelihood procedure to the
process \cite{Blei_McAuliffe__Supervised_Topic_Models__2007}.
Using LDA for topic tagging, however, poses a fundamental challenge: mapping the user-defined topics to LDA's latent topics. A few variations of LDA try to solve this mapping challenge. For example, the \textit{Labeled LDA} technique \cite{Ramage_et_al__Labeled_LDA__2009} is a supervised version of LDA that makes use of the user-defined topics.
Semi-supervised LDA approaches are also
interesting solutions for being able to discover new classes in unlabeled data
in addition to assigning appropriate unlabeled data instances to existing
categories. In particular, we can mention the use of weights of word
distribution in \textit{WWDLDA} \cite{Zhou_et_al__WWDLDA__2013}, or an interval
semi-supervised approach
\cite{Bodrunova_et_al__Interval_Semi-supervised_LDA__2013}.
However, in the case of a real application to millions of documents, such as a
digital library with collections of scientific articles covering many
disciplines, over a large number of years, even recent evolutionary approaches
of LDA require the use of computationally powerful systems, like the use of a
computer cluster \cite{liang15:_large_scale_topic_model}, which is a complex and costly solution.
\section{Model Pipeline} \label{sec:model}
The new model we propose can be summarized as a pipeline of four main components, as illustrated in Figure \ref{fig1}. In this section we describe each of these components.
\begin{figure}
\centering
\begin{tikzpicture}
\node[draw,rounded corners, rectangle, minimum height=3em, text width=11em, text centered] (SFbTC) at (0,3.6) {\footnotesize Semantic Feature-based\\ Topic Classifier};
\node[draw,rounded corners, rectangle, minimum height=3em, text width=11em, text centered] (SE) at (0,2.4) {\footnotesize Synset Elasticsearch};
\node[draw,rounded corners, rectangle, minimum height=3em, text width=8em, text centered] (PtFL) at (4.7,3) {\footnotesize Per-topic Fusion List};
\node[draw,rounded corners, rectangle, minimum height=3em, text width=10em, text centered] (PaLoPT) at (8.4,3) {\footnotesize Per-article List of Predicted Topics: Semantic Multi-label Categorization};
\coordinate[shift={(0.7,0)}] (A1) at (SFbTC.east);
\draw[arrows=->] (SFbTC)--(A1)|-(PtFL.172);
\coordinate[shift={(0.7,0)}] (A2) at (SE.east);
\draw[arrows=->] (SE)--(A2)|-(PtFL.188);
\draw[arrows=->] (PtFL)--(PaLoPT);
\end{tikzpicture}
\smallskip
\caption{High-level illustration of the model pipeline. The \textit{Semantic Feature-based Topic Classifier} phase is used to generate \textit{Top N} articles ranked by the probability of topic belonging. Another ranked list is generated by querying the synonym set (synset) of the topic using a text-based search engine which is presented in \textit{Synset Elasticsearch} phase. A \textit{Per-topic Fusion List} is then generated using a special mean rank approach in which only \textit{Top $a \times N$} are considered where $a$ is experimentally determined. Finally, each article is tagged by a list of topics that was categorized with in the \textit{Fusion list}.} \label{fig1}
\end{figure}
\subsection{Semantic Feature-based Topic Classifier} \label{subsec:s3h}
This is computationally the heaviest component; it itself comprises a pipeline of data transformation and multi-label classification steps. Its main phases are described as follows:
\subsubsection{Extract semantic features}
Starting from a multi-disciplinary scientific digital library with open-access metadata, we extract a large number of articles, i.e., the millions of articles that researchers want to explore. The data retrieved from the metadata of these articles are mainly the \textit{title} and the \textit{abstract}. These two fields are then concatenated to form the textual representation of the article, in addition to a unique \textit{identifier}. This set of articles will be denoted as the \textit{Corpus}. A TF--IDF weighted bag-of-words vectorization is then applied to transform the \textit{Corpus} into a sparse vector space. This vectorized representation is then semantically transformed into a dense semantic feature vector space, typically of dimension 100--600. The result of this stage is an $(M \times N)$ matrix, where $M$ is the number of articles and $N$ is the semantic feature vector size. It must be accompanied by a dictionary that maps the unique identifier of each article to its row index in the matrix.
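As an illustration of this stage only, the sketch below builds the sparse TF--IDF representation and projects it to a dense semantic space with a randomized truncated SVD. The use of scikit-learn, the variable names and the dimensionality (150, as in Section~\ref{sec:expe}) are our own assumptions about one possible implementation.
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def build_semantic_features(corpus_texts, article_ids, n_features=150):
    """corpus_texts: one 'title + abstract' string per article."""
    # Sparse TF-IDF bag of words and bi-grams
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    x_sparse = vectorizer.fit_transform(corpus_texts)
    # Dense semantic projection via randomized truncated SVD
    svd = TruncatedSVD(n_components=n_features, random_state=0)
    x_dense = svd.fit_transform(x_sparse)   # shape: (n_articles, n_features)
    # Dictionary mapping the article identifier to its row in the matrix
    id_to_row = {doc_id: row for row, doc_id in enumerate(article_ids)}
    return x_dense, id_to_row, vectorizer, svd
\end{verbatim}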
\subsubsection{Topic classifier}
For each topic name, i.e., a scientific category name or a key-phrase of a scientific topic, we generate a \textit{dataset} of \textit{positive} and \textit{negative} examples. The \textit{positive} examples are obtained using a text-based search engine, e.g., \textit{Elasticsearch}, a widely used search engine web service built on Apache Lucene: they are the articles returned with \textit{topic name} matches in the \textit{title} or the \textit{abstract}. The negative examples, in contrast, are articles randomly selected from the \textit{Corpus} with no matches of the \textit{topic name} in any of the metadata text fields. Using this \textit{dataset}, we build a \textit{One-vs-All} topic classifier. This classifier must be able to provide the predicted probability of belonging to the topic, i.e., the class.
\subsubsection{Probability-based multi-label classification}
Each of the obtained \textit{One-vs-All} topic classifiers is then used in a multi-label classification task where each article in the \textit{Corpus} receives a probability of belonging to the topic. This could be thought of as a kind of \textit{fuzzy clustering} or \textit{supervised topic modeling}, where an article can be assigned to more than one topic, each with a probability of belonging. The result of this stage is, for each topic, a ranked list of the top 100K articles with the probability value as the ranking score.
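A minimal sketch of the \textit{One-vs-All} training and of the probability-based ranking described above is given next. The feature matrices are assumed to come from the previous stage, and the classifier choice (a random forest, as used later in Section~\ref{sec:expe}) and its parameters are illustrative.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_corpus_for_topic(x_pos, x_neg, x_corpus, article_ids, top_n=100_000):
    """Train a One-vs-All topic classifier and rank the whole Corpus.

    x_pos, x_neg: semantic feature vectors of positive / negative examples.
    x_corpus:     semantic feature matrix of the whole Corpus.
    Returns the top_n articles ranked by the probability of topic membership.
    """
    x_train = np.vstack([x_pos, x_neg])
    y_train = np.concatenate([np.ones(len(x_pos)), np.zeros(len(x_neg))])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(x_train, y_train)
    proba = clf.predict_proba(x_corpus)[:, 1]   # probability of the topic class
    order = np.argsort(-proba)[:top_n]
    return [(article_ids[i], float(proba[i])) for i in order]
\end{verbatim}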
\subsection{Synset Elasticsearch} \label{subsec:synset}
This component is computationally simple but adds great value to the pipeline. It is a kind of query expansion, where the query space is increased by finding synonyms and supersets of the query terms. It also requires a text-based search engine, e.g., \textit{Elasticsearch}. We first need a semantic network or a lexical database, e.g., WordNet, that can provide a set of synonyms of a given concept name. For each topic in the set of topics, we generate a set of topic name synonyms, denoted by \textit{Synset} (synonym set). Using \textit{Elasticsearch}, we then generate a ranked list of articles whose metadata match any of the synonyms in the topic \textit{Synset}. The output of this component is thus a ranked list of articles per topic. As in Section \ref{subsec:s3h}, this output can be considered a multi-label classification output, but with ranking information rather than a probability score.
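For illustration, one possible implementation of the synset query with the official Python Elasticsearch client is sketched below. The index name and field names are placeholders (the actual ISTEX index layout is not described here), and the exact request syntax may vary with the client version.
\begin{verbatim}
from elasticsearch import Elasticsearch

def synset_ranked_list(es_host, index_name, synset, size=1000):
    """Query the search engine with every synonym of the topic synset."""
    es = Elasticsearch(es_host)
    query = {
        "query": {
            "bool": {
                "should": [
                    {"multi_match": {"query": syn,
                                     "fields": ["title", "abstract"]}}
                    for syn in synset
                ]
            }
        },
        "size": size,
    }
    response = es.search(index=index_name, body=query)
    # Ranked list of article identifiers, best relevance score first
    return [hit["_id"] for hit in response["hits"]["hits"]]
\end{verbatim}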
\subsection{Fusion and Multi-label Categorization} \label{subsec:fusion}
This final stage constitutes the main contribution of this experimentally designed pipeline. It uses a ranked-list fusion criterion that combines the two rankings of an article $A$: the rank in the \textit{Synset Elasticsearch} list, denoted by $s_A$, and the rank in the semantic feature-based topic classifier list, denoted by $r_A$. If an article is present in both lists, we use a special version of the \textit{Mean Rank} score as in Equation \ref{equ:fusion1}. Otherwise, the default score of the article is given by Equation \ref{equ:fusion2},
where $|S|$ is the size of the \textit{Synset Elasticsearch} list.
\begin{equation}\label{equ:fusion1} t_A=\frac{s_A+r_A}{2} \end{equation}
\begin{equation}\label{equ:fusion2} t_A=r_A \times |S| \end{equation}
The rank score of the \textit{Fusion List} is finally used to re-rank the articles and generate a new ranked list whose size ranges between $\max(|S|, |R|)$ and $|S| + |R|$, where $|R|$ is the size of the semantic feature-based topic classifier list. However, in our model we define a hyper-parameter $a$ that determines the size of the \textit{Fusion} list, as in Equation \ref{equ:fusion3}. The hyper-parameter $a$ is experimentally determined based on the multi-label classification statistics and evaluation presented in Section \ref{sec:expe}.
\begin{equation}\label{equ:fusion3} |F| = a \times |S| \end{equation}
The output of this component, and of the whole pipeline, is a list of articles with their predicted list of topics, i.e., scientific category names. This list is obtained by applying a \textit{list inversion} process that takes as input all the per-topic \textit{Fusion} lists and generates a per-article list of topics for all articles present in any of the \textit{Fusion} lists.
The obtained list of predicted topics per article is optionally presented with a score value that reflects the ranking of the article in the \textit{Fusion} list of the topic. That score could be used to set an additional hyper-parameter, replacing $a$, namely a score threshold that determines whether the topic is added to the set of predicted topic tags of the article. However, a simple and efficient version, as shown in Section \ref{sec:expe}, relies only on the ranking information while keeping the design parameter $a$ in place.
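The sketch below implements the fusion rule of Equations~\ref{equ:fusion1}--\ref{equ:fusion3} and the list inversion step. Note that the handling of articles that appear only in the \textit{Synset} list is not fixed by Equation~\ref{equ:fusion2}; the symmetric default used below is our own assumption.
\begin{verbatim}
def fuse_rankings(synset_list, classifier_list, a=2):
    """Fuse the two ranked lists of one topic (ranks are 1-based)."""
    s_rank = {doc: i + 1 for i, doc in enumerate(synset_list)}
    r_rank = {doc: i + 1 for i, doc in enumerate(classifier_list)}
    size_s, size_r = len(synset_list), len(classifier_list)
    scores = {}
    for doc, r in r_rank.items():
        if doc in s_rank:
            scores[doc] = (s_rank[doc] + r) / 2.0    # Eq. (1): mean rank
        else:
            scores[doc] = r * size_s                 # Eq. (2): default score
    for doc, s in s_rank.items():
        # assumed symmetric default for articles absent from the classifier list
        scores.setdefault(doc, s * size_r)
    fused = sorted(scores, key=scores.get)
    return fused[:a * size_s]                        # Eq. (3): |F| = a * |S|

def invert_topic_lists(fusion_lists):
    """fusion_lists: dict topic -> fused ranked list of article ids.
    Returns: dict article id -> list of predicted topics."""
    per_article = {}
    for topic, docs in fusion_lists.items():
        for doc in docs:
            per_article.setdefault(doc, []).append(topic)
    return per_article
\end{verbatim}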
\section{Experiments} \label{sec:expe}
\subsection{Data Description}
\subsubsection{Scientific Paper Metadata from ISTEX Digital Library.}
The dataset used for running the experiments is extracted from
\textit{ISTEX}\footnote{Excellence Initiative of Scientific and Technical
Information \href{https://www.istex.fr/}{https://www.istex.fr/}}, a French
open-access metadata scientific digital
library\cite{CNRS__White_Paper_Open_Science_in_a_Digital_Republic__2016}. This
digital library is the result of the \textit{Digital Republic Bill}, a law
project of the French Republic discussed from 2014, one of whose aims is a
``wider data and knowledge
dissemination''\footnote{\href{https://www.republique-numerique.fr/pages/in-english}{https://www.republique-numerique.fr/pages/in-english}}.
The ISTEX digital library contains 21 million documents from 21 scientific
literature corpora in all disciplines, more than 9,000 journals and 300,000
ebooks published between 1473 and 2015 (as of April 2018).
Private publishers (e.g., Wiley, Springer, Elsevier, Emerald...) did not grant
access to their entire catalog of publications, which is why the
coverage does not extend to the most recent publications. In addition, because the
contracts were signed with the French Ministry of Higher Education and
Research, although anybody can access the general information about the
publications through the ISTEX platform (title, names of the authors and full
references of the publication, as well as metadata in MODS or JSON format), full
access is limited to French universities, engineering schools and
public research centers: documents in full text (in PDF, TEI, or plain text
format), XML metadata and other enrichments (e.g., bibliographical references
in TEI format and other useful tools and criteria for automatic indexing).
For our experiments, we considered only a subset of the ISTEX corpus: the articles
must have been published during the last twenty years, written in English and provided
with sufficient metadata, including their title, abstract, keywords and subjects.
\subsubsection{Scientific Topic from Web of Science}
For each scientific article, we also use a list of tags extracted from the
collection of \textit{Web of
Science}\footnote{\scriptsize{\href{https://images.webofknowledge.com/images/help/WOS/hp_subject_category_terms_tasca.html}{https://images.webofknowledge.com/images/help/WOS/hp\_subject\_category\_terms\_tasca.html}}} which contains more than 250 flattened topics. These
flattened topics are obtained as follows: when a topic is a sub-topic of
another one, we aggregate to the subcategory terms those of the parent
category (e.g., [computer science, artificial intelligence] or [computer
science, network]). Some of the topics are compositions of topics, like ``art
and humanities.''
The selected 33 topics are: [Artificial Intelligence; Biomaterials; biophysics; Ceramics; Condensed Matter; Emergency Medicine; Immunology; Infectious Diseases; Information Systems; Literature; Mechanics; Microscopy; Mycology; Neuroimaging; Nursing; Oncology; Ophthalmology; Pathology; Pediatrics; Philosophy; Physiology; Psychiatry; Psychology; Rehabilitation; Religion; Respiratory System; Robotics; Sociology; Substance Abuse; Surgery; Thermodynamics; Toxicology; Transplantation]
In our experiments, to facilitate the analysis of the results without bias due
to lexical pre-processing, we work only with topics containing neither
punctuation nor linkage words. Moreover, we have kept in our experiments only
\textit{Web of Science} topics with enough articles (in the ISTEX digital library)
to provide a significant positive subset of documents not used for the learning
part (at least 100 scientific articles). The topics, which can be single words
(such as ``thermodynamics'') or a concatenation of words (such as ``artificial
intelligence''), should be known in the semantic network in order to benefit from a
substantial synonym list. In our work, we present the results obtained with 33 topics, which are English single words or concatenations of several words.
\subsubsection{Synonym Sets from BabelNet.}
In our experiments, we produce a semantic enrichment by using a list of
synonyms for each concept, also known as a ``synset'' (for ``synonym set''). To
build our \textit{synset} list, we need a semantic network. After some
preliminary tests on several semantic networks, we chose \textit{BabelNet}
\cite{NavigliPonzetto:12aij}, which gave better
results. A sample synset from \textit{BabelNet} for the topic \textit{Mycology} is [Mycology, fungology, History of mycology, Micology, Mycological, Mycologists, Study of fungi].
\subsubsection{Supervised LDA}
Based on the state-of-the-art review in Section \ref{sec:sota}, we started by developing a model based on LDA. We defined a supervised version of LDA (\textit{sLDA}) in which the number of topics was set to 33. Each topic was guided by boosting the terms of the topic synonym set obtained from \textit{BabelNet}, with boosting values in [1, 10, 20, 30]. The dataset for experimenting with this model was extracted from the ISTEX scientific corpus by using \textit{Elasticsearch} to retrieve all articles that have at least one match of any of the 33 topics in any of these metadata fields: \textit{title}, \textit{abstract}, \textit{subjects} or \textit{keywords}. However, the text used to build the \textit{sLDA} was limited to the \textit{title} and the \textit{abstract}. The evaluation of the \textit{sLDA} model is then performed on a test set constructed from the \textit{keywords} and the \textit{subjects} fields.
\subsection{Experimental Process}\label{sec:exp}
Initially, we defined an accuracy indicator based on the count of tagged articles whose list of predicted topics has at least one label in common with the ground truth. This indicator will be denoted as the \textit{At least one common label} metric. The other statistical and multi-label classification evaluation metrics can easily be found in the literature\footnote{\href{https://en.wikipedia.org/wiki/Multi-label\_classification}{https://en.wikipedia.org/wiki/Multi-label\_classification}}.
In order to build an experiment of our proposed pipeline, we need to experimentally determine some hyper-parameters of it as follows:
\subsubsection{Semantic feature-based topic classifier}
We limit the text representation of an article to its title and abstract, which are available in the metadata. Comparing Paragraph vector and randomized truncated SVD \cite{halko2011finding} based on a metric that maximizes the inner cosine similarity of articles from the same topics and minimizes it for randomly selected articles, we choose the SVD decomposition of the TF--IDF weighted bag of words and bi-grams, resulting in 150 features for more than 4 million articles. As for the topic classifier, also by comparative evaluation, we select a \textit{Random Forest Classifier}, tune certain design parameters, and use it to rank the scientific corpus. We consider the top 100K articles of each topic classifier to be used in the fusion step.
\subsubsection{Synonym set Elasticsearch}
Reviewing many available semantic networks, we found that BabelNet was the most comprehensive one, combining many other networks \cite{NavigliPonzetto:12aij}. We therefore use it to extract a set of synonyms, i.e., a \textit{synset}, for each topic. This synset is then used to query the search engine of ISTEX, which is built on an Elasticsearch server. As shown in Section \ref{sec:results}, this technique is used as the experimental baseline.
\subsubsection{Fusion and per multi-label categorization}
The main design parameter of this phase is the size of the fused ranked list, which is set to twice the size of the \textit{Synset Elasticsearch} list.
\section{Results and Discussion} \label{sec:results}
First, we ran an experiment on \textit{sLDA} as described in Section \ref{sec:expe}. The result of this experiment was very disappointing according to the evaluation metrics. The best performing \textit{sLDA} model, obtained with a boosting value of 30, resulted in the following evaluation: \textit{F1 measure} = 0.02828, \textit{At-least-one-common-label} = 0.0443, \textit{Jaccard index} = 0.0219 and \textit{Hamming loss} = 0.0798. In comparison, our pipeline with $a=2$ achieved an \textit{F1 measure} of 0.6032 over the 33 topics. Therefore, \textit{sLDA} was obviously not a good candidate to be used as a baseline. However, it was an additional motivation for designing and proposing our pipeline.
After dropping \textit{sLDA} from further experiments due to its very low evaluation results, we added 2 more topics to the set of 33 topics, for a total of 35 topics. The 2 additional topics were [International Relations; Biodiversity Conservation]. We also added more examples to the test set, accounting for an additional ISTEX metadata field called \textit{categories:wos}, which does not exist in all the articles but was still considered a good source for increasing the number of test examples in our published benchmark.
We define 5 methods for the experiment. The first is the \textit{Synset Elasticsearch} method, denoted here by \textit{Synset}, which serves as the baseline of the benchmark. The other 4 methods are variations of our proposed pipeline with different values of the design parameter $a = [1, 2, 3, 4]$. These pipeline methods are denoted, according to the value of $a$, as \textit{Fusion1}, \textit{Fusion2}, \textit{Fusion3} and \textit{Fusion4}. The results of the multi-label classification evaluation metrics, described in Section \ref{sec:exp}, are shown in Table \ref{recall} and Figure \ref{fig:eval}.
\begin{table}
\centering
\caption{
Evaluation results based on the evaluation metrics \textit{Recall} and \textit{At least one common label} denoted here as the \textit{Common-Match} metric. The table also shows the size of the intersection between the method results and the test set that was used in computing the evaluation metric, denoted here as \textit{Intersection}. The value of \textit{Intersection} might also be a good indicator of the method being able to tag more articles.}\label{recall}
\begin{tabular}{|l|r|r| r|}
\hline
Method & Intersection & Common-Match & Recall\\
\hline\hline
\textit{Synset} & 22,192& 0.5284& 0.5285\\
\hline
\textit{Fusion1} & 22,123& 0.5736& 0.5735\\
\hline
\textit{Fusion2} & 41,642& 0.6375& 0.6374\\
\hline
\textit{Fusion3} & 56,114& \textbf{0.6470}& \textbf{0.6473}\\
\hline
\textit{Fusion4} & \textbf{67,625}& \textbf{0.6470}& 0.6464\\
\hline
\end{tabular}
\end{table}
While the evaluation metric values in Table \ref{recall} recommend higher $a$ values (3 or 4, with no significant difference between them), Figure \ref{fig:eval} shows that the best value is $a=2$ based on \textit{Precision}, \textit{F1 measure}, \textit{Jaccard index} and \textit{Hamming loss}. This means that if we increase the size of the fusion ranked list beyond double the size of the Synset method, we start losing accuracy. Another indicator that we should limit the size of the Fusion list is Figure \ref{fig:eval}.a, which shows that if we increase the size of the Fusion list, the difference in \textit{Label Cardinality} between the predicted results and the test set increases. This difference is a negative effect that should be minimized; otherwise, the model will tend to predict too many labels, which would more probably be irrelevant to the article.
\begin{figure}[htbp]
\centering
\subfloat[Label cardinality *] {\includegraphics[width=.5\linewidth]{charts/_label_cardinality_difference.png}}
\subfloat[Jaccard index **] {\includegraphics[width=.5\linewidth]{charts/_jaccard_index.png}}
\subfloat[Hamming loss $\times 10$] {\includegraphics[width=.5\linewidth]{charts/_10_times_hamming_loss.png}}
\subfloat[F1 measure] {\includegraphics[width=.5\linewidth]{charts/_f1_measure.png}}
\smallskip
\caption{Results of \textit{label cardinality difference}, \textit{Jaccard index}, \textit{Hamming loss} and \textit{F1 measure} evaluation metrics. While Synset is the method that uses synonyms of the category name as a query in Elasticsearch, Fusion 1, 2, 3 and 4 represent respectively the values of the pipeline design parameters $a=[1, 2, 3, 4]$ that determine the number of annotated articles per topic as an integer multiple of the size of \textit{Synset Elasticsearch} list. *: Difference value with the label cardinality of the compared test set of each of the methods. **: Equivalent to \textit{Precision} in our case of a test set label cardinality = 1.}
\label{fig:eval}
\end{figure}
Since the test set was not generated manually but by filtering on a set of scientific category terms in relevant metadata fields, we believe it is an incomplete ground truth. However, we think it is very suitable for comparing models and guiding the design of an efficient one, because the test labels are correct even if incomplete.
Accordingly, we performed some error analysis and found that in most cases the extra suggested category names are either actually correct topics (the article being multi-disciplinary) or topics from very similar and related fields. For example, a medical article from ISTEX\footnote{\href{https://api.istex.fr/document/23A2BC6E23BE8DE9971290A5E869F1FA4A5E49E4}{https://api.istex.fr/document/23A2BC6E23BE8DE9971290A5E869F1FA4A5E49E4}} is tagged with the category name [`Transplantation'] in the test set. The topics predicted by our method were [`Mycology', `Transplantation'], resulting in a precision value of $0.5$. However, when we read the abstract of that article, we find that it is about \textit{dematiaceous fungi}, which is indeed a \textit{Mycology} topic. So, in many cases where there is at least one common tag, the other tags are actually the discovered knowledge we aim for rather than false predictions. The complete list of results --where these cases can be verified-- is published along with all the experimental data and reproducibility code\footnote{\href{https://github.com/ERICUdL/stst}{https://github.com/ERICUdL/stst}}.
\section{Conclusion and Future Work} \label{sec:conclusion}
Governments, public organizations and even the private sector have recently invested in developing multi-disciplinary open-access scientific digital libraries. However, these huge scientific repositories face many information retrieval issues. This opens opportunities for text-mining based solutions that can automate cognitive efforts in data curation. In this paper, we proposed an efficient and practical pipeline that addresses the challenge of community-dependent tags and the issue caused by aggregating articles from heterogeneous scientific topic ontologies and category names used by different publishers. We believe that providing a solution for such a challenging issue would foster trans-disciplinary research and innovation by enhancing corpus information retrieval systems. We demonstrated that combining two main semantic information sources --semantic networks and the semantic features of the text of the article metadata-- is a successful approach for semantic-based multi-label categorization. Our proposed pipeline not only enables better trans-disciplinary research but also supports the process of metadata semantic enrichment with relevant scientific categorization tags.
Other available methods for semantic multi-label categorization, such as LDA, are not suitable in this context for several reasons. For instance, they require powerful computational resources for processing a big scientific corpus. Moreover, they need a pre-processing step to detect concepts composed of more than one word (e.g., ``Artificial Intelligence''). Finally, LDA is originally an unsupervised machine learning model, for which it is problematic to define some undetermined parameters such as the number of topics. Our proposed pipeline, however, overcomes all of these limitations and provides efficient results.
Towards improving the query expansion component of the pipeline (Synset Elasticsearch), we are planning to study the impact of using extra information from \textit{BabelNet} semantic network other than only the synonym sets. In particular, we want to include the neighboring concept names as well as the category names of the concept. We expect that such term semantic expansion will improve the performance of the method.
\section*{Acknowledgment}
We would like to thank ISTEX project and ARC6 program\footnote{ \href{http://www.arc6-tic.rhonealpes.fr/larc-6/}{http://www.arc6-tic.rhonealpes.fr/larc-6/}} of the Region Auvergne-Rh\^{o}ne-Alpes that funds the current PhD studies of the first author.
\newpage
|
1,314,259,995,632 | arxiv | \section{Introduction}
A natural question about automata and related models of computation
is the length of the shortest string an automaton accepts.
A function mapping the size of an automaton
to the maximum length of the shortest accepted string,
with the maximum taken over all automata of that size,
is a certain complexity measure for a family of automata.
For one-way finite automata, this measure is trivial:
the length of the shortest string
accepted by a nondeterministic finite automaton (NFA) with $n$ states
is at most $n-1$: this is the length of the shortest path to an accepting state.
On the other hand, Ellul et al.~\cite{EllulKrawetzShallitWang}
proved that the length of shortest strings \emph{not} accepted by an $n$-state NFA
is exponential in $n$.
Similar questions were studied for other models and some variants of the problem.
Chistikov et al.~\cite{ChistikovCzerwinskiHofmanPilipczukWehar}
investigated the length of shortest strings in counter automata.
The length of shortest strings in formal grammars
under intersections with regular languages
was studied by Pierre~\cite{Pierre},
and recently by Shemetova et al.~\cite{ShemetovaOkhotinGrigorev}.
Alpoge et al.~\cite{AlpogeAngSchaefferShallit} investigated shortest strings
in intersections of deterministic one-way finite automata (DFA).
The maximum length of shortest strings
for deterministic two-way finite automata (2DFA)
has been investigated in two recent papers.
First of all, from the well-known proof
of the PSPACE-completeness of the emptiness problem for 2DFA
by Kozen~\cite{Kozen}
it is understood that the length of the shortest string
accepted by an $n$-state 2DFA
can be exponential in $n$.
There is also an exponential upper bound on this length,
given by transforming a 2DFA to an NFA:
the construction by Kapoutsis~\cite{Kapoutsis}
uses at most $\binom{2n}{n+1}=\Theta(\frac{1}{\sqrt{n}} 4^n)$ states,
and hence the length of the shortest string is slightly less than $4^n$.
Overall, the maximum length of the shortest string is exponential,
with the base bounded by 4.
The first attempt to determine the exact base
was made by Dobronravov et al.~\cite{two_way_dfa_shortest},
who constructed a family of $n$-state 2DFA
with shortest strings of length $\Omega((\sqrt[5]{10})^n) \geqslant \Omega(1.584^n)$.
The automata they have actually constructed
belong to a special class of 2DFA:
the \emph{direction-determinate automata}.
These are 2DFA with the set of states
split into states accessible only by transitions from the right
and states accessible only by transitions from the left:
in other words, direction-determinate automata always remember the direction
of the last transition in their state.
Later, Krymski and Okhotin~\cite{KrymskiOkhotin_conf}
extended the method of Dobronravov et al.~\cite{two_way_dfa_shortest}
to produce automata of a more general form, with longer shortest accepted strings.
They constructed a family of non-direction-determinate 2DFA
with shortest strings of length $\Omega((\sqrt[4]{7})^n) \geqslant \Omega(1.626^n)$.
This paper improves these bounds.
First, the maximum length of the shortest string
accepted by $n$-state direction-determinate 2DFA
is determined precisely as $\binom{n}{\lfloor\frac{n}{2}\rfloor}-1 = \Theta(\frac{1}{\sqrt{n}} 2^n)$.
The upper bound on the length of the shortest string
immediately follows from the complexity of transforming direction-determinate 2DFA to NFA,
see Geffert and Okhotin~\cite{GeffertOkhotin}.
A matching lower bound is proved by a direct construction of a family of $n$-state automata.
The second result of this paper
is that not remembering the direction helps to accept longer shortest strings:
a family of $n$-state non-direction-determinate automata
with shortest strings of length $\frac{3}{4} \cdot 2^n - 1$
is constructed.
This is more than what is possible in direction-determinate automata.
\section{Definitions}
\begin{definition}
A \emph{two-way deterministic finite automaton} (2DFA)
is a quintuple
$\mathcal{A}=(\Sigma, Q, q_0, \delta, F)$,
in which:
\begin{itemize}
\item
$\Sigma$ is a finite alphabet,
which does not contain two special symbols:
the left end-marker ($\vdash$)
and the right end-marker ($\dashv$);
\item
$Q$ is a finite set of states;
\item
$q_0 \in Q$ is the initial state;
\item
$\delta \colon Q \times (\Sigma \cup \{{\vdash},{\dashv}\}) \to Q \times \{-1,+1\}$
is a partial transition function;
\item
$F \subseteq Q$
is the set of accepting states,
effective at the right end-marker ($\dashv$).
\end{itemize}
An input string $w = a_1 \ldots a_m \in \Sigma^*$ is given to an automaton
on a tape ${\vdash} a_1 \ldots a_m {\dashv}$.
The automaton starts at the left end-marker ${\vdash}$ in the state $q_0$.
At each moment, if the automaton is in a state $q \in Q$
and sees a symbol $a \in \Sigma \cup \{{\vdash}, {\dashv}\}$,
then, according to the transition function $\delta(q, a)=(r, d)$,
it enters a new state $r$
and moves to the left or to the right depending on the direction $d$.
If the requested value $\delta(q, a)$ is not defined, then the automaton rejects.
The automaton accepts the string, if it ever comes to the right end-marker $\dashv$
in any state from $F$.
The automaton can also loop.
The language recognized by an automaton $A$,
denoted by $L(A)$, is the set of all strings it accepts.
\end{definition}
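For illustration only, the definition above can be turned into a small simulator; the following sketch (in Python, with the symbols \texttt{'<'} and \texttt{'>'} standing for the end-markers) is not part of the constructions of this paper, and it detects looping simply by checking for a repeated configuration.
\begin{verbatim}
def run_2dfa(delta, q0, accepting, w):
    """Simulate a 2DFA on the input string w (a list of symbols).

    delta:     dict (state, symbol) -> (state, direction in {-1, +1});
               the end-markers are represented by '<' and '>'.
    accepting: states that accept at the right end-marker.
    Returns 'accept', 'reject' or 'loop'.
    """
    tape = ['<'] + list(w) + ['>']
    state, pos = q0, 0                      # start at the left end-marker
    seen = set()
    while True:
        if tape[pos] == '>' and state in accepting:
            return 'accept'
        if (state, pos) in seen:
            return 'loop'                   # a configuration repeated
        seen.add((state, pos))
        move = delta.get((state, tape[pos]))
        if move is None:
            return 'reject'                 # undefined transition
        state, d = move
        pos += d
        if pos < 0 or pos >= len(tape):
            return 'reject'                 # fell off the tape (ill-formed 2DFA)
\end{verbatim}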
This paper also uses a subclass of 2DFA,
in which one can determine the direction of the previous transition
from the current state.
\begin{definition}[\cite{KuncOkhotin_reversible}]
A 2DFA is called \emph{direction-determinate},
if there is a partition of the set of states $Q=Q^+ \cup Q^-$,
with $Q^+ \cap Q^- = \emptyset$,
such that for each transition $\delta(q, a)=(r, +1)$, the state $r$ must belong to $Q^+$,
and for each transition $\delta(q, a)=(r, -1)$, the state $r$ is in $Q^-$.
\end{definition}
The known upper bounds on the length of the shortest accepted string
are different for direction-determinate 2DFA and for 2DFA of the general form.
These bounds are inferred from the complexity of transforming
two-way automata with $n$ states to one-way NFA:
for 2DFA of the general form,
as proved by Kapoutsis~\cite{Kapoutsis},
it is sufficient and in the worst case necessary
to use $\binom{2n}{n}$ states in a simulating NFA,
whereas for direction-determinate 2DFA
the simulating NFA requires $\binom{n}{\lfloor\frac{n}{2}\rfloor}$ states in the worst case,
see Geffert and Okhotin~\cite{GeffertOkhotin}.
Since the shortest string in a language cannot be longer
than the shortest path to an accepting state in an NFA,
the following bounds hold.
\begin{theorem}[Dobronravov et al.~\cite{two_way_dfa_shortest}]
Let $n \geqslant 1$,
and let $A$ be a 2DFA with $n$ states,
which accepts at least one string.
Then the length of the shortest string accepted by $A$
is at most $\binom{2n}{n}-1$.
If the automaton $A$ is direction-determinate,
then the length of the shortest accepted string
does not exceed $\binom{n}{\lfloor\frac{n}{2}\rfloor}-1$.
\end{theorem}
The first result of this paper is that
this upper bound for direction-determinate automata
is actually precise.
\section{Shortest accepted strings for direction-determinate automata}
In this section, direction-determinate automata
with the maximum possible length $\binom{n}{\lfloor\frac{n}{2}\rfloor}-1$
of shortest accepted strings,
where $n$ is the number of states,
will be constructed.
Automata are constructed for every $k$ and $\ell$,
where $k$ is the number of states reachable by transitions to the right
and $\ell$ is the number of states reachable in the left direction.
The following theorem shall be proved.
\begin{theorem}\label{dirdet_shortest_theorem}
For every $k \geqslant 2$ and $\ell \geqslant 0$
there exists a direction-determinate 2DFA with the set of states $Q=Q^+ \cup Q^-$,
where $|Q^+|=k$ and $|Q^-|=\ell$,
such that the length of the shortest string it accepts is $\binom{k+\ell}{\ell+1}-1$.
\end{theorem}
The automaton constructed in the theorem works as follows.
While working on its shortest string,
it processes every pair of consecutive symbols
by moving back and forth between them,
thus effectively comparing them to each other.
Eventually it moves on to the next pair and processes it in the same way.
It cannot come back to the previous pair anymore,
because it has no transitions for that.
The automaton's motion between two neighbouring symbols
begins when it first arrives from the first symbol to the second in some state from $Q^+$.
Then it moves back and forth,
alternating between states from $Q^+$ at the second symbol
and states from $Q^-$ at the first symbol,
and finally leaves the second symbol to the right.
Among the states visited by the automaton during this back-and-forth motion,
the number of states from $Q^+$ is greater by one than the number of states from $Q^-$.
Two such sets of states will be denoted
by a pair $(P, R)$, where $P \subseteq Q^-$, $R \subseteq Q^+$ and $|R|=|P|+1$.
\begin{proposition}
There are $\binom{k+\ell}{\ell+1}$
different pairs $(P, R)$, such that $P \subseteq Q^-$, $R \subseteq Q^+$ and $|R|=|P|+1$.
\end{proposition}
\begin{proof}
There are as many pairs $(P, R)$ as pairs $(Q^- \setminus P, R)$, where $|R|=|P|+1$.
The number of pairs of the latter form
is equal to the number of subsets of $Q$ of size $\ell+1$,
that is, $\binom{k+\ell}{\ell+1}$.
\end{proof}
Let the sets $Q^+$ and $Q^-$ be linearly ordered.
Then one can define an order on the set of pairs $(P, R)$ as follows.
In every such pair,
let $P=\{p_1, \ldots, p_m\}$, where $p_1 < \ldots < p_m$,
and $R=\{r_1, \ldots, r_{m+1}\}$, where $r_1 < \ldots < r_{m+1}$.
There is a corresponding sequence to each pair,
of the form
$r_1$, $-p_1$, $r_2$, $-p_2$, \ldots, $r_m$, $-p_m$, $r_{m+1}$,
and different pairs are compared by the lexicographic order on these sequences.
In Table~\ref{t:order_on_pairs_P_R},
all pairs $(P, R)$, for $k=4$ and $\ell=2$,
are given in increasing order,
along with the corresponding sequences.
\begin{table}[t]
\begin{equation*}
\begin{array}{ll}
\text{pairs } (P, R) & \text{sequences} \\
\hline
\emptyset, \{1\} & (1)\\
\{2'\}, \{1, 2\} & (1,-2',2)\\
\{2'\}, \{1, 3\} & (1,-2',3)\\
\{2'\}, \{1, 4\} & (1,-2',4)\\
\{1'\}, \{1, 2\} & (1,-1',2)\\
\{1', 2'\}, \{1, 2, 3\} & (1,-1',2,-2',3)\\
\{1', 2'\}, \{1, 2, 4\} & (1,-1',2,-2',4)\\
\{1'\}, \{1, 3\} & (1,-1',3)\\
\{1', 2'\}, \{1, 3, 4\} & (1,-1',3,-2',4)\\
\{1'\}, \{1, 4\} & (1,-1',4)\\
\emptyset, \{2\} & (2)\\
\{2'\}, \{2, 3\} & (2,-2',3)\\
\{2'\}, \{2, 4\} & (2,-2',4)\\
\{1'\}, \{2, 3\} & (2,-1',3)\\
\{1', 2'\}, \{2, 3, 4\} & (2,-1',3,-2',4)\\
\{1'\}, \{2, 4\} & (2,-1',4)\\
\emptyset, \{3\} & (3)\\
\{2'\}, \{3, 4\} & (3,-2', 4)\\
\{1'\}, \{3, 4\} & (3,-1',4)\\
\emptyset, \{4\} & (4)
\end{array}
\end{equation*}
\caption{All pairs $(P, R)$
for sets of states $Q^+=\{1, 2, 3, 4\}$ and $Q^-=\{1', 2'\}$.}
\label{t:order_on_pairs_P_R}
\end{table}
Let $N=\binom{k+\ell}{\ell+1}$ be the number of pairs.
Then all pairs are enumerated in increasing order
as
$(P^{(1)}, R^{(1)}) < \ldots < (P^{(N)}, R^{(N)})$,
where
$P^{(i)}=\{p^{(i)}_1, \ldots, p^{(i)}_{m_i}\}$ and
$R^{(i)}=\{r^{(i)}_1, \ldots, r^{(i)}_{m_i+1}\}$.
In particular, the least pair is $(P^{(1)}, R^{(1)})=(\emptyset, \{\min Q^+\})$,
because the corresponding sequence ($\min Q^+$) is lexicographically the least.
The greatest pair is $(P^{(N)}, R^{(N)})=(\emptyset, \{\max Q^+\})$.
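As a quick sanity check of this ordering (not part of the proof), the enumeration of the pairs and of their order can be reproduced in a few lines of Python; primed states are represented by negative integers, so that comparing the interleaved sequences as tuples gives exactly the lexicographic order defined above.
\begin{verbatim}
from itertools import combinations
from math import comb

def enumerate_pairs(k, ell):
    """All pairs (P, R) with P in Q-, R in Q+, |R| = |P| + 1, ordered by
    the lexicographic order on the sequences r1, -p1, r2, -p2, ..., r_{m+1}."""
    pairs = []
    for m in range(min(ell, k - 1) + 1):
        for p in combinations(range(1, ell + 1), m):
            for r in combinations(range(1, k + 1), m + 1):
                seq = []
                for j in range(m):
                    seq += [r[j], -p[j]]
                seq.append(r[m])
                pairs.append((tuple(seq), set(p), set(r)))
    pairs.sort(key=lambda t: t[0])
    return [(p, r) for _, p, r in pairs]

pairs = enumerate_pairs(4, 2)
assert len(pairs) == comb(4 + 2, 2 + 1)   # 20 pairs, as in Table 1
assert pairs[0] == (set(), {1})           # the least pair
assert pairs[-1] == (set(), {4})          # the greatest pair
\end{verbatim}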
The desired direction-determinate automaton $A$
with the shortest accepted string of length $N-1$
is defined over an alphabet $\Sigma=\{a_1, \ldots, a_{N-1}\}$,
and the shortest accepted string will be $w=a_1 \ldots a_{N-1}$.
The set of states is defined as $Q=Q^+ \cup Q^-$,
where $Q^+=\{1, \ldots, k\}$ and $Q^-=\{1', \ldots, \ell'\}$.
The initial state is $q_0=1$.
The only transition by the left end-marker ($\vdash$)
leads from the initial state to the least state in $R^{(1)}$.
\begin{subequations}
\begin{align}
\label{A_transition_initial}
\delta(q_0, {\vdash}) &= (r^{(1)}_1, +1)
\intertext{%
For each symbol $a_i$, transitions are defined in the states $R^{(i)} \cup P^{(i+1)}$.
If the automaton is at the symbol $a_i$ in any state from $R^{(i)}$ (except for the greatest state),
then it moves to the left in the corresponding state from $P^{(i)}$.
}
\label{A_transition_r_to_p}
\delta(r^{(i)}_j, a_i) &= (p^{(i)}_j, -1)
&& (j \in \{1, \ldots, m_i\})
\intertext{%
For the greatest state in $R^{(i)}$,
there is no corresponding state in $P^{(i)}$,
and so the automaton moves to the right
(and this is the only way to move from $Q^+$ to $Q^+$,
and hence the only way to advance from the symbol $a_i$ to the next symbol for the first time).
}
\label{A_transition_r_to_next_r}
\delta(r^{(i)}_{m_i+1}, a_i) &= (r^{(i+1)}_1, +1)
\intertext{%
In each state from $P^{(i)}$,
the automaton moves to the right
in the next available state from $R^{(i)}$.
}
\label{A_transition_p_to_r}
\delta(p^{(i+1)}_j, a_i) &= (r^{(i+1)}_{j+1}, +1)
&& (j \in \{1, \ldots, m_{i+1}\})
\end{align}
\end{subequations}
There are no transitions at the right end-marker,
and there is one accepting state: $F=\{r^{(N)}_{m_N+1}\}$.
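Again purely as a sanity check, the transition table \eqref{A_transition_initial}--\eqref{A_transition_p_to_r} can be assembled from the enumerated pairs and fed to the simulator sketched after the definition of 2DFA; states of $Q^-$ are encoded as negative integers and each symbol $a_i$ simply as the integer $i$.
\begin{verbatim}
def build_automaton(k, ell):
    """Transition table of the automaton A, assembled from the ordered pairs."""
    pairs = enumerate_pairs(k, ell)              # from the earlier sketch
    n_pairs = len(pairs)
    delta = {(1, '<'): (min(pairs[0][1]), +1)}   # the single transition at '<'
    for i in range(1, n_pairs):                  # transitions at the symbol a_i
        p_i, r_i = sorted(pairs[i - 1][0]), sorted(pairs[i - 1][1])
        p_next, r_next = sorted(pairs[i][0]), sorted(pairs[i][1])
        m_i = len(p_i)
        for j in range(m_i):                     # r^(i)_j -> p^(i)_j, move left
            delta[(r_i[j], i)] = (-p_i[j], -1)
        delta[(r_i[m_i], i)] = (r_next[0], +1)   # greatest r: move right
        for j in range(len(p_next)):             # p^(i+1)_j -> r^(i+1)_{j+1}
            delta[(-p_next[j], i)] = (r_next[j + 1], +1)
    accepting = {max(pairs[n_pairs - 1][1])}     # the single accepting state
    return delta, accepting, n_pairs

delta, accepting, n_pairs = build_automaton(4, 2)
w = list(range(1, n_pairs))                      # the string a_1 ... a_{N-1}
assert run_2dfa(delta, 1, accepting, w) == 'accept'
assert len(w) == n_pairs - 1 == 19
\end{verbatim}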
\begin{figure}[t]
\centerline{\includegraphics[scale=1]{example_dirdet_k4_l2_w19}}
\caption{The accepting computation of the automaton $A$ on the string $w$,
for $k=4$ and $\ell=2$.}
\label{f:example_k4_l2}
\end{figure}
The computation of the automaton on the string $w = a_1 \ldots a_{N-1}$
is illustrated in Figure~\ref{f:example_k4_l2}.
The automaton gradually advances,
and moves between every two subsequent symbols,
$a_{i-1}$ and $a_i$,
according to the sets $P^{(i)}$ and $R^{(i)}$.
Transitions at $a_i$ expect that there is $a_{i-1}$ to the left,
whereas transitions at $a_{i-1}$ expect $a_i$ to the right.
As long as every symbol is followed by the next symbol in order,
these expectations will be fulfilled each time,
and the automaton accepts in the end.
\begin{lemma}
The automaton $A$ accepts the string $w=a_1 \ldots a_{N-1}$.
\end{lemma}
\begin{proof}
It is claimed that the automaton $A$, executed on the string $w$,
eventually arrives to each symbol $a_i$ in the state $r^{(i)}_{m_i + 1}$.
This is proved by induction on $i$.
Base case $i=1$:
the first transition \eqref{A_transition_initial}
moves the automaton to the state $r^{(1)}_1$.
The first pair $(P^{(1)}, R^{(1)})$ is $(\emptyset, \{1\})$,
and so $r^{(1)}_1 = r^{(1)}_{m_1+1}$.
Induction step.
Assume that the automaton comes to the symbol $a_i$ in the state $r^{(i)}_{m_i+1}$.
Then it makes a transition~\eqref{A_transition_r_to_next_r} to the right in the state $r^{(i+1)}_1$.
Then it executes the sequence of transitions \eqref{A_transition_r_to_p}, \eqref{A_transition_p_to_r},
defined by the pair $(P_{i+1}, R_{i+1})$,
moving back and forth between $a_{i+1}$ and $a_i$,
and passing through the states $p^{(i+1)}_1$, $r^{(i+1)}_2$, $p^{(i+1)}_2$, \ldots
$r^{(i+1)}_{m_{i+1}}$, $p^{(i+1)}_{m_{i+1}}$, $r^{(i+1)}_{m_{i+1}+1}$.
And so it comes to the symbol $a_{i+1}$ in the state $r^{(i+1)}_{m_{i+1}+1}$,
as shown in Figure~\ref{f:dirdet_shortest_proof}.
\begin{figure}[t]
\centerline{\includegraphics[scale=1]{dirdet_shortest_proof}}
\caption{The moves of $A$ between two neighbouring symbols of $w$.}
\label{f:dirdet_shortest_proof}
\end{figure}
In the end, the automaton comes to the last symbol $a_{N-1}$ in the state $r^{(N-1)}_{m_{N-1}+1}$.
Then it makes a transition~\eqref{A_transition_r_to_next_r}
and moves to the right end-marker in the state $r^{(N)}_1$.
And this is the accepting state $r^{(N)}_{m_N+1}$,
because the last pair $(P^{(N)}, R^{(N)})$ is $(\emptyset, \{k\})$.
Therefore, the string $w$ is accepted.
\end{proof}
It is claimed that the automaton $A$ cannot accept any shorter string.
It cannot accept the empty string;
if it did, then the first transition would lead to the right end-marker in the state 1,
and the automaton would reject, because $k \neq 1$.
Next, it will be shown that each accepted string
begins with the symbol $a_1$
and ends with the symbol $a_{N-1}$.
Finally, it will be proved that the automaton cannot skip any number,
that is, the number of every next symbol,
as compared to the number of the previous symbol,
cannot increase by more than $1$.
If the number decreases or does not change, this would make the string only longer;
but in order to reach $a_{N-1}$ from $a_1$ without skipping any number,
the automaton would have to move through all symbols of the alphabet,
and therefore an accepted string cannot be shorter than $N-1$ symbols.
\begin{lemma}
Every string accepted by the automaton $A$
begins with the symbol $a_1$.
\end{lemma}
\begin{proof}
Let the automaton $A$ accept some string
that starts from some symbol $a_i$.
The transition from the initial configuration
leads the automaton to the state $r^{(1)}_1$ at the first symbol $a_i$.
As $(P^{(1)}, R^{(1)}) = (\emptyset, \{1\})$,
the state $r^{(1)}_1$ is $1$.
Transitions by the symbol $a_i$ are defined only in states from $R^{(i)} \cup P^{(i+1)}$,
and hence $1 \in R^{(i)}$, for otherwise the automaton immediately rejects.
If there is at least one more state in $R^{(i)}$,
then the transition in the state $1$ by $a_i$ moves the automaton to the left.
Then the automaton returns to the left end-marker,
and then either loops or rejects,
because there is only one transition defined there.
Therefore, there are no other states in $R^{(i)}$ besides $1$,
and so, $(P^{(i)}, R^{(i)})=(\emptyset, \{1\})=(P^{(1)}, R^{(1)})$,
which implies $i=1$.
\end{proof}
\begin{lemma}
Every string accepted by the automaton $A$
ends with the symbol $a_{N-1}$.
\end{lemma}
\begin{proof}
Let a string accepted by $A$ end with a symbol $a_i$.
To accept, the automaton should move from $a_i$ to the right
using the transition~\eqref{A_transition_r_to_next_r},
and it arrives to the right end-marker in the state $r^{(i+1)}_1$.
As the only accepting state is $k$,
and the automaton rejects at the right end-marker in all other states,
this state must be $r^{(i+1)}_1=k$.
Because the state $r^{(i+1)}_1$ is the least in $R^{(i+1)}$,
it follows that $R^{(i+1)}=\{k\}$ and $P^{(i+1)}=\emptyset$.
Therefore, this is the last pair, and $i=N-1$.
\end{proof}
\begin{lemma}
No string accepted by the automaton $A$
may contain any substring of the form $a_i a_j$, where $j>i+1$.
\end{lemma}
\begin{proof}
The proof is by contradiction.
Suppose that $A$ accepts a string
that contains a substring $a_i a_j$, with $j>i+1$.
In order to accept,
the automaton should eventually reach this symbol $a_j$ for the first time,
moving to it from the symbol $a_i$.
To make this transition, the automaton should be at $a_i$ in some state from $Q^+$
(indeed, if it were in the state from $Q^-$,
then it would have been at $a_j$ already at the previous step).
Then the automaton must use the transition~\eqref{A_transition_r_to_next_r}
to move from $a_i$ to $a_j$,
and this transition leads to the state $r^{(i+1)}_1$.
For the computation to go onward,
this state should lie in $R^{(j)}$.
Moreover, the state $r^{(i+1)}_1$ should be the least in $R^{(j)}$,
for otherwise the pair $(P^{(j)}, R^{(j)})$
would be less than the pair $(P^{(i+1)}, R^{(i+1)})$.
Also, $r^{(i+1)}_1$ cannot be the only state in $R^{(j)}$:
if it were, then $(P^{(j)}, R^{(j)})$ would either coincide with or be less than $(P^{(i+1)}, R^{(i+1)})$.
It can be concluded that $r^{(i+1)}_1=r^{(j)}_1$,
and the next transition from this state leads to the state $p^{(j)}_1$,
moving to the symbol $a_i$.
For the automaton to have a transition in the state $p^{(j)}_1$ at $a_i$,
this state should belong to $P^{(i+1)}$.
In addition, it should be the least among the states in $P^{(i+1)}$,
because if there were a lesser state $p$,
then the second term in the sequence for $(P^{(i+1)}, R^{(i+1)})$ would be $-p$,
and this pair would be greater than $(P^{(j)}, R^{(j)})$.
This leads to the equality $p^{(j)}_1=p^{(i+1)}_1$.
By analogous arguments, one can prove that
the sequences for $(P^{(j)}, R^{(j)})$ and for $(P^{(i+1)}, R^{(i+1)})$
must coincide and continue infinitely.
This is impossible, because the numbers of the states in the sequence increase,
while there are only finitely many states.
\end{proof}
\begin{corollary}[from Theorem~\ref{dirdet_shortest_theorem}]
For every $n \geqslant 2$,
there is a direction-determinate 2DFA with $n$ states,
such that the length of the shortest string it accepts is $\binom{n}{\lfloor\frac{n}{2}\rfloor}-1$.
\end{corollary}
\section{Longer shortest strings for automata of the general form}
\begin{figure}[t]
\centerline{\includegraphics[scale=1]{2dfa_shortest_2_3_4}}
\caption{Computations of automata $A_2$, $A_3$ and $A_4$
from the proof of Theorem~\ref{theorem_2_pow_n}
on their shortest strings $w_2$, $w_3$ and $w_4$.}
\label{f:2dfa_shortest_2_3_4}
\end{figure}
The main result of this section is the construction of a family of 2DFA
with shortest strings of length $3\cdot 2^{n-2}-1$,
where $n$ is the number of states in an automaton.
This is more than the maximum possible length of shortest strings
for direction-determinate automata;
in other words, \emph{forgetting the direction is useful.}
\begin{theorem}\label{theorem_2_pow_n}
For each $n \geqslant 2$ there exists a 2DFA with $n$ states,
such that the shortest string it accepts is of length $3\cdot 2^{n-2}-1$.
\end{theorem}
\begin{proof}
The automata and the shortest strings they accept
are constructed inductively;
for small values of $n$ they are given in Figure~\ref{f:2dfa_shortest_2_3_4}.
For the inductive proof to work,
the following set of properties is ensured for every $n$.
\begin{claim*}
For each $n \geqslant 2$ there exists a 2DFA $A_n = (\Sigma_n, Q_n, \delta_n)$
with no transitions by end-markers, no initial state and no accepting states,
with the set of states $Q_n = \{1, \ldots, n\}$,
and there exists a string $w_n \in \Sigma_n^*$
of length $3\cdot 2^{n-2}-1$,
such that the following two properties hold.
\begin{enumerate}
\item
If $A_n$ starts at any symbol of $w_n$ in the state $n$,
then it eventually leaves this string
by a transition from its rightmost symbol to the right in the state $1$.
\item
If for some non-empty string $u$ there exists a position,
in which the automaton $A_n$ can start in the state $n$
and eventually leave the string $u$
by a transition from its rightmost symbol to the right in the state $1$,
then $u$ is at least as long as $w_n$.
\end{enumerate}
\end{claim*}
The first observation is that Theorem~\ref{theorem_2_pow_n}
follows from this claim.
Let $n \geqslant 2$,
and let $A_n$ and $w_n$ be an automaton and a string
that satisfy the conditions in the claim.
Then $A_n$ is supplemented
with an initial state $n$,
a set of accepting states $\{1\}$
and a single transition by the left end-marker: from the state $n$ to the state $n$;
no transitions by the right end-marker are defined.
The resulting automaton $A'_n$ becomes a valid 2DFA,
and it accepts the string $w_n$ as follows:
from the initial state at $\vdash$
it moves to the first symbol of $w_n$ in the state $n$,
then, by the first point of the claim,
the automaton eventually leaves $w_n$ to the right in the state $1$,
and thus arrives to the right end-marker $\dashv$ in an accepting state.
To see that every string accepted by $A'_n$ is of length at least $|w_n|$,
let $u$ be any accepted string.
It is not empty, because on the empty string
the automaton steps on the right end-marker in the state $n$ and rejects.
Then, after the first step the automaton $A'_n$
is at the first symbol of $u$ in the state $n$.
It cannot usefully return to $\vdash$,
because the only transition at this label has already been used,
so if the automaton ever comes back there, it will reject or loop.
Also, if the automaton arrives at $\dashv$ in a state other than $1$, it rejects,
since no transitions by the right end-marker are defined.
In order to accept, it must therefore arrive at $\dashv$ in the state $1$,
and this is the first and only time it leaves the string $u$.
Then, by the second point of the claim,
the length of $u$ cannot be less than the length of $w_n$.
It remains to prove the claim,
which is done by induction on $n$.
Base case: $n = 2$.
The automaton $A_2 = (\Sigma_2, Q_2, \delta_2)$ for $n = 2$ is constructed as follows.
The alphabet is $\Sigma_2 = \{a, b\}$, and the set of states is $Q_2 = \{1, 2\}$.
The transition function is defined by
\begin{align*}
\delta_2(2,a) &= (2,+1), \\
\delta_2(2,b) &= (1,-1), \\
\delta_2(1,a) &= (1,+1), \\
\delta_2(1,b) &= (1,+1).
\end{align*}
The string $w_2$ is $ab$,
and the computation of $A_2$ on $w_2$ is presented in Figure~\ref{f:2dfa_shortest_2_3_4} (top left).
To be precise, computations starting in the state $2$ either at $a$ or at $b$
both end by leaving the string to the right in the state $1$, as claimed.
There are only two shorter non-empty strings: $a$ and $b$.
If the automaton starts on the string $a$ in the state $2$,
then it moves to the right in the state $2$;
on $b$, it moves to the left in the state $1$.
In either case, it does not go to the right in the state $1$.
Thus, the second point of the claim is satisfied.
The length of the string is $|w_2| = 2 = 3 \cdot 2^0-1$.
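As an aside, not needed for the proof, both points of the claim for $A_2$ can be verified mechanically. The following Python sketch simulates a 2DFA without end-markers (the encoding of the transition function as a dictionary is only for illustration) and checks by brute force that $w_2=ab$ has the required property, while no shorter non-empty string does.
\begin{verbatim}
# Minimal sketch: simulate a 2DFA without end-markers and check the
# claim for A_2 and w_2 = ab by brute force.
from itertools import product

def run(delta, w, pos, state):
    """Return ('right', q) or ('left', q) if the head leaves w in state q,
    or None if the computation halts inside w or loops."""
    seen = set()
    while 0 <= pos < len(w):
        if (state, pos) in seen:      # repeated configuration => loop
            return None
        seen.add((state, pos))
        move = delta.get((state, w[pos]))
        if move is None:              # undefined transition => halt inside w
            return None
        state, step = move
        pos += step
    return ('right', state) if pos >= len(w) else ('left', state)

def leaves_right_in_1(delta, top_state, w):
    # Point 1 of the claim: started in `top_state` at any position of w,
    # the automaton leaves w to the right in the state 1.
    return all(run(delta, w, i, top_state) == ('right', 1) for i in range(len(w)))

delta2 = {(2, 'a'): (2, +1), (2, 'b'): (1, -1),
          (1, 'a'): (1, +1), (1, 'b'): (1, +1)}

assert leaves_right_in_1(delta2, 2, 'ab')
# Point 2 of the claim: no shorter non-empty string admits such a computation.
assert not any(run(delta2, u, i, 2) == ('right', 1)
               for u in map(''.join, product('ab', repeat=1))
               for i in range(len(u)))
\end{verbatim}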
Induction step: $n \to n+1$.
Let an $n$-state 2DFA $A_n = (\Sigma_n, Q_n, \delta_n)$
and a string $w_n \in \Sigma_n^*$
satisfy the claim.
The $(n+1)$-state automaton $A_{n+1}$ satisfying the claim
is constructed as follows.
Let $A_{n+1} = (\Sigma_{n+1}, Q_{n+1}, \delta_{n+1})$.
\begin{itemize}
\item
Its alphabet is
$\Sigma_{n+1} = \overrightarrow{\Sigma_n} \cup \overleftarrow{\Sigma_n} \cup \{\#\}$,
where $\overrightarrow{\Sigma_n} = \set{\overrightarrow{a}}{a \in \Sigma_n}$
and $\overleftarrow{\Sigma_n} = \set{\overleftarrow{a}}{a \in \Sigma_n}$.
\item
The set of states is $Q_{n+1} = Q_n \cup \{n+1\} = \{1,\ldots, n+1\}$.
\item
The transition function is defined as follows.
In the new state $n+1$,
the automaton moves by all symbols with arrows
in the directions pointed by the arrows.
\begin{align*}
\delta_{n+1}(n+1,\overrightarrow{a}) &= (n+1,+1),
&& \text{for } a \in \Sigma_n
\\
\delta_{n+1}(n+1,\overleftarrow{a}) &= (n+1,-1),
&& \text{for } a \in \Sigma_n
\intertext{%
In all old states $1, \ldots, n$,
on symbols with arrows,
the new automaton works in the same way
as the automaton $A_n$ on the corresponding symbols without arrows.
}
\delta_{n+1}(i,\overrightarrow{a}) = \delta_{n+1}(i,\overleftarrow{a}) &=
\delta_n(i,a),
&& \text{for } a \in \Sigma_n \text{ and } i \in \{1, \ldots, n\}
\\
\intertext{%
By the new separator symbol $\#$,
only two transitions are defined.
In the state $n+1$, the automaton moves to the left in the state $n$,
thus starting the automaton $A_n$
on the substring to the left.
}
\delta_{n+1}(n+1,\#) &= (n,-1)
\intertext{%
And if the automaton gets to $\#$ in the state $1$
(which happens after concluding the simulation of $A_n$
on the substring to the left),
then the automaton moves to the right in the state $n$
to start the simulation of $A_n$ also on the substring to the right of the separator $\#$.
}
\delta_{n+1}(1,\#) &= (n,+1)
\end{align*}
All other transitions are left undefined.
\end{itemize}
\begin{figure}[t]
\centerline{\includegraphics[scale=1]{theorem_2_pow_n_computation_on_w_n_plus_1.pdf}}
\caption{Computation of the automaton $A_{n+1}$ on the string $w_{n+1}$.}
\label{f:theorem_2_pow_n_computation_on_w_n_plus_1}
\end{figure}
Note that once the automaton $A_{n+1}$ leaves the state $n+1$,
it never returns to it,
because there are no transitions to $n+1$ from any other state.
Let
$h \colon (\overrightarrow{\Sigma_n}\cup \overleftarrow{\Sigma_n})^* \to \Sigma_n^*$
be a string homomorphism
which removes the arrow from the top of every symbol,
that is,
$h(\overrightarrow{a})=h(\overleftarrow{a})=a$ for all $a \in \Sigma_n$.
The automaton $A_{n+1}$
works in the states $1, \ldots, n$
on symbols from $\overrightarrow{\Sigma_n}\cup \overleftarrow{\Sigma_n}$
as $A_n$ works on the corresponding symbols from $\Sigma_n$.
Then, if $h(w) = w_n$ for some $w \in (\overrightarrow{\Sigma_n}\cup \overleftarrow{\Sigma_n})^*$,
it follows that the automaton $A_{n+1}$,
having started in the state $n$ at any symbol of $w$,
eventually leaves the string $w$ by moving to the right in the state $1$.
Furthermore, if $|w| < |w_n|$
for some string $w \in (\overrightarrow{\Sigma_n}\cup \overleftarrow{\Sigma_n})^*$,
then the automaton $A_{n+1}$,
having started in the state $n$ at any symbol of $w$,
cannot leave the string by moving to the right in the state $1$.
The string $w_{n+1}$ is defined as $\overrightarrow{w_n}\#\overleftarrow{w_n}$,
where $\overrightarrow{a_1 \ldots a_\ell} = \overrightarrow{a_1} \ldots \overrightarrow{a_\ell}$
and $\overleftarrow{a_1 \ldots a_\ell} = \overleftarrow{a_1} \ldots \overleftarrow{a_\ell}$
for every string $a_1 \ldots a_\ell \in \Sigma_n^*$.
The length of $w_{n+1}$ is
$|w_{n+1}| = 2|w_n|+1 = 2(3\cdot 2^{n-2}-1)+1 = 3 \cdot 2^{n-1}-1$,
as desired.
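Again as an aside, the construction and the length formula can be checked mechanically for small $n$. The Python sketch below continues the one given after the base case (it reuses \texttt{run}, \texttt{leaves\_right\_in\_1} and \texttt{delta2} defined there); symbols with arrows are encoded as pairs, and the separator is the plain character \texttt{\#}.
\begin{verbatim}
# Construct A_{n+1} and w_{n+1} from A_n and w_n, following the proof.
# A symbol with an arrow is encoded as a pair ('>', a) or ('<', a).
def extend(delta_n, n, sigma_n, w_n):
    delta = {}
    for a in sigma_n:
        delta[(n + 1, ('>', a))] = (n + 1, +1)   # follow the arrows in state n+1
        delta[(n + 1, ('<', a))] = (n + 1, -1)
        for i in range(1, n + 1):                # old states ignore the arrows
            if (i, a) in delta_n:
                delta[(i, ('>', a))] = delta_n[(i, a)]
                delta[(i, ('<', a))] = delta_n[(i, a)]
    delta[(n + 1, '#')] = (n, -1)                # start A_n on the left part
    delta[(1, '#')] = (n, +1)                    # ... then on the right part
    sigma = [('>', a) for a in sigma_n] + [('<', a) for a in sigma_n] + ['#']
    w = tuple(('>', a) for a in w_n) + ('#',) + tuple(('<', a) for a in w_n)
    return delta, n + 1, sigma, w

delta, n, sigma, w = delta2, 2, ['a', 'b'], tuple('ab')
for _ in range(3):                               # check n = 3, 4, 5
    delta, n, sigma, w = extend(delta, n, sigma, w)
    assert leaves_right_in_1(delta, n, w)
    assert len(w) == 3 * 2 ** (n - 2) - 1
\end{verbatim}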
First, it is proved that the automaton $A_{n+1}$ works on the string $w_{n+1}$
as stated in the first point of the claim.
Let $A_{n+1}$ start its computation on the string $w_{n+1}$
at any symbol in the state $n+1$,
as shown in Figure~\ref{f:theorem_2_pow_n_computation_on_w_n_plus_1}.
By the symbols in $\overrightarrow{w_n}$, the automaton moves to the right,
maintaining the state $n+1$;
by the symbols in $\overleftarrow{w_n}$, it moves to the left in $n+1$.
Thus, wherever the automaton begins,
it eventually arrives to the separator $\#$ in the state $n+1$.
Next, the automaton moves to the last symbol of $\overrightarrow{w_n}$ in the state $n$.
Since $h(\overrightarrow{w_n}) = w_n$,
the automaton $A_{n+1}$ operates on $\overrightarrow{w_n}$ as $A_n$ on $w_n$,
and leaves $\overrightarrow{w_n}$ by a transition to the right in the state $1$.
Then $A_{n+1}$ arrives to the separator $\#$ again, now in the state $1$,
and moves to the first symbol of $\overleftarrow{w_n}$ in the state $n$.
As $h(\overleftarrow{w_n}) = w_n$, the automaton $A_{n+1}$ works as $A_n$ on $w_n$,
and leaves $\overleftarrow{w_n}$ (and the whole string $w_{n+1}$)
by moving to the right in the state $1$.
Turning to the second point of the claim,
it should be proved that computations of a certain form
are impossible on any strings shorter than $w_{n+1}$.
Let $w \in \Sigma_{n+1}^*$ be a string,
and let there be a position in $w$,
such that the automaton $A_{n+1}$,
having started at this position in the state $n+1$,
eventually leaves the string $w$ by a transition to the right in the state $1$.
It is claimed that $|w| \geqslant |w_{n+1}|$.
Consider the computation of $A_{n+1}$
leading out of $w$ to the right in the state $1$.
It begins in the state $n+1$,
and the automaton maintains the state $n+1$ at all symbols except $\#$.
In order to reach the state $1$, there should be a moment in the computation on $w$
when the automaton arrives at some symbol $\#$ in the state $n+1$.
Let $u$ be the prefix of $w$ to the left of this $\#$,
and let $v$ be the suffix to the right of this $\#$;
note that the substrings $u$ and $v$ may contain more symbols $\#$.
It is sufficient to prove that $|u| \geqslant |w_n|$ and $|v| \geqslant |w_n|$.
\begin{figure}[t]
\centerline{\includegraphics[scale=1]{theorem_2_pow_n_v0}}
\caption{The partition $w=u\#v$ and the suffix $v_0$ of $v$.}
\label{f:theorem_2_pow_n_u3}
\end{figure}
Consider first the case of the suffix $v$.
Let $v_0$ be the longest suffix of $v$ that does not contain the symbol $\#$;
then the symbol preceding $v_0$ in $w$ is the separator $\#$,
as shown in Figure~\ref{f:theorem_2_pow_n_u3}.
Once the automaton $A_{n+1}$ steps from the last $\#$ in $w$ to the right,
it arrives to the first symbol of $v_0$ in the state $n$
(by the unique transition to the right at $\#$).
The string $v_0$ cannot be empty, because $n \neq 1$.
Once the automaton is inside $v_0$,
it cannot return to $\#$ anymore,
since it has already used the only transition to the right from $\#$,
and cannot use it again without looping.
Therefore, the automaton $A_{n+1}$ starts on the string
$v_0 \in (\Sigma_{n+1}\setminus\{\#\})^*$ in the state $n$,
and, operating as $A_n$,
eventually leaves this string to the right in the state $1$.
Then $|v_0| \geqslant |w_n|$ by the induction hypothesis,
and hence $|v| \geqslant |w_n|$.
\begin{figure}[t]
\centerline{\includegraphics[scale=1]{theorem_2_pow_n_u}}
\caption{The case of computations on $u$ not reaching any separators.}
\label{f:theorem_2_pow_n_u1}
\end{figure}
Now consider the prefix $u$.
Once the automaton $A_{n+1}$ comes in the state $n+1$
to the separator $\#$ between $u$ and $v$,
it moves to the last symbol of $u$ in the state $n$.
In order to leave the string $u$ to the right and proceed further,
it must return to the separator $\#$ in the state $1$,
because there are no transitions by any states $\{2, \ldots, n\}$
at this separator.
If there are no symbols $\#$ in $u$,
or if there are some, but the automaton does not reach them,
then the entire computation of $A_{n+1}$ on $u$
takes place on a certain suffix of $u$ that does not contain $\#$,
as illustrated in Figure~\ref{f:theorem_2_pow_n_u1}.
This computation follows a computation of $A_n$ on a string from $\Sigma_n^*$.
Then, by the induction hypothesis, this suffix is not shorter than $w_n$,
and therefore $|u| \geqslant |w_n|$.
The remaining case is when the automaton
comes to some symbol $\#$ inside the string $u$.
Let $u_0$ be the maximal suffix of $u$ not containing any symbols $\#$,
as in Figure~\ref{f:theorem_2_pow_n_u0}.
The automaton $A_{n+1}$ visits the separator $\#$ to the left of $u_0$,
and then immediately moves from this separator
back to the first symbol of $u_0$ in the state $n$
(the string $u_0$ is non-empty, because it is followed by $\#$, which has no transitions in the state $n$).
Returning back to $\#$ to the left of $u_0$ is not an option,
since the unique transition by $\#$ to the right has been used already.
Therefore, the automaton leaves $u_0$ by a transition to the right,
and comes to the separator $\#$ between $u$ and $v$.
In order to continue the computation, it should come there in the state $1$.
By the induction hypothesis for this computation on $u_0$,
the length of $u_0$ is at least $|w_n|$.
Then the length of the entire $u$ is also at least $|w_n|$.
\begin{figure}[t]
\centerline{\includegraphics[scale=1]{theorem_2_pow_n_u0}}
\caption{The case of computations on $u$ reaching a separator $\#$ inside $u$.}
\label{f:theorem_2_pow_n_u0}
\end{figure}
This confirms that $|w|=|u|+1+|v| \geqslant |w_n|+1+|w_n| = |w_{n+1}|$
and completes the proof.
\end{proof}
\section{Conclusion}
The maximum length of the shortest accepted string
for direction-determinate 2DFA has been determined precisely,
whereas for 2DFA of the general form,
a lower bound of the order $2^n$ has been established.
The known upper bound on this length is of the order $4^n$.
Bounds on the maximum length of shortest strings
for small values of the number of states $n$
are given in Table~\ref{tab:bounds_for_small_N}.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c|r|r|r|r|}
\hline
\multirow{3}{*}{$n$}
& direction-determinate
& \multicolumn{3}{|c|}{2DFA of the general form}
\\
\cline{3-5}
& 2DFA
& lower bound
& computed values
& upper bound \\
& $\binom{n}{\lfloor n/2 \rfloor} - 1$
& $3 \cdot 2^{n-2} - 1$
&
& $\binom{2n}{n+1} - 1$ \\
\hline
2 & 1 & 2 & 2 & 3 \\
\hline
3 & 2 & 5 & 6 & 14 \\
\hline
4 & 5 & 11 & 17 & 55 \\
\hline
5 & 9 & 23 & 32 & 209 \\
\hline
6 & 19 & 47 & & 791 \\
\hline
\end{tabular}
\end{center}
\caption{The maximum length of shortest accepted strings for $n$-state 2DFA, for small $n$.}
\label{tab:bounds_for_small_N}
\end{table}
In the table, besides the theoretical bounds,
there are also some computed values
of the length of shortest strings in some automata.
The example for $n=3$ was obtained by exhaustive search,
while the examples for $n=4$ and $n=5$ were found by heuristic search.
Therefore, the maximum length of the shortest string
for 3-state automata is now known precisely,
for 4-state automata it is at least 17 and possibly more,
and the string length found for 5 states is most likely well below the true maximum.
The computations of the automata found for $n=3$ and $n=4$ on their shortest strings
are presented in Figure~\ref{f:2dfa_shortest_calc_3_4}.
It should be noted that these computed values
exceed the theoretical lower bound $\frac{3}{4} \cdot 2^n - 1$ proved in this paper,
and are much less than the known upper bound $\binom{2n}{n+1} - 1$.
Thus, the bounds for 2DFA of the general form are still in need of improvement.
\begin{figure}[t]
\centerline{\includegraphics[scale=1]{2dfa_shortest_calc_3_4}}
\caption{Automata found by computer programs,
and their shortest strings:
(top) 3 states, string of length 6;
(bottom) 4 states, string of length 17.}
\label{f:2dfa_shortest_calc_3_4}
\end{figure}
This paper is a continuation of a previous work~\cite{GonLewNaz-20_ppt} where the last two authors together with F.Q.~Nazar studied the existence of ground states for the \emph{nonlinear Schr\"odinger equation} (NLS) for systems of orthonormal functions. In the present paper, we exhibit a connection between the corresponding minimisation problem and the family of Lieb-Thirring inequalities~\cite{LieThi-75,LieThi-76,LieSei-09}, which enables us to prove results both for the Lieb-Thirring inequalities and the NLS equation studied in~\cite{GonLewNaz-20_ppt}.
\subsection{Lieb-Thirring inequalities}\label{sec:LT}
The Lieb-Thirring inequality is one of the most important inequalities in mathematical physics. It has been used by Lieb and Thirring~\cite{LieThi-75}
to give a short proof of the stability of matter~\cite{DysLen-67,DysLen-68,Lieb-90,LieSei-09} and it is a fundamental tool for studying large fermionic systems. It is also a source of many interesting mathematical questions.
\subsubsection{The finite rank Lieb-Thirring constant}
Let $d\geq 1$, $\kappa \geq 0$ and $N\geq 1$, and let $L_{\kappa,d}^{(N)}$ be the best constant in the {\em finite rank Lieb-Thirring inequality}
\begin{equation}
\boxed{ \sum_{n=1}^N |\lambda_n(-\Delta+V)|^\kappa \leq L_{\kappa,d}^{(N)} \int_{\R^d} V(x)_-^{\kappa+\frac{d}2}\,{\rd x} }
\label{eq:LT_V_N}
\end{equation}
for all $V\in L^{\kappa+\frac{d}{2}}(\R^d)$, where $a_-=\max(0,-a)$ and $\lambda_n(-\Delta+V)$ denotes the $n$th min-max level of $-\Delta+V$ in $L^2(\R^d)$, which equals the $n$th negative eigenvalue (counted with multiplicity) when it exists and 0 otherwise. Note that $L_{\kappa, d}^{(N)}\leq NL_{\kappa, d}^{(1)}$ is finite by the Gagliardo-Nirenberg inequality, under the assumption that
\begin{equation}
\begin{cases}\kappa\geq\frac12& \text{in $d=1$,}\\
\kappa>0& \text{in $d=2$,}\\
\kappa\geq0& \text{in $d\geq3$.}
\end{cases}
\label{eq:constraint_kappa}
\end{equation}
These restrictions on $\kappa$ are optimal in the sense that $L_{\kappa,d}^{(1)}=\infty$ for $0\leq\kappa<1/2$ in $d=1$ and for $\kappa=0$ in $d=2$. From the definition we have $L_{\kappa, d}^{(N)} \le L_{\kappa, d}^{(N+1)}$. The Lieb-Thirring theorem states that the limit is finite:
\begin{equation}
L_{\kappa,d} :=L_{\kappa,d} ^{(\ii)}= \lim_{N\to\ii}L_{\kappa,d}^{(N)}<\ii\qquad\text{for $\kappa$ as in~\eqref{eq:constraint_kappa}.}
\label{eq:LT_V_form}
\end{equation}
This was proved by Lieb and Thirring~\cite{LieThi-75,LieThi-76} for $\kappa>1/2$ in $d=1$ and for $\kappa>0$ in $d\geq 2$. The critical cases
$\kappa=0$ in $d\geq3$ and $\kappa=1/2$ in $d=1$ are respectively due to Cwikel-Lieb-Rozenbljum~\cite{Cwikel-77,Lieb-76b,Rozenbljum-72} and Weidl~\cite{Weidl-96}.
\subsubsection{Results on the non-optimality of the finite rank Lieb-Thirring constant}
Our first theorem states that for an appropriate range of $\kappa$, the optimal constant in the Lieb-Thirring inequality can never be attained by a potential having finitely many bound states.
\begin{theorem}[Non optimality of the finite-rank case]\label{thm:LT}
Let $d\geq1$ and
\begin{equation}
\begin{cases}
\kappa >\frac32&\text{for $d=1$,}\\
\kappa >1&\text{for $d=2$,}\\
\kappa \geq1&\text{for $d\geq3$.}
\end{cases}
\label{eq:condition_kappa}
\end{equation}
Then there exists an infinite sequence of integers
$N_1 = 1 < N_2 = 2 <N_3<\cdots$
such that
\begin{equation*}
L^{(N_k-1)}_{\kappa,d}<L^{(N_{k})}_{\kappa,d} \qquad\text{for all}\ k\geq 1.
\end{equation*}
In particular, we have
$$\boxed{L_{\kappa,d}^{(N)} < L_{\kappa,d}\qquad\text{for all $N\geq1$.}}$$
In addition, for any $N\geq2$ there exist optimisers $V_N$ for $L_{\kappa,d}^{(N)}$. When $N=N_k$ we have $\lambda_N(-\Delta+V_N)<0$, that is, $-\Delta+V_N$ has at least $N$ negative eigenvalues.
\end{theorem}
As we will discuss below, this result, in particular, disproves the Lieb--Thirring conjecture in dimension $d=2$ in the range $1<\kappa\lesssim 1.165$ and suggests a new scenario for the optimal Lieb-Thirring constant.
It is unclear whether the passage to a subsequence is really necessary or whether the conclusion holds also for $N_k=k$.
The proof of Theorem~\ref{thm:LT} proceeds by studying the \emph{dual formulation} of the Lieb-Thirring inequality~\eqref{eq:LT_V_N} in a similar manner as what was done in~\cite{GonLewNaz-20_ppt} for the nonlinear Schr\"odinger equation. This is explained in detail in the next section, where we also collect more properties of $V_N$.
This duality argument requires the assumption $\kappa\geq 1$. It is an interesting open question whether Theorem~\ref{thm:LT} is valid for all
$\kappa>\max\{0,2-d/2\}$ instead of~\eqref{eq:condition_kappa}. The value of the critical exponent $\max\{0,2-d/2\}$ will be motivated in the next section. In Section~\ref{sec:proof_LT_V_kappa<1} we provide a direct proof for $N=2$ which covers this range of $\kappa$, as stated in the following result.
\begin{theorem}[Non optimality of the $N=1$ case]\label{thm:LT_bis}
Let $d\geq1$ and
\begin{equation}
\kappa >\max\left\{0,2-\frac{d}2\right\}.
\label{eq:condition_kappa_bis}
\end{equation}
Then we have
\begin{equation*}
\boxed{ L^{(1)}_{\kappa,d}<L^{(2)}_{\kappa,d}\leq L_{\kappa,d}.}
\end{equation*}
\end{theorem}
As we will discuss below, this result, in particular, disproves the Lieb--Thirring conjecture in dimension $d=3$ in the range $1/2<\kappa\lesssim 0.8627$.
The conclusion $L^{(1)}_{\kappa,d}< L_{\kappa,d}$ for the appropriate range of $\kappa$ is new for all dimensions $2\leq d\leq 7$. Let us briefly sketch an alternative way of arriving at this strict inequality for $d\geq 8$ using results from~\cite{GlaGroMar-78}. Indeed, it is shown there that the best Cwikel-Lieb-Rozenbljum constant satisfies $L_{0,d}>L_{0,d}^{\rm sc} > L^{(1)}_{0,d}$ in dimensions $d\geq8$; see also~\cite{Frank-20_ppt}. Here, the constant $L^{(1)}_{0,d}$ is defined in terms of the Sobolev optimiser. The monotonicity argument from~\cite{AizLie-78} applies to the one-bound state constant $L^{(1)}_{\kappa,d}$ as well (see Lemma \ref{al} in Appendix~\ref{app:Aizenman-Lieb}) and implies that $L_{\kappa,d}\geq L_{\kappa,d}^{\rm sc}>L^{(1)}_{\kappa,d}$ for all $\kappa\geq0$ and all $d\geq 8$, as claimed. In contrast to this argument, our Theorem~\ref{thm:LT_bis} is not only valid in all dimensions, in the mentioned range of $\kappa$, but it gives the additional information that the two-bound states constant $L_{\kappa,d}^{(2)}$ is above $L_{\kappa,d}^{(1)}$. The mechanism used in our proof is completely different from~\cite{GlaGroMar-78}. There, the authors increased the coupling constant in front of the potential to reach the semi-classical limit. On the other hand, the proof of Theorem~\ref{thm:LT_bis} consists of placing two copies of the one-bound state optimiser far away in the appropriate manner, and computing the resulting exponentially small attraction.
Our proof of Theorem \ref{thm:LT_bis} does not work for $\kappa=0$ in dimensions $d=5,6,7$ (where one still has $2-\frac d2<0$). Understanding this case is an open problem.
\subsubsection{Discussion}
We now discuss some consequences of Theorems~\ref{thm:LT} and~\ref{thm:LT_bis}, in light of a conjecture of Lieb and Thirring in~\cite{LieThi-76}.
There are many results on the Lieb-Thirring best constants $L_{\kappa,d}$. The best estimates currently known are in~\cite{FraHunJexNam-18_ppt}. Let us mention a selection of results pertinent to our theorem and refer to~\cite{Frank-20_ppt} for a detailed discussion of known results and open problems. We introduce the semi-classical constant
\begin{equation} \label{eq:L_sc}
L_{\kappa, d}^{\rm sc} := \frac{\Gamma\left(\kappa+1\right)}{2^d\pi^{\frac{d}2}\,\Gamma\left(\kappa+d/2+1\right)}
\end{equation}
and recall the following known properties:
\begin{itemize}[leftmargin=*]
\item (Lower bound~\cite{LieThi-76}) For all $d \ge 1$, $\kappa \ge 0$, we have
\begin{equation}
L_{\kappa, d} \ge \max \left\{ L_{\kappa, d}^{(1)}, L_{\kappa, d}^{\rm sc} \right\};
\label{eq:LT_conjecture}
\end{equation}
\item (Monotonicity~\cite{AizLie-78}) For all $d \ge 1$ and all $1\leq N\leq\ii$, the map $\kappa\mapsto L^{(N)}_{\kappa,d}/L_{\kappa,d}^{\rm sc}$ is non-increasing;\footnote{Only the case $N=\ii$ is considered in~\cite{AizLie-78} but the argument applies the same to any finite $N\geq1$. When $N=1$ the argument is given in Appendix~\ref{app:Aizenman-Lieb} below, where we also prove that $\kappa\mapsto L^{(1)}_{\kappa,d}/L_{\kappa,d}^{\rm sc}$ is indeed \emph{strictly decreasing}.}
\item ($\kappa=3/2$ in $d=1$~\cite{LieThi-76}) In dimension $d = 1$ with $\kappa = \frac32$, we have, for all $N \in \N$,
\begin{equation}
L_{3/2,1}=L_{3/2,1}^{(N)}=L_{3/2,1}^{\rm sc};
\label{eq:1D_3/2}
\end{equation}
\item ($\kappa=3/2$ in $d\geq1$~\cite{LapWei-00}) For all $d \ge 1$ with $\kappa = \frac32$, we have $L_{3/2,d}=L_{3/2,d}^{\rm sc}$;
\item ($\kappa<3/2$ is not semi-classical in 1D~\cite{LieThi-76}) For $d=1$ and $\kappa<3/2$, we have $L_{\kappa, 1} > L_{\kappa, 1}^{\rm sc}$;
\item ($\kappa<1$ is not semi-classical~\cite{HelRob-90}) For all $d \ge 1$ and $\kappa < 1$, we have $L_{\kappa, d} > L_{\kappa, d}^{\rm sc}$;
\item ($\kappa=0$ in $d\geq7$~\cite{GlaGroMar-78}, see also \cite{Frank-20_ppt}) We have $L_{0,d}> L_{0,d}^{\rm sc}>L_{0,d}^{(1)}$ in dimensions $d\geq 8$ and $L_{0,d}> L^{(1)}_{0,d}> L_{0,d}^{\rm sc}$ in dimension $d=7$.
\end{itemize}
These properties imply that there exists a critical number $1 \le \kappa_{\rm sc}(d) \le \frac32$ such that
\[
L_{\kappa,d} \begin{cases}
=L_{\kappa,d}^{\rm sc}&\text{for $\kappa\geq\kappa_{\rm sc}(d)$,}\\
>L_{\kappa,d}^{\rm sc}&\text{for $\kappa<\kappa_{\rm sc}(d)$.}
\end{cases}
\]
In the original article~\cite{LieThi-76}, Lieb and Thirring conjectured that there is equality in~\eqref{eq:LT_conjecture}: the optimal constant should be given either by the one bound state case, or by semi-classical analysis. This would imply $\kappa_{\rm sc}(d) = \kappa_1(d)$, where $\kappa_1(d)$ is the (unique) crossing point between the two curves $\kappa \mapsto L_{\kappa, d}^{(1)}$ and $\kappa\mapsto L_{\kappa,d}^{\rm sc}$ when it exists; see Corollary \ref{crossing} in Appendix~\ref{app:Aizenman-Lieb}. In the following we use the convention that $\kappa_1(d)=-\ii$ when the two curves do not cross, that is, when $d\geq 8$. Numerically, one finds \cite{LieThi-76}
\begin{equation}
\begin{cases}
\kappa_1(1) =3/2&\text{for $d=1$,}\\
\kappa_1(2) \simeq 1.165&\text{for $d=2$,}\\
\kappa_1(3) \simeq 0.8627&\text{for $d=3$.}
\end{cases}
\label{eq:kappa_c}
\end{equation}
Although the conjecture is still believed to hold in dimension $d=1$, it is now understood that the situation is more complicated in dimensions $d\geq2$. In particular, Theorem~\ref{thm:LT_bis} implies already that
$$\kappa_1(d) < \kappa_{\rm sc}(d) \le \frac32\qquad\text{in dimensions $d\geq2$.}$$
The first inequality is always strict because otherwise we would have $L_{\kappa,d}=L_{\kappa,d}^{\rm sc}=L^{(1)}_{\kappa,d}$ at $\kappa=\kappa_1(d)$ which cannot hold by Theorems~\ref{thm:LT} and~\ref{thm:LT_bis}. We now discuss some further consequences of our results, mostly in the physical dimensions $d\leq3$.
\medskip
\noindent $\bullet$ \emph{In dimension $d = 1$}, since $\kappa_1(1) = 3/2$, we have indeed $\kappa_{\rm sc}(1)=\kappa_1(1) = 3/2$. In addition, at $\kappa=1/2$, the constant is $L_{1/2,1}=L_{1/2,1}^{(1)}=1/2$ as proved in~\cite{HunLieTho-98}, with the optimal $V$ being a delta function. The remaining part of the Lieb-Thirring conjecture, namely, that $L_{\kappa,1}=L_{\kappa,1}^{(1)}$ for all $1/2<\kappa<3/2$, has been confirmed by numerical experiments in~\cite{Levitt-14} but it is still open.
\medskip
\noindent $\bullet$ \emph{In dimension $d = 2$}, we have $1.165\simeq\kappa_1(2)< \kappa_{\rm sc}(2)\leq3/2$ and this is the best we can say at present. Numerical simulations from~\cite{Levitt-14} did not provide any hint of what is happening in the region $1\leq\kappa\lesssim 1.165$. However, our Theorem~\ref{thm:LT} in dimension $d = 2$ shows that $L_{\kappa, 2} > L_{\kappa,2}^{(N)}$ for all $\kappa > 1$ and $N\geq1$. In particular, for $1 < \kappa \lesssim 1.165$, \textbf{we disprove the Lieb-Thirring conjecture that the constant is given by the $N = 1$ optimiser in 2D}. It can indeed not be given by any finite rank optimiser.
\medskip
\noindent $\bullet$ \emph{In dimension $d = 3$}, a system with 5 bound states was numerically found in~\cite{Levitt-14} to be better than the one bound state for $\kappa\gtrsim 0.855$, showing that the one bound state case ceases to be optimal before the critical value $0.8627$ in~\eqref{eq:kappa_c}. Our Theorem~\ref{thm:LT_bis} implies that the one-bound state constant $L^{(1)}_{\kappa,d}$ can indeed not be optimal for all $\kappa>1/2$. This \textbf{disproves the Lieb-Thirring conjecture that the constant is given by the $N = 1$ optimiser for $1/2<\kappa\lesssim 0.8627$ in 3D.}
\medskip
\noindent $\bullet$ \emph{In dimension $d \ge 3$}, a common belief is that $\kappa_{\rm sc}(d)=1$ for all $d\geq3$. The validity of this conjecture would have some interesting physical consequences, for instance an exact lower bound involving the Thomas-Fermi kinetic energy in Density Functional Theory~\cite{LewLieSei-19_ppt}. Our Theorem~\ref{thm:LT} does not contradict this belief, since we prove that the optimal Lieb-Thirring potential cannot have a finite number of bound states. But many other situations are still possible, as we now discuss.
\medskip
Theorem~\ref{thm:LT} suggests to interpret the Lieb-Thirring inequality within the framework of statistical mechanics. For an optimal potential $V_N$ for $L^{(N)}_{\kappa,d}$, we can think of the corresponding $N$ first orthonormal eigenfunctions of $-\Delta+V_N$ as describing $N$ fermions in $\R^d$~\cite[Rmk.~8]{GonLewNaz-20_ppt}. Theorem~\ref{thm:LT} says that in the limit $N\to\ii$, the $N$ particles always attract each other, at least along a subsequence $N_k$. We \textbf{conjecture} that for $\kappa>\max\{2-d/2,0\}$ they will form a large cluster of size proportional to $N^{1/d}$ (if $\int_{\R^d} (V_N)_-^{\kappa+d/2}$ is, for instance, normalised to $N$) and that $V_N$ will converge in the limit to a bounded, but non-integrable potential $V_\ii$. There would then be no optimiser for the Lieb-Thirring constant $L_{\kappa,d}$. The semi-classical constant $L_{\kappa,d}^{\rm sc}$ corresponds to the case where the limiting potential $V_\ii$ is constant over $\R^d$, that is, the system is translation-invariant. In statistical mechanics, this is called a \emph{fluid phase}. In principle, the limiting potential $V_\ii$ could also be a non-trivial periodic function, which is then interpreted as a \emph{solid phase}. We see no obvious physical reasons for discarding this possibility, in particular in low dimensions where periodic systems are ubiquitous~\cite{BlaLew-15}. This mechanism does not seem to have been considered before in the context of Lieb-Thirring inequalities. In particular, we conjecture that the system is in a solid phase for all $2-d/2<\kappa<\kappa_{\rm sc}(d)$ in dimensions $d=2,3$.
\begin{remark}
In dimension $d = 2$, some preliminary numerical tests suggest that the difference $L_{\kappa, 2} - L_{\kappa, 2}^{(1)}$ might be very small in the region $1 < \kappa \lesssim 1.165$. This makes the problem difficult to simulate as we need high precision.
\end{remark}
\subsection{Dual Lieb-Thirring inequalities}\label{sec:duallt}
Our strategy to prove Theorem~\ref{thm:LT} is to study the dual version of the Lieb-Thirring inequality~\eqref{eq:LT_V_N}. This dual version is well known for $\kappa = 1$ and it is often used in practical applications. The dual inequality for $\kappa>1$ appears, for instance, in~\cite{LioPau-93}, but is less known and we briefly recall it in this subsection. There is no known dual problem for $\kappa<1$, except for a certain substitute for $\kappa=0$ in dimensions $d\geq3$~\cite{Frank-14}.
Let $0 \le \gamma = \gamma^*$ be a self-adjoint non-negative operator of $\rank(\gamma)\leq N$, of the form $\gamma = \sum_{j=1}^N n_j | u_j \ket \bra u_j |$ with $u_1,...,u_N$ an orthonormal family in $L^2(\R^d)$. For $1 \le q < \infty$, we denote by
\[
\|\gamma\|_{\gS_q}:=(\tr|\gamma|^q)^{1/q} = \left( \sum_{j=1}^N n_j^q \right)^{1/q}
\]
its $q$-th Schatten norm~\cite{Simon-79}, and use the convention that $\| \gamma \|_{\gS_\ii} = \| \gamma \|$ is the operator norm. The density of $\gamma$ is the function $\rho_\gamma \in L^1(\R^d)$ defined by
\[
\rho_\gamma(x) := \sum_{j=1}^N n_j | u_j (x)|^2,
\]
and the kinetic energy of $\gamma$ is
\[
\Tr( - \Delta \gamma) := \sum_{j=1}^N n_j \int_{\R^d} | \nabla u_j |^2 (x) \rd x
\]
with the convention that $\Tr( - \Delta \gamma) = + \infty$ if $u_j \notin H^1(\R^d)$ for some $j$. Let $1 \le p \le 1 + \frac{2}{d}$ with $d \ge 1$, and let
\begin{equation*}
q := \begin{cases}
\frac{2p+d-dp}{2+d-dp}&\text{for $1\le p<1+\frac2d$,}\\%=\frac{\kappa}{\kappa-1}=\kappa',
+\ii&\text{for $p=1+\frac2d$.}
\end{cases}
\end{equation*}
We denote by $K_{p, d}^{(N)}$ the best (that is, largest possible) constant in the inequality
\begin{equation}
\boxed{ K_{p,d}^{(N)} \norm{\rho_\gamma}_{L^p(\R^d)}^{\frac{2p}{d(p-1)}} \le \norm{\gamma}_{\gS^q}^{\frac{p(2-d)+d}{d(p-1)}}\tr(-\Delta\gamma)}
\label{eq:LT_Schatten}
\end{equation}
valid for all $0 \le \gamma = \gamma^*$ with $\rank(\gamma) \le N$. The fact that $K_{p, d}^{(N)}$ is well-defined with $K_{p, d}^{(N)} > 0$ is a consequence of the next result, together with the Lieb-Thirring theorem.
\begin{lemma}[Duality]\label{lem:duality_N}
Let $1\leq N\leq \ii$, $d\geq1$ and $1\leq p\leq 1+\frac2d$, and set
\[
\kappa := \frac{p}{p-1} - \frac{d}{2}, \quad \text{so that} \quad \frac{\kappa}{\kappa - 1} = q.
\]
Then,
\begin{equation} \label{eq:duality_K_and_L}
K_{p,d}^{(N)} \left( L_{\kappa,d}^{(N)}\right)^{\frac{2}d}=\left(\frac{\kappa}{\kappa+\frac{d}{2}}\right)^{\frac{2\kappa}d}\left(\frac{d}{2\kappa + d}\right).
\end{equation}
\end{lemma}
The lemma says that the inequality~\eqref{eq:LT_Schatten} is dual to the finite-rank Lieb-Thirring inequality~\eqref{eq:LT_V_N}. This is because the density $\rho_\gamma$ is the variable dual to the potential $V$ whereas the density matrix $\gamma$ can be interpreted as the dual of the Schr\"odinger operator $-\Delta+V$. Hence $p$ is the dual exponent of $\kappa+d/2$ and $q$ the one of $\kappa$.
The proof of Lemma~\ref{lem:duality_N}, provided in Appendix~\ref{appendix:proof_duality}, also shows how to relate the corresponding optimisers, assuming they exist. A similar argument, but without the constraint on the rank, can be found for instance in~\cite{LioPau-93}.
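As a concrete illustration of~\eqref{eq:duality_K_and_L}, not needed for the proofs, take $d=1$ and $p=2$, so that $\kappa=3/2$ and $q=3$. By~\eqref{eq:1D_3/2} and~\eqref{eq:L_sc} we have $L^{(N)}_{3/2,1}=L^{\rm sc}_{3/2,1}=\frac{\Gamma(5/2)}{2\sqrt{\pi}\,\Gamma(3)}=\frac{3}{16}$ for every $N$, and~\eqref{eq:duality_K_and_L} then gives
$$K^{(N)}_{2,1}\left(\frac{3}{16}\right)^{2}=\left(\frac{3}{4}\right)^{3}\frac14=\frac{27}{256},\qquad\text{that is,}\qquad K^{(N)}_{2,1}=3\quad\text{for all $1\leq N\leq\ii$.}$$
The same value can be recovered directly from the explicit one-dimensional Gagliardo-Nirenberg constant, see~\eqref{eq:K_1} below.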
We denote
$$
K_{p,d} := \lim_{N\to\infty} K_{p,d}^{(N)} = \inf_{N\geq1} K_{p,d}^{(N)} \,.
$$
This constant is related to the constant $L_{\kappa,d}$ in \eqref{eq:LT_V_form} by
\begin{equation} \label{eq:duality_K_and_L_unconst}
K_{p,d} \left( L_{\kappa,d} \right)^{\frac{2}d}=\left(\frac{\kappa}{\kappa+\frac{d}{2}}\right)^{\frac{2\kappa}d}\left(\frac{d}{2\kappa + d}\right)
\end{equation}
and is the best constant in the inequality
\begin{equation}
\boxed{ K_{p,d} \norm{\rho_\gamma}_{L^p(\R^d)}^{\frac{2p}{d(p-1)}} \le \norm{\gamma}_{\gS^q}^{\frac{p(2-d)+d}{d(p-1)}}\tr(-\Delta\gamma)}
\label{eq:LT_Schatten_unconst}
\end{equation}
valid for all $0 \le \gamma = \gamma^*$.
In Section~\ref{sec:proof_K}, we study the dual problem \eqref{eq:LT_Schatten} and prove the following result which, together with Lemma~\ref{lem:duality_N}, immediately implies Theorem~\ref{thm:LT}.
\begin{theorem}[Existence of optimisers and properties]
\label{thm:K}
Let $d\geq1$ and $1\leq p\leq 1+2/d$.
\medskip
\noindent $(i)$ \textbf{Existence.} For every finite $N\geq1$, the problem $K^{(N)}_{p,d}$ in~\eqref{eq:LT_Schatten} admits an optimiser $\gamma$.
\medskip
\noindent $(ii)$ \textbf{Equation.} After an appropriate normalisation, any optimiser $\gamma$ for $K_{p,d}^{(N)}$ has rank $1\leq R\leq N<\ii$ and can be written in the form
$$\gamma=\sum_{j=1}^Rn_j|u_j\rangle\langle u_j|$$
with
\begin{equation}
n_j=\begin{cases}
\left(\frac{2p}{d(p-1)}\right)^{\frac{1}{p-1}}\frac{2p+d-dp}{d(p-1)}\frac{|\mu_j|^{\frac{1}{q-1}}}{\sum_{k=1}^R|\mu_k|^{\frac{q}{q-1}}}&\text{for $p<1+\frac2d$,}\\
\frac2d \left(\frac{d}{d+2}\right)^{\frac{1}{p-1}}\frac{1}{\sum_{k=1}^R|\mu_k|}&\text{for $p=1+\frac2d$,}
\end{cases}
\label{eq:formula_n_j}
\end{equation}
where the corresponding orthonormal system $(u_1,...,u_R)$ solves the nonlinear Schr\"o\-din\-ger equation
\begin{equation} \label{eq:NLS_in_Lemma}
\forall j = 1, \cdots, R, \quad \Big(-\Delta-\rho_\gamma(x)^{p-1}\Big)u_j=\mu_j\,u_j,
\quad \text{with} \quad
\rho_\gamma = \sum_{j=1}^R n_j | u_j |^2.
\end{equation}
Here $\mu_j$ are the $R$ first negative eigenvalues of $H_\gamma := - \Delta - \rho_\gamma^{p-1}$. In particular, this operator has at least $R$ negative eigenvalues. If $R<N$, then it has exactly $R$ negative eigenvalues. Finally, the potential $V=-\rho_\gamma^{p-1}$ is an optimiser for the finite-rank Lieb-Thirring problem $L^{(N)}_{\kappa,d}$ in~\eqref{eq:LT_V_N}.
\medskip
\noindent $(iii)$ \textbf{Rank.} If, in addition, $p<2$, then there exists an infinite sequence of integers $N_1=1<N_2=2<N_3<\cdots$ so that
$$K^{(N_{k})}_{p,d} < K^{(N_k-1)}_{p,d}$$
and any optimiser for $K^{(N_k)}_{p,d}$ must have rank $R=N_k$. In particular,
$$K_{p,d}<K_{p,d}^{(N)},\qquad \text{for all} \ N\geq1.$$
\end{theorem}
The assertions in $(i)$ and $(ii)$ follow by applying well-known methods from the calculus of variations adapted to the setting of operators; see, for instance, \cite{Solovej-91,Bach-93,FraLieSeiSie-07,Lewin-11}.
For $(iii)$, we use ideas from~\cite{GonLewNaz-20_ppt}, which consist in evaluating the exponentially small interaction between two copies of an optimiser placed far from each other, in order to show that
$$K^{(2N)}_{p,d}<K^{(N)}_{p,d}$$
whenever $K^{(N)}_{p,d}$ admits an optimiser of rank $N$. The proof is provided in Section~\ref{sec:proof_K} below. This argument inspired our proof of Theorem~\ref{thm:LT_bis} for $\kappa<1$ and $N=2$, provided in Section~\ref{sec:proof_LT_V_kappa<1}. There we use the $N=1$ Gagliardo-Nirenberg optimiser to construct a trial state for $N=2$ but we do not prove the existence of an optimal potential.
\subsection{Fermionic Nonlinear Schr\"odinger Equation}
The system of coupled nonlinear equations~\eqref{eq:NLS_in_Lemma} has some similarities with that studied in~\cite{GonLewNaz-20_ppt}, where one has $n_j=1$ instead of~\eqref{eq:formula_n_j}. Here we exhibit a link between the two problems and use this to solve a question left open in~\cite{GonLewNaz-20_ppt}.
In~\cite{GonLewNaz-20_ppt} the authors studied the minimisation problem
\begin{equation}
\boxed{J(N)=\inf \left\{ \tr(-\Delta\gamma)-\frac1p\int_{\R^d}\rho_\gamma(x)^p\,\rd x : \ 0\leq \gamma=\gamma^*\leq1,\ \Tr(\gamma) = N \right\}.}
\label{eq:def_J1}
\end{equation}
Under the assumption $1 < p < 1 + {2}/{d}$, it is proved in~\cite{GonLewNaz-20_ppt} that $- \infty < J(N) < 0$ for all $N > 0$. Under the additional assumption that $p < 2$, it was also shown that there is an infinite sequence of integers $N_1 = 1 < N_2 = 2 < N_3 < \cdots$ such that $J(N_k)$ has a minimiser $\gamma$ of rank $N_k$. This minimiser is a projector of the form $\gamma = \sum_{j=1}^{N_k} | u_j \ket \bra u_j|$, where $u_1,...,u_{N_k}$ form an orthonormal system and solve the \emph{fermionic NLS equation}
\begin{equation}
\forall j = 1, \cdots, N_k, \qquad \left(-\Delta- \rho_\gamma(x)^{p-1}\right)u_j=\mu_j\,u_j, \quad \text{with} \quad
\rho_\gamma = \sum_{i=1}^{N_k}|u_i|^2.
\label{eq:fermionic_NLS}
\end{equation}
Here again $\mu_1 < \mu_2 \le \cdots \le \mu_{N_k} < 0$ are the $N_k$ first eigenvalues of $H_\gamma := - \Delta - \rho_\gamma^{p-1}$. The existence of minimisers for $J(N_k)$ therefore proves the existence of solutions of the fermionic NLS equation~\eqref{eq:fermionic_NLS}, for all $1 \le p < \min\{ 2, 1 + 2/d\}$ and $N = N_k$. In dimension $d = 1$, this does not cover the case $p \in [2, 3)$. In the present paper, we prove the following result for the case $p = 2$, which was announced in~\cite{GonLewNaz-20_ppt} and actually also follows from the analysis in~\cite{LieLla-78}.
\begin{theorem}[Non-existence of minimisers for $d=1$, $p=2$]\label{th:nonExistence_J(N)}
Let $d=1$ and $p=2$. For all $N \ge 1$, we have $J(N)=N\, J(1)$. In addition, for all $N \ge 2$, $J(N)$ admits no minimiser.
\end{theorem}
The theorem is reminiscent of a similar result for the true Schr\"odinger (Lieb-Liniger~\cite{LieLin-63}) model in 1D describing $N$ particles interacting with the delta potential. In the attractive case, only two-particle (singlet) bound states exist~\cite{McGuire-64,Yang-68,LieLla-78}. The same result in the Hartree-Fock case was proved in~\cite{LieLla-78}. The spatial component of the singlet state coincides with our $N=1$ solution.
In the case $N = 1$ and $1<p<1+2/d$, it is proved in~\cite[Lem.~11]{GonLewNaz-20_ppt} that $J(1)$ has the Gagliardo-Nirenberg-Sobolev optimiser $\gamma = | U \ket \bra U |$, where
\begin{equation}
U(x)=m^{-\frac{p-1}{2(1+2/d-p)}-\frac12}\;Q\left(m^{-\frac{p-1}{d(1+2/d-p)}}x\right),\qquad\int_{\R^d}U(x)^2\,{\rm d}x=1,
\label{eq:sol_NLS_mass_1}
\end{equation}
and $Q$ is the unique positive radial solution to the NLS equation
\begin{equation}
-\Delta Q-Q^{2p-1}+Q =0, \quad \text{with mass} \quad m := \int_{\R^d} Q^2.
\label{eq:NLS}
\end{equation}
When $d=1$ and $p=2$, we have the explicit formula
$$U(x)=\frac{1}{2^{\frac32}\cosh(x/4)}.$$
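One may check directly, as a routine computation included only for the reader's convenience, that this function is normalised and solves the $N=1$ case of~\eqref{eq:fermionic_NLS}:
$$-U''-U^3=-\tfrac{1}{16}\,U,\qquad \int_{\R}U(x)^2\,{\rm d}x=1.$$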
Our strategy to prove Theorem~\ref{th:nonExistence_J(N)} for $d=1$ is to relate $J(N)$ to the dual Lieb-Thirring constant $K_{\kappa, 1}^{(N)}$ for $\kappa=3/2$, and use $K_{3/2, 1}^{(N)} = K_{3/2, 1}^{(1)}$. The proof is given in Section~\ref{ssec:proof_nonExistence_J(N)} below.
The same argument gives that if the Lieb-Thirring conjecture $K_{\kappa, 1}^{(N)} = K_{\kappa, 1}^{(1)}$ is true for some $1<\kappa<3/2$, then $J(N)=N\,J(1)$ for $p=(\kappa+1/2)/(\kappa-1/2)$; see Remark \ref{rem:ltconjj}.
\medskip
Even if $J(N)$ has no minimiser for $N\geq 2$ if $d=1$ and $p=2$, one may still wonder whether the fermionic NLS equation~\eqref{eq:fermionic_NLS} possesses orthonormal solutions. We believe there are no other solutions than the $N=1$ case and are able to prove this for $N=2$, using the fundamental fact that the system is completely integrable~\cite{Manakov-74}. The following is stronger than Theorem~\ref{th:nonExistence_J(N)} for $N=2$.
\begin{theorem}[Non-existence of solutions for $p = 2$, $d=1$ and $N = 2$] \label{th:N=2}
Let $\mu_1 \le \mu_2 < 0$, and let $u_1, u_2$ be two square integrable real-valued functions solving
\begin{equation} \label{eq:no_binding_u}
\begin{cases}
- u_1'' - (u_1^2 + u_2^2) u_1 = \mu_1 u_1, \\
- u_2'' - (u_1^2 + u_2^2) u_2 = \mu_2 u_2.
\end{cases}
\end{equation}
If $\| u_1 \|_{L^2(\R)} = \| u_2 \|_{L^2(\R)}=1$, then we have $\mu_1 = \mu_2$ and
\begin{equation}
u_1(x)=\pm \frac{1}{2\cosh\big((x-x_0)/2\big)},\qquad u_2(x)=\pm \frac{1}{2\cosh\big((x-x_0)/2\big)}
\label{eq:formulas_u_1_u_2}
\end{equation}
for some $x_0\in\R$ and two uncorrelated signs $\pm$.
\end{theorem}
The proof can probably be generalised to show that there are no solutions for all $N\geq3$ at $p=2$ but we only address the simpler case $N=2$ here. The proof is given in Section~\ref{ssec:proof_N=2}. More comments about the NLS problem~\eqref{eq:def_J1} can be read in Appendix~\ref{app:NLS_comments}.
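Although this is not a substitute for the proof given in Section~\ref{ssec:proof_N=2}, the formulas~\eqref{eq:formulas_u_1_u_2} can be checked symbolically: with $u_1=u_2=1/(2\cosh(x/2))$ one finds $\mu_1=\mu_2=-\frac14$ in~\eqref{eq:no_binding_u} and $\|u_i\|_{L^2(\R)}=1$. The following Python/SymPy sketch, included only as a sanity check, performs this verification.
\begin{verbatim}
# Sanity check: u_1 = u_2 = 1/(2 cosh(x/2)) solves the system
# with mu_1 = mu_2 = -1/4 and has unit L^2 norm.
import sympy as sp

x = sp.symbols('x', real=True)
u = 1 / (2 * sp.cosh(x / 2))
rho = 2 * u**2                      # rho = u_1^2 + u_2^2
mu = sp.Rational(-1, 4)

residual = -sp.diff(u, x, 2) - rho * u - mu * u
print(sp.simplify(residual.rewrite(sp.exp)))        # expected: 0

mass = sp.integrate(u**2, (x, -sp.oo, sp.oo))
print(sp.simplify(mass))                            # expected: 1
\end{verbatim}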
\subsection*{Structure of the paper}
In Section~\ref{sec:proof_K}, we recall useful facts about the finite rank Lieb-Thirring inequalities and we prove Theorem~\ref{thm:K}, which implies Theorem~\ref{thm:LT}. Section~\ref{sec:proof_LT_V_kappa<1} is devoted to the proof of Theorem~\ref{thm:LT_bis}. We prove Theorem~\ref{th:nonExistence_J(N)} and Theorem~\ref{th:N=2} in Sections~\ref{ssec:proof_nonExistence_J(N)} and~\ref{ssec:proof_N=2}, respectively. The proof of duality (Lemma~\ref{lem:duality_N}) is given in Appendix~\ref{appendix:proof_duality} whereas Appendix~\ref{app:NLS_comments} contains more comments on the NLS model from~\cite{GonLewNaz-20_ppt}. Finally, in Appendix~\ref{app:LT-Sobolev} we compare our results with those in~\cite{HonKwoYoo-19}.
\section{Finite rank Lieb-Thirring inequalities: Proof of Theorem~\ref{thm:K}}
\label{sec:proof_K}
This section contains the proof of Theorem~\ref{thm:K} which, for convenience, we split into several intermediate steps.
\subsection{Preliminaries}
First, we recall some useful facts and we make general comments about the inequality~\eqref{eq:LT_Schatten}, before we actually start the proof of the theorem.
The Gagliardo-Nirenberg inequality states that
\begin{equation}
K_{p,d}^{\rm GN}\left(\int_{\R^d}|u(x)|^{2p}\,\rd x\right)^{\frac2{d(p-1)}}\leq \left(\int_{\R^d}|\nabla u(x)|^2\,\rd x\right)\left(\int_{\R^d}|u(x)|^2\,\rd x\right)^{\frac{(2-d)p+d}{d(p-1)}}
\label{eq:GN}
\end{equation}
for all
\begin{equation*}
\begin{cases}
1< p <+\ii&\text{for $d=1,2$,}\\
1 < p \leq\frac{d}{d-2}&\text{for $d\geq3$,}
\end{cases}
\end{equation*}
with the best constant $K_{p,d}^{\rm GN}>0$. In dimension $d=1$ one can take $p\to+\ii$. The constants $K_{p,1}^{\rm GN}$ and the optimisers are known explicitly in $d=1$ \cite{Nagy-41}. In particular, the optimiser is unique up to translations, dilations and multiplication by a phase factor. As explained, for instance, in~\cite{Tao-06,Frank-13,CarFraLie-14}, by combining the results on existence~\cite{Strauss-77,BerLio-83,Weinstein-83}, symmetry~\cite{GidNiNir-81,AlvLioTro-86} and uniqueness~\cite{Coffman-72,Kwong-89,McLeod-93} one infers that in any $d\geq 2$ as well, there is a unique optimiser $Q$, up to translations, dilations and multiplication by a phase factor. This function can be chosen positive and to satisfy~\eqref{eq:NLS} when $p<1+2/d$. When $p=1+2/d$, it still can be chosen positive and to satisfy the equation in~\eqref{eq:NLS}, even with $\mu=-1$. The integral $\int_{\R^d} Q^2\,\rd x$ will be a dimension-dependent constant.
For an operator $\gamma$ of rank one the inequality~\eqref{eq:LT_Schatten} is equivalent to~\eqref{eq:GN}, hence we obtain
\begin{equation}
K_{p,d}^{(1)}=K_{p,d}^{\rm GN}.
\label{eq:K_1}
\end{equation}
The duality argument from Lemma~\ref{lem:duality_N} shows that
\begin{equation}
L_{\kappa,d}^{(1)}=\left(\frac{2\kappa}{2\kappa+d}\right)^{\kappa+\frac{d}2}\left(\frac{d}{2\kappa}\right)^{\frac{d}2}\left(K_{p,d}^{\rm GN}\right)^{-\frac{d}2}<\ii.
\label{eq:formula_L_1}
\end{equation}
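For instance, in dimension $d=1$ with $p=2$ (hence $\kappa=3/2$), the sharp constant may be computed by inserting $u(x)=1/\cosh(x)$, a constant multiple of the optimiser $Q$, into~\eqref{eq:GN}: using $\int_\R u^2=2$, $\int_\R u^4=\frac43$ and $\int_\R (u')^2=\frac23$, one finds
$$K^{\rm GN}_{2,1}=\frac{\left(\int_\R (u')^2\right)\left(\int_\R u^2\right)^{3}}{\left(\int_\R u^4\right)^{2}}=\frac{\frac23\cdot 8}{\left(\frac43\right)^{2}}=3.$$
Then~\eqref{eq:formula_L_1} gives back $L^{(1)}_{3/2,1}=\left(\frac34\right)^{2}\left(\frac13\right)^{\frac12}\,3^{-\frac12}=\frac{3}{16}=L^{\rm sc}_{3/2,1}$, in agreement with~\eqref{eq:1D_3/2}. This elementary computation is included only as an illustration.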
Our goal in this section is to study the optimisation problem corresponding to
inequality~\eqref{eq:LT_Schatten}, namely
\begin{equation}
\boxed{ K_{p,d}^{(N)} := \inf_{0 \le \gamma = \gamma^* \atop \rank(\gamma) \le N} \dfrac{ \norm{\gamma}_{\gS^q}^{\frac{p(2-d)+d}{d(p-1)}}\tr(-\Delta\gamma)}{\norm{\rho_\gamma}_{L^p(\R^d)}^{\frac{2p}{d(p-1)}}},}
\label{eq:def_K_N}
\end{equation}
where we recall that
\begin{equation}
q := \begin{cases}
\frac{2p+d-dp}{2+d-dp}&\text{for $1<p<1+\frac2d$,}\\%=\frac{\kappa}{\kappa-1}=\kappa',
+\ii&\text{for $p=1+\frac2d$.}
\end{cases}
\label{eq:relation_q}
\end{equation}
Throughout the paper, the constants $p$, $q$ and $\kappa$ are linked by the relations (we set $p' = p/(p-1)$ and $\kappa' = \kappa/ (\kappa - 1)$)
\[
\boxed{\kappa +\frac{d}{2}= p' , \quad \text{and} \quad q = \kappa'.}
\]
Taking~\eqref{eq:def_K_N} to the power $\frac12(p-1)$, and letting $p \to 1$, so that $q \to 1$ as well, we recover the equality
\[
\int_{\R^d}\rho_\gamma(x)\,{\rd}x=\| \rho_\gamma \|_{L^1(\R^d)} = \| \gamma \|_{\gS_1}=\tr(\gamma),
\]
for all $0 \le \gamma = \gamma^*$. On the other hand, taking $p = 1 + 2/d$, so that $q = \infty$, we recover the better known dual Lieb-Thirring inequality
\begin{equation}
K_{1+2/d,d}^{(N)}\int_{\R^d}\rho_\gamma(x)^{1+\frac2d}\,\rd x\leq \|\gamma\|^{\frac2d}\tr(-\Delta\gamma),\qquad \forall 0 \le \gamma=\gamma^*,\ \rank(\gamma)\leq N.
\label{eq:K_forkappa=1}
\end{equation}
We can think of~\eqref{eq:LT_Schatten} as a specific interpolation between these two cases. Note that a direct proof of~\eqref{eq:K_forkappa=1} with $N=+\ii$ can be found in~\cite{Rumin-11}, see also~\cite{LunSol-13,Sabin-16,Nam-18}. The original Lieb-Thirring proof proceeds by proving~\eqref{eq:LT_V_N} and then deducing~\eqref{eq:K_forkappa=1} by duality.
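Let us also record, for orientation, the standard specialisation of~\eqref{eq:K_forkappa=1} to fermionic one-body density matrices: if $0\le\gamma=\gamma^*\le1$, then $\|\gamma\|\le1$ and~\eqref{eq:K_forkappa=1} with $N=\ii$ reduces to the kinetic energy inequality
$$K_{1+2/d,d}\int_{\R^d}\rho_\gamma(x)^{1+\frac2d}\,\rd x\leq \tr(-\Delta\gamma),$$
which is the form used in the proof of the stability of matter mentioned in Section~\ref{sec:LT}.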
\subsection{Proof of $(i)$ on the existence of optimisers}
\label{sssec:step1}
Consider a minimising sequence $(\gamma_n)$ with $\rank(\gamma_n)\leq N$ for~\eqref{eq:def_K_N}, normalised such that
$$\tr(-\Delta\gamma_n)=1,\qquad \|\gamma_n\|_{\gS^q}=1$$
and
\begin{equation}
\lim_{n\to\ii}\int_{\R^d}\rho_{n}(x)^p\,\rd x=\frac1{\left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}}}
\label{eq:min_seq_K}
\end{equation}
with $\rho_n:=\rho_{\gamma_n}$. We have $\|\gamma_n\|\leq \|\gamma_n\|_{\gS^q}=1$ and hence
$$\int_{\R^d}\rho_n(x)\,\rd x=\tr(\gamma_n)\leq N.$$
This proves that $\rho_n$ is bounded in $L^1(\R^d)$. On the other hand, the Hoffmann-Ostenhof~\cite{Hof-77} inequality states that
\begin{equation}
\tr(-\Delta \gamma) \geq \int_{\R^d}|\nabla\sqrt{\rho_\gamma}(x)|^2\,\rd x
\label{eq:Hoffmann-Ostenhof}
\end{equation}
for all $\gamma=\gamma^*\geq0$. This shows that $\sqrt{\rho_n}$ is bounded in $H^1(\R^d)$, hence in $L^r(\R^d)$ for all $2\leq r<2^*$ where $2^*=2d/(d-2)$ in dimension $d\ge3$ and $2^*=+\ii$ in dimensions $d=1,2$, by the Sobolev inequality. In particular, we can choose $r = p$. From~\cite{Lieb-83} or from~\cite[Lem.~I.1]{Lions-84b}, we know that
\begin{itemize}[leftmargin=*]
\item \textbf{either} $\rho_n\to0$ strongly in $L^p(\R^d)$,
\item \textbf{or} there is a $\rho\neq 0$ with $\sqrt\rho\in H^1(\R^d)$, a sequence $\tau_k\in\R^d$ and a subsequence so that $\sqrt{\rho_{n_k}(\cdot-\tau_k)}\wto \sqrt{\rho}\neq0$ weakly in $H^1(\R^d)$.
\end{itemize}
Due to~\eqref{eq:min_seq_K} we know that the first possibility cannot happen and we may assume that $\sqrt\rho_n\wto \sqrt\rho\neq0$, after extraction of a subsequence and translation of the whole system by $\tau_n$. We may also extract a weak-$\ast$ limit for $\gamma_n$ in the trace class topology and infer $\gamma_n\wto\gamma$ where $\rho_\gamma=\rho\neq0$, hence $\gamma\neq0$. By passing to the limit, we have $\gamma=\gamma^*\geq0$ and $\rank(\gamma)\leq N$.
Next we apply Lions' method~\cite{Lions-84} based on the Levy concentration function $Q_n(R)=\int_{|x|\leq R}\rho_n(x)\,\rd x$ and the strong local compactness in $L^2(\R^d)$ to deduce that there exists a sequence $R_n\to\ii$ so that
$$\lim_{n\to\ii}\int_{|x|\leq R_n}\rho_n(x)\,\rd x=\int_{\R^d}\rho(x)\,\rd x,\qquad \lim_{n\to\ii}\int_{R_n\leq |x|\leq 2R_n}\rho_n(x)\,\rd x=0.$$
Let $\chi\in C^\ii_c(\R^d,[0,1])$ be a smooth localisation function such that $\chi\equiv1$ on the unit ball $B_1$ and $\chi\equiv0$ outside of $B_2$. Let $\chi_n(x):=\chi(x/R_n)$ and $\eta_n=\sqrt{1-\chi_n^2}$. Then $\chi_n^2\rho_n\to\rho$ strongly in $L^1(\R^d)\cap L^p(\R^d)$ whereas $|\nabla\chi_n|^2\rho_n\to0$ and $|\nabla\eta_n|^2\rho_n\to0$
strongly in $L^1(\R^d)$. By the IMS formula (see, e.g., \cite[Thm.~3.2]{CycFroKirSim-87}) and Fatou's lemma for operators (see, e.g., \cite[Thm.~2.7]{Simon-79}), we obtain
\begin{align*}
\tr(-\Delta\gamma_n)&=\tr(-\Delta\chi_n\gamma_n\chi_n)+\tr(-\Delta\eta_n\gamma_n\eta_n)-\int_{\R^d}(|\nabla\chi_n|^2+|\nabla\eta_n|^2)\rho_n\\
&=\tr(-\Delta\chi_n\gamma_n\chi_n)+\tr(-\Delta\eta_n\gamma_n\eta_n)+o(1)\\
&\geq\tr(-\Delta\gamma)+\tr(-\Delta\eta_n\gamma_n\eta_n)+o(1).
\end{align*}
From the strong convergence of $\chi_n^2\rho_n$ we have
\begin{align*}
\int_{\R^d}\rho_n^p&=\int_{\R^d}\chi_n^2(\rho_n)^p+\int_{\R^d}(\eta_n^2\rho_n)^p+\int_{\R^d}(\eta_n^2-\eta_n^{2p})\rho_n^p\\
&=\int_{\R^d}\rho^p+\int_{\R^d}(\eta_n^2\rho_n)^p+o(1).
\end{align*}
First, we assume that $q<\ii$, that is, $p<1+2/d$. The Schatten norm satisfies
\begin{align*}
\tr(\gamma_n)^q&=\tr\big(\chi_n(\gamma_n)^q\chi_n\big)+\tr\big(\eta_n(\gamma_n)^q\eta_n\big)\\
&\geq\tr(\chi_n\gamma_n\chi_n)^q+\tr(\eta_n\gamma_n\eta_n)^q\\
&\geq\tr(\gamma)^q+\tr(\eta_n\gamma_n\eta_n)^q+o(1).
\end{align*}
In the second line we have used the inequality $\tr(ABA)^m\leq \tr(A^mB^mA^m)$ for all $m\geq1$~\cite[App.~B]{LieThi-76} to infer
$$\tr(\gamma_n)^q(\chi_n)^2\geq \tr(\gamma_n)^q(\chi_n)^{2q}=\tr(\chi_n)^q(\gamma_n)^q(\chi_n)^q\geq \tr(\chi_n\gamma_n\chi_n)^q.$$
In the third line we used Fatou's lemma in the Schatten space $\gS^q$.
Next, we argue using the method of the missing mass as in~\cite{Lieb-83c}, see also~\cite{Frank-13}, noticing that $K^{(N)}_{p,d}$ can be rewritten as
$$\left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}}=\inf_{\substack{\gamma=\gamma^*\geq0\\ \rank(\gamma)\leq N}}\frac{\Big(\tr(\gamma^q)\Big)^{1-\theta}\Big(\tr(-\Delta\gamma)\Big)^{\theta}}{\int_{\R^d}\rho_\gamma(x)^{p}\,\rd x}$$
with
$$\theta:=\frac{d(p-1)}{2}\in(0,1).$$
Using H\"older's inequality in the form
$$(a_1+a_2)^\theta(b_1+b_2)^{1-\theta}\geq a_1^\theta b_1^{1-\theta}+a_2^\theta b_2^{1-\theta}$$
we find
\begin{align*}
1&=\Big(\tr(\gamma_n^q)\Big)^{1-\theta}\Big(\tr(-\Delta\gamma_n)\Big)^{\theta}\\
&\geq\Big(\tr(\gamma^q)\Big)^{1-\theta}\Big(\tr(-\Delta\gamma)\Big)^{\theta}+\Big(\tr(\eta_n\gamma_n \eta_n)^q\Big)^{1-\theta}\Big(\tr(-\Delta\eta_n\gamma_n\eta_n)\Big)^{\theta}+o(1)\\
&\geq\Big(\tr(\gamma^q)\Big)^{1-\theta}\Big(\tr(-\Delta\gamma)\Big)^{\theta}+\left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}}\int_{\R^d}(\eta_n^2\rho_n)^p+o(1)\\
&=\Big(\tr(\gamma^q)\Big)^{1-\theta}\Big(\tr(-\Delta\gamma)\Big)^{\theta}+1-\left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}}\int_{\R^d}\rho_{\gamma}^p+o(1).
\end{align*}
In the third line we used $\rank(\eta_n\gamma_n\eta_n)\leq N$. Passing to the limit we obtain
$$\left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}}\int_{\R^d}\rho_{\gamma}^p\geq \Big(\tr(\gamma^q)\Big)^{1-\theta}\Big(\tr(-\Delta\gamma)\Big)^{\theta}$$
and therefore $\gamma\neq0$ is an optimiser.
The case $p=1+2/d$ is similar. This time, we use $\|\gamma\|\leq \liminf_{n\to\ii}\|\gamma_n\|=1$ and $\|\eta_n\gamma_n\eta_n\|\leq\|\gamma_n\|=1$ to bound
\begin{align*}
1&=\tr(-\Delta\gamma_n)\\
&\geq\tr(-\Delta\gamma)+\tr(-\Delta\eta_n\gamma_n\eta_n)+o(1)\\
&\geq\|\gamma\|^{\frac2d}\tr(-\Delta\gamma)+\|\eta_n\gamma_n\eta_n\|^{\frac2d}\tr(-\Delta\eta_n\gamma_n\eta_n)+o(1)\\
&\geq\|\gamma\|^{\frac2d}\tr(-\Delta\gamma)+K_{1+2/d,d}^{(N)}\int_{\R^d}(\eta_n^2\rho_n)^{1+\frac2d}+o(1)\\
&=\|\gamma\|^{\frac2d}\tr(-\Delta\gamma)+1-K_{1+2/d,d}^{(N)}\int_{\R^d}\rho_\gamma^{1+\frac2d}+o(1)
\end{align*}
and arrive at the same conclusion that $\gamma$ is an optimiser.
\subsection{Proof of $(ii)$ on the equation}
Let $\gamma$ be an optimiser such that
$$\tr(-\Delta\gamma)=\int_{\R^d}\rho(x)^p\,\rd x=1.$$
This normalisation is always possible by scaling and by multiplying $\gamma$ by a positive constant. Then we have
$$\tr(\gamma^q)=\left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2+d-dp}}.$$
We start with the case $q<\ii$, that is, $p<1+2/d$. Assume that we have a smooth curve of operators $\gamma(t)=\gamma+t\delta+o(t)$ for some $\delta=\delta^*$, with $\gamma(t)=\gamma(t)^*\geq0$ and $\rank(\gamma(t))\leq N$. By expanding we find
\begin{align}
\left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}}&\leq \frac{\Big(\tr(\gamma(t)^q)\Big)^{1-\theta}\Big(\tr(-\Delta\gamma(t))\Big)^{\theta}}{ \int_{\R^d}\rho_{\gamma(t)}^p}\nn\\
&=\left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}}\frac{\Big(1+qt\frac{\tr(\delta\gamma^{q-1})}{\tr(\gamma^q)}+o(t)\Big)^{1-\theta}\Big(1+t\tr(-\Delta\delta)+o(t)\Big)^{\theta}}{1+pt\int_{\R^d}\rho_\delta\rho_\gamma^{p-1}+o(t) }\nn\\
&=\left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}}\!\!\left(1+t\,\theta\,\tr\left[\delta\left(-\Delta-\frac{p}{\theta}\rho_\gamma^{p-1}+\frac{q(1-\theta)}{\theta\tr(\gamma^q)}\gamma^{q-1}\right)\right]+o(t)\right).\label{eq:derivative}
\end{align}
Now take $\gamma(t):=e^{itH}\gamma e^{-itH}=\gamma+it[H,\gamma]+o(t)$ for some (smooth and finite-rank) self-adjoint operator $H$ and all $t\in\R$. Since $\rank(\gamma(t))=\rank(\gamma)$, we deduce from~\eqref{eq:derivative} after varying over all $H$ that
$$\left[-\Delta-\frac{p}{\theta}\rho_\gamma^{p-1}\,,\,\gamma\right]=0.$$
Hence $\gamma$ commutes with the mean-field operator $H_\gamma:=-\Delta-p\rho_\gamma^{p-1}/\theta$. We can therefore write $\gamma=\sum_{j=1}^R n_j|u_{k_j}\rangle\langle u_{k_j}|$ for some eigenvectors $u_{k_j}$ of $H_\gamma$ (with eigenvalue $\mu_{k_j}$) and some $n_j>0$. In particular, $H_\gamma$ admits at least $R$ eigenvalues.
Using now $\gamma(t)=\gamma+t\delta$ for a $\delta$ supported on the range of $\gamma$ and for $t$ small enough in~\eqref{eq:derivative}, we find that
$$-\Delta-\frac{p}{\theta}\rho_\gamma^{p-1}+\frac{(1-\theta)q}{\theta\tr(\gamma^q)}\gamma^{q-1}\equiv0\qquad\text{on the range of $\gamma$.}$$
Evaluating this identity on $u_{k_j}$ we infer that
$$
\mu_{k_j} + \frac{(1-\theta)q}{\theta\tr(\gamma^q)} n_j^{q-1} =0.
$$
This shows that $\mu_{k_j}<0$ and
$$n_j = \left(\frac{\theta\tr(\gamma^q)}{(1-\theta)q}\right)^{\frac1{q-1}}\ | \mu_{k_j} |^{\frac{1}{q - 1}}.$$
Since $\gamma$ is assumed to be of rank $R$, we in particular deduce that $H_\gamma$ has at least $R$ negative eigenvalues.
Next, we show that the $\mu_{k_j}$ are necessarily the $R$ first eigenvalues. Assume that this is not the case, that is, some eigenvector of $H_\gamma$ associated with one of the $R$ lowest eigenvalues does not belong to the range of $\gamma$. Then there is $1 \le j \le R$ with $u_{k_j} \neq u_j$, $k_j > j$ and $u_j$ not in the range of $\gamma$. Consider the new operator
$$\gamma':=\gamma - n_j |u_{k_j}\rangle\langle u_{k_j}|+ n_j |u_j\rangle\langle u_j|:=\gamma+\delta,$$
which has the same rank and the same $\gS^q$ norm as $\gamma$. We have by convexity
$$\int_{\R^d}\rho_{\gamma'}^p\geq 1+pn_{j}\int_{\R^d}\rho_\gamma^{p-1}\left(|u_j|^2-|u_{k_j}|^2\right)$$
and
\begin{align*}
\tr(-\Delta\gamma')&=1+n_{j}\pscal{u_j,-\Delta u_j}-n_{j}\pscal{u_{k_j},-\Delta u_{k_j}} \\
&=1+\frac{pn_{j}}{\theta}\int_{\R^d}\rho^{p-1}_\gamma \big(|u_j|^2-|u_{k_j}|^2\big)+\left(\mu_j-\mu_{k_j}\right) n_j \\
&<1+\frac{pn_{j}}{\theta}\int_{\R^d}\rho^{p-1}_\gamma \big(|u_j|^2-|u_{k_j}|^2\big)
\end{align*}
since $\mu_j<\mu_{k_j}$. This gives
\begin{align*}
\frac{\Big(\tr(\gamma')^q\Big)^{1-\theta}\Big(\tr(-\Delta\gamma')\Big)^\theta}{\int_{\R^d}\rho_{\gamma'}^p}
&<\left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}}\frac{\left(1+\frac{pn_{j}}{\theta}\int_{\R^d}\rho^{p-1}_\gamma \big(|u_j|^2-|u_{k_j}|^2\big)\right)^\theta}{1+pn_{j}\int_{\R^d}\rho_\gamma^{p-1}\left(|u_j|^2-|u_{k_j}|^2\right)} \\
&\leq \left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}},
\end{align*}
a contradiction. Hence $\mu_{k_j}=\mu_j$.
Finally, when $R<N$ and $\mu_{R+1}<0$, we can consider the operator
$$\gamma(t)=\gamma+t|u_{R+1}\rangle\langle u_{R+1}|$$
with $t\geq 0$, which has rank $R+1\leq N$. From~\eqref{eq:derivative} we obtain
\begin{align*}
\left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}}&\leq \left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}}\bigg(1+o(t)\\
&\qquad +t\theta\pscal{u_{R+1},\left(-\Delta-\frac{p}{\theta}\rho_\gamma^{p-1}+\frac{(1-\theta)q}{\theta\tr(\gamma^q)}\gamma^{q-1}\right)u_{R+1}}\bigg)\\
&\leq \left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}}\left(1+t\mu_{R+1}\theta+o(t)\right),
\end{align*}
another contradiction. Hence $H_\gamma$ cannot have more than $R$ negative eigenvalues when $R<N$.
As a conclusion, we have shown that
$$\gamma=\left(\frac{\theta\tr(\gamma^q)}{q(1-\theta)}\right)^{\frac{1}{q-1}}\sum_{j=1}^R|\mu_j|^{\frac{1}{q-1}}|u_j\rangle\langle u_j|,$$
with
$$\left(-\Delta-\frac{p}{\theta}\rho_\gamma(x)^{p-1}\right)u_j=\mu_j\,u_j,\qquad j=1,...,R.$$
Taking the trace of $\gamma^q$ we find that
$$\frac{\theta\tr(\gamma^q)}{q(1-\theta)}=\left(\frac{q(1-\theta)}{\theta} \dfrac{1}{\sum_{j=1}^R|\mu_j|^{\frac{q}{q-1}} }\right)^{q-1}$$
and thus
$$\gamma=\frac{q(1-\theta)}{\theta\sum_{j=1}^R|\mu_j|^{\frac{q}{q-1}}} \sum_{j=1}^R|\mu_j|^{\frac{1}{q-1}}|u_j\rangle\langle u_j|.$$
Replacing $\gamma$ by $(p/\theta)^{\frac1{p-1}}\gamma$ we find the equation mentioned in the statement.
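For the reader's convenience, let us record this last rescaling explicitly (all quantities as above): if $\gamma'=(p/\theta)^{\frac1{p-1}}\gamma$, then $\rho_{\gamma'}=(p/\theta)^{\frac1{p-1}}\rho_\gamma$, so that
$$\frac{p}{\theta}\,\rho_\gamma^{p-1}=\rho_{\gamma'}^{p-1},$$
and the factor $p/\theta$ in front of the nonlinear potential is absorbed into $\gamma'$.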
\medskip
The arguments for $q=+\ii$ ($p=1+2/d$) are similar. We start with a minimiser normalised so that
$$\int_{\R^d}\rho_\gamma^{1+\frac2d}=\tr(-\Delta\gamma)=1,\qquad \|\gamma\|^{\frac2d}=K_{1+2/d,d}^{(N)}.$$
The first perturbation $\gamma(t):=e^{itH}\gamma e^{-itH}=\gamma+it[H,\gamma]+o(t)$ leaves the operator norm invariant and provides the equation $[-\Delta-p\rho_\gamma^{2/d}\,,\,\gamma]=0$, hence again $\gamma=\sum_{j=1}^Rn_j|u_{k_j}\rangle\langle u_{k_j}|$ with $H_\gamma u_{k_j}=\mu_{k_j}u_{k_j}$ and $H_\gamma=-\Delta-p\rho_\gamma^{2/d}$.
In order to prove that $\mu_{k_j}<0$, we consider the operator
$$\tilde\gamma:=\gamma-n_j|u_{k_j}\rangle\langle u_{k_j}|$$
which has one less eigenvalue and satisfies $\|\tilde\gamma\|^{2/d}\leq \|\gamma\|^{2/d}=K^{(N)}_{1+2/d,d}$. We find
\begin{align*}
K^{(N)}_{1+2/d,d}\leq K^{(N-1)}_{1+2/d,d}&\leq \frac{\|\tilde \gamma\|^{\frac2d}\tr(-\Delta\tilde \gamma)}{ \int_{\R^d}\rho_{\tilde\gamma}^{1+\frac2d}}\nn\\
&\leq K^{(N)}_{1+2/d,d}\frac{\tr(-\Delta\tilde \gamma)}{ \int_{\R^d}\rho_{\tilde\gamma}^{1+\frac2d}}\nn\\
&=K^{(N)}_{1+2/d,d}\frac{1-n_j\int_{\R^d}|\nabla u_{k_j}|^2}{\int_{\R^d}\big(\rho_\gamma-n_j|u_{k_j}|^2)^{1+\frac2d}}\nn\\
&=K^{(N)}_{1+2/d,d}\frac{1-n_j\mu_{k_j}-n_j\frac{d+2}{d}\int_{\R^d}\rho_\gamma^{\frac2d}|u_{k_j}|^2}{\int_{\R^d}\big(\rho_\gamma-n_j|u_{k_j}|^2)^{1+\frac2d}}.\label{eq:decrease_occ}
\end{align*}
Simplifying by $K^{(N)}_{1+2/d,d}>0$, this gives the estimate
\begin{equation}
\mu_{k_j}\leq -\frac1{n_j}\left(\int_{\R^d}\big(\rho_\gamma-n_j|u_{k_j}|^2)^{1+\frac2d}-\int_{\R^d}\rho_{\gamma}^{1+\frac2d}+n_j\frac{d+2}{d}\int_{\R^d}\rho_\gamma^{\frac2d}|u_{k_j}|^2\right)<0
\label{eq:estim_mu_j}
\end{equation}
where the final strict inequality follows from the strict convexity of $t\mapsto t^{1+2/d}$: pointwise, $\big(\rho_\gamma-n_j|u_{k_j}|^2\big)^{1+\frac2d}\geq \rho_\gamma^{1+\frac2d}-\frac{d+2}{d}\,\rho_\gamma^{\frac2d}\,n_j|u_{k_j}|^2$, with strict inequality wherever $u_{k_j}\neq0$.
Hence $\gamma$ has its range into the negative spectral subspace of $H_\gamma$, an operator which thus possesses at least $R$ negative eigenvalues.
Next we show that $n_j=\|\gamma\|$ for all $j=1,...,R$. Assume on the contrary that $0<n_j<\|\gamma\|$ (this can only happen when $R\geq2$). Taking $\gamma(t)=\gamma+t|u_{k_j}\rangle\langle u_{k_j}|$ which has the same operator norm for $t$ small enough, we obtain
\begin{align}
K^{(N)}_{1+2/d,d}\leq \frac{\|\gamma(t)\|^{\frac2d}\tr(-\Delta\gamma(t))}{ \int_{\R^d}\rho_{\gamma(t)}^{1+\frac2d}}&=K^{(N)}_{1+2/d,d}\frac{1+t\int_{\R^d}|\nabla u_{k_j}|^2}{\int_{\R^d}\big(\rho_\gamma+t|u_{k_j}|^2)^{1+\frac2d}}\nn\\
&=K^{(N)}_{1+2/d,d}\frac{1+t\mu_{k_j}+pt\int_{\R^d}\rho_\gamma^{p-1}|u_{k_j}|^2}{\int_{\R^d}\big(\rho_\gamma+t|u_{k_j}|^2)^{1+\frac2d}}\nn\\
&=K^{(N)}_{1+2/d,d}\left(1+t\mu_{k_j}+o(t)\right)\label{eq:increase_occ}
\end{align}
which is a contradiction since $\mu_{k_j}<0$, as we have seen. We conclude that $n_j=\|\gamma\|$ for all $j=1,...,R$. The argument for showing that $\mu_{k_1},...,\mu_{k_R}$ are the $R$ first eigenvalues is exactly the same as before.
\subsection{Proof of $(iii)$ on the rank of optimisers}
In this subsection, we prove the following result.
\begin{proposition}[Binding]\label{prop:binding}
Let $1<p\leq 1+2/d$ with $p<2$ and assume that $K^{(N)}_{p,d}$ admits an optimiser $\gamma$ of rank $N$. Then
$K^{(2N)}_{p,d}<K^{(N)}_{p,d}$.
\end{proposition}
The proof of $(iii)$ in Theorem~\ref{thm:K} follows immediately from Proposition~\ref{prop:binding}, arguing as follows. Since $K^{(1)}_{p,d}$ has an optimiser, the proposition shows that $K^{(2)}_{p,d}<K^{(1)}_{p,d}$, hence we can take $N_2=2$. By Step $(i)$ there is an optimiser for $K^{(2)}_{p,d}$ and by Step $(ii)$ the strict inequality $K^{(2)}_{p,d}<K^{(1)}_{p,d}$ implies that the optimisers for $K^{(2)}_{p,d}$ all have rank two. Hence Proposition~\ref{prop:binding} implies that $K^{(4)}_{p,d}<K^{(2)}_{p,d}$. If $K^{(3)}_{p,d}<K^{(2)}_{p,d}$ we take $N_3=3$ and otherwise we take $N_3=4$. We then go on by induction to obtain the assertion of $(iii)$. Hence we now concentrate on proving Proposition~\ref{prop:binding}.
\begin{proof}[Proof of Proposition~\ref{prop:binding}]
We follow ideas from~\cite[Section~2.4]{GonLewNaz-20_ppt}. Let $\gamma := \sum_{j=1}^{N}n_j | u_j \ket \bra u_j |$ be a minimiser of rank $N$ for $K^{(N)}_{p,d}$, normalised so that $\tr(-\Delta\gamma)=\int_{\R^d}\rho_\gamma^p=1$. The functions $u_j$ satisfy
$$\left(-\Delta-\frac{p}{\theta}\left(\sum_{j=1}^Nn_j|u_j|^2\right)^{p-1}\right)u_j=\mu_j\,u_j$$
with $n_j=c|\mu_j|^{1/(q-1)}$. Note that the first eigenfunction $u_1$ is positive, hence the nonlinear potential never vanishes. By usual regularity arguments, this shows that the $u_j$ are $C^\ii$ and decay exponentially at infinity. For $R>0$, we set $u_{j,R}(x) := u_j(x - R e_1)$ where $e_1=(1,0,...,0)$, and we introduce the Gram matrix
\[
S_R = \begin{pmatrix}
\bbI_N & E^R \\
(E^{R})^* & \bbI_N
\end{pmatrix}, \quad \text{with} \quad E_{ij}^R := \bra u_i, u_{j,R} \ket = \int_{\R^d} u_i(x) u_j(x - Re_1) \rd x.
\]
Since the functions $u_i$ are exponentially decaying, $E^R$ goes to $0$ as $R\to\infty$, and the overlap matrix $S_R$ is invertible for $R$ large enough. We then let
$$\begin{pmatrix}
\psi_{1,R}\\ \vdots\\ \psi_{2N,R}
\end{pmatrix}=(S_R)^{-\frac12}\begin{pmatrix}u_1\\ \vdots\\ u_N\\ u_{1,R}\\ \vdots\\ u_{N,R}\end{pmatrix}$$
and
$$\gamma_R = \sum_{j=1}^{N} n_j\Big(| \psi_{j,R} \ket \bra \psi_{j,R} |+| \psi_{N+j,R} \ket \bra \psi_{N+j,R} |\Big).$$
We have
$$\tr(\gamma_R^q)=2\tr(\gamma^q),\qquad \|\gamma_R\|=\|\gamma\|.$$
Expanding as in~\cite{GonLewNaz-20_ppt} using
\[
(S_R)^{-1/2} = \begin{pmatrix}
\bbI_N & 0 \\
0 & \bbI_N
\end{pmatrix} - \frac12\begin{pmatrix}
0 & E^R \\
(E^R)^* & 0
\end{pmatrix} + \frac38\begin{pmatrix}
E^R (E^R)^* & 0 \\
0 & (E^R)^* E^R
\end{pmatrix} + O(e_R^3)
\]
for
$$e_R:=\max_{i,j}\int_{\R^d} |u_i(x)|\,|u_j(x - Re_1)| \rd x,$$
we obtain after a long calculation
\begin{align*}
\left(K^{(2N)}_{p,d}\right)^{\frac{d(p-1)}{2}} &\leq \left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}} \frac{2^{1-\theta}\big(\tr(-\Delta\gamma_R)\big)^\theta}{\int_{\R^d}\rho_{\gamma_R}^p}\\
&=\left(K^{(N)}_{p,d}\right)^{\frac{d(p-1)}{2}} \left(1-\frac12\int_{\R^d} \left( (\rho+\rho_R)^p-\rho^p-\rho_R^p \right) +O(e_R^2)\right)
\end{align*}
with $\rho(x)=\rho_\gamma(x)$ and $\rho_R(x)=\rho(x-Re_1)$.
From the arguments in~\cite[Section~2.4]{GonLewNaz-20_ppt} we know that
\begin{equation}
\label{eq:boundgln}
\int_{\R^d} \left( (\rho+\rho_R)^p-\rho^p-\rho_R^p \right) \geq c R^{p(1-d)}e^{-p\sqrt{|\mu_N|}R}
\end{equation}
and by~\cite[Lemma~21]{GonLewNaz-20_ppt} we have
$$e_R\leq C(1+R^d)e^{-\sqrt{|\mu_N|}R}.$$
Since $p<2$ by assumption, the error term $e_R^2=O\big((1+R^d)^2\re^{-2\sqrt{|\mu_N|}R}\big)$ is negligible compared with the right-hand side of~\eqref{eq:boundgln}, and we conclude, as we wanted, that $K^{(2N)}_{p,d}<K^{(N)}_{p,d}$.
\end{proof}
\section{Binding for $\kappa<1$ and $N=2$: Proof of Theorem~\ref{thm:LT_bis}}
\label{sec:proof_LT_V_kappa<1}
In this section we provide the proof of Theorem~\ref{thm:LT_bis}. Define $p$ by $p'=\kappa+d/2$, let $Q$ be the radial Gagliardo--Nirenberg minimiser, solution to~\eqref{eq:NLS}, and set $m:=\int_{\R^d} Q^2\,dx$.
\subsection{Some properties of $Q$}
First we relate our constants for $N = 1$ to $Q$. We have the Pohozaev identities
\begin{equation}
\label{eq:pohozaev}
\begin{cases}
\dps \int_{\R^d} |\nabla Q|^2\,dx - \int_{\R^d} Q^{2p}\,dx = -m,\\[0.4cm]
\dps \left( \frac d2-1 \right)\int_{\R^d} |\nabla Q|^2\,dx - \frac d{2p} \int_{\R^d} Q^{2p}\,dx = - \frac d2 m \,.
\end{cases}
\end{equation}
These follow by multiplying the equation~\eqref{eq:NLS} by $Q$ and by $x\cdot\nabla Q$, respectively, and integrating over $\R^d$. This gives the identity
\begin{equation} \label{eq:intQ2p}
\frac{m}{\int_{\R^d} Q^{2p}} = 1 - \frac{d}{2} \frac{p-1}{p} = \frac{p-1}{p} \kappa.
\end{equation}
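For the reader's convenience, here is the short elimination behind~\eqref{eq:intQ2p}. Writing $T:=\int_{\R^d}|\nabla Q|^2$ and $P:=\int_{\R^d}Q^{2p}$ (abbreviations used only in this remark), the identities~\eqref{eq:pohozaev} read $T-P=-m$ and $(\frac d2-1)T-\frac d{2p}P=-\frac d2\,m$. Subtracting $(\frac d2-1)$ times the first from the second yields
$$\Big(\frac d2-1-\frac d{2p}\Big)P=-m,\qquad\text{that is}\qquad \frac mP=1-\frac d2\,\frac{p-1}{p},$$
and since $p'=\frac p{p-1}=\kappa+\frac d2$, the right-hand side equals $\frac{p-1}{p}\,\kappa$.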
On the other hand, setting $V_Q := -Q^{2(p-1)}$, we see that $Q$ is an eigenvector of $-\Delta + V_Q$ (with corresponding eigenvalue $-1$), and, by optimality of $V_Q$ for $L^{(1)}_{\kappa, d}$, we have
\begin{equation} \label{eq:exactL1_withQ}
L^{(1)}_{\kappa, d} = \frac{1}{\int_{\R^d} | V_Q |^{\kappa + \frac{d}{2}}} = \frac{1}{\int_{\R^d} Q^{2p}}.
\end{equation}
Finally, it is well known that there is $C > 0$ so that
\begin{equation} \label{eq:boundQ}
\frac{1}{C} \frac{\re^{ - | x |}}{1 + | x |^{\frac{d-1}{2}}} \le Q(x) \le C \frac{\re^{ - | x |}}{1 + | x |^{\frac{d-1}{2}}}.
\end{equation}
\subsection{Test potential for $L^{(2)}_{\kappa, d}$}
We now construct a test potential to find a lower bound for $L^{(2)}_{\kappa, d}$. For $R>0$, we let
$$
Q_\pm(x) = Q\big(x\pm\tfrac R2 e_1\big)
$$
with $e_1=(1,0,...,0)$. Inspired by the dual problem studied in the previous section, we consider the potential
$$
\boxed{V = - \left(Q_+^2 + Q_-^2\right)^{p-1} \,.}
$$
It is important here that we add the two densities and not the corresponding potentials. We do not see how to make our proof work if we took $V = -Q_+^{2(p-1)} - Q_-^{2(p-1)}$ instead.
We introduce the quantity
\begin{equation}
\label{eq:def:A}
A = A(R) := \frac12 \int_{\R^d} \left( (Q_+^2+Q_-^2)^p - Q_+^{2p} - Q_-^{2p} \right)dx >0\,.
\end{equation}
Due to the inequality~\eqref{eq:boundQ}, $A$ goes (exponentially fast) to $0$ as $R$ goes to infinity. The key estimate is the following.
\begin{lemma}
We have, as $R \to \infty$,
\[
L^{(2)}_{\kappa, d} \ge \frac{|\lambda_1(-\Delta+V)|^\kappa + |\lambda_2(-\Delta+V)|^\kappa}{\int_{\R^d} |V |^{\kappa+\frac{d}{2}}\,dx}
= L^{(1)}_{\kappa, d} \left(1 + \frac{\kappa}{pm} A + o(A) \right).
\]
\end{lemma}
Theorem~\ref{thm:LT_bis} follows immediately, since the leading-order correction is positive.
\begin{proof}
First, we bound $A$ from below similarly to \eqref{eq:boundgln}. Indeed, noting that the integrand of $A$ is nonnegative and bounding it from below using~\eqref{eq:boundQ} in a neighborhood of the origin, we find
\begin{equation} \label{eq:boundA}
A \ge \frac12 \int_{\cB(0, 1)} \left( (Q_+^2+Q_-^2)^p - Q_+^{2p} - Q_-^{2p} \right) \ge c \dfrac{\re^{ - p R}}{R^{p(d-1)}}.
\end{equation}
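Indeed, for $x\in\cB(0,1)$ we have $|x\pm\tfrac R2 e_1|\leq\tfrac R2+1$, so~\eqref{eq:boundQ} gives $Q_\pm(x)\geq c\,\re^{-R/2}R^{-\frac{d-1}2}$ for $R$ large. Combined with the elementary inequality $(a+b)^p-a^p-b^p\geq(2^p-2)\min(a,b)^p$ for $a,b\geq0$ and $p>1$, this shows that the integrand in~\eqref{eq:boundA} is bounded below on $\cB(0,1)$ by a constant times $\re^{-pR}R^{-p(d-1)}$.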
Next, we turn to the denominator appearing in the lemma. We have
$$
\int_{\R^d} | V |^{\kappa+\frac{d}{2}}\,dx = \int_{\R^d} \left(Q_+^2 + Q_-^2\right)^{p} = 2 \int_{\R^d} Q^{2p}\,dx + 2A.
$$
Together with~\eqref{eq:exactL1_withQ}, this gives
\begin{align*}
\dfrac{1}{\int_{\R^d} | V |^{\kappa+\frac{d}{2}}\,dx} & = \frac{1}{2} \frac{1}{\int_{\R^d} Q^{2p}} \left( 1 - \frac{A}{\int_{\R^d} Q^{2p}} + O(A^2) \right) \\
& = \frac{L^{(1)}_{\kappa, d}}{2} \left( 1 - \frac{A}{\int_{\R^d} Q^{2p}} + O(A^2) \right).
\end{align*}
Finally, we evaluate the numerator. We set $E := E(R) = \int_{\R^d} Q_+ Q_-\,dx$. We have $E \to 0$ as $R\to\infty$, so for $R$ large enough, we have $|E|<m$, and the two functions $\psi^{(\pm)}$ defined by
$$
\begin{pmatrix}
\psi^{(+)} \\ \psi^{(-)}
\end{pmatrix}
= \begin{pmatrix}
m & E \\ E & m
\end{pmatrix}^{-1/2}
\begin{pmatrix}
Q_+ \\ Q_-
\end{pmatrix}
$$
are orthonormal in $L^2(\R^d)$. Let
$$
\mathcal H := \begin{pmatrix}
\langle \psi^{(+)},(-\Delta+V)\psi^{(+)}\rangle & \langle \psi^{(+)},(-\Delta+V)\psi^{(-)}\rangle \\
\langle \psi^{(-)},(-\Delta+V)\psi^{(+)}\rangle & \langle \psi^{(-)},(-\Delta+V)\psi^{(-)}\rangle
\end{pmatrix} \,.
$$
By the variational principle, the two lowest eigenvalues of $-\Delta+V$ are not larger than the corresponding eigenvalues of $\mathcal H$, and therefore
$$
|\lambda_1(-\Delta+V)|^\kappa + |\lambda_2 (-\Delta+V)|^\kappa \geq \Tr \;\mathcal H_-^\kappa \,.
$$
We have
$$
\mathcal H = h \bbI_2 + \begin{pmatrix}
0 & \delta \\ \delta & 0
\end{pmatrix},
$$
where
$$
h := \langle \psi^{(+)},(-\Delta+V)\psi^{(+)}\rangle = \langle \psi^{(-)},(-\Delta+V)\psi^{(-)}\rangle
$$
and
$$
\delta := \langle \psi^{(+)},(-\Delta+V)\psi^{(-)}\rangle = \langle \psi^{(-)},(-\Delta+V)\psi^{(+)}\rangle \,.
$$
We have $h\to -1$ and $\delta\to 0$ as $R\to\infty$, and therefore
$$
\Tr\; \mathcal H_-^\kappa = 2|h|^{\kappa} - \kappa |h|^{\kappa-1} \Tr \begin{pmatrix}
0 & \delta \\ \delta & 0
\end{pmatrix}
+ O(\delta^2)
= 2|h|^{\kappa} + O(\delta^2) \,.
$$
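Indeed, the eigenvalues of $\mathcal H$ are $h\pm\delta$, which are both negative for $R$ large since $h\to-1$ and $\delta\to0$; expanding $|h\pm\delta|^\kappa=|h|^\kappa\mp\kappa|h|^{\kappa-1}\delta+O(\delta^2)$ and summing, the first-order terms cancel, which is the cancellation expressed by the vanishing trace above.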
It remains to expand $h$ and to bound $\delta$. We begin with $h$. We find
\begin{align*}
|\nabla\psi^{(+)}|^2 + |\nabla\psi^{(-)}|^2 & = \frac{m}{m^2-E^2} \left( |\nabla Q_+|^2 + |\nabla Q_-|^2 \right) - \frac{2E}{m^2-E^2} \nabla Q_+ \cdot \nabla Q_-.
\end{align*}
Integrating and using~\eqref{eq:NLS} gives
\begin{align*}
\int_{\R^d} \left( |\nabla\psi^{(+)}|^2 + |\nabla\psi^{(-)}|^2 \right)dx & = -2 + \frac{2m}{m^2-E^2} \int_{\R^d} Q^{2p}\,dx \\
& \ \quad - \frac{E}{m^2-E^2} \int_{\R^d} \left( Q_+^{2p-2}+Q_-^{2p-2}\right)Q_+ Q_- \,dx \,.
\end{align*}
Similarly,
$$
(\psi^{(+)})^2 + (\psi^{(-)})^2 = \frac{m}{m^2-E^2} \left( Q_+^2 + Q_-^2 \right) - \frac{2E}{m^2-E^2} Q_+ Q_-
$$
and therefore
\begin{align*}
h & = \frac12 \left( \langle \psi^{(+)},(-\Delta+V)\psi^{(+)}\rangle + \langle \psi^{(-)},(-\Delta+V)\psi^{(-)}\rangle \right) \\
& = -1 - \frac{m}{m^2-E^2} A + \frac{E}{m^2-E^2} B \,,
\end{align*}
where $A$ was defined in~\eqref{eq:def:A}, and where
$$
B=B(R) := \int_{\R^d} Q_+Q_- \left( (Q_+^2+Q_-^2)^{p-1} - \frac12 \left( Q_+^{2p-2} + Q_-^{2p-2}\right)\right)dx \,.
$$
From~\eqref{eq:boundQ} and~\cite[Lemma~21]{GonLewNaz-20_ppt} we see that $E(R) \le C' R^{d}\re^{-R}$ and $B(R) \le C' R^{d}\re^{ - R}$. In particular, by \eqref{eq:boundA} and the assumption $p < 2$, we have $E^2 = o(A)$ and $E B = o(A)$. This gives
\begin{align*}
|h|^\kappa = (-h)^\kappa &= (1+ m^{-1}A + o(A))^\kappa = 1 + \kappa m^{-1} A + o(A) \,.
\end{align*}
We see in a similar fashion that $\delta \le C'R^d \re^{-R}$ hence $O(\delta^2) = o(A)$ as well. Gathering all the estimates gives
\[
L^{(2)}_{\kappa, d} \ge L^{(1)}_{\kappa, d}
\left(1 + \left( \kappa - \frac{m}{\int_{\R^d} Q^{2p}} \right) \frac{A}{m} + o(A) \right)
= L^{(1)}_{\kappa, d} \left(1 +\frac{\kappa}{pm} A + o(A) \right),
\]
where the last equality comes from~\eqref{eq:intQ2p}.
\end{proof}
\section{Non existence of minimisers for the Fermionic NLS: Proof of Theorems~\ref{th:nonExistence_J(N)} and~\ref{th:N=2}}
\label{sec:NLS}
In this section, we prove our results concerning the minimisation problem $J(N)$ which, we recall, is defined by
\begin{equation}
J(N):=\inf \Big\{ \tr(-\Delta\gamma)-\frac1p\int_{\R^d}\rho_\gamma(x)^p\,\rd x : \ 0\leq \gamma=\gamma^*\leq1,\ \Tr(\gamma) = N \Big\}.
\label{eq:def_J}
\end{equation}
We assume in the whole section
\[
1 < p < 1 + \frac2d.
\]
After an appropriate scaling, and using the fact that $\Tr(\gamma) = \| \gamma \|_{\gS_1}$, the optimal inequality $\cE(\gamma)\geq J(N)$ becomes
\begin{equation*}
\widetilde K_{p,d}^{(N)} \| \rho_\gamma \|_p^{\frac{2p}{d(p-1)}} \leq \| \gamma \|_{\gS_1}^{\frac{d+2-dp}{d(p-1)}}\; \tr(-\Delta\gamma),
\end{equation*}
valid for all $0 \le \gamma = \gamma^* \le 1$ with $\Tr(\gamma) = N$, and with best constant
\begin{equation} \label{eq:explicit_tildeK}
\boxed{\widetilde{K}_{p,d}^{(N)} := \left(\frac{|J(N)|}{N}\right)^{-\frac{d+2-pd}{d(p-1)}}\frac1{p-1}\left(\frac{d}{2p}\right)^{\frac{2}{d(p-1)}} \left(1+\frac2d-p\right)^{-\frac{d+2-dp}{d(p-1)}}.}
\end{equation}
One can remove the constraint $\| \gamma \| \le 1$ at the expense of a factor $\| \gamma \|^{2/d}$, and we obtain the optimal inequality
\begin{equation}
\boxed{ \widetilde K_{p,d}^{(N)} \| \rho_\gamma \|_p^{\frac{2p}{d(p-1)}} \leq \| \gamma \|_{\gS_1}^{\frac{d+2-dp}{d(p-1)}}\;\|\gamma\|^{\frac{2}{d}}\;\tr(-\Delta\gamma),}
\label{eq:LT_sub_critical}
\end{equation}
valid for all $0 \le \gamma = \gamma^*$ with $\Tr(\gamma) = N$.
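As a consistency check of the exponent $2/d$, note that both sides of~\eqref{eq:LT_sub_critical} scale in the same way under $\gamma\mapsto\lambda\gamma$ with $\lambda>0$: the left side is multiplied by $\lambda^{\frac{2p}{d(p-1)}}$ and the right side by $\lambda^{\frac{d+2-dp}{d(p-1)}+\frac2d+1}$, and indeed
$$\frac{2p}{d(p-1)}-\frac{d+2-dp}{d(p-1)}-1=\frac{2(p-1)}{d(p-1)}=\frac2d.$$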
\subsection{Link between NLS and Lieb-Thirring, proof of Theorem~\ref{th:nonExistence_J(N)}}
\label{ssec:proof_nonExistence_J(N)}
The link between the constant $\widetilde K_{p,d}^{(N)}$ and the dual Lieb-Thirring constant $K_{p, d}^{(N)}$ defined in~\eqref{eq:LT_Schatten} is given in the following proposition.
\begin{proposition}[Relation between $\widetilde K_{p,d}^{(N)}$ and $K_{p,d}^{(N)}$]\label{prop:relation_K}
Let $d\geq1$ and $1<p<1+\frac2d$. For all $N\in\N$ we have
\begin{equation}
K_{p,d}^{(N)}\leq \widetilde K_{p,d}^{(N)} \le \widetilde K_{p,d}^{(1)}=K_{p,d}^{(1)}.
\label{eq:relation_K}
\end{equation}
\end{proposition}
\begin{proof}
It is shown in~\cite[Lemma~11]{GonLewNaz-20_ppt} that the minimisation problem $J(N)$ can be restricted to operators $\gamma$ which are orthogonal projectors of rank $N$. For such operators, we have $\| \gamma \| = 1$ and
\[
\| \gamma \|_{\gS_q}^q = \tr(\gamma^q) = N = \| \gamma \|_{\gS_1} = \rank(\gamma).
\]
This gives
\begin{equation*}
K^{(N)}_{p,d}\leq \frac{\norm{\gamma}_{\gS^q}^{\frac{p(2-d)+d}{d(p-1)}}\tr(-\Delta\gamma)}{\norm{\rho_\gamma}_{L^p(\R^d)}^{\frac{2p}{d(p-1)}}} = \frac{ \| \gamma \|_{\gS_1}^{\frac{d+2-dp}{d(p-1)}}\|\gamma\|^{\frac2d}\tr(-\Delta\gamma)}{\norm{\rho_\gamma}_{L^p(\R^d)}^{\frac{2p}{d(p-1)}}}.
\end{equation*}
Optimising over projectors $\gamma$ gives $K_{p,d}^{(N)}\leq \widetilde K_{p,d}^{(N)}$. In the case $N = 1$, every operator of rank $1$ is proportional to a rank $1$ projector, so the two problems coincide, and $\widetilde{K}^{(1)}_{p,d}=K^{(1)}_{p,d}$. Finally, in~\cite{GonLewNaz-20_ppt}, it is also proved that $J(N) \le N J(1)$. This implies $\widetilde{K}_{p,d}^{(N)} \le \widetilde K_{p,d}^{(1)}$.
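Indeed, since $J(N)\leq N\,J(1)\leq0$, we have $|J(N)|/N\geq|J(1)|$; as $1<p<1+\frac2d$, the exponent $-\frac{d+2-pd}{d(p-1)}$ appearing in~\eqref{eq:explicit_tildeK} is negative, and the remaining factors do not depend on $N$, so the explicit formula gives $\widetilde K_{p,d}^{(N)}\leq\widetilde K_{p,d}^{(1)}$.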
\end{proof}
There is a similarity between the proof of the above proposition and the arguments in \cite{LieLla-78,FraLieSeiTho-11b}. In those works, the sharp Lieb-Thirring inequality for $\kappa=3/2$ is also used to obtain an inequality for orthonormal functions.
The relation~\eqref{eq:relation_K} allows us to prove Theorem~\ref{th:nonExistence_J(N)}, which states that $J(N) = N J(1)$ for all $N \in \N$, and that $J(N)$ admits no minimiser for $N \ge 2$.
\begin{proof}[Proof of Theorem~\ref{th:nonExistence_J(N)}]
It was proved in~\cite{LieThi-76} that for $\kappa = 3/2$, we have $L_{3/2,1}=L^{(N)}_{3/2,1}=L^{(1)}_{3/2,1}$ for all $N\in\N$. This implies $K_{2,1}^{(N)}=K_{2,1}^{(1)}$ for all $N \in \N$. Hence, by~\eqref{eq:relation_K}, also $\widetilde{K}^{(N)}_{2,1}=\widetilde{K}^{(1)}_{2,1}$ for all $N\in\N$ and, finally, $J(N) = NJ(1)$ thanks to the explicit formula~\eqref{eq:explicit_tildeK}.
To prove that $J(N)$ has no minimiser for $N\geq2$, we assume by contradiction that $\gamma$ is one.
By~\cite[Proposition~16]{GonLewNaz-20_ppt}, $\gamma$ is a rank $N$ projector. In addition, since we have equality in~\eqref{eq:relation_K}, $\gamma$ is also an optimiser for $K^{(N)}_{2,1}$. But then, by Theorem~\ref{thm:K}, it is of the form $\gamma=c\sum_{j=1}^N|\mu_j|^{1/2}\,|u_j\rangle\langle u_j|$ for some $c$. We conclude that $\mu_j=-1/c^2$ for all $j=1,...,N$ which is impossible since the first eigenvalue $\mu_1$ of a Schr\"odinger operator is always simple.
\end{proof}
\begin{remark}\label{rem:ltconjj}
In dimension $d=1$, a special case of the Lieb-Thirring conjecture~\cite{LieThi-76} states that
$$L_{\kappa,1}^{(N)}=L_{\kappa,1}^{(1)}\qquad\text{for all $\kappa\in(1,3/2]$ and all $N\geq1$.}$$
If true, this conjecture would imply by the same argument as in the previous proof that
\begin{equation}
J(N)=N\,J(1)\qquad \text{for all $2\leq p<3$ and all $N\geq1$, in dimension $d=1$,}
\label{eq:conjecture_J_N_1}
\end{equation}
and that the corresponding problems do not have minimisers for $N\geq 2$. The weaker conjecture~\eqref{eq:conjecture_J_N_1} appeared in~\cite{GonLewNaz-20_ppt}.
\end{remark}
\subsection{Proof of Theorem~\ref{th:N=2}: triviality of solutions for $d = 1$, $p = 2$ and $N = 2$}
\label{ssec:proof_N=2}
In this subsection we prove Theorem~\ref{th:N=2}: we show that the fermionic NLS equation~\eqref{eq:fermionic_NLS} does not have a solution in the one dimensional case with $p = 2$ and $N = 2$. We will make use of the integrability of the equations. In the sequel, we study the ODE system
\begin{equation} \label{eq:NLS_N=2}
\begin{cases}
v_1'' + 2 (v_1^2 + v_2^2) v_1 + \mu_1 v_1 = 0, \\
v_2'' + 2 (v_1^2 + v_2^2) v_2 + \mu_2 v_2 = 0.
\end{cases}
\end{equation}
We added an extra factor $2$ to obtain the same explicit formulas as in the literature. If $(u_1, u_2)$ is a real-valued ground state solution to~\eqref{eq:no_binding_u}, then $(v_1, v_2) = \frac{1}{\sqrt{2}}(u_1, u_2)$ is a real-valued solution to~\eqref{eq:NLS_N=2}, which satisfies in addition $\| v_1 \| = \| v_2 \| = \frac12$.
The key step in the proof of Theorem~\ref{th:N=2} is the following classification result for~\eqref{eq:NLS_N=2} under an additional vanishing condition for $v_2$.
\begin{lemma} \label{lem:ODE}
Let $\mu_1 \le \mu_2 < 0$, and let $(v_1, v_2)$ be a square integrable real-valued solution of the ODE~\eqref{eq:NLS_N=2} with $v_2(0) = 0$. Then there are $a_1, a_2 \in \R$ such that
\begin{equation}
\begin{cases}
v_1(x) = \dfrac{a_1 \re^{ \eta_1x}}{f(x)}\left(1 + \dfrac{ a_2^2}{4 \eta_2^2} \dfrac{\eta_1 - \eta_2}{\eta_1 + \eta_2} \re^{2 \eta_2 x} \right),\\[0.4cm]
v_2(x) = \dfrac{a_2 \re^{ \eta_2x}}{f(x)}\left(1 - \dfrac{ a_1^2}{4 \eta_1^2} \dfrac{\eta_1 - \eta_2}{\eta_1 + \eta_2} \re^{2 \eta_1 x} \right),
\end{cases}
\label{eq:explicit_1D}
\end{equation}
where
\[
f(x) = 1 + \dfrac{a_1^2}{4 \eta_1^2} \re^{2 \eta_1 x}
+ \dfrac{a_2^2}{4 \eta_2^2} \re^{2 \eta_2 x} + \dfrac{a_1^2 a_2^2}{16 \eta_1^2 \eta_2^2} \dfrac{(\eta_1 - \eta_2)^2}{ (\eta_1 + \eta_2)^2} \re^{(2 \eta_2 + 2 \eta_1)x}
\]
and $\eta_1 := \sqrt{ | \mu_1 |}$, $\eta_2 := \sqrt{| \mu_2|}$.
\end{lemma}
In fact, if $a_2 \neq 0$, the condition $v_2(0) = 0$ fixes the value
\begin{equation} \label{eq:value_a1}
a_1 = \pm 2 \eta_1 \left( \frac{\eta_1 + \eta_2}{\eta_1 - \eta_2} \right)^{1/2}.
\end{equation}
\begin{proof}
We proceed in two steps. First, we show that the functions~\eqref{eq:explicit_1D} are solutions and then we prove that they cover all possible initial data for $v_1(0)$, $v_1'(0)$ and $v_2'(0)$. By uniqueness of the solution of an initial value problem the result follows.
For the first point, checking the equation is simply a computation. For the convenience of the reader we quickly recall how to find the formulas~\eqref{eq:explicit_1D}. Following~\cite{RadLak-95} which uses Hirota's bilinearisation method~\cite{Hirota-80}, we write
\[
v_1 = \frac{g}{f}, \quad \text{and} \quad v_2 = \frac{h}{f}.
\]
With this change of variable, we see that~\eqref{eq:NLS_N=2} can be written as
\[
\begin{cases}
f^2 \left( f g'' + f'' g - 2 f' g' + \mu_1 f g \right) + 2 f g \left( | f'|^2 - f f'' + g^2 + h^2 \right) = 0, \\
f^2 \left( f h'' + f'' h - 2 f' h' + \mu_2 f h \right) + 2 f h \left( | f'|^2 - f f'' + g^2 + h^2 \right) = 0.
\end{cases}
\]
We seek solutions that satisfy
\[
\begin{cases}
f g'' + f'' g - 2 f' g' + \mu_1 f g = 0, \\
f h'' + f'' h - 2 f' h' + \mu_2 f h = 0, \\
| f'|^2 - f f'' + g^2 + h^2 = 0.
\end{cases}
\]
With Hirota's notation, this is of the form
\[
D(f,g) + \mu_1 fg = 0, \quad D(f,h) + \mu_2 fh = 0, \quad D(f, f) = 2 (g^2 + h^2),
\]
with the bilinear form $D(u,v) := uv'' + u''v - 2u'v'$. We now make the formal expansion $g = \chi g_1 + \chi^3 g_3$, $h = \chi h_1 + \chi^3 h_3$ and $f = 1 + \chi^2 f_2 + \chi^4 f_4$, and we solve the cascade of equations in powers of $\chi$. We first obtain (setting $\eta_1 := \sqrt{| \mu_1 |}$ and $\eta_2 := \sqrt{|\mu_2|}$)
\[
g_1 = a_1 \re^{ \eta_1 x}, \quad h_1 = a_2 \re^{ \eta_2 x},
\]
where $a_1$ and $a_2$ are two arbitrary constants. After some computation, we get (see also~\cite{RadLak-95}),
\[
f_2 = \dfrac{a_1^2}{4 \eta_1^2} \re^{2 \eta_1 x}
+ \dfrac{a_2^2}{4 \eta_2^2} \re^{2 \eta_2 x},
\]
then
\[
g_3 = \left( \dfrac{a_1 a_2^2}{4 \eta_2^2} \dfrac{\eta_1 - \eta_2}{\eta_1 + \eta_2} \right) \re^{(2 \eta_2 + \eta_1)x}, \quad
h_3 = - \left( \dfrac{a_1^2 a_2}{4 \eta_1^2} \dfrac{\eta_1 - \eta_2}{\eta_1 + \eta_2} \right) \re^{(2 \eta_1 + \eta_2)x}
\]
and finally
\[
f_4 = \dfrac{a_1^2 a_2^2}{16 \eta_1^2 \eta_2^2} \dfrac{(\eta_1 - \eta_2)^2}{ (\eta_1 + \eta_2)^2}\re^{(2 \eta_2 + 2 \eta_1)x}.
\]
This is the solution in Lemma~\ref{lem:ODE}. The condition $v_2(0) = 0$ gives the value of $a_1$ in~\eqref{eq:value_a1}.
Let us now prove that all square integrable solutions with $v_2(0) = 0$ are of this form. In fact, instead of square integrability we will assume that $v_j$ and $v_j'$ tend to zero at infinity for $j=1,2$. It is not hard to deduce this property from the assumption that the solution is square integrable.
For the proof we will assume that $v_2'(0)\neq 0$, for otherwise $v_2=0$ everywhere and the result is well-known (and easy to prove by a variation of the arguments that follow, using only \eqref{eq:cst_of_motion1} below).
Any solution $(v_1, v_2)$ that decays at infinity has two constants of motion
\begin{subequations} \label{eq:cst_of_motion}
\begin{align}
& (v_1^2 + v_2^2)^2 + | v_1' |^2 + | v_2 '|^2 + \mu_1 v_1^2 + \mu_2 v_2^2 = 0, \label{eq:cst_of_motion1} \\
& (v_1^2 + v_2^2)(\mu_1 v_2^2 + \mu_2 v_1^2 + \mu_1 \mu_2) + (v_1 v_2' - v_1' v_2)^2 + \mu_2| v_1' |^2 + \mu_1 | v_2'|^2 = 0. \label{eq:cst_of_motion2}
\end{align}
\end{subequations}
To obtain identity \eqref{eq:cst_of_motion1} we multiply the first and second equation in~\eqref{eq:NLS_N=2} by $v_1'$ and $v_2'$, respectively, add the resulting identities and then integrate using the fact that the solutions and their derivatives vanish at infinity. The fact that there is a second identity \eqref{eq:cst_of_motion2} reflects the integrability of the system~\cite{Manakov-74}.
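In formulas, the combination just described gives
$$\frac{\rd}{\rd x}\Big[(v_1^2+v_2^2)^2+|v_1'|^2+|v_2'|^2+\mu_1v_1^2+\mu_2v_2^2\Big]=2v_1'\big(v_1''+2(v_1^2+v_2^2)v_1+\mu_1v_1\big)+2v_2'\big(v_2''+2(v_1^2+v_2^2)v_2+\mu_2v_2\big)=0,$$
so the bracket is constant along the solution, and the constant equals $0$ because $v_j$ and $v_j'$ vanish at infinity.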
Evaluating~\eqref{eq:cst_of_motion} at $x = 0$ and using $v_2'(0) \neq 0$, we deduce that
\[
v_1(0)^2 = \mu_2 - \mu_1 \quad \text{and} \quad v_1'(0)^2 + v_2'(0)^2 = - \mu_2 \left( \mu_2 - \mu_1 \right).
\]
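Indeed, evaluating~\eqref{eq:cst_of_motion2} at $x=0$, where $v_2(0)=0$, and subtracting $\mu_2$ times~\eqref{eq:cst_of_motion1} evaluated at $x=0$, all the terms cancel except
$$v_2'(0)^2\left(v_1(0)^2+\mu_1-\mu_2\right)=0,$$
which gives the first identity since $v_2'(0)\neq0$; the second follows by inserting $v_1(0)^2=\mu_2-\mu_1$ back into~\eqref{eq:cst_of_motion1}.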
Thus, the value of $v_1(0)$ is determined, up to a sign, by $\mu_1$ and $\mu_2$ and we have
\[
v_1'(0)^2 < -\mu_2 (\mu_2 - \mu_1) = \eta_2^2 \left( \eta_1^2 - \eta_2^2 \right).
\]
The assumption $v_2'(0)\neq 0$ also shows that $- \mu_2(\mu_2-\mu_1) > 0$, hence $\mu_2 \neq \mu_1$ and therefore also $v_1(0)\neq 0$.
Let $(\tilde v_1,\tilde v_2)$ be a solution of the form \eqref{eq:explicit_1D}. The absolute value of $a_1$ is fixed by~\eqref{eq:value_a1}. We will now show that the sign of $a_1$ as well as the number $a_2$ can be determined in such a way that $\tilde v_j(0)=v_j(0)$ and $\tilde v_j'(0)=v_j'(0)$ for $j=1,2$. Once we have shown this, ODE uniqueness implies that $\tilde v_j=v_j$ for $j=1,2$, which is what we wanted to prove.
Since $v_1(0)\neq 0$, we can choose the sign of $a_1$ in~\eqref{eq:value_a1} such that $\mathrm{sgn}\ a_1 = \mathrm{sgn}\ v_1(0)$. Note that, independently of the choice of $a_2$, we have $\mathrm{sgn}\ \tilde v_1(0)=\mathrm{sgn}\ a_1$. This, together with $\tilde v_1(0)^2 = \mu_2-\mu_1= v_1(0)^2$, implies that $\tilde v_1(0)= v_1(0)$.
It remains to choose $a_2$. A tedious but straightforward computation yields
\[
\tilde v_1'(0) = - \frac{a_1}{|a_1|} \eta_2 \sqrt{\eta_1^2 - \eta_2^2}\ \dfrac{4 \eta_2^2(\eta_1 + \eta_2) - a_2^2 (\eta_1 - \eta_2)}{4 \eta_2^2(\eta_1 + \eta_2) + a_2^2(\eta_1 - \eta_2)}.
\]
The last quotient on the right side is a decreasing function of $a_2^2$ from $\left[ 0, \infty \right]$ to $[-1,1]$. Thus, there is an $a_2^2\in (0,\infty)$ such that $\tilde v_1'(0)=v_1'(0)$. This determines the absolute value of $a_2$. To determine its sign, we note that the identities $\tilde v_1'(0)^2 + \tilde v_2'(0)^2 = - \mu_2 \left( \mu_2 - \mu_1 \right)= v_1'(0)^2 + v_2'(0)^2$ and $\tilde v_1'(0)=v_1'(0)$ imply that $\tilde v_2'(0)^2 = v_2'(0)^2$. Thus, we can choose the sign of $a_2$ in such a way that $\tilde v_2'(0) = v_2'(0)$.
This shows that we can indeed find $a_1$ and $a_2$ such that $\tilde v_j(0)=v_j(0)$ and $\tilde v_j'(0)=v_j'(0)$ for $j=1,2$. As explained before, this implies the result.
\end{proof}
We will also need the following lemma in the proof of Theorem~\ref{th:N=2}.
\begin{lemma}\label{lem:normalization}
If $(v_1, v_2)$ is a solution of the form~\eqref{eq:explicit_1D} of Lemma~\ref{lem:ODE}, then $\| v_1 \|^2 = 2 \eta_1$ and $\| v_2 \|^2 = 2 \eta_2$. In particular, we can have $\| v_1 \| = \| v_2 \|$ only if $\mu_1 = \mu_2$.
\end{lemma}
\begin{proof}
With the notation of Lemma~\ref{lem:ODE}, a computation reveals that
\[
v_1(x)^2 = - \left( \frac{ \frac{a_2^2 \eta_1}{2 \eta_2^2} \re^{2 \eta_2 x} + 2 \eta_1 }{f(x)} \right)'
\quad \text{while} \quad
v_2(x)^2 = - \left( \frac{\frac{a_1^2 \eta_2}{2 \eta_1^2} \re^{2 \eta_1 x} + 2 \eta_2 }{f(x)} \right)'.
\]
Integrating gives
\[
\int_\R v_1^2 = -\left[ \frac{ \frac{a_2^2 \eta_1}{2 \eta_2^2} \re^{2 \eta_2 x} + 2 \eta_1 }{f(x)} \right]_{-\infty}^\infty = 2 \eta_1
\quad \text{and similarly} \quad \int_\R v_2^2 = 2 \eta_2,
\]
as wanted.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:N=2}]
As explained before Lemma \ref{lem:ODE}, it is enough to consider solutions $(v_1,v_2)$ of~\eqref{eq:NLS_N=2} with $\| v_1 \| = \| v_2 \| = \frac12$.
The equations~\eqref{eq:NLS_N=2} mean that the numbers $\mu_1$ and $\mu_2$ are negative eigenvalues of the operator $- \partial_{xx}^2 -2 (v_1^2+v_2^2)$. It is easy to see that the latter operator is bounded from below and its negative spectrum is discrete. Therefore it has a lowest eigenvalue $\mu_0$. Let $v_0$ be a corresponding eigenfunction, normalised by $\|v_0\|=\frac12$. It is well-known that the eigenvalue $\mu_0$ is non-degenerate and that $v_0$ can be chosen positive. In particular, if $v$ is a square integrable real valued solution to $- v'' -2 (v_1^2+v_2^2)v = \mu v$ which never vanishes, then necessarily $\mu=\mu_0$.
We claim that $\mu_1=\mu_2=\mu_0$. To prove this, we may assume that $\mu_1\leq\mu_2<0$. In the case where $v_2$ never vanishes, the above remark gives $\mu_2=\mu_0$. Since $\mu_0$ is the lowest eigenvalue and since $\mu_1\leq\mu_2$, this also yields $\mu_1=\mu_0$. In the opposite case where $v_2$ does vanish at some point we can, after a translation, apply Lemma~\ref{lem:ODE}. We deduce that $v_1$ does not vanish, hence $\mu_1=\mu_0$. Moreover, applying Lemma~\ref{lem:normalization}, we conclude that $\mu_1=\mu_2$. This proves the claim.
It follows from the equality $\mu_1=\mu_2=\mu_0$, the simplicity of $\mu_0$ and the normalisation that $v_1^2=v_2^2$. In particular, $v_1$ and $v_2$ both satisfy $v_j''+4v_j^3 + \mu_0 v_j = 0$. By uniqueness of the solution to the equation up to translations, this gives~\eqref{eq:formulas_u_1_u_2} for some $x_0\in\R$ and a sign $\pm$. Since $v_1^2=v_2^2$ the $x_0$'s for the two functions coincide, while the signs are independent. This completes the proof of the theorem.
\end{proof}
The study of positive definite matrices and of functions that preserve
them arises naturally in many branches of mathematics and other
disciplines. Given a function $f: \R \to \R$ and a matrix $A = (a_{st})$,
the matrix $f[A] := (f(a_{st}))$ is obtained by applying $f$ to the
entries of $A$. Such mappings are called \emph{entrywise} or
\emph{Hadamard functions} (see \cite[\S 6.3]{Horn_and_Johnson_Topics}).
Entrywise functions preserving Loewner positivity have been widely
studied in the literature (see e.g.~ Schoenberg \cite{Schoenberg42},
Rudin \cite{Rudin59}, Herz \cite{Herz63}, Horn \cite{Horn}, Christensen
and Ressel \cite{Christensen_et_al78}, Vasudeva \cite{vasudeva79},
FitzGerald, Micchelli, and Pinkus \cite{fitzgerald}, Hiai
\cite{Hiai2009}). The subject has recently received renewed attention due
to its importance in the regularization of high-dimensional
covariance/correlation matrices
\cite{Guillot_Rajaratnam2012, Guillot_Rajaratnam2012b, hero_rajaratnam,
Hero_Rajaratnam2012, Li_Horvath, Zhang_Horvath}.
An important family of functions is the set of power functions $f(x) =
x^\alpha$ for $\alpha > 0$. Characterizing the entrywise powers that
preserve positivity is a classical problem that has been well-studied in
the literature and is now completely resolved (see \cite{FitzHorn,
Bhatia-Elsner, Hiai2009, GKR-crit-2sided}).
A natural generalization of this problem consists of studying powers
preserving positivity when applied to block matrices (see
e.g.~\cite{Dipa_proc,Gunther,Lin20141}). More precisely, let $H :=
(H_{st})_{s,t=1}^n$ be an $mn \times mn$ Hermitian positive semidefinite
matrix, where each block $H_{st}$ is an $m \times m$ Hermitian positive
semidefinite matrix. Our first main result in this paper is a complete
characterization of the powers $\alpha$ such that the matrix
$(H_{st}^\alpha)_{s,t=1}^n$ is always positive semidefinite. Here, the
power $H_{st}^\alpha$ is computed using the spectral decomposition of
$H_{st}$.
Note that when each block of $H$ is $1 \times 1$, the problem reduces to
the classical problem of characterizing entrywise powers preserving
positivity. In contrast, when $H$ consists of only one block, every power
trivially preserves positive semidefiniteness. Surprisingly, we
demonstrate that except in trivial cases, powers do not preserve
positivity when the block size is $2$ or more. This sharply contrasts the
classical case where all powers preserve positivity beyond a certain {\it
critical exponent} (see e.g.~\cite{FitzHorn,Walch_survey}).
In a previous paper, Choudhury \cite{Dipa_proc} has studied powers
$\alpha > 0$ such that the map $(H_{st}) \mapsto (H_{st}^\alpha)$
preserves Loewner positivity, under the additional assumption that the
blocks $H_{st}$ pairwise commute. She demonstrates that every power
$\alpha \in \N \cup [mn-2, \infty)$ preserves Loewner positivity.
However, it is not clear if the bound $mn-2$ is sharp, nor which smaller
non-integer powers preserve positivity. In our second main result, we
completely answer these questions by showing that the set of powers
preserving positivity when the blocks commute is exactly $\N \cup [n-2,\infty)$.
In contrast to previous results, the answer turns out to be independent
of the block size $m$. Our result therefore shows that positivity is
actually retained at a much lower threshold (critical exponent) than was
previously thought.
We then extend this characterization to commuting Hermitian blocks that
are not necessarily positive semidefinite, by considering the odd and
even extensions of the power functions. Our characterization extends
previous work by FitzGerald and Horn \cite{FitzHorn}, Bhatia and Elsner
\cite{Bhatia-Elsner}, Hiai \cite{Hiai2009}, and Guillot, Khare, and
Rajaratnam \cite{GKR-crit-2sided}.
When studying powers of block matrices, one has to assume the blocks
$H_{st}$ are positive semidefinite for the powers $H_{st}^\alpha$ to be
well-defined. When the blocks are only Hermitian, it is natural to
replace the power functions by their odd or even extensions to $\R$ (see
Hiai \cite{Hiai2009}). Note that these functions are precisely the
Lebesgue measurable multiplicative functions on $\R$ (see
e.g.~\cite{GKR-measurable}). More generally, when the blocks $H_{st}$ are
only diagonalizable, it is natural to replace the power functions by
general Lebesgue measurable multiplicative functions on $\C$. Considering
such multiplicative functions provides a general and systematic framework
in which to study powers preserving Loewner positivity, either in the
block case, the commuting block case, or the traditional scalar setting
studied by FitzGerald and Horn, Bhatia and Elsner, and Hiai. Thus, in
Section \ref{Sprelim}, we classify all measurable multiplicative
functions on $\C$ that preserve $[0,\infty)$, and identify a natural
two-parameter family of functions $\{ \Psi_{\alpha, \beta} : \alpha \in
\R, \beta \in \Z \}$ that is used throughout the paper to generalize the
power functions.
Next, in Section \ref{Sblock} we characterize which of these functions
preserve Loewner positivity when applied blockwise to Hermitian positive
semidefinite matrices $(H_{st})_{s,t=1}^n$. In Section \ref{Scommute}, we
consider the case where the blocks $H_{st}$ pairwise commute, and
complete the characterization initiated by D.~Choudhury in
\cite{Dipa_proc}. We also demonstrate how our work can be used to
generalize previous work by de Pillis \cite{depillis_69}, by
characterizing the functions $\Psi_{\alpha,\beta}$ for which the map
$(H_{st})_{s,t=1}^n \mapsto (\Psi_{\alpha,
\beta}(\tr(H_{st})))_{s,t=1}^n$ preserves Loewner positivity.
Finally, in Section \ref{Sentrywise}, we consider the traditional setting
where each block is $1 \times 1$. For all integers $\beta \in \Z$ and $n
\in \N$, we provide lower and upper bounds for the threshold power
$\alpha > 0$ above which $\Psi_{\alpha, \beta}[-]$ preserves Loewner
positivity on $n \times n$ Hermitian positive semidefinite matrices. In
particular, when $\beta = 1$, we completely resolve the $n=3$ case of a
question raised in 2001 by Xingzhi Zhan \cite[Acknowledgment
Section]{Hiai2009}, concerning the powers $\alpha > 0$ for which
$\Psi_{\alpha, 1}[-]$ preserves Loewner positivity. Moreover, we study
the same problem for arbitrary $\beta$, which had not been previously
done in the literature. \medskip
\noindent\textbf{Notation:}
Given a subset $S \subset \C$, denote by $\bp_n(S)$ the set of $n \times
n$ Hermitian positive semidefinite matrices with entries in $S$. We
denote the complex disc centered at $a \in \C$ and of radius $R > 0$ by
$D(a,R)$. We write $A \geq 0$ to denote that $A \in \bp_n(\C)$, and write
$A \geq B$ when $A - B \in \bp_n(\C)$. We denote by $I_n$ the $n \times
n$ identity matrix, and by ${\bf 0}_{n \times n}$ and ${\bf 1}_{n \times
n}$ the $n \times n$ matrices with every entry equal to $0$ and $1$
respectively. Finally, we denote the conjugate transpose of a vector or
matrix $A$ by $A^*$.
\section{Literature review}\label{Slit}
Entrywise powers and their properties have been studied by many authors
including Horn and FitzGerald \cite{FitzHorn}, Bhatia and Elsner
\cite{Bhatia-Elsner}, Hiai \cite{Hiai2009}, and Guillot, Khare, and
Rajaratnam \cite{GKR-crit-2sided}. Most of the known results concern
matrices with blocks of dimension $1 \times 1$. We now review two of the
most important results in the area.
\begin{theorem}[FitzGerald and Horn, {\cite[Theorem
2.2]{FitzHorn}}]\label{TFitzHorn}
Suppose $A = (a_{st}) \in \bp_n((0,\infty))$ for some $n \geq 2$. Then
$A^{\circ\alpha} := (a_{st}^\alpha) \in \bp_n$ for all $\alpha \in \N
\cup [n-2,\infty)$. If $\alpha \in (0,n-2)$ is not an integer, then there
exists $A \in \bp_n((0,\infty))$ such that $A^{\circ\alpha} \notin
\bp_n$. More precisely, for $\alpha \in (0,n-2) \setminus \N$, Loewner
positivity is not preserved for $A = ((1 + \epsilon st))_{s,t=1}^n$ with
$\epsilon = \epsilon(\alpha,n) > 0$ sufficiently small.
\end{theorem}
Note that in Theorem \ref{TFitzHorn}, the entries of the matrix $A$ are
assumed to be positive for the power $x^\alpha$ to be well-defined. In
practice, one also commonly encounters matrices with negative and complex
entries. In order to work with matrices with real entries, the papers
\cite{Bhatia-Elsner, Hiai2009} considered the odd and even extensions of
the power functions to the real line.
\begin{definition}
Let $\alpha \in \R$. We define the even and odd extensions to $\R$ of the
power function $x \mapsto x^\alpha$ via:
\begin{equation}
\phi_\alpha(x) := |x|^\alpha, \qquad \psi_\alpha(x) := \sgn(x)
|x|^\alpha, \qquad \forall x \neq 0,
\end{equation}
\noindent and $\phi_\alpha(0) =\psi_\alpha(0) := 0$. Also define
$f_\alpha(x) := x^\alpha$ for $x>0$, and $f_\alpha(0) := 0$.
\end{definition}
Note that the definitions of $\phi_\alpha, \psi_\alpha$ given above are
natural, as they yield the unique even and odd multiplicative extensions
to $\R$ of the standard power functions. The following result completely
characterizes the powers $\alpha$ such that $\phi_\alpha$ or
$\psi_\alpha$ preserves Loewner positivity when applied entrywise. The
reader is referred to \cite{GKR-crit-2sided} for a proof and history of
this result.
\begin{theorem}[{Bhatia and Elsner \cite{Bhatia-Elsner}, Hiai
\cite{Hiai2009}, Guillot, Khare, and Rajaratnam
\cite{GKR-crit-2sided}}]\label{Tcrit}
Let $\alpha \in \R$ and let $n \geq 2$. Then
\begin{enumerate}
\item $\phi_\alpha[A] \in \bp_n(\R)$ for all $A \in \bp_n(\R)$ if and
only if $\alpha \in 2\N \cup [n-2,\infty)$.
\item $\psi_\alpha[A] \in \bp_n(\R)$ for all $A \in \bp_n(\R)$ if and
only if $\alpha \in (-1+2\N) \cup [n-2,\infty)$.
\end{enumerate}
\noindent Moreover, if $f = \phi_\alpha$ or $f = \psi_\alpha$ does not
preserve positivity on $\bp_n(\R)$ for some $\alpha \in \R$, there exists
a rank $2$ matrix $A \in \bp_n(\R)$ such that $f[A] \not\in \bp_n(\R)$.
\end{theorem}
Blockwise powers yield a generalization of the entrywise powers analysis
studied above. We now recall a sufficient condition for preserving
positivity that was shown in \cite{Dipa_proc} in the case where $H =
(H_{st})$ is a block matrix with commuting blocks $H_{st}$.
\begin{theorem}[{Choudhury, \cite[Theorem 5]{Dipa_proc}}]\label{Tdipa}
Let $H = (H_{st})$ be a given positive semidefinite $mn \times mn$
matrix, where $\{H_{st}: 1 \leq s,t \leq n\}$ are a commuting family of
normal $m \times m$ matrices. If $H$ is positive semidefinite, then so is
$(H_{st}^\alpha)$ for all $\alpha \in \N$. If in addition each $H_{st}$
is positive semidefinite, then $(H_{st}^\alpha)$ is positive semidefinite
for all real $\alpha \geq mn-2$.
\end{theorem}
In Section \ref{Sblock} we completely characterize the powers $\alpha$
that preserve positivity when the blocks do not necessarily commute. We
then show in Section \ref{Scommute} that the bound $\alpha \geq mn-2$ in
Theorem \ref{Tdipa} is not sharp and that the optimal bound is $\alpha
\geq n-2$. Moreover, we will demonstrate how Theorem \ref{Tdipa} can be
naturally extended to blocks $H_{st}$ that are diagonalizable.
\section{Preliminaries and main results}\label{Sprelim}
Before we proceed to characterize functions preserving Loewner positivity
for block matrices, we provide a framework in which to work with powers
of complex matrices. In order to do so, first note that the functions
$\phi_\alpha$ and $\psi_\alpha$ defined in Section \ref{Slit} are in fact
the unique non-constant Lebesgue measurable multiplicative functions on
$\R$ (see e.g.~\cite{GKR-measurable}). Since we work with complex
matrices in the present paper, it is natural to first classify the
multiplicative maps on the complex plane under mild measurability
assumptions. Such a classification has been achieved in related work
\cite{GKR-measurable}.
\subsection{Multiplicative maps on the complex plane}\label{SSmult}
Given $\alpha, \beta \in \R$, define $\Psi_{\alpha,\beta} : \C\to \C$ by:
\begin{equation}
\Psi_{\alpha,\beta}(r \exp(i \theta)) := r^\alpha \exp(i \beta \theta) \
\forall r>0, \theta \in (-\pi,\pi], \qquad \Psi_{\alpha,\beta}(0) := 0.
\end{equation}
\noindent When $\beta \in \Z$, the maps $\Psi_{\alpha,\beta}$ are
multiplicative on $\C$ and continuous on the unit circle $S^1 := \{z \in
\C: |z| = 1\}$. Moreover, $(\alpha,\beta) \mapsto \Psi_{\alpha,\beta}$ is
a monoid homomorphism from the additive group $(\R \times \Z,+)$ to the
monoid of multiplicative maps on $\C$ (under pointwise multiplication).
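For instance, the role of the integrality of $\beta$ can be seen directly: if $z_j=r_j\exp(i\theta_j)$ with $\theta_j\in(-\pi,\pi]$, then $\theta_1+\theta_2=\theta+2\pi k$ for some $\theta\in(-\pi,\pi]$ and $k\in\{-1,0,1\}$, so that
\[ \Psi_{\alpha,\beta}(z_1z_2)=(r_1r_2)^\alpha\exp(i\beta\theta)=(r_1r_2)^\alpha\exp(i\beta(\theta_1+\theta_2))=\Psi_{\alpha,\beta}(z_1)\Psi_{\alpha,\beta}(z_2), \]
where the middle equality uses $\exp(-2\pi i\beta k)=1$, i.e., $\beta\in\Z$. Note also that, restricted to the real line, $\Psi_{\alpha,0}=\phi_\alpha$ and $\Psi_{\alpha,1}=\psi_\alpha$.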
The following lemma shows that the functions $\Psi_{\alpha, \beta}$ for
$\alpha \in \R$ and $\beta \in \Z$ are in fact the only non-constant
multiplicative functions from $\C$ to $\C$ that 1) are continuous on
$S^1$, 2) map the positive real axis into itself (needed to preserve
Loewner positivity), and 3) satisfy natural measurability conditions.
\begin{lemma}\label{LCmult}
Given $R \in (1,\infty]$ and $K : D(0,R) \to \C$, the following are
equivalent.
\begin{enumerate}
\item $K$ is multiplicative on $D(0,R)$, continuous on $S^1 \subset
D(0,R)$, sends $\I := (0,R)$ to $\R$, and is Lebesgue measurable on some
subinterval $I \subset \I$ which contains $1$.
\item Either $K \equiv 0$ or $K \equiv 1$ on $D(0,R)$, or there exist
$\alpha \in \R$ and $\beta \in \Z$ such that $K \equiv
\Psi_{\alpha,\beta}$.
\end{enumerate}
\noindent Moreover, the maps $\{ \Psi_{\alpha,\beta} : \alpha \in \R,
\beta \in \Z \} \cup \{ K \equiv 1 \}$ are linearly independent as
functions on $D(0,r)$ for any $0 < r \leq \infty$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{LCmult}]
Note that $K : S^1 \to \C$ is multiplicative and continuous, hence a
character. Therefore $K : D(0,R) \to \C$ is multiplicative and
conjugation-equivariant. The result now follows from \cite[Theorem
8]{GKR-measurable}.
\end{proof}
\subsection{Main results}
Before stating the main results of the paper, we introduce some notation.
Let $S \subset \C$ and $f: S \to \C$. Given a complex diagonalizable
matrix $A$ with eigen-decomposition $A = P^{-1} D P$ and spectrum
contained in $S$, we denote by $f(A)$ the matrix $f(A) = P^{-1} f(D) P$
where $f(D)$ denotes the diagonal matrix with diagonal $f(d_{11}), \dots,
f(d_{nn})$. We denote by $\bpm_{mn}(S)$ the subset of block matrices $H =
(H_{st})_{s,t=1}^n \in \bp_{mn}(\C)$ where each block $H_{st}$ is an $m
\times m$ diagonalizable matrix with spectrum contained in $S$. Note that
when $m=1$, the set $\bpm_{mn}(S)$ reduces to $\bp_n(S)$. Given $H =
(H_{st})_{s,t=1}^n \in \bpm_{mn}(S)$, we define
\begin{equation}
f^{[m]}[H] := (f(H_{st}))_{s,t=1}^n.
\end{equation}
\noindent When $m=1$, $f^{[m]}[A]$ reduces to $f[A]$. Using this
notation, we can now state the main results of the paper.
Recall that by Theorem \ref{TFitzHorn}, a power function $x^\alpha$
preserves positivity when applied entrywise to all $n \times n$ symmetric
positive semidefinite matrices with positive entries, if and only if
$\alpha \geq n-2$ or $\alpha \in \N$. Our first main result shows that,
surprisingly, the situation is radically different when the blocks have
size greater than $1$.
\begin{utheorem}\label{Tmain1}
Let $\beta \in \Z$ and let $m,n \geq 2$.
\begin{enumerate}
\item Given $\alpha > 0$, the matrix $f_\alpha^{[m]}[(H_{st})] =
(H_{st}^\alpha) \in \bp_{mn}(\C)$ for all $(H_{st}) \in
\bpm_{mn}([0,\infty))$, if and only if $\alpha = 1$.
If $\alpha \leq 0$, then $f_\alpha^{[m]}[-]$ preserves positivity on
$\bpm_{mn}((0,\infty))$ if and only if $\alpha = 0$.
\item The functions $\phi_\alpha^{[m]}[-]$ do not preserve positivity on
$\bpm_{mn}(\R)$ for any $\alpha \in \R$.
\item For $\alpha \in \R$, the functions $\psi_\alpha^{[m]}[-]$ preserve
positivity on $\bpm_{mn}(\R)$ if and only if $\alpha = 1$.
\item For $\alpha \in \R$, the functions $\Psi_{\alpha, \beta}^{[m]}[-]$
preserve positivity on $\bpm_{mn}(\C)$ if and only if $\alpha = 1$ and
$\beta = \pm1$ -- i.e., $\Psi_{\alpha,\beta}(z) \equiv z$ or
$\overline{z}$.
\end{enumerate}
\end{utheorem}
A natural relaxation of the hypothesis in Theorem \ref{Tmain1} is to
assume that the blocks $H_{st}$ all commute with each other. Powers
preserving positivity when applied to block matrices where the blocks
commute have been studied by D.~Choudhury in \cite{Dipa_proc}. It is
natural to ask if the lower bound $\alpha \geq mn-2$ in Theorem
\ref{Tdipa} is sharp, or if other powers preserve positivity. We
completely settle this question in our second main result, Theorem
\ref{Tmain2}, by showing that the {\it critical exponent} is in fact
$\alpha = n-2$ and that smaller non-integer powers do not preserve
Loewner positivity. In Section \ref{Scommute} we also consider the
analogue of Theorem \ref{Tmain2} where the blocks are complex
diagonalizable.
\begin{utheorem}\label{Tmain2}
Let $\alpha > 0$ and $m,n \geq 2$. Then $(H_{st}^\alpha) \in
\bp_{mn}(\C)$ for all $(H_{st})_{s,t=1}^n \in \bp_{mn}(\C)$ such that
$H_{st} \in \bp_m(\C)$ and the blocks $H_{st}$ commute, if $\alpha \in \N
\cup [n-2,\infty)$.
If $\alpha \not\in \N \cup [n-2,\infty)$, there exist matrices $H_{st}
\in \bp_m(\C)$ such that $(H_{st}) \in \bp_{mn}(\C)$, the blocks $H_{st}$
commute, but $(H_{st}^\alpha)$ is not positive semidefinite.
Moreover, if $\alpha < 0$, there exist real symmetric positive definite
matrices $H_{st}$, $s,t = 1, \dots, n$ such that $(H_{st})_{s,t=1}^n \in
\bp_{mn}(\R)$, but $(H_{st}^\alpha)$ is not positive semidefinite.
\end{utheorem}
In our third main result, we consider an interesting question raised by
X.~Zhan in 2001 (see \cite[Acknowledgments]{Hiai2009}). Zhan asked if
Theorem \ref{TFitzHorn} can be generalized to matrices with complex
entries when the power functions $x^\alpha$ are replaced by the functions
$z = re^{i\theta} \mapsto r^\alpha e^{i\theta}$. This is precisely the
power function $\Psi_{\alpha, 1}$. More generally, in the framework
developed in Section \ref{SSmult}, it is natural to generalize Zhan's
question by asking for which values of $\alpha, \beta$ does
$\Psi_{\alpha, \beta}$ preserve positivity when applied entrywise. Our
third result, Theorem \ref{Tnew}, provides bounds on $\alpha, \beta$
which guarantee that $\Psi_{\alpha, \beta}$ preserves or does not
preserve Loewner positivity.
\begin{utheorem}\label{Tnew}
Let $n \geq 3$.
\begin{enumerate}
\item The entrywise function $\Psi_{\alpha,\beta}$ preserves Loewner
positivity on $\bp_n(\C)$ if $\beta \in \Z$, $(\alpha, \beta) \ne (0,0)$,
and either $\alpha \in |\beta| -2 +2\N$ or $\alpha \geq \max(n-2,|\beta|
+ 2n-6)$.
\item The entrywise function $\Psi_{\alpha,\beta}$ fails to preserve
positivity if either:
\begin{enumerate}
\item $\beta \not\in \Z$, or
\item $\alpha < 1$, or
\item $1 \leq \alpha < \max(n-2,|\beta| + 2 \lfloor (\sqrt{8n+1}-5)/2
\rfloor)$ and $\alpha \not\in |\beta| -2 +2\N$.
\end{enumerate}
\end{enumerate}
\end{utheorem}
\noindent Thus for $n \geq 3$, $\beta \in \Z$, and $\alpha \not\in
|\beta| -2 +2\N$, we see that $\Psi_{\alpha,\beta}$ preserves Loewner
positivity for $\alpha \geq \max(n-2, |\beta| + 2n-6)$, but not for
$\alpha < \max(n-2, |\beta| + 2 \lfloor (\sqrt{8n+1}-5)/2 \rfloor)$. Note
that if $n=3$, these two quantities coincide and equal $\max(1,|\beta|)$.
We therefore have the following corollary, which completely answers
Zhan's question for the $n=3$ case.
\begin{corollary}\label{Czhan}
For $n=3$, the entrywise power function $\Psi_{\alpha,\beta}$ preserves
Loewner positivity on $\bp_n(\C)$ if and only if $\beta \in \Z$ and
$\alpha \geq \max(1,|\beta|)$.
\end{corollary}
\noindent A consequence of Theorem \ref{Tnew} is that complex critical
exponents exist for the power functions $\Psi_{\alpha,\beta}$:
\begin{corollary}\label{Ccrit}
For every $n \geq 3$ and $\beta \in \Z$, there exists a smallest real
number $\alpha_{\min}$ such that $\Psi_{\alpha,\beta}[-]$ preserves
$\bp_n(\C)$ for all $\alpha \geq \alpha_{\min}$. Moreover, $\alpha_{\min}
= \max(1,|\beta|)$ for $n=3$, while for $n \geq 4$,
\[ \max(n-2,|\beta| + 2 \lfloor (\sqrt{8n+1}-5)/2 \rfloor) \leq
\alpha_{\min} \leq |\beta| + 2n-6. \]
\end{corollary}
\noindent Note that Theorem \ref{TFitzHorn} and an application of the
Schur product theorem imply that $n-2 \leq \alpha_{\min} \leq |\beta| +
2n-4$. Corollary \ref{Ccrit} thus greatly improves this lower bound for
the critical exponent $\alpha_{\min}$.
\section{Powers preserving positivity: the block case}\label{Sblock}
We now characterize powers preserving positivity when applied blockwise.
To prove Theorem \ref{Tmain1} we need some preliminaries. First recall
the notion of an $m$-matrix monotone function.
\begin{definition}
Let $I \subset \R$ be an interval and let $m \geq 1$. A function $f: I
\to \R$ is said to be {\it $m$-matrix monotone} (or {\it $m$-monotone})
if given $m \times m$ Hermitian matrices $A,B$ with spectrum in $I$,
\[ A \geq B \implies f(A) \geq f(B). \]
\end{definition}
\noindent The following lemma reformulates $m$-monotonicity of power
functions in terms of block matrices, and will be crucial in proving
Theorem \ref{Tmain1}.
\begin{lemma}\label{Lmonotone}
Given an integer $m \in \N$, define the subset $\calp_m \subset
\bp_{2m}(\C)$ via:
\[ \calp_m := \{ \begin{pmatrix} A & B\\ B & C \end{pmatrix} \in
\bp_{2m}^{[m]}([0,\infty)) : \det C \neq 0, BC = CB \}. \]
\noindent Also fix $\alpha \in \R$. Then the following are equivalent:
\begin{enumerate}
\item The blockwise power function $f_\alpha^{[m]}[-]$ sends $\calp_m$ to
$\bp_{2m}(\C)$.
\item The function $f_\alpha$ is $m$-monotone on $(0,\infty)$.
\end{enumerate}
\noindent In particular, if $f_\alpha^{[m]}[-]$ preserves Loewner
positivity on $\bp_{mn}^{[m]}(\C)$ for some $n \geq 2$, then it is
$m$-monotone.
\end{lemma}
\begin{proof}
First suppose $f_\alpha^{[m]}[-]$ preserves Loewner positivity on
$\calp_m$, and assume $A \geq B > 0$. Let $X \in \bp_m(\C)$ denote the
principal square root of $B$. Then the block matrix $M :=
\begin{pmatrix} A & X \\ X & I_m \end{pmatrix} \in
\bp_{2m}^{[m]}([0,\infty))$, by computing the Schur complement of $I_m$
in $M$. Therefore by hypothesis, the matrix $f_\alpha^{[m]}[M] =
\begin{pmatrix} A^\alpha & X^\alpha \\ X^\alpha & I_m \end{pmatrix}$ is
also positive semidefinite. Using Schur complements again, we conclude
that $A^\alpha - (X^\alpha)^2 = A^\alpha - B^\alpha \geq 0$. Thus $A \geq
B > 0 \Rightarrow A^\alpha \geq B^\alpha$ and so $f_\alpha$ is
$m$-monotone on $(0,\infty)$.
Conversely, suppose $f_\alpha$ is $m$-monotone on $(0,\infty)$, and
suppose $\begin{pmatrix} A & B \\ B & C \end{pmatrix} \in \calp_m$. Then
$A \geq B C^{-1} B$ (by taking Schur complements). Moreover, $B,C$ are
simultaneously diagonalizable, whence $B, C^{\pm 1}$ commute. It is now
easy to verify that $(B C^{-1} B)^\alpha = B^\alpha (C^\alpha)^{-1}
B^\alpha$. Now using the $m$-monotonicity of $f_\alpha$, we compute:
\[ C^\alpha \geq 0^\alpha = 0, \qquad A^\alpha = f_\alpha(A) \geq
f_\alpha(B C^{-1} B) = B^\alpha (C^\alpha)^{-1} B^\alpha. \]
\noindent In turn, this implies that the matrix $\begin{pmatrix} A^\alpha
& B^\alpha\\ B^\alpha & C^\alpha \end{pmatrix}$ is positive semidefinite,
proving (1). The final assertion is also clear since $\calp_m \oplus {\bf
0}_{m(n-2) \times m(n-2)} \subset \bp_{mn}^{[m]}(\C)$ (via padding by
zeros).
\end{proof}
Matrix monotone functions have been the subject of a detailed analysis by
Loewner \cite{Loewner34} and many others including Wigner and von Neumann
\cite{Wigner_et_al_54}, Bendat and Sherman \cite{Bendat_et_al_55},
Kor\'anyi \cite{Koranyi_56}, Donoghue \cite{Donoghue_74}, Sparr
\cite{Sparr_80}, Hansen and Petersen \cite{Hansen_et_al_81}, Ameur
\cite{Ameur_2003}, and more recently by Hansen \cite{Hansen2013}; see
also \cite{Hansen2013} for a history of the problem.
We now state an important and interesting characterization of matrix
monotone functions using Loewner matrices. This result was shown by
Hansen \cite{Hansen2013} and plays an essential role in proving Theorem
\ref{Tmain1}.
\begin{definition}
Let $I \subset \R$ and $f : I \to \R$ be differentiable. The {\it first
divided difference} of $f$ for $\lambda_1, \lambda_2 \in I$, denoted by
$[\lambda_1, \lambda_2]_f$ is given by
\[ [\lambda_1, \lambda_2]_f := \begin{cases}
\frac{f(\lambda_1)-f(\lambda_2)}{\lambda_1 - \lambda_2} & \textrm{if }
\lambda_1 \ne \lambda_2, \\
f'(\lambda_1) & \textrm{if } \lambda_1 = \lambda_2.
\end{cases} \]
\noindent Now given $m \geq 2$ and $\lambda_1, \dots, \lambda_m \in I$,
define the {\it Loewner matrix} $L_f(\lambda_1, \dots, \lambda_m)$ of $f$ at
the points $\lambda_j$ to be
\begin{equation}
L_f(\lambda_1, \dots, \lambda_m) := ([\lambda_s,\lambda_t]_f)_{s,t=1}^m.
\end{equation}
\end{definition}
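For instance, for $m = 2$ and the power function $f_\alpha$ on $(0,\infty)$, the Loewner matrix at two points $\lambda_1 \neq \lambda_2$ is
\[ L_{f_\alpha}(\lambda_1,\lambda_2) = \begin{pmatrix} \alpha \lambda_1^{\alpha-1} & \dfrac{\lambda_1^\alpha-\lambda_2^\alpha}{\lambda_1-\lambda_2}\\[0.2cm] \dfrac{\lambda_1^\alpha-\lambda_2^\alpha}{\lambda_1-\lambda_2} & \alpha \lambda_2^{\alpha-1} \end{pmatrix}, \]
which is precisely the matrix used in the proof of Theorem~\ref{Tmain1} below.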
\begin{theorem}[{Hansen \cite[Theorem 3.2]{Hansen2013}}]\label{THansen}
Let $m \in \N$ and $f$ be a real function in $C^1(I)$, where $I \subset
\R$ is an open interval. Then $f$ is $m$-monotone if and only if the
Loewner matrix $L_f(\lambda_1 , \dots, \lambda_m)$ is positive
semidefinite for all sequences $\lambda_1, \dots, \lambda_m \in I$.
\end{theorem}
\noindent We now have all the ingredients for proving Theorem
\ref{Tmain1}.
\begin{proof}[{\bf Proof of Theorem \ref{Tmain1}}]\hfill
\noindent {\bf Proof of (1).}
Clearly, $f_1^{[m]}[-]$ preserves positivity on $\bpm_{mn}([0,\infty))$.
Next, if $\alpha = 0$ and the blocks $H_{st}$ are positive definite, then
$f_0^{[m]}[(H_{st})] = {\bf 1}_{n \times n} \otimes I_m$, where $\otimes$
denotes the Kronecker product, and so $f_0^{[m]}[(H_{st})] \in
\bp_{mn}(\C)$. Now assume $\alpha \in \R$ and $\alpha \ne 0, 1$. We claim
that the function $f_\alpha^{[m]}[-]$ does not preserve positivity on
$\bpm_{mn}([0,\infty))$. It suffices to prove the claim for $m=n=2$ (the
general case follows by padding with zeros).
Thus, suppose $f_\alpha^{[2]}[-]$ preserves positivity on
$\bp_4^{[2]}((0,\infty))$. By Lemma \ref{Lmonotone}, the function
$f_\alpha(x) = x^\alpha$ is $2$-monotone on $(0,\infty)$. By Theorem
\ref{THansen}, this is possible if and only if the Loewner matrix
$L_{f_\alpha}(\lambda_1, \lambda_2)$
is positive semidefinite for all $\lambda_1, \lambda_2 > 0$
such that $\lambda_1 \ne \lambda_2$. Thus, the $(1,1)$-entry of
$L_{f_\alpha}(\lambda_1,\lambda_2)$ has to be nonnegative and so $\alpha
\geq 0$. Computing the determinant of $L_{f_\alpha}(\lambda_1,
\lambda_2)$, we obtain:
\begin{equation}
\det L_{f_\alpha}(\lambda_1, \lambda_2) = \alpha \lambda_1^{\alpha-1}
\cdot \alpha \lambda_2^{\alpha-1} - \left(\frac{\lambda_1^\alpha -
\lambda_2^\alpha}{\lambda_1-\lambda_2}\right)^2 \geq 0 \qquad \forall
\lambda_1, \lambda_2 > 0, \lambda_1 \ne \lambda_2.
\end{equation}
\noindent Now fix $\lambda_2 > 0$. If $\alpha > 1$, then $\det
L_{f_\alpha}(\lambda_1, \lambda_2) \to -\infty$ as $\lambda_1 \to \infty$,
since the subtracted term grows like $\lambda_1^{2(\alpha-1)}$ while the first
term grows only like $\lambda_1^{\alpha-1}$. Thus, $\det L_{f_\alpha}(\lambda_1, \lambda_2) <
0$ for $\lambda_1$ large enough. This proves that $f_\alpha(x) =
x^\alpha$ is not $2$-monotone, and hence $f_\alpha^{[m]}[-]$ does not
preserve positivity if $\alpha > 1$ or $\alpha < 0$.
Finally, suppose $\alpha \in (0,1)$. We first claim that there exists a
real matrix $\begin{pmatrix} A & X \\ X & N \end{pmatrix} \in
\bp_4^{[2]}((0,\infty))$ such that the matrix $\begin{pmatrix} A^\alpha &
X^\alpha \\ X^\alpha & N^\alpha \end{pmatrix}$ is not positive
semidefinite. To prove the claim, consider the matrix
\begin{equation}\label{E44}
M := \begin{pmatrix} 3/2 & 0 & 1 & 1/2\\ 0 & 2 & 1/2 & 1\\ 1 & 1/2 & 1 &
4/5\\ 1/2 & 1 & 4/5 & 223/250 \end{pmatrix} = \begin{pmatrix} A & X \\ X
& N \end{pmatrix},
\end{equation}
\noindent where $A,X,N \in \bp_2(\R)$. It can be verified that $\det
(\lambda I_4- M)$ is a fourth-degree polynomial which is positive for
$|\lambda|$ large and at $\lambda = 1,4$; zero at $\lambda = 0$; and
negative at $1/5, 2$. Therefore $0$ is an eigenvalue of $M$, and the
other three eigenvalues of $M$ lie in $(1/5,1), (1,2), (2,4)$. It is now
easily verified that $M \in \bp_4^{[2]}((0,\infty))$.
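\noindent (These spectral claims are easy to confirm numerically; the following
Python fragment, a sanity check rather than part of the argument, prints the
eigenvalues of $M$ and of its diagonal blocks.)
\begin{verbatim}
import numpy as np

M = np.array([[1.5, 0.0, 1.0, 0.5],
              [0.0, 2.0, 0.5, 1.0],
              [1.0, 0.5, 1.0, 0.8],
              [0.5, 1.0, 0.8, 223.0/250]])
# per the claims above: one eigenvalue 0 (up to rounding), the other
# three in (1/5,1), (1,2) and (2,4)
print(np.sort(np.linalg.eigvalsh(M)))
# the diagonal blocks A and N are positive definite
print(np.linalg.eigvalsh(M[:2, :2]).min(), np.linalg.eigvalsh(M[2:, 2:]).min())
\end{verbatim}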
We next claim that $f_\alpha^{[2]}[M] \notin \bp_4$ for small $\alpha >
0$ close enough to zero. To verify the claim, we will compute explicitly
the determinant of $f_\alpha^{[2]}[M]$, and show that it is negative
close to $\alpha = 0$.
We begin by computing the powers of the $2 \times 2$ blocks $A,X,N$ of
$M$. The block $A$ is diagonal, while the powers of the off-diagonal
block $X$ are computed using its spectral decomposition:
\[ X = \begin{pmatrix} 1 & \frac{1}{2}\\ \frac{1}{2} & 1\end{pmatrix} = U
\diag(\frac{1}{2}, \frac{3}{2}) U^T, \ U := \frac{1}{\sqrt{2}}
\begin{pmatrix} -1 & 1\\ 1 & 1\end{pmatrix}\]
from which it follows that
\[
X^\alpha = \frac{1}{2} \begin{pmatrix} (3/2)^\alpha +
(1/2)^\alpha & (3/2)^\alpha - (1/2)^\alpha\\ (3/2)^\alpha - (1/2)^\alpha
& (3/2)^\alpha + (1/2)^\alpha \end{pmatrix}. \]
To compute the spectral powers of the last remaining block $N :=
\begin{pmatrix} 1 & 4/5\\ 4/5 & 223/250 \end{pmatrix}$, we define $x_\pm
:= 27 \pm \sqrt{160729} = 27 \pm \sqrt{27^2 + 400^2}$ for convenience.
Then $N$ has spectral decomposition $N = V D V^{-1}$, where
\[ V := \begin{pmatrix} x_-/400 & x_+/400 \\ 1 & 1 \end{pmatrix}, \quad
D := \diag(1 - \frac{x_+}{500}, 1 - \frac{x_-}{500}), \quad
V^{-1} = \frac{1}{2 \sqrt{160729}} \begin{pmatrix} -400 & x_+\\
400 & -x_- \end{pmatrix}. \]
\noindent Let $\lambda_\pm := 1 - \frac{x_\pm}{500}$ be the eigenvalues
of $N$. Since $V = U D'$ with $U$ unitary and $D'$ diagonal, we obtain:
\[ N^\alpha := V D^\alpha V^{-1} = \frac{1}{2\sqrt{160729}}
\begin{pmatrix}x_+ \lambda_-^\alpha - x_- \lambda_+^\alpha &
400(\lambda_-^\alpha - \lambda_+^\alpha) \\
400(\lambda_-^\alpha - \lambda_+^\alpha) & x_+ \lambda_+^\alpha - x_-
\lambda_-^\alpha \end{pmatrix}. \]
\noindent Therefore if we define
$g_M(\alpha) := \det f_\alpha^{[2]}[M]$, then
\begin{align*}
\frac{4}{2^\alpha} g_M(\alpha) &= 2^{2-\alpha} \det \begin{pmatrix}
A^\alpha & X^\alpha \\ X^\alpha & N^\alpha \end{pmatrix}
= \ 4 a^2 b^3 + 4 a L_- L_+ + \frac{54}{\sqrt{160729}} ab(1-ab)(L_- -
L_+)\\
&\ + (ab+1) \left( (2L-1)(L_- a^2 + L_+ b^2) - (2L+1)(L_+ a^2 + L_- b^2)
\right),
\end{align*}
\noindent where $L_\pm := \lambda_\pm^\alpha, a := (3/2)^\alpha, b :=
(1/2)^\alpha$, and $L := 200 / \sqrt{160729}$.
Note that $g_M(0) = \det f_0^{[2]}[M] = 0$. Moreover, using the explicit
form of the function $g_M(\alpha)$, it can be verified that $g_M'(0) = 0$
and $g_M''(0) < 0$. This shows that $g_M(\alpha) < 0$ for all $0 <
|\alpha| < \epsilon_M$ for some $\epsilon_M >0$.
Now suppose $f_\alpha^{[2]}[-]$ preserves positivity on
$\bp_4^{[2]}((0,\infty))$ for some $\alpha \in (0,1)$.
Choose $k \in \N$ such that $\alpha^k \in (0,\epsilon_M)$, with
$\epsilon_M$ as above. Then $(f_\alpha^{[2]})^{\circ k}[M] =
f_{\alpha^k}^{[2]}[M] \in \bp_4^{[2]}((0,\infty)) \subset \bp_4(\C)$,
which contradicts the previous paragraph. This proves that
$f_\alpha^{[2]}[-]$ does not preserve positivity for $\alpha \in
(0,1)$.\medskip
\noindent {\bf Proof of (2).}
The first part shows that $\phi_\alpha^{[m]}[-]$ does not preserve
positivity on $\bpm_{mn}(\R)$ for $\alpha \ne 0, 1$. We now prove that
$\phi_\alpha^{[m]}[-]$ also does not preserve positivity for $\alpha = 0$
and $\alpha = 1$. Suppose first $\alpha = 0$. Fix
$B := \begin{pmatrix}0 & 0 \\ 1 & 1 \end{pmatrix}$, and for $c \in \R$,
define the matrix
\begin{equation}\label{eqn:phi_0}
A(c) := \begin{pmatrix}
c I_2 & B \\
B^T & c I_2
\end{pmatrix}.
\end{equation}
\noindent Note that $A(c)$ has eigenvalues $c, c, c \pm \sqrt{2}$.
Moreover, $B$ is diagonalizable and has eigenvalues $0$ and $1$. As a
consequence, $\phi_0(B) = B$. Therefore the matrix $A(\sqrt{2}) \in
\bp_4^{[2]}(\R)$, but $\phi_0^{[2]}[A(\sqrt{2})] = A(1) \not\in \bp_4$.
This proves $\phi_0^{[2]}[-]$ does not preserve positivity on
$\bp_{4}^{[2]}(\R)$.
The case of general $m,n \geq 2$ follows by padding $A(\sqrt{2})$ with
zeros. To prove that $\phi_1^{[m]}[-]$ does not preserve positivity on
$\bpm_{mn}(\R)$, consider the matrix
\begin{equation}
M := \begin{pmatrix}
2 & 0 & -1 & -1 \\
0 & 1 & -1 & 0 \\
-1 & -1 & 2 & 0 \\
-1 & 0 & 0 & 1
\end{pmatrix}
\end{equation}
\noindent It is not difficult to verify that $M \in \bp_{4}^{[2]}(\R)$,
but $\det \phi_1^{[2]}[M] = -4/5$. This proves that $\phi_1^{[2]}[-]$
does not preserve positivity on $\bp_4^{[2]}(\R)$. It follows that
$\phi_1^{[m]}[-]$ does not preserve positivity on
$\bp_{mn}^{[m]}(\R)$ for $m,n \geq 2$.\medskip
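\noindent Both counterexamples above are easily verified numerically. The following
Python sketch (again only a sanity check) computes the relevant eigenvalues and the
determinant; the matrix absolute value $\phi_1$ is applied spectrally to the
symmetric blocks.
\begin{verbatim}
import numpy as np

def phi1(H):
    # |H| for symmetric H: apply x -> |x| to the eigenvalues
    lam, V = np.linalg.eigh(H)
    return V @ np.diag(np.abs(lam)) @ V.T

# the phi_0 example: A(sqrt(2)) is PSD, while phi_0^[2][A(sqrt(2))] = A(1) is not
B = np.array([[0.0, 0.0], [1.0, 1.0]])
A = lambda c: np.block([[c*np.eye(2), B], [B.T, c*np.eye(2)]])
print(np.linalg.eigvalsh(A(np.sqrt(2))).min(), np.linalg.eigvalsh(A(1.0)).min())

# the phi_1 example: M is PSD, but det phi_1^[2][M] is negative (-4/5 in the text)
M = np.array([[ 2.,  0., -1., -1.],
              [ 0.,  1., -1.,  0.],
              [-1., -1.,  2.,  0.],
              [-1.,  0.,  0.,  1.]])
print(np.linalg.eigvalsh(M).min())
blocks = [[phi1(M[2*s:2*s+2, 2*t:2*t+2]) for t in range(2)] for s in range(2)]
print(np.linalg.det(np.block(blocks)))
\end{verbatim}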
\noindent {\bf Proof of (3).}
By part (1), the function $\psi_\alpha^{[m]}[-]$ does not preserve
positivity if $\alpha \ne 0, 1$. Clearly, $\psi_1^{[m]}[-]$ preserves
positivity since $\psi_1(x) = x$ for all $x \in \R$. That
$\psi_0^{[m]}[-]$ does not preserve positivity on $\bp_{mn}^{[m]}(\R)$
follows by considering the matrix $A(c)$ in Equation \eqref{eqn:phi_0}.
\medskip
\noindent {\bf Proof of (4).}
By part (1), $\Psi_{\alpha,\beta}^{[m]}[-]$ does not preserve positivity
on $\bpm_{mn}(\C)$ if $\alpha \ne 0,1$. Moreover, the above analysis of
the matrix $A(c)$ in Equation \eqref{eqn:phi_0} shows that $\Psi_{0,
\beta}^{[m]}[-]$ does not preserve positivity on $\bpm_{mn}(\C)$ for any
$\beta \in \Z$. Now suppose $\alpha = 1$. By the second part of the
proof, $\Psi_{1,0}^{[m]} \equiv \phi_1^{[m]}$ does not preserve
positivity on $\bpm_{mn}(\C)$. Also, $\Psi_{1,1}^{[m]}$ clearly preserves
positivity. Note that since a matrix $A$ is positive semidefinite if and
only if its complex conjugate $\overline{A}$ is positive semidefinite,
$\Psi_{\alpha,\beta}^{[m]}[-]$ preserves positivity on $\bpm_{mn}(\C)$ if
and only if $\Psi_{\alpha,-\beta}^{[m]}[-]$ does so. To conclude the
proof, it thus remains to prove that $\Psi_{1,\beta}^{[m]}[-]$ does not
preserve positivity on $\bpm_{mn}(\C)$ for $\beta \geq 2$. Without loss
of generality, let $m = n = 2$, and define:
\begin{equation}\label{EMabc}
M(a,b,c) := \begin{pmatrix}
1 & 0 & a & b \\
0 & 1 & c & a \\
\overline{a} & \overline{c} & 1 & 0 \\
\overline{b} & \overline{a} & 0 & 1
\end{pmatrix} \qquad a,b,c \in \C.
\end{equation}
\begin{comment}
\noindent The Schur complement of the lower right $2 \times 2$ block of
$M(a,b,c)$ is given by
\begin{equation}
S(a,b,c):= \begin{pmatrix}
1-|a|^2-|b|^2 & -\overline{a}b - a \overline{c} \\
-a\overline{b} - \overline{a}c & 1-|a|^2-|c|^2
\end{pmatrix}.
\end{equation}
\noindent Using \cite[Appendix A.5.5]{BoydVan_Convex}, it follows that
$M(a,b,c) \in \bp_4(\C)$ if and only if $S(a,b,c) \in \bp_2(\C)$, i.e.,
if and only if
\begin{align}
(1) \qquad& |a|^2 + |b|^2 \leq 1;\label{ENcond1}\\
(2) \qquad& |a|^2 + |c|^2 \leq 1;\\
(3) \qquad& \det S(a,b,c) = |a|^4 -2|a|^2 - |b|^2 - |c|^2 + |b|^2 |c|^2
- 2\ree(\overline{a}^2 b c) +1 \geq 0.
\end{align}
\end{comment}
\noindent One verifies that the four eigenvalues of the matrix $M(a,a,0)$
are $1 \pm a(\sqrt{5} \pm 1)/2$. Therefore if we fix $a \in (0,
(\sqrt{5}-1)/2)$, the matrix $M(a,a,0)$ is positive definite.
Consequently, there exists $\epsilon > 0$ such that $M(a,a,c) \in
\bp_4^{[2]}((0,\infty))$ for $|c| < \epsilon$.
We now claim that $\Psi_{1,\beta}^{[2]}[M(a,a,c)] \not\in \bp_4(\C)$
if $c$ is negative and close enough to $0$. To prove the claim, we
first compute $\Psi_{1,\beta}^{[2]}[M(a,a,c)]$.
Note that $\Psi_{1,\beta}(I_2) = I_2$; now set $B := \begin{pmatrix} a &
a \\ c & a \end{pmatrix}$, with $c < 0$. The eigenvalues of $B$ are $a
\pm i \sqrt{a |c|}$, with corresponding eigenvectors $v_\pm := (\mp i
\sqrt{a/|c|}, 1)^T$. As a consequence,
defining $\lambda_\pm := \Psi_{1,\beta}(a \pm i \sqrt{a |c|})$, we
obtain:
\begin{align*}
\Psi_{1,\beta}(B) &= \begin{pmatrix} -i \sqrt{a/|c|} & i \sqrt{a/|c|} \\
1 & 1\end{pmatrix} \begin{pmatrix}\lambda_+ & 0 \\ 0 &
\lambda_-\end{pmatrix} \begin{pmatrix} -i \sqrt{a/|c|} & i \sqrt{a/|c|}
\\ 1 & 1\end{pmatrix}^{-1} \\
&= \begin{pmatrix}\frac{\lambda_+ + \lambda_-}{2} & \frac{-i
\sqrt{a}}{\sqrt{|c|}} \cdot \frac{\lambda_+ - \lambda_-}{2} \\ \frac{i
\sqrt{|c|}}{\sqrt{a}} \cdot \frac{\lambda_+ - \lambda_-}{2} &
\frac{\lambda_+ + \lambda_-}{2}\end{pmatrix} = \begin{pmatrix} a' & b'\\
c' & a' \end{pmatrix},
\end{align*}
\noindent say. Thus $\Psi_{1,\beta}^{[2]}[M(a,a,c)] = M(a',b',c')$.
Now suppose $\beta \geq 2$. We will prove that there exists $a > 0$ such
that the $(1,2)$-entry of the real matrix $\Psi_{1,\beta}(B)$ is greater
than $1$, if $c$ is negative and close enough to $0$. Indeed, note that
\[ \lambda_\pm = \Psi_{1,\beta}(a \pm i \sqrt{a |c|}) = \sqrt{a^2 + a
|c|} e^{i\beta \arctan(\pm \sqrt{|c|/a})}. \]
\noindent Thus,
\begin{align*}
\lim_{c \to 0^-} \Psi_{1,\beta}(B)_{12} &= \lim_{c \to 0^-} \frac{-i
\sqrt{a}}{\sqrt{|c|}} \cdot \frac{\lambda_+ - \lambda_-}{2}\\
&= \lim_{c \to 0^-} \frac{-i \sqrt{a}}{\sqrt{|c|}} \sqrt{a^2 + a |c|}
\frac{e^{i\beta \arctan(\sqrt{|c|/a})} - e^{i\beta
\arctan(-\sqrt{|c|/a})}}{2} \\
&= -i a \lim_{c \to 0^-} \frac{e^{i\beta \arctan(\sqrt{|c|/a})} -
e^{i\beta \arctan(-\sqrt{|c|/a})}}{2\sqrt{|c|/a}} \\
&= -i a \left.\frac{d}{dy} e^{i\beta \arctan(y)}\right|_{y = 0} = -i a
\left.e^{i\beta \arctan(y)} i \beta \frac{1}{1+y^2}\right|_{y=0} = a
\beta.
\end{align*}
\noindent As a consequence, if $\beta \geq 2$ and $a \in (1/\beta,
(\sqrt{5}-1)/2)$, then for $c < 0$ small enough, the $(1,2)$-entry of
$\Psi_{1,\beta}(B)$ is greater than $1$.
But then the minor of $\Psi_{1,\beta}^{[2]}[M(a,a,c)]$ obtained by
deleting the second row and column is negative, from which it follows
that $\Psi_{1,\beta}^{[2]}[M(a,a,c)] \not\in \bp_4(\C)$.
Therefore $\Psi_{1,\beta}^{[2]}[-]$ does not preserve positivity on
$\bp_4^{[2]}(\C)$ if $\beta \ne \pm 1$. As before, the case of general
$m,n \geq 2$ follows by padding with zeros. This concludes the proof.
\end{proof}
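\noindent The last step of the preceding proof is also easy to check numerically.
The sketch below (Python; the parameters $\beta = 2$, $a = 0.55$, $c = -10^{-3}$
are one admissible choice) applies $\Psi_{1,\beta}(z) = |z|\, e^{i\beta \arg z}$,
as in the displayed eigenvalue formula, spectrally to the block $B$ and reports the
smallest eigenvalues before and after.
\begin{verbatim}
import numpy as np

def Psi_mat(alpha, beta, B):
    # Psi_{alpha,beta}(B) via an eigendecomposition of the diagonalizable block B
    lam, V = np.linalg.eig(B)
    vals = np.abs(lam)**alpha*np.exp(1j*beta*np.angle(lam))
    return V @ np.diag(vals) @ np.linalg.inv(V)

beta, a, c = 2, 0.55, -1e-3              # a in (1/beta, (sqrt(5)-1)/2), c < 0 small
B = np.array([[a, a], [c, a]])
M = np.block([[np.eye(2), B], [B.conj().T, np.eye(2)]])          # M(a, a, c)
PsiB = Psi_mat(1, beta, B)
PsiM = np.block([[np.eye(2), PsiB], [PsiB.conj().T, np.eye(2)]])
print(np.linalg.eigvalsh(M).min())       # positive: M(a,a,c) is positive definite
print(PsiB[0, 1].real)                   # close to a*beta > 1 for small |c|
print(np.linalg.eigvalsh(PsiM).min())    # negative: positivity is not preserved
\end{verbatim}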
\begin{remark}
In the proof of part (1) of Theorem \ref{Tmain1}, we showed that $\det
f_\alpha^{[2]}[M] < 0$ for all $\alpha \in (0,\epsilon_M)$ for some
$\epsilon_M \in (0,1)$, with $M$ as in Equation \eqref{E44}. In fact,
numerical computations indicate that $\det f_\alpha^{[2]}[M] < 0$ for all
$\alpha \in (0,1)$; this would provide a ``universal" counterexample $M$
for the proof of part (1).
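The numerical computations alluded to here can be reproduced in a few lines
(a Python sketch; fractional powers of the positive definite blocks are taken
spectrally):
\begin{verbatim}
import numpy as np

def sym_pow(H, alpha):
    # H^alpha for symmetric positive definite H, via its spectral decomposition
    lam, V = np.linalg.eigh(H)
    return V @ np.diag(lam**alpha) @ V.T

M = np.array([[1.5, 0.0, 1.0, 0.5],
              [0.0, 2.0, 0.5, 1.0],
              [1.0, 0.5, 1.0, 0.8],
              [0.5, 1.0, 0.8, 223.0/250]])
A, X, N = M[:2, :2], M[:2, 2:], M[2:, 2:]
for alpha in np.linspace(0.05, 0.95, 19):
    F = np.block([[sym_pow(A, alpha), sym_pow(X, alpha)],
                  [sym_pow(X, alpha), sym_pow(N, alpha)]])
    print(round(float(alpha), 2), np.linalg.det(F))   # observed to be negative on (0,1)
\end{verbatim}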
\end{remark}
\section{Powers preserving positivity for commuting blocks}\label{Scommute}
Recall that D.~Choudhury \cite{Dipa_proc} studied an interesting variant
of the problem considered in Section \ref{Sblock}, namely, which
blockwise powers $(H_{st})_{s,t=1}^n \mapsto (H_{st}^\alpha)$ preserve
positivity when all the $m \times m$ blocks $H_{st}$ commute and are
positive semidefinite. It was shown in \cite{Dipa_proc} that if $\alpha
\in \N \cup [mn-2,\infty)$ then the corresponding blockwise power
preserves positivity. We now demonstrate that the bound $mn-2$ can be
significantly improved. More precisely, we completely characterize the
powers preserving Loewner positivity in that setting.
\begin{proof}[{\bf Proof of Theorem \ref{Tmain2}}]
The proof is a refinement of the argument in \cite[Theorem 5]{Dipa_proc}.
Let $H = (H_{st}) \in \bp_{mn}(\C)$ be as given. Since the blocks
$H_{st}$ commute, they are simultaneously diagonalizable, i.e., there
exists an $m \times m$ unitary matrix $U$ and diagonal matrices
$\Lambda_{st}$ such that $H_{st} = U \Lambda_{st} U^* \ \forall s,t$.
Letting $T := U^{\oplus n}$ and $\Lambda := (\Lambda_{st})$, we obtain
$H = T \Lambda T^{-1}$. Let $P$ be the permutation matrix such that
\begin{equation}\label{Eperm}
P^{-1} \Lambda P = A_1 \oplus \dots \oplus A_m,
\end{equation}
\noindent where $(A_{k})_{st} := (\Lambda_{st})_{kk}$ with $1 \leq k \leq
m$ and $1 \leq s,t \leq n$. Then $H = (TP) (A_1 \oplus \dots \oplus A_m)
(TP)^{-1}$. By assumption, $A_k \in \bp_n([0,\infty))\ \forall k$.
Moreover, since the entries of the matrices $A_k$ are the eigenvalues of
the blocks $H_{st}$, we have
$(H_{st}^\alpha) = (TP) (A_1^{\circ \alpha} \oplus \dots \oplus
A_m^{\circ \alpha})(TP)^{-1}$.
Here $A^{\circ \alpha} := (a_{st}^\alpha)$ denotes the entrywise power of
$A = (a_{st})$. Since $A_k$ are $n \times n$ matrices, it follows
immediately by Theorem \ref{TFitzHorn} that $(H_{st}^\alpha) \in
\bp_{mn}(\C)$ if $\alpha \in \N \cup [n-2,\infty)$.
Now suppose $\alpha \in (0,n-2) \setminus \N$. Choose $\epsilon > 0$ such
that the matrix $A := (1 + \epsilon st)_{s,t=1}^n$ satisfies $A^{\circ
\alpha} \not\in \bp_n$ (see Theorem \ref{TFitzHorn}). Let
\begin{equation}\label{Econst1}
\Lambda = (\Lambda_{st})_{s,t=1}^n := P A^{\oplus m} P^{-1},
\end{equation}
\noindent where $P$ is the permutation matrix given in Equation
\eqref{Eperm} and $\Lambda_{st}$ are $m \times m$ diagonal matrices.
Define $H_{st} := \Lambda_{st}$. Then the matrices $H_{st}$ are Hermitian
positive semidefinite, as is the matrix $H = (H_{st})$, but
$(H_{st}^\alpha) = P (A^{\circ \alpha} \oplus \dots \oplus A^{\circ
\alpha}) P^{-1}$ is not positive semidefinite by construction of $A$.
This shows that the powers $\alpha \in (0,n-2) \setminus \N$ do not
preserve positivity when applied blockwise.
Finally, suppose $\alpha < 0$. Let $A := I_{n \times n} + {\bf 1}_{n
\times n} \in \bp_n([1,2])$. Examining the leading principal $2 \times 2$
block of $A$, it follows that $A^{\circ \alpha} \not\in \bp_n$.
Repeating the same construction as in Equation \eqref{Econst1}, we
conclude that there exist commuting blocks $H_{st} := \Lambda_{st} \in
\bp_m(\C)$ such that $(H_{st}) \in \bp_{mn}(\C)$, but $(H_{st}^\alpha)
\not\in \bp_{mn}(\C)$ if $\alpha < 0$. This concludes the proof.
\end{proof}
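\noindent The construction in the proof of Theorem \ref{Tmain2} is easily
illustrated numerically. In the Python sketch below (with arbitrarily chosen
$n=5$, $m=2$, $\alpha=3/2$), the commuting blocks are $H_{st} = A_{st} I_m$, so
that $(H_{st}) = A \otimes I_m$ is positive semidefinite while, by Theorem
\ref{TFitzHorn}, the blockwise power fails to be so for a suitably small $\epsilon$.
\begin{verbatim}
import numpy as np

n, m, alpha = 5, 2, 1.5                    # alpha in (0, n-2), not an integer

def blockify(A):
    # commuting m x m blocks H_st = A[s,t]*I_m, assembled into an mn x mn matrix
    return np.kron(A, np.eye(m))

for eps in [1e-1, 1e-2, 1e-3]:
    A = 1.0 + eps*np.outer(np.arange(1, n + 1), np.arange(1, n + 1))
    H, Halpha = blockify(A), blockify(A**alpha)
    # H is PSD; for eps small enough the blockwise power acquires a
    # negative eigenvalue (Theorem TFitzHorn)
    print(eps, np.linalg.eigvalsh(H).min(), np.linalg.eigvalsh(Halpha).min())
\end{verbatim}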
In Theorem \ref{Tmain2}, we assumed each block $H_{st}$ to be positive
semidefinite. This assumption was necessary for the powers
$H_{st}^\alpha$ to be well-defined. We now consider the case where the
blocks are not positive semidefinite. Using the functions $\phi_\alpha$
and $\psi_\alpha$, it is natural to extend the characterization provided
by Theorem \ref{Tmain2} to Hermitian blocks with arbitrary eigenvalues.
Using Theorem \ref{Tcrit}, we can now characterize the powers $\alpha$
such that $\phi_\alpha^{[m]}$ and $\psi_\alpha^{[m]}$ preserve positivity
when the blocks commute.
\begin{theorem}\label{Tmain3}
Let $\alpha \in \R \setminus \{0\}$ and $m,n \geq 2$. Then
\begin{enumerate}
\item $\phi_\alpha^{[m]}[H] \in \bp_{mn}(\C)$ for all $m \times m$
Hermitian matrices $H_{st}$ such that $(H_{st}) \in \bp_{mn}(\C)$ and the
blocks $H_{st}$ commute if $\alpha \in 2\N \cup [n-2,\infty)$. If $\alpha
\not\in 2\N \cup [n-2,\infty)$, there exist real symmetric matrices
$H_{st}$ such that $(H_{st}) \in \bp_{mn}(\R)$, the blocks $H_{st}$
commute, but $\phi_\alpha^{[m]}[H] \not\in \bp_{mn}(\R)$.
\item $\psi_\alpha^{[m]}[H] \in \bp_{mn}(\C)$ for all Hermitian $m \times
m$ matrices $H_{st}$ such that $(H_{st}) \in \bp_{mn}(\C)$ and the blocks
$H_{st}$ commute if $\alpha \in (-1+2\N) \cup [n-2,\infty)$. If $\alpha
\not\in (-1+2\N) \cup [n-2,\infty)$, there exist real symmetric matrices
$H_{st}$ such that $(H_{st}) \in \bp_{mn}(\R)$, the blocks $H_{st}$
commute, but $\psi_\alpha^{[m]}[H] \not\in \bp_{mn}(\R)$.
\end{enumerate}
\end{theorem}
\begin{proof}
The proof is similar to the proof of Theorem \ref{Tmain2}.
Let $U$ be a unitary matrix and $P$ be a permutation matrix such that
defining $H := (H_{st})$ and $T := U^{\oplus n}$, we have
\begin{equation}\label{Efactor}
H = (TP) (A_1 \oplus \dots \oplus A_m) (TP)^{-1},
\end{equation}
\noindent where $A_1, \dots, A_m$ are $n \times n$ matrices containing
the eigenvalues of the blocks $H_{st}$. If $f =
\phi_\alpha$ or $\psi_\alpha$, we have $f^{[m]}[H] = (TP) (f[A_1] \oplus \dots \oplus f[A_m]) (TP)^{-1}$. It follows from Theorem \ref{Tcrit} that
$\phi_\alpha^{[m]}[H] \in \bp_{mn}(\C)$ if $\alpha \in 2\N \cup
[n-2,\infty)$ and $\psi_\alpha^{[m]}[H] \in \bp_{mn}(\C)$ if $\alpha
\in (-1+2\N) \cup [n-2,\infty)$. Conversely, if $f = \phi_\alpha$ and
$\alpha \not\in 2\N \cup [n-2,\infty)$ or $f = \psi_\alpha$ and $\alpha
\not\in (-1+2\N) \cup [n-2,\infty)$, then by \cite[Theorem 2.5,
Proposition 6.2]{GKR-crit-2sided} there exists a matrix $A \in \bp_n$
such that $f[A] \not\in \bp_n$. Using the same construction as in
Equation \eqref{Econst1}, we conclude that $f^{[m]}[-]$ does not
preserve positivity.
\end{proof}
\begin{remark}
We now address the case $\alpha = 0$, which was omitted from Theorem
\ref{Tmain3} for ease of exposition. We first claim that if $n = 2$ and
$H := (H_{st}) \in \bp_{2m}(\C)$ with Hermitian commuting blocks
$H_{st}$, then $\phi_0^{[m]}[H], \psi_0^{[m]}[H] \in \bp_{2m}(\C)$.
Indeed, as in Equation \eqref{Efactor}, the block matrix $H$ can be
factored as $H = (TP) (A_1 \oplus \dots \oplus A_m) (TP)^{-1}$, where
$A_1, \dots, A_m\in \bp_2$. Moreover, $\phi_0, \psi_0$ preserve
positivity when applied entrywise to $\bp_2$, since the only possible
resulting matrices are ${\bf 0}_{2 \times 2}, {\bf 1}_{2 \times 2}$,
$I_{2 \times 2}$, and $\begin{pmatrix}1 & -1 \\ -1 & 1\end{pmatrix}$,
which are all positive semidefinite. However, when $n \geq 3$, we claim
that $\phi_0^{[m]}[H], \psi_0^{[m]}[H]$ are not always positive
semidefinite. Indeed, as in \cite[Equation 6.2]{GKR-crit-2sided}, define
\begin{equation}\label{Etop3}
A := \begin{pmatrix} 1 & 1/\sqrt{2} & 0\\ 1/\sqrt{2} & 1 &
1/\sqrt{2}\\ 0 & 1/\sqrt{2} & 1 \end{pmatrix} \oplus {\bf 0}_{(n-3)
\times (n-3)} \in \bp_n.
\end{equation}
\noindent One easily verifies that $\phi_0[A] = \psi_0[A] \not\in \bp_n$.
Using the same construction as in Equation \eqref{Econst1}, we conclude
that there exist commuting blocks $H_{st} := \Lambda_{st} \in \bp_m(\C)$
such that $H = (H_{st}) \in \bp_{mn}(\C)$, but $\phi_0^{[m]}[H],
\psi_0^{[m]}[H] \not\in \bp_{mn}(\C)$.
\end{remark}
\begin{remark}
An interesting consequence of Theorem \ref{Tmain3} is that when the
blocks commute, preserving positivity is in fact independent of the block
size $m$ (see part (2) of Theorem \ref{TdePillis}). This is in contrast
to Theorem \ref{Tmain1}, in which increasing the block size to $m \geq 2$
drastically reduces the set of powers preserving positivity, when the
commutativity assumption is omitted.
\end{remark}
\noindent {\bf Powers of the trace function.}
Problems similar to the ones above have been considered in the
literature, with the power function $H_{st} \mapsto H_{st}^\alpha$
replaced by other functions mapping $m \times m$ blocks to $p \times p$
matrices (see e.g.~\cite{Thompson_61, Marcus_Katz_69, depillis_69,
Marcus_watkins_71, Zhang_2012}). In particular, de Pillis
\cite{depillis_69} studies the map $(H_{st})_{s,t=1}^n \mapsto ({\rm
tr}(H_{st}))_{s,t=1}^n$ and demonstrates that it preserves positivity.
See also \cite{Zhang_2012} for a nice short proof of the same result. To
conclude this section, we extend de Pillis's result by characterizing the
values $\alpha \geq 0, \beta \in \Z$ such that $(H_{st}) \mapsto
(\Psi_{\alpha, \beta}(\tr(H_{st})))$ preserves positivity.
\begin{theorem}\label{TdePillis}
Fix $\alpha \geq 0$, $\beta \in \Z$, and $m,n \in \N$. Then the following
are equivalent:
\begin{enumerate}
\item $\Psi_{\alpha, \beta}[(\tr(H_{st}))_{s,t=1}^n] \in \bp_{n}(\C)$
for all $(H_{st})_{s,t=1}^n \in \bp_{mn}(\C)$.
\item $\Psi_{\alpha, \beta}[-]$ preserves positivity on $\bp_n(\C)$.
\item $\Psi_{\alpha, \beta}[(\tr(H_s^* H_t))_{s,t=1}^n] \in \bp_n(\C)$
for all $m \times m$ complex matrices $H_1, \dots, H_n$.
\item $\Psi_{\alpha,\beta}^{[m]}[(H_{st})] \in \bp_{mn}(\C)$ if
$(H_{st})_{s,t=1}^n \in \bpm_{mn}(\C)$ and all blocks $H_{st}$ commute.
\end{enumerate}
\end{theorem}
\begin{proof}
Suppose first $(1)$ holds and let $A = (a_{st})_{s,t=1}^n \in \bp_n(\C)$.
Define $H_{st} \in \bp_m(\C)$ by $(H_{st})_{qr} := a_{st}$ if $q=r=1$ and
$0$ otherwise. Then $(H_{st})_{s,t=1}^n \in \bp_{mn}(\C)$, so
$\Psi_{\alpha, \beta}[A] \in \bp_n(\C)$ by (1). Thus $(1) \Rightarrow
(2)$. Conversely, if $(H_{st})_{s,t=1}^n \in \bp_{mn}(\C)$, then
$(\tr(H_{st}))_{s,t=1}^n \in \bp_n(\C)$ by \cite[Proposition
2.3]{depillis_69}, and $(2) \Rightarrow (1)$ follows immediately. Next,
$(2) \Leftrightarrow (3)$ because matrices of the form $(\tr(H_s^* H_t))$
are general Gram matrices in the inner product space $\C^{m \times m}$
with $\langle A, B \rangle := \tr(A^*B)$, so that the set of such
matrices coincides with $\bp_n(\C)$.
Finally, that $(2) \Leftrightarrow (4)$ follows by simultaneously
diagonalizing the blocks $H_{st}$ and proceeding as in the proof of
Theorem \ref{Tmain2}.
\end{proof}
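\noindent A quick numerical illustration of the objects in conditions $(1)$ and
$(3)$ (a Python sketch with random data, not needed for the proof): the block
matrix $(H_s^* H_t)_{s,t}$ is positive semidefinite, and so is its matrix of
blockwise traces, i.e.\ the Gram matrix appearing in $(3)$, which is the content
of de Pillis's theorem for this family.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
Hs = [rng.standard_normal((m, m)) + 1j*rng.standard_normal((m, m)) for _ in range(n)]

B = np.hstack(Hs)                      # B = [H_1 | ... | H_n]
H = B.conj().T @ B                     # the block matrix (H_s^* H_t), PSD
traces = np.array([[np.trace(Hs[s].conj().T @ Hs[t]) for t in range(n)]
                   for s in range(n)])
print(np.linalg.eigvalsh(H).min())         # nonnegative up to rounding
print(np.linalg.eigvalsh(traces).min())    # the Gram matrix tr(H_s^* H_t) is PSD
\end{verbatim}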
Note that when $\beta$ is even or odd, the function $\Psi_{\alpha,
\beta}$ reduces on $\R$ to $\phi_\alpha$ and $\psi_\alpha$ respectively.
Thus the powers $\alpha$ such that $\phi_\alpha[-]$ or $\psi_\alpha[-]$
preserves positivity on $\bp_n(\R)$ in Theorem \ref{TdePillis} are known
(see Theorem \ref{Tcrit}). In the next section, we explore the general
problem of characterizing the values $\alpha, \beta$ for which
$\Psi_{\alpha, \beta}[-]$ preserves Loewner positivity on $\bp_n(\C)$.
\begin{comment}
\begin{theorem}
Fix integers $p \geq 0$ and $m$, and real $\alpha \in [p,p+1)$. Now
define $g_\alpha := \phi_p$ if $\alpha = p$ is odd, or $\psi_p$ if
$\alpha = p$ is even, and $g_\alpha := \phi_\alpha$ or $\psi_\alpha$ if
$\alpha \in (p,p+1)$. Then the following are equivalent:
\begin{enumerate}
\item $n \geq p+3 = \lfloor \alpha \rfloor + 3$.
\item There exists a real symmetric positive semidefinite matrix $A_{n
\times n}$ such that $g_\alpha[A]$ is not positive semidefinite. (In
fact, $A$ can be chosen to be Toeplitz of rank $2$.)
\item There exists a positive semidefinite real symmetric block matrix $H
= [H_{st}]_{s,t=1}^n$ with square diagonal $m \times m$ blocks, such that
the matrix $((g_\alpha[\tr(H_{st})]))_{s,t=1}^n$ is not positive
semidefinite.
\item There exist $m \times m$ real matrices $B_1, \dots, B_n$ such that
the matrix $M := ((g_\alpha(\tr(B_s^T B_t))))_{s,t=1}^n$ is not positive
semidefinite.
\end{enumerate}
\end{theorem}
\noindent In particular, note that finding matrices $B_j$ satisfying the
last assertion (4) is actually a reformulation of the assertion in part
(3).
\begin{proof}
That $(1) \Longleftrightarrow (2)$ was shown in our previous paper
$\spadesuit$. Now suppose (1) (and hence (2)) hold. In order to show (4),
note that every matrix of the form $(\tr(B_s^T B_t))_{s,t=1}^n$ is
positive semidefinite as it is the Gram matrix of the ``vectors" $B_1,
\dots, B_n$ with respect to the (nondegenerate) trace form. Moreover,
every positive semidefinite $n \times n$ matrix (with real entries) can
be obtained as such a Gram matrix. Now choose square matrices $B_s$ such
that the Gram matrix that one obtains in (4) is the counterexample $A$ in
part (2).
Conversely, if (4) holds, then the matrix $A := (( \tr(B_s^T B_t)))$ is a
Gram matrix, hence positive semidefinite - and it provides the necessary
example to prove (2). Thus $(2) \Longleftrightarrow (4)$.
Finally, we claim that $(3) \Longleftrightarrow (4)$. Indeed, if (4)
holds then define the block matrix $X := [B_1 | \dots | B_n]$. Then $H
= X^T X$ provides the necessary (counter)example in (3). Finally, if (3)
holds, to show (4) we use the following argument. Since $H_{N \times
N}$ is positive semidefinite, write $H = X^T X$ for some $X_{N \times
N}$. Now partition $X$ as: $X = [B'_1 | \cdots | B'_n]$ where $B'_s$ and
$H_{ss}$ have the same number of columns for all $1 \leq s \leq n$. Then
$H_{st} = {B'}_s^T B'_t$ for all $s,t$. Finally, let $B_s$ be the $N
\times N$ matrix defined by: $B_s = [B'_s | {\bf 0}]$. It is then easily
verified that $B_s^T B_t$ and $H_{st}$ have the same trace, so that the
matrices $B_1, \dots, B_n$ satisfy (4).
\end{proof}
\end{comment}
\section{Entrywise powers preserving positivity on Hermitian
matrices}\label{Sentrywise}
This section is devoted to proving Theorem \ref{Tnew}. As the proof is
long and intricate, we show the $n=3$ case in Section \ref{Sn=3}, and
then the general case in Section \ref{Sn_general}.
\subsection{Preserving positivity on Hermitian matrices of order
3}\label{Sn=3}
Note that for $n=1,2$, all maps $\Psi_{\alpha,\beta}$ preserve positivity
when applied entrywise to every matrix in $\bp_n(\C)$. In this subsection
we focus on the $n=3$ case. We begin by identifying a smaller sub-family
of matrices which it suffices to consider when verifying whether or not
$\Psi_{\alpha,\beta}$ preserves Loewner positivity.
\begin{lemma}\label{L3x3}
For $j=1,2,3$, suppose $r_j > 0, s_j \geq 0, t_j \in \R, \theta_j, \theta
\in (-\pi,\pi]$, and define ${\bf t} := (t_1, t_2, t_3)$. Now define:
\begin{equation}\label{EAp}
A := \begin{pmatrix}
r_1 & s_3 e^{i \theta_3} & s_2 e^{i \theta_2} \\
s_3 e^{-i \theta_3} & r_2 & s_1 e^{i \theta_1} \\
s_2 e^{-i \theta_2} & s_1 e^{-i \theta_1} & r_3
\end{pmatrix}, \qquad
T({\bf t}, \theta) := \begin{pmatrix}
1 & t_3 & t_2 e^{i\theta} \\
t_3 & 1 & t_1 \\
t_2 e^{-i\theta} & t_1 & 1
\end{pmatrix}.
\end{equation}
\noindent Then the following are equivalent:
\begin{enumerate}
\item $A \in \bp_3(\C)$;
\item $T({\bf t},\theta) \in \bp_3(\C)$, where $t_j := \frac{s_j
\sqrt{r_j}}{\sqrt{r_1 r_2 r_3}}$ for $j = 1,2,3$, and $\theta = \theta_1
+ \theta_3 - \theta_2$.
\item Given $t_j := \frac{s_j \sqrt{r_j}}{\sqrt{r_1 r_2 r_3}}$, we have
$t_j \in [0,1]$ for $j=1,2,3$, and $\det T({\bf t},\theta) = 1 -
\sum_{j=1}^3 t_j^2 + 2 t_1 t_2 t_3 \cos \theta \geq 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
Define $D:= \diag(r_1^{-1/2},r_2^{-1/2},r_3^{-1/2})$. That $(1)
\Leftrightarrow (2)$ follows from the fact that the principal minors of
$T({\bf t},\theta)$ are equal to the corresponding principal minors of
$DAD$, and hence are obtained from the principal minors of $A$ by
rescaling by positive factors. That $(2) \Leftrightarrow (3)$ is obvious.
\end{proof}
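\noindent Lemma \ref{L3x3} can be spot-checked numerically; the Python sketch
below draws random parameters, builds $A$ and $T({\bf t},\theta)$, and compares
their positivity (when every $t_j \leq 1$, the sign of $\det T$ gives the same
verdict).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
r = rng.uniform(0.5, 2.0, 3)              # r_1, r_2, r_3 > 0
s = rng.uniform(0.0, 1.0, 3)              # s_1, s_2, s_3 >= 0
th = rng.uniform(-np.pi, np.pi, 3)        # theta_1, theta_2, theta_3

A = np.array([[r[0],                   s[2]*np.exp(1j*th[2]),  s[1]*np.exp(1j*th[1])],
              [s[2]*np.exp(-1j*th[2]), r[1],                   s[0]*np.exp(1j*th[0])],
              [s[1]*np.exp(-1j*th[1]), s[0]*np.exp(-1j*th[0]), r[2]]])

t = s*np.sqrt(r)/np.sqrt(r.prod())
theta = th[0] + th[2] - th[1]
T = np.array([[1.0,                    t[2],  t[1]*np.exp(1j*theta)],
              [t[2],                   1.0,   t[0]],
              [t[1]*np.exp(-1j*theta), t[0],  1.0]])
detT = 1 - (t**2).sum() + 2*t.prod()*np.cos(theta)
# A and T are positive semidefinite for exactly the same draws (items (1)-(2));
# when each t_j <= 1, this is further equivalent to detT >= 0 (item (3))
print(np.linalg.eigvalsh(A).min(), np.linalg.eigvalsh(T).min(), detT)
\end{verbatim}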
The following corollary to Lemma \ref{L3x3} helps simplify the task of
ascertaining if an entrywise power function $\Psi_{\alpha,\beta}$
preserves Loewner positivity.
\begin{corollary}\label{Csp3}
Let $n \geq 3$, $\alpha \in \R$, and $\beta \in \Z$. Then
$\Psi_{\alpha,\beta}[-]$ preserves positivity on $\bp_3(\C)$ if and only
if $T({\bf t^{\circ \alpha}},\beta \theta) \in \bp_3(\C)$ for every ${\bf
t} \in [0,1]^3$ and $\theta \in (-3\pi, 3\pi)$ such that $\det T({\bf t},
\theta) \geq 0$.
\end{corollary}
\begin{proof}
Clearly $\Psi_{\alpha,\beta}$ preserves positivity on $\bp_2(\C)$, hence
on matrices $A \in \bp_3(\C)$ with at least one zero diagonal entry. For
all other matrices $A \in \bp_3(\C)$, we are now done by Lemma
\ref{L3x3}.
\end{proof}
In order to prove our next result, we recall the notion of a generalized
Dirichlet polynomial.
\begin{definition}
A {\it generalized Dirichlet polynomial} is a function $F : \R \to \R$
of the form
$\displaystyle F(x) = \sum_{j=1}^n a_j t_j^x$,
where $a_j, t_j, x \in \R$ and $t_1 > t_2 > \cdots > t_n > 0$.
\end{definition}
Given a sequence $(a_j)_{j=1}^n$, denote by $S[(a_j)]$ the number of sign
changes in the sequence after discarding all zero terms $a_j$. Also
define $A_j := a_1 + \cdots + a_j$ for all $1 \leq j \leq n$. Then
$S[(A_j)] \leq S[(a_j)]$. We now recall the following classical result
which extends Descartes' Rule of Signs to generalized Dirichlet
polynomials.
\begin{theorem}[Descartes' Rule of Signs,
\cite{Jameson,Laguerre}]\label{Tdescartes}
Suppose $F(x) = \sum_{j=1}^n a_j t_j^x : \R \to \R$ is a generalized
Dirichlet polynomial (with $t_1 > \cdots > t_n > 0$ as above), and $A_j =
a_1 + \cdots + a_j$ for all $j$. Then $F$ has at most $S[(A_j)]$ positive
zeros, and at most $S[(a_j)]$ real zeros.
\end{theorem}
Before we fully classify the entrywise powers which preserve Loewner
positivity on $\bp_3(\C)$, we first show that $\Psi_{\alpha,\beta}$
preserves positivity on $\bp_3(\C)$ if $\alpha \geq \max(1,|\beta|)$. We
also prove that $\Psi_{\alpha,\beta}$ does not preserve positivity on
$\bp_n(\C)$ if $\beta \not\in \Z$. In Section \ref{Sn_general}, we will
prove that $\Psi_{\alpha,\beta}$ does not preserve positivity on
$\bp_n(\C)$ if $\alpha < \max(n-2, |\beta| + 2 \lfloor (\sqrt{8n+1}-5)/2
\rfloor)$, thus completing the classification when $n=3$.
\begin{theorem}\label{Tzhan}
For $n=3$, the entrywise power function $\Psi_{\alpha,\beta}$ preserves
Loewner positivity on $\bp_n(\C)$ if $\beta \in \Z$ and $\alpha \geq
\max(1,|\beta|)$. Moreover, if $\beta \not\in \Z$, then
$\Psi_{\alpha,\beta}$ does not preserve Loewner positivity on
$\bp_n(\C)$.
\end{theorem}
\begin{proof}
Suppose $\beta \in \Z$ and $\alpha \geq \max(|\beta|,1)$. By Corollary
\ref{Csp3}, it suffices to show that $\Psi_{\alpha,\beta}$ preserves
positivity on all matrices $T({\bf t},\theta) \in \bp_3(\C)$ of the form
\eqref{EAp}. Using Lemma \ref{L3x3}, this reduces to showing:
\begin{equation}\label{E3x3}
1 - \sum_{j=1}^3 t_j^2 + 2 t_1 t_2 t_3 \cos \theta \geq 0 \quad \implies
\quad g_\beta(\alpha) := 1 - \sum_{j=1}^3 t_j^{2 \alpha} + 2 (t_1 t_2
t_3)^\alpha \cos(\beta \theta) \geq 0.
\end{equation}
\noindent In \eqref{E3x3} we may assume without loss of generality that
$\beta > 0$. There are now three cases: first, if $t_j = 0$ for some $j$,
then Equation \eqref{E3x3} is easy to show. Next, suppose $t_j$ are all
nonzero and $\max_j t_j = 1$, say $t_1 = 1$. Then $g_1(1) = -t_2^2 -t_3^2
+ 2 t_2 t_3 \cos \theta \geq 0$ if and only if $t_2 = t_3$ and $\cos
\theta = 1$. But then $\theta = 0$ or $\pm 2 \pi$ and \eqref{E3x3} again
follows. The third case is if $t_j \in (0,1)\ \forall j$. In this case we
use Theorem \ref{Tdescartes}: the partial sums of the coefficients are
$1, 0, -1, -2, -2 + 2 \cos(\beta \theta)$, and hence the generalized
Dirichlet polynomial has at most one positive root. First suppose
$\theta$ is not an integer multiple of $2\pi/\beta$. Note that
$g_\beta(0) = 1 -3 + 2\cos(\beta \theta) < 0$. Also, by the Schur product
theorem, $g_\beta(\beta) \geq 0$ since $\beta \in \N$. Thus, the
generalized Dirichlet polynomial $g_\beta$ has a unique root between $0$
and $\beta$.
It follows that $g_\beta(\alpha) \geq 0$ for all $\alpha \geq \beta$,
since $g_\beta(\alpha) \to 1$ as $\alpha \to \infty$. Finally, suppose
$\theta = 2\pi k/\beta$ for some $k \in \Z$. To show \eqref{E3x3}, note
that
\begin{equation}
1 - \sum_{j=1}^3 t_j^2 + 2 t_1 t_2 t_3 \cos \theta \geq 0 \quad \implies
\quad 1 - \sum_{j=1}^3 t_j^2 + 2 t_1 t_2 t_3 \geq 0.
\end{equation}
\noindent This implies that the real matrix $T({\bf t},0)$ as in Equation
\eqref{EAp} is positive semidefinite. Now \eqref{E3x3} follows by
applying Theorem \ref{TFitzHorn} to $T({\bf t},0)$, since $\alpha \geq
1$.
To conclude the proof, we now provide a ``universal'' example of a matrix
$A \in \bp_3(\C)$ such that $\Psi_{\alpha,\beta}[A \oplus {\bf 0}_{(n-3)
\times (n-3)}] \not\in \bp_n(\C)$ whenever $\beta \in \R \setminus \Z$.
Define
\begin{equation}
A = \begin{pmatrix}
1 & e^{2\pi i/3} & e^{-2\pi i/3} \\
e^{-2\pi i/3} & 1 & e^{2\pi i/3} \\
e^{2\pi i/3} & e^{-2\pi i/3} & 1
\end{pmatrix}.
\end{equation}
\noindent Clearly $A \in \bp_3(\C)$, but $\det
\Psi_{\alpha,\beta}[A] = -2 + 2 \cos(2\pi \beta)$, which is negative
precisely when $\beta \not\in \Z$. Thus $\Psi_{\alpha,\beta}$ does not
preserve positivity on $\bp_n(\C)$ when $\beta \not\in \Z$.
\end{proof}
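\noindent Both halves of Theorem \ref{Tzhan} lend themselves to a quick numerical
check (a Python sketch; the first loop samples the reduced form of Corollary
\ref{Csp3}, the second evaluates the ``universal'' determinant
$-2+2\cos(2\pi\beta)$).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
detT = lambda t, th: 1 - (t**2).sum() + 2*t.prod()*np.cos(th)

beta = 3
alpha = max(1, abs(beta))
worst = np.inf
for _ in range(10000):
    t, th = rng.uniform(0, 1, 3), rng.uniform(-3*np.pi, 3*np.pi)
    if detT(t, th) >= 0:                       # T(t, theta) is PSD
        worst = min(worst, detT(t**alpha, beta*th))
print(worst)                                   # remains nonnegative, as in (E3x3)

w = np.exp(2j*np.pi/3)
A = np.array([[1, w, w.conjugate()], [w.conjugate(), 1, w], [w, w.conjugate(), 1]])
for b in [0.5, 1.0, 1.5, 2.0]:
    PsiA = np.exp(1j*b*np.angle(A))            # |entries| = 1, so alpha is immaterial
    print(b, np.linalg.det(PsiA).real)         # = -2 + 2*cos(2*pi*b)
\end{verbatim}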
\subsection{Bounds for arbitrary dimension $n$}\label{Sn_general}
We now prove Theorem \ref{Tnew}, which addresses the case of general $n
\geq 3$. The proof will use the following preliminary result, which
generalizes an idea from FitzGerald and Horn \cite[Theorem
2.2]{FitzHorn}.
\begin{proposition}\label{P01}
Let $\alpha > 1$ and fix an integer $n \geq 3$. Suppose
$\Psi_{\alpha-1,1}[A] \in \bp_{n-1}(\C)$ for all $A \in \bp_{n-1}(\C)$.
Then $\Psi_{\alpha,0}[A] \in \bp_n(\C)$ for all $A \in \bp_n(\C)$.
\end{proposition}
\begin{proof}
Suppose $\Psi_{\alpha-1,1}[A] \in \bp_{n-1}(\C)$ for all $A \in
\bp_{n-1}(\C)$. Fix $z = z_1 + z_2 i,w = w_1 + w_2 i \in \C$, where
$z_1,z_2,w_1,w_2 \in \R$, and denote by $z_\lambda := \lambda z +
(1-\lambda)w$. Then
\begin{align*}
\frac{d}{d\lambda} \Psi_{\alpha,0}(z_\lambda) = &\ \frac{\alpha}{2}
\Psi_{\alpha-2,0}(z_\lambda) \left[2(\lambda z_1 +
(1-\lambda)w_1)(z_1-w_1) + 2(\lambda z_2 +
(1-\lambda)w_2)(z_2-w_2)\right] \notag \\
= &\ \alpha \Psi_{\alpha-2,0}(z_\lambda) \ree(z_\lambda \overline{z-w}) =
\ \alpha \ree(\Psi_{\alpha-2,0}(z_\lambda) z_\lambda \overline{z-w}) \\
&= \alpha \ree(\Psi_{\alpha-1,1}(z_\lambda) \overline{z-w}).
\end{align*}
\noindent We now proceed as in the proof of \cite[Theorem 2.2]{FitzHorn}.
Note that
\begin{equation}
\Psi_{\alpha, 0}(z) = \Psi_{\alpha,0}(w) + \int_0^1 \frac{d}{d\lambda}
\Psi_{\alpha,0}(z_\lambda)\ d\lambda \ = \Psi_{\alpha,0}(w) + \alpha
\int_0^1 \ree(\Psi_{\alpha-1,1}(z_\lambda) \overline{z-w})\ d\lambda.
\label{eqn:horn_int}
\end{equation}
\noindent Now let $A \in \bp_n(\C)$ and let $\zeta := (a_{1n}, a_{2n},
\dots, a_{nn})^T /a_{nn}^{1/2}$ if $a_{nn} \ne 0$ and $\zeta := {\bf
0}_{n \times 1}$ otherwise. By \cite[Lemma 2.1]{FitzHorn}, the matrix $A
- \zeta \zeta^* \in \bp_n(\C)$. Also, note that the entries of the last
row and column of $A - \zeta \zeta^*$ are zero. Applying
\eqref{eqn:horn_int} entrywise, we obtain that
\begin{equation}\label{eqn:horn_int2}
\Psi_{\alpha, 0}[A] = \Psi_{\alpha, 0}[\zeta \zeta^*] + \alpha \int_0^1
\ree\left(\Psi_{\alpha-1,1}[\lambda A + (1-\lambda) \zeta \zeta^*] \circ
\overline{A-\zeta \zeta^*}\right)\ d\lambda.
\end{equation}
\noindent Note that the Schur product $\Psi_{\alpha-1,1}[\lambda A +
(1-\lambda) \zeta \zeta^*] \circ \overline{A-\zeta \zeta^*}$ in the
integrand in Equation \eqref{eqn:horn_int2} is positive semidefinite by
hypothesis and the fact that the last row and column of $A - \zeta
\zeta^*$ are zero. It follows immediately that $\Psi_{\alpha, 0}[A] \in
\bp_n(\C)$. This concludes the proof.
\end{proof}
We now have all the ingredients necessary to prove our last main result.
\begin{proof}[{\bf Proof of Theorem \ref{Tnew}}]\hfill
\noindent {\bf Proof of (1).}
Suppose first that $\beta \in \Z$ and $\alpha \in |\beta| -2 +2\N$, say
$\alpha = |\beta| + 2m$ with $m \geq 0$. Note that $A = (a_{st}) \in
\bp_n(\C)$ if and only if $\overline{A} := (\overline{a_{st}}) \in
\bp_n(\C)$. Then,
\begin{equation}
\Psi_{\alpha, \beta}[A] = \begin{cases}
\Psi_{2m,0}[A] = (A \circ \overline{A})^{\circ m}, & \textrm{ if } \beta
= 0, \\
\Psi_{2m + \beta,\beta}[A] = A^{\circ \beta} \circ (A \circ
\overline{A})^{\circ m}, & \textrm{ if } \beta > 0, \\
\Psi_{2m + |\beta|,\beta}[A] = \overline{A}^{\circ |\beta|} \circ (A
\circ \overline{A})^{\circ m} , & \textrm{ if } \beta < 0.
\end{cases}
\end{equation}
\noindent In all three cases, we obtain that $\Psi_{\alpha,\beta}[A] \in
\bp_n(\C)$ by the Schur product theorem.
Suppose instead $\beta \in \Z$ and $\alpha \geq \max(n-2, |\beta|+2n-6)$.
We claim that in that case, $\Psi_{\alpha,\beta}[-]$ also preserves
Loewner positivity on $\bp_n(\C)$. The proof is by induction on $n \geq
3$. For $n=3$ we are done by Theorem \ref{Tzhan}. Now suppose the
assertion holds for $n-1 \geq 3$. Then $\Psi_{\alpha,1}[-]$ preserves
Loewner positivity on $\bp_{n-1}(\C)$ for $\alpha \geq 2(n-1-3) + 1 =
2n-7$. Hence by Proposition \ref{P01}, $\Psi_{\alpha,0}[-]$ preserves
Loewner positivity on $\bp_n(\C)$ for $\alpha \geq 2n-7 + 1 = 2n-6$. Thus
if $\alpha \geq 2n-6 + |\beta|$ and $A \in \bp_n(\C)$, then
\[ \Psi_{\alpha,|\beta|}[A] = \Psi_{\alpha-|\beta|,0}[A] \circ A^{\circ
|\beta|}, \qquad \Psi_{\alpha,-|\beta|}[A] = \Psi_{\alpha-|\beta|,0}[A]
\circ \overline{A}^{\circ |\beta|}, \]
\noindent and these are both in $\bp_n(\C)$ by the Schur product theorem.
Therefore the claim is proved by induction.
\noindent {\bf Proof of (2).}
If $\beta \not\in \Z$, then Theorem \ref{Tzhan} shows that $\Psi_{\alpha,
\beta}$ does not preserve Loewner positivity on $\bp_n(\C)$. Thus assume
$\beta \in \Z$. If $\alpha < 1$, it is easy to see that $\Psi_{\alpha,
\beta}[-]$ does not preserve positivity on $\bp_n(\C)$ (see Equation
\eqref{Etop3}). It thus remains to prove that $\Psi_{\alpha,\beta}[-]$
does not preserve positivity on $\bp_n(\C)$ if $1 \leq \alpha < |\beta| +
2 \lfloor (\sqrt{8n+1}-5)/2 \rfloor$, but $\alpha - |\beta|$ is not a
nonnegative even integer. To show this statement, first note for each
integer $k \geq 0$ that
\[ \lfloor (\sqrt{8n+1}-5)/2 \rfloor \geq k \qquad \iff \qquad n \geq
\binom{k+3}{2}. \]
\noindent Thus, we first show the assertion for $n = \binom{k+3}{2}$,
from which it immediately follows for all $n > \binom{k+3}{2}$ by padding
with zeros. Moreover, it suffices to show that $\Psi_{\alpha,\beta}[-]$
does not preserve Loewner positivity on $\bp_n(\C)$ when $\alpha \in
(|\beta| + 2k-2, |\beta| + 2k)$, since the smaller values of $\alpha \in
(|\beta|, |\beta| + 2k-2)$ with $\alpha - |\beta|$ not a nonnegative even
integer do not preserve
positivity on $\bp_n(\C)$ by considering lower values of $k$ (and then
padding by zeros).
Thus, suppose $n = \binom{k+3}{2}$ and $\alpha \in (|\beta| + 2k-2,
|\beta| + 2k)$. It suffices to show that $\Psi_{\alpha,\beta}[-]$ does
not preserve positivity on $\bp_n(\C)$. Since $\Psi_{\alpha,-\beta}[A] =
\overline{\Psi_{\alpha,\beta}[A]}$, we may assume $\beta \geq 0$. Now fix
$z \in \C^\times$ and consider the function $f : (-1/|z|,1/|z|) \to \C$,
given by:
\[ f(\epsilon) := \Psi_{\alpha,\beta}(1 + \epsilon z) = (1 + \epsilon
z)^{(\alpha + \beta)/2} (1 + \epsilon \overline{z})^{(\alpha-\beta)/2}.
\]
\noindent Defining $Z(\epsilon) := 1 + \epsilon z$, one has:
\[ \frac{df}{d \epsilon} = \frac{d \Psi_{\alpha,\beta}(Z(\epsilon))}{d
\epsilon} = \frac{\partial \Psi_{\alpha,\beta}}{\partial Z} \frac{dZ}{d
\epsilon} + \frac{\partial \Psi_{\alpha,\beta}}{\partial \overline{Z}}
\frac{d\overline{Z}}{d \epsilon}. \]
\noindent Repeatedly using this formula and the general Leibniz rule, we
obtain for any integer $l \geq 0$:
\begin{align*}
\frac{d^l f}{d \epsilon^l}(0) = &\ \sum_{j=0}^l \binom{l}{j}
\prod_{t=0}^{j-1} \left( \frac{\alpha+\beta}{2} - t \right)
\prod_{t=0}^{l-j-1} \left( \frac{\alpha-\beta}{2} - t \right) \cdot
\left. \frac{z^j \overline{z}^{l-j} f(\epsilon)}{(1 + \epsilon z)^j (1 +
\epsilon \overline{z})^{l-j}} \right|_{\epsilon=0}\\
= &\ \sum_{j=0}^l \binom{l}{j} \Psi_{l,l-2j}(z) \prod_{t=0}^{j-1} \left(
\frac{\alpha+\beta}{2} - t \right) \prod_{t=0}^{l-j-1} \left(
\frac{\alpha-\beta}{2} - t \right).
\end{align*}
\noindent Therefore by Taylor's theorem, as $\epsilon \to 0^+$ we have
\begin{align}
\Psi_{\alpha,\beta}(1 + \epsilon z) = &\ 1 + \sum_{l=1}^{k+1}
\sum_{j=0}^l \frac{c_{l,j} \epsilon^l}{l!} \Psi_{l,l-2j}(z) +
o(\epsilon^{k+2}),\\
\mbox{where} \quad c_{l,j} := &\ \binom{l}{j} \prod_{t=0}^{j-1} \left(
\frac{\alpha+\beta}{2} - t \right) \prod_{t=0}^{l-j-1} \left(
\frac{\alpha-\beta}{2} - t \right)\ \forall 1 \leq l \leq k+1,\ 0 \leq j
\leq l.\notag
\end{align}
\noindent Now consider the family of power functions $S_k := \{
\Psi_{l,l-2j} : 1 \leq l \leq k+1, 0 \leq j \leq l \} \cup \{ K \equiv
1\}$. Note that $S_k$ contains precisely $\binom{k+3}{2}$ functions,
which are linearly independent on $\C^n$ by Lemma \ref{LCmult}. Hence
there exists a vector $u_{k,n} \in \C^n$ such that
\begin{equation}\label{Echoiceofu}
\Psi_{k+1,k+1}[u_{k,n}] \notin {\rm span}_{\C} \{ h[u_{k,n}] : h \in S_k
\setminus \{ \Psi_{k+1,k+1} \} \}.
\end{equation}
\noindent Now define the matrix $A_\epsilon := {\bf 1}_{n \times n} +
\epsilon u_{k,n} u_{k,n}^* \in \bp_n(\C)$. Then,
\[ \Psi_{\alpha,\beta}[A_\epsilon] = {\bf 1}_{n \times n} +
\sum_{l=1}^{k+1} \sum_{j=0}^{l} \frac{c_{l,j} \epsilon^l}{l!} \Psi_{l,l-2j}[u_{k,n}]
\Psi_{l,l-2j}[u_{k,n}]^* + o(\epsilon^{k+2}) C, \]
\noindent where $C_{n \times n}$ is a fixed matrix independent of
$\epsilon$. Moreover, there exists $v_{k,n} \in \C^n$ orthogonal to $\{
h[u_{k,n}] : h \in S_k \setminus \{ \Psi_{k+1,k+1} \} \}$, but not to
$\Psi_{k+1,k+1}[u_{k,n}]$. Now compute:
\begin{align*}
v_{k,n}^* \Psi_{\alpha,\beta}[A_\epsilon] v_{k,n} = &\ \frac{c_{k+1,0}
\epsilon^{k+1}}{(k+1)!} |v_{k,n}^* \Psi_{k+1,k+1}[u_{k,n}]|^2 +
o(\epsilon^{k+2}) v_{k,n}^* C v_{k,n}\\
= &\ \frac{|v_{k,n}^* \Psi_{k+1,k+1}[u_{k,n}]|^2}{2^{k+1}(k+1)!} \cdot
\epsilon^{k+1} \prod_{t=0}^k (\alpha - \beta - 2t) + o(\epsilon^{k+2})
v_{k,n}^* C v_{k,n}.
\end{align*}
\noindent Since $\alpha \in (\beta + 2k-2, \beta+2k)$, the first term
is negative, whence so is the entire expression for sufficiently small
$\epsilon > 0$. This shows that $\Psi_{\alpha,\beta}[-]$ does not
preserve Loewner positivity on $\bp_n(\C)$ if $\alpha \in (\beta+2k-2,
\beta+2k)$, which concludes the proof.
\end{proof}
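\noindent For the smallest nontrivial case of part (2), the obstruction can be
seen numerically: with $k=1$, $n=\binom{4}{2}=6$, $\beta=1$ and $\alpha=2 \in
(|\beta|, |\beta|+2)$, the entrywise map $\Psi_{2,1}$ (with
$\Psi_{\alpha,\beta}(z)=|z|^{\alpha}e^{i\beta\arg z}$ as above) fails to preserve
positivity on $\bp_6(\C)$, although it does preserve positivity on $\bp_3(\C)$ by
Theorem \ref{Tzhan}. The Python sketch below uses a generic random vector $u$;
by the proof, the smallest eigenvalue becomes negative once $\epsilon$ is small
enough.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, alpha, beta = 6, 2.0, 1                  # n = C(4,2); alpha - |beta| = 1 is odd

def Psi(alpha, beta, Z):
    return np.abs(Z)**alpha*np.exp(1j*beta*np.angle(Z))

u = rng.standard_normal(n) + 1j*rng.standard_normal(n)
for eps in [1e-1, 1e-2, 1e-3]:
    A = np.ones((n, n)) + eps*np.outer(u, u.conjugate())       # A is PSD
    print(eps, np.linalg.eigvalsh(Psi(alpha, beta, A)).min())  # < 0 for small eps
\end{verbatim}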
\begin{remark}
Since $n \geq \binom{k+3}{2}$, we observe that the vector $u_{k,n} \in
\C^n$ satisfying \eqref{Echoiceofu} can in fact be chosen to have all its
entries in the complex disc $D(0,R)$ for any fixed $0 < R \leq \infty$.
Indeed, by Lemma \ref{LCmult}, the characters in the set $S_k$ are
linearly independent on $D(0,R)$. Thus there exists $u = u_{k,n} \in
D(0,R)^n$ such that the vectors $\{ h[u] : h \in S_k \}$ are linearly
independent.
\end{remark}
\bibliographystyle{plain}
|
1,314,259,995,635 | arxiv | \section{Introduction}\label{Secintro}
River basin geomorphology is a very old subject of study initiated
by Horton \cite{H45}.
Hack \cite{H57}, studying the
drainage system in the Shenandoah valley and the adjacent mountains of Virginia,
observed a power law relation
\begin{equation}
\label{Hacklaw} l \sim a^{0.6}
\end{equation}
between the length $l$ of a stream from its
source to a divide and the area of the basin $a$ that collects the
precipitation contributing to the stream as tributaries.
Hack also
corroborated this power law
through the data gathered by Langbein \cite{L47} of nearly $400$ different streams
in the northeastern United States. This empirical relation~(\ref
{Hacklaw}) is
widely accepted nowadays
albeit with a different exponent (see Gray \cite{G61},
Muller \cite{M73})
and is called Hack's law. Mandelbrot \cite{M83} mentions Hack's law to
strengthen his contention
that ``\textit{if all rivers as well as their basins are mutually similar},\vspace*{1pt}
\textit{the fractal length-area argument
predicts} (\textit{river}'\textit{s length})$^{1/D}$ \textit{is proportional to}
(\textit{basin}'\textit{s area})$^{1/2} $'' where
$ D > 1 $ is the fractal dimension of the river.
In this connection, it is worth remarking that the Hurst exponent in fractional
Brownian motion and in time series analysis arose from the study of the Nile
basin by Hurst \cite{H27} where he proposed the relation $l_{\perp} =
l_{\parallel}^{0.9}$
as that governing the width, $l_{\perp}$, and the length,
$l_{\parallel}$, of
the smallest rectangular region containing the drainage system.
Various statistical models of drainage networks have been proposed
(see Rodriguez-Iturbe and Rinaldo \cite{RR97} for a detailed survey). In this paper, we study the
tributary structure of
a two-dimensional drainage network called the Howard's model of
headward growth
and branching (see Rodriguez-Iturbe and Rinaldo \cite{RR97}). Our study is based on a scaling of
the process and
we obtain the watershed area of a stream as the area of a Brownian
excursion process. This gives a statistical
explanation of Hack's law and justifies the remark of Giacometti et~al.
\cite{GMRR96}:
``\textit{From the results}, \textit{we suggest that a statistical framework referring
to the scaling invariance of the entire basin structure should be used in
the interpretation of Hack}'\textit{s law}.''
We first present an informal description of the model: suppose that the
vertices of
the $d$-dimensional lattice $ {\mathbb Z}^d$ are open or closed with
probability $ p$ $ (0 < p < 1) $
and $ 1-p$, respectively, independently of all other vertices.
Each open vertex $ \bu\in{\mathbb Z}^d$ represents a water source and
connects to
a unique open vertex $ \bv\in{\mathbb Z}^d$. These edges represent
the channels through which water can flow.
The connecting vertex $ \bv$ is chosen so that the $d$th coordinate of
$ \bv$ is
one more than that of $ \bu$ and $ \bv$ has the minimum
$L_1$ distance from $ \bu$. In case of nonuniqueness of such a
vertex, we choose one of the closest
open vertices with equal probability, independently of everything else.
Let $V $
denote the set of \textit{open} vertices and $ h(\bu) $ denote the
uniquely chosen vertex to
which $ \bu$ connects, as described above. Set $ \langle\bu, h (\bu
) \rangle$ as the edge (channel) connecting
$ \bu$ and $ h(\bu) $.
From the construction, it follows that the random graph,
$ {\mathcal G} = (V, E) $ with edge set $ E:= \{ \langle\bu, h (\bu
) \rangle:
\bu\in V \}$, does not contain any circuit.
This model has been studied by
Gangopadhyay, Roy and Sarkar
\cite{GRS04} and
the following results were obtained.
\begin{theorem}
\label{GRS}
Let $0<p<1$.
\begin{longlist}[(ii)]
\item[(i)] For $d=2$ and $d=3$, ${\mathcal G}$ consists of one
single tree
almost surely,
and for $d\geq4$, ${\mathcal G}$ is a forest consisting of infinitely many
disjoint trees almost surely.
\item[(ii)] For any $d\geq2$, the graph ${\mathcal G}$ contains no
bi-infinite
path almost
surely.
\end{longlist}
\end{theorem}
In this paper, we consider only $d=2$. Before proceeding further, we
present a formal
description for $ d=2$ which will be used later. Fix $0< p < 1$ and let
$\{B_{\bu}: \bu= (\bu(1), \bu(2)) \in{\mathbb Z}^2 \}$ be
an i.i.d. collection of Bernoulli random variables with success
probability $p$. Set $ V = \{ \bu\in{\mathbb Z}^2: B_{\bu} = 1 \} $.
Let $\{U_{\bu}: \bu\in{\mathbb Z}^2 \}$ be
another i.i.d. collection of random variables, independent of the
collection of random variables $ \{ B_{\bu}: \bu\in{\mathbb Z}^2 \}$,
taking values in the set $ \{ 1, -1 \} $, with $ \P( U_{\bu} = 1 ) =
\P( U_{\bu} = -1) = 1/2 $.
For a vertex $ (x,t) \in{\mathbb Z}^2 $, we consider $ k_0 =
\min \{ \llvert k \rrvert: k \in{\mathbb Z}, B_{(x + k, t+1)} = 1 \}
$. Clearly, $ k_0 $ is almost surely finite.
Now, we define
\begin{eqnarray*}
h (x,t):= \cases{ (x+k_0, t+1)\in V, &\quad if $(x -
k_0, t+1) \notin V$,
\cr
(x-k_0, t+1)\in V, &\quad if
$(x+k_0, t+1) \notin V$,
\cr
(x+U_{(x,t)}k_0, t+1)
\in V, &\quad if $(x\pm k_0, t+1) \in V$.}
\end{eqnarray*}
For any $ k \geq0 $, let
\begin{eqnarray*}
h^{k+1} (x, t) &:=& h \bigl( h^{k} (x, t) \bigr)\qquad
\mbox{with } h^0(x,t):= (x,t),
\\
C_k (x,t) &:=& \cases{ \bigl\{ (y, t-k) \in V: h^{k} ( y,
t-k) = (x,t) \bigr\}, &\quad if $(x,t) \in V$,
\cr
\varnothing, &\quad otherwise,}
\\
C(x,t) &:=& \bigcup_{ k \geq0 } C_k (x,t).
\end{eqnarray*}
Here,
$ h^k (x,t) $ represents the ``$k$th generation progeny'' of $ (x,t) $,
the sets $ C_k (x,t) $ and $ C (x,t) $ denote, respectively, the set of $
k$th generation ancestors and the set of all ancestors of $ (x,t) $; $
C(x,t) = \varnothing$ if
$ (x,t) \notin V$. In the
terminology of drainage network, $C (x,t)$ represents the region of
precipitation, the water from which is channelled through the open
point $ (x,t) $ (see Figure~\ref{figCx,t}). From Theorem \ref
{GRS}(ii), we have that $ C (x,t) $ is finite almost surely.
\begin{figure}[t]
\includegraphics{1134f01.eps}
\caption{The bold vertices on the line $y = t-3$ constitute the set
$C_3(x,t)$ and all the bold vertices together constitute the cluster $C(x,t)$.}\label{figCx,t}
\end{figure}
Now, we define
\[
L(x,t):= \inf\bigl\{ k \geq0: C_k (x,t) = \varnothing\bigr\},
\]
as the ``length of the channel,'' which as earlier is finite almost surely.
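\noindent For readers who prefer to experiment, the following Python sketch
simulates the model on a finite window (so vertices near the boundary, and
channels longer than the window, are treated only approximately) and computes
the ancestor sets $C_k(0,0)$ and the channel length $L(0,0)$; the tie-breaking
variables $U$ are drawn on the fly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p, width, depth = 0.5, 200, 60                 # window [-width, width] x [-depth, 0]
xs = np.arange(-width, width + 1)
open_site = {t: set(xs[rng.random(xs.size) < p].tolist())
             for t in range(-depth, 1)}

def h(x, t):
    # the vertex h(x, t) on level t + 1, ties broken uniformly at random
    k = 0
    while True:
        right, left = (x + k) in open_site[t + 1], (x - k) in open_site[t + 1]
        if right and not left:
            return (x + k, t + 1)
        if left and not right:
            return (x - k, t + 1)
        if left and right:
            return (x + int(rng.choice([-k, k])), t + 1)
        k += 1

C = [{(0, 0)} if 0 in open_site[0] else set()]   # C_0(0,0)
while C[-1] and len(C) <= depth:
    k = len(C)
    C.append({(x, -k) for x in open_site[-k] if h(x, -k) in C[-1]})
print("L(0,0) =", len(C) - 1)
print("widths D_k:", [max(x for x, _ in Ck) - min(x for x, _ in Ck) for Ck in C if Ck])
\end{verbatim}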
We observe that for any $(x,t)\in{\mathbb Z}\times{\mathbb Z}$,
$L(x,t) \geq0$ and the distribution
of $L(x,t)$ does not depend upon $(x,t)$. Our first
result is about the length of the channel. We remark here that
Newman, Ravishankar and Sun \cite{NRS05} has a similar result in a set-up which allows crossing of paths.
\begin{theorem}
\label{clusterheight}
We have
\[
\lim_{ n \to\infty} \sqrt{ n } \mathbb{P} \bigl( L(0,0) > n \bigr) =
\frac{
1 }{
\gamma_0 \sqrt{\pi} },
\]
where $ \gamma_0^2:= \gamma_0^2 (p) = \frac{ (1-p)(2 - 2p +p^2) }{
p^2 ( 2-p)^2} $.
\end{theorem}
Next, we define
\begin{eqnarray*}
r_k (x,t) &:=& \cases{ \max\bigl\{ u: (u,t-k) \in C_k
(x,t) \bigr\}, &\quad if $0 \leq k < L(x,t)$, $(x,t) \in V$,
\cr
0, &\quad
otherwise,}
\\
l_k (x,t) &:=& \cases{ \min\bigl\{ u: (u,t-k) \in C_k
(x,t) \bigr\}, &\quad if $0 \leq k < L(x,t)$, $(x,t) \in V$,
\cr
0, &\quad
otherwise,}
\\
D_k(x,t) &:=& r_k (x,t) - l_k (x,t).
\end{eqnarray*}
The quantity $ D_k (x,t)$ denotes the \textit{width}
of the set of all $k$th generation ancestors of $ (x,t) $.
We define the \textit{width process} $
D_n^{(x,t)}(s) $ and the \textit{cluster process} $K_n^{(x,t)}(s)$
for $ s
\geq0$ as
follows: for $ k = 0, 1, \dots$ and $ k/ n
\leq s \leq
(k+1)/n $,
\begin{eqnarray}
\label{eqWidthClusterProcess} D_n^{(x,t)} (s) &: =& \frac{ D_k (x,t) }{ \gamma_0 \sqrt{n} } +
\frac{ (ns - [ns]) }{ \gamma_0 \sqrt{n} } \bigl( D_{k+1} (x,t) - D_k (x,t) \bigr),
\nonumber\\[-8pt]\\[-8pt]\nonumber
K_n^{(x,t)} (s) &: =& \frac{\# C_k (x,t) }{ \gamma_0 \sqrt{n} } + \frac{ (ns - [ns]) }{ \gamma_0 \sqrt{n} }
\bigl( \#C_{k+1} (x,t) - \#C_k (x,t) \bigr),
\end{eqnarray}
where $ \gamma_0 > 0 $ is as in the statement of Theorem \ref
{clusterheight}. In other words, $ D_n^{(x,t)} (s) $ [resp., $
K_n^{(x,t)} (s)$]
is defined to be $ D_k (x,t) / (\gamma_0 \sqrt{n} ) $ [resp., $\# C_k (x,t)
/ (\gamma_0 \sqrt{n})$] at the time points $ s =
k/n $ and, at all other time points, by linear interpolation.
The distributions of both $D_n^{(x,t)}$ and $K_n^{(x,t)}$ are
independent of
$(x,t)$.
To describe our results, we need to introduce two processes, Brownian meander
and Brownian excursion, studied by
Durrett, Iglehart and Miller
\cite{DIM77}. Let $ \{ W(s): s \geq0 \} $ be a standard Brownian
motion with $ W(0) = 0$. Let
$\tau_1:= \sup\{s \leq1: W(s)=0\}$ and $\tau_2:= \inf\{s \geq1:
W(s)=0\}$. Note that $\tau_1 < 1$ and $\tau_2 > 1$ almost surely.
The standard Brownian meander, $W^{+}(s)$, and the standard Brownian
excursion, $W^{+}_{0}(s)$, are given by
\begin{eqnarray}
\label{eqBMBE} W^{+}(s) &:=& \frac{\llvert W(\tau_1 + s(1-\tau_1))\rrvert }{\sqrt{1 - \tau
_1}}, \qquad s \in[0,1],
\\
W^{+}_{0}(s) &:=& \frac{\llvert W(\tau_1 + s(\tau_2-\tau_1))\rrvert }{\sqrt{\tau
_2-\tau_1}}, \qquad s\in[0,1].
\end{eqnarray}
Both of these processes are continuous nonhomogeneous Markov processes
(see Durrett and Iglehart \cite{DI77} and references therein).
Further, $W^{+}(0) = 0$ and, for $x \geq0$, $\P(W^{+}(1) \leq x) = 1
- \exp(-x^2/2)$, that is,
$W^{+}(1)$ follows a Rayleigh distribution.
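The constructions in \eqref{eqBMBE} are straightforward to simulate by replacing
$W$ with a diffusively rescaled simple random walk; the Python sketch below
(purely illustrative) extracts one sample each of the meander and the excursion,
together with the maximum and the area of the excursion, which reappear below as
$M^+_0$ and $I^+_0$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, T = 100000, 4                              # N steps per unit time, horizon T
while True:
    S = np.concatenate(([0], np.cumsum(rng.choice([-1, 1], size=T*N))))
    zeros = np.flatnonzero(S == 0)
    if zeros.max() > N and S[N] != 0:         # need a zero of W after time 1
        break
W, t = S/np.sqrt(N), np.arange(T*N + 1)/N
tau1 = t[zeros[zeros < N].max()]              # last zero of W before time 1
tau2 = t[zeros[zeros > N].min()]              # first zero of W after time 1

u = np.linspace(0, 1, 2000)
meander   = np.abs(np.interp(tau1 + u*(1 - tau1),    t, W))/np.sqrt(1 - tau1)
excursion = np.abs(np.interp(tau1 + u*(tau2 - tau1), t, W))/np.sqrt(tau2 - tau1)
print(meander[-1])                        # one sample of W^+(1), Rayleigh distributed
print(excursion.max(), excursion.mean())  # samples of the max and area of the excursion
\end{verbatim}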
We also need some random variables obtained as functionals of these two
processes. In particular, let
\begin{eqnarray*}
I^{+}_{0}&:=& \int_{0}^{1}W^{+}_{0}(t)\,dt
\quad\mbox{and}\quad M^{+}_{0}:= \max\bigl
\{W^{+}_{0}(t): t\in[0,1]\bigr\}.
\end{eqnarray*}
Louchard and Janson \cite{JL07} showed that, as $x \to\infty$, the upper tail
probability and the density satisfy
\begin{eqnarray*}
\P\bigl(I^{+}_0 &>& x
\bigr) \sim\frac{6 \sqrt{6}}{\sqrt{\pi}} x\exp{\bigl(-6x^2\bigr)} \quad\mbox{and}
\quad f_{I^+_0}(x) \sim\frac{72 \sqrt{6}}{\sqrt
{\pi}} x^2\exp{
\bigl(-6x^2\bigr)}.
\end{eqnarray*}
The random variable $M^{+}_0$ is continuous, having a strictly positive
density on $ (0, \infty)$ (see Durrett and Iglehart \cite{DI77}) and for $x > 0$,
\begin{eqnarray*}
\hspace*{-3pt}&&\P\bigl(M^{+}_0 \leq x
\bigr) = 1 + 2\sum^{\infty}_{k=1} \exp {
\bigl(-(2kx)^2/2\bigr)}\bigl[1 - (2kx)^2\bigr]\qquad
\mbox{with }\E\bigl(M^{+}_0\bigr) = \sqrt {\pi/2}.
\end{eqnarray*}
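As a consistency check on the series just quoted, one can evaluate it numerically
and recover the stated mean (a Python sketch; the truncation level is chosen so
that the discarded terms are negligible for each $x$):
\begin{verbatim}
import numpy as np

def F_max(x):
    # truncated series for P(M_0^+ <= x), x > 0
    k = np.arange(1, int(30/x) + 2)
    return 1 + 2*np.sum((1 - (2*k*x)**2)*np.exp(-(2*k*x)**2/2))

xs = np.linspace(0.01, 6, 2000)
tail = np.array([1 - F_max(x) for x in xs])
print((tail*(xs[1] - xs[0])).sum(), np.sqrt(np.pi/2))
# E(M_0^+) computed as the integral of the upper tail; the two printed numbers
# agree up to the discretization of the grid
\end{verbatim}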
For $f \in C[0,\infty)$, let $f\mid _{[0,1]}$ denote the restriction of
$f$ over $[0,1]$.
Our next result is about the weak convergence of the width process
$D^{(0,0)}_n\mid_{[0,1]}$ and
the cluster process $K^{(0,0)}_n\mid_{[0,1]}$ under diffusive scaling.
Here and subsequently, as is commonly used in statistics, we use the
notation $X\mid Y$ to denote the conditional random variable $X$
given $Y$.
\begin{theorem}
\label{BM}
As $n\rightarrow\infty$, we have:
\begin{longlist}[(ii)]
\item[(i)] $ D^{(0,0)}_n\mid _{[0,1]} \mid {\mathbf{1}}_{\{ L(0,0) > n
\}} \weak \sqrt{2} W^{+} $,
\item[(ii)] $\sup\{\llvert pD_n^{(0,0)}(s) - K_n^{(0,0)}(s)\rrvert: s \in[0,1]\}
\mid {\mathbf{1}}_{\{ L(0,0) > n \}} \prob 0$.
\end{longlist}
\end{theorem}
The following corollary is an immediate consequence of Theorem \ref{BM}.
\begin{cor}
For $ u > 0 $, as $n \rightarrow\infty$ we have:
\begin{longlist}[(ii)]
\item[(i)]
$ \sqrt{ n} \mathbb{P} ( \# C_n (0,0) > \sqrt{n} \gamma_0 u )
\to \frac{ 1 }{ \gamma_0 \sqrt{\pi} } \exp( - u^2/4 p^2 )$,
\item[(ii)] $\mathbb{P} ( \sum_{k=0}^{n}\# C_k (0,0) > n^{3/2} \gamma_0 u
\mid L(0,0) > n) \to \mathbb{P}(p\sqrt{2}I^{+} > u)$.
\end{longlist}
\end{cor}
Before we proceed to state Theorem \ref{BEarea}, we recall some
results regarding random vectors whose distribution functions have
regularly varying tails (see Resnick~\cite{R07}, page~172). A random vector $Z$
on $(0, \infty)^d$ with a distribution function $F$ has a regularly
varying tail if, as $n \to\infty$, there exists a sequence $b_n \to
\infty$ such that $n \P\{Z/b_n \in\cdot\} \stackrel{v}{\to} \nu
(\cdot)$ for some $\nu\in M_{+}$ where $M_{+}:= \{\mu: \mu$
is a nonnegative Radon measure on $(0, \infty)^d\}$. Here, $\stackrel
{v}{\to} $ denotes vague convergence.
It is in this context that Theorem \ref{BEarea} obtains a regularly
varying tail for the distribution of $(L(x,t),
(\#C(x,t))^{2/3})$, which justifies that the exponent of Hack's law is
$2/3$ for
Howard's model. In addition, we obtain a scaling law, with a Hack
exponent of $1/2$, for the length of the stream, vis-\`a-vis the maximum
width of the region of
precipitation, that is
\begin{equation}
\label{eqndefMaxWidth} D_{\max}(0,0):= \max\bigl\{D_{k}(0,0):0\leq k
< L(0,0)\bigr\}.
\end{equation}
It should be noted that Leopold and Langbein
\cite{L62} obtained an exponent of $0.64$ through
computer simulations.
\begin{theorem}
\label{BEarea}
Let ${\mathbf E}:= [0,\infty) \times[0,\infty) \setminus\{(0,0)\}
$. There exist measures
$\mu$ and $\nu$ on the Borel $\sigma$-algebra on ${\mathbf E}$
such that for any Borel set $B \subseteq\mathbf E $
we have
\begin{eqnarray}
\label{eqnHack} \sqrt{n} \P \biggl[ \frac{ ( L(0,0), (\#C(0,0))^{2/3}
)}{n} \in B \biggr] &\to&\mu(B),
\\
\label{eqnMaxExp} \sqrt{n} \P \biggl[ \frac{ ( L(0,0), (D_{\max}(0,0))^{1/2}
)}{n} \in B \biggr] &\to&\nu(B),
\end{eqnarray}
with $\mu$ and $\nu$ being given by
\begin{eqnarray*}
\mu(B) &=& \int\!\!\int_B \frac{3\sqrt{v}}{4 \sqrt{2\pi}\gamma
_0^2pt^3}f_{I^+_0}
\biggl(\frac{v^{3/2}}{
\gamma_0 p\sqrt{2t^3}}\biggr)\,dv \,dt,
\\
\nu(B) &=& \int\!\!\int_B \frac{v}{\sqrt{2\pi}\gamma
_0^2pt^2}f_{M^+_0}
\biggl(\frac{v^2}{\gamma_0 p\sqrt{2t}}\biggr) \,dv \,dt
\end{eqnarray*}
where $f_{I^+_0}$ and $f_{M^+_0}$ denote the density functions of
$I^+_0$ and
$M^+_0$, respectively.
Moreover, for $\lambda, \tau> 0$, we have
\begin{eqnarray}\label{eqnTrivialCasesHack}
&& \sqrt{n} \P \biggl[ \frac{ ( L(0,0), (\#C(0,0))^{\alpha}
)}{n} \in(\tau, \infty) \times(
\lambda, \infty) \biggr]
\nonumber\\[-8pt]\\[-8pt]\nonumber
&&\qquad = \cases{ 0, &\quad if $\alpha< \displaystyle\frac{2}{3}$,
\vspace*{4pt}\cr
\displaystyle\frac{1}{\sqrt{\pi\tau\gamma^2_0}}, &\quad if $\alpha> \displaystyle\frac{2}{3}$}
\end{eqnarray}
and
\begin{eqnarray}
\label{eqnTrivialCasesHurst}
&& \sqrt{n} \P \biggl[ \frac{ ( L(0,0), (D_{\max} (0,0))^{\alpha
} )}{n} \in(\tau, \infty) \times(
\lambda, \infty) \biggr]
\nonumber\\[-8pt]\\[-8pt]\nonumber
&&\qquad = \cases{ 0, &\quad if $\displaystyle \alpha< \frac{1}{2}$,
\vspace*{4pt}\cr
\displaystyle \frac{1}{\sqrt{\pi\tau\gamma^2_0}}, &\quad if $\displaystyle\alpha> \frac{1}{2}$.}
\end{eqnarray}
\end{theorem}
The estimates of the densities $ f_{I^+_0} $ and $ f_{M^+_0} $ imply
that $ \mu$ and $\nu$ are
finite measures on ${\mathbf E}$.
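The dichotomy in (\ref{eqnTrivialCasesHack}) and (\ref{eqnTrivialCasesHurst})
can be read off heuristically from the scaling exponents; we record this only as
a consistency check and not as part of the proof. On the event $\{L(0,0) > n\tau\}$
the cluster size $\#C(0,0)$ is of order $n^{3/2}$ and $D_{\max}(0,0)$ is of order
$n^{1/2}$, so for $\alpha > 2/3$ (resp., $\alpha > 1/2$) the constraint on the
second coordinate is satisfied asymptotically and the limit reduces to
\[
\lim_{n\to\infty} \sqrt{n}\, \P\bigl(L(0,0) > n\tau\bigr) = \frac{1}{\sqrt{\pi\tau\gamma^2_0}}
\]
by Theorem \ref{clusterheight}, whereas for $\alpha < 2/3$ (resp., $\alpha < 1/2$)
the constraint on the second coordinate forces an event of probability $o(n^{-1/2})$.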
An immediate consequence of the above theorem is the following.
\begin{cor}
\label{corBEarea}
As $n\rightarrow\infty$ for $ u > 0 $, we have:
\begin{longlist}[(ii)]
\item[(i)]
$\sqrt{n}\mathbb{P} ( \# C (0,0) > \sqrt{2n^3}\gamma_0pu )
\to\frac{1}{2\sqrt{\pi}\gamma_0}\int_{0}^{ \infty} t^{- \sfrac
{3}{2}} \wb{F}_{I^{+}_{0}}
(ut^{- \sfrac{3}{2}}) \,dt $,\vspace*{2pt}
\item[(ii)] $\sqrt{n}\mathbb{P} ( D_{\max}(0,0) > \sqrt{2n}\gamma
_0pu )
\to\frac{1}{2\sqrt{\pi}\gamma_0}\int_{0}^{\infty} t^{ - \sfrac
{3}{2}} \wb{F}_{M^{+}_{0}}
(ut^{- \sfrac{1}{2}}) \,dt $,
\end{longlist}
where $F_{I^{+}_{0}}$ and $F_{M^{+}_{0}}$ are the distribution
functions of
$I^{+}_{0}$ and $M^{+}_{0}$, respectively, and $\wb{F}_{I^{+}_{0}}:=
1 - F_{I^{+}_{0}}$,
$\wb{F}_{M^{+}_{0}}:= 1 - F_{M^{+}_{0}}$.
\end{cor}
The proofs of the above theorems are based on a scaling of the process.
In the next section, we define a dual graph and show that as processes,
under a suitable scaling, the original and the dual processes
converge jointly to the Brownian web and its dual in distribution (the
double Brownian web). This
invariance principle is used in Sections~\ref{ClusterThm} and \ref{BMBEarea}
to prove the theorems.
In this connection, it is worth noting that in Proposition \ref
{propDualwedgealternate} we provide
an alternate characterization of the dual of the Brownian web
which is of independent interest.
This characterization is well suited to proving the joint convergence of
a family of coalescing noncrossing paths and its dual to the double Brownian
web, and it is used in
Theorem \ref{theoremGRSDual-DobleBW} to achieve the required convergence.
We should mention here that the Brownian web appears as a universal
scaling limit
for various network models (see Fontes~et al. \cite{FINR04},
Ferrari, Fontes and Wu \cite{FFW05},
Coletti, Fontes and Dias \cite{CFD09}).
It is reasonable to expect that, with suitable modifications, our method
will give similar results for other network models. Our results
hold for any network model which admits a dual and for which (i)
the conditions listed in Remark \ref{remDualGraph} are satisfied,
(ii) the scaled model and its dual converge weakly to the double
Brownian web (see Section~\ref{SecDual}) and (iii) a certain sequence
of counting random variables is uniformly integrable (see Lemma \ref
{lemUI}). In this sense, our result can be considered a
universality class result.
\section{Dual process and the double Brownian web}\label{SecDual}
\subsection{Dual process}\label{SubSecDualProcess}
For the graph $\mathcal G$, we now describe a dual process such that
the set of ancestors $ C(x,t)$
(defined in the previous section) of a vertex $ (x,t) \in V$ is bounded
by two dual paths.
The dependency inherent in the graph $\mathcal G$ implies that,
although the cluster is bounded by two dual paths,
these paths are \textit{not given by independent random walks}.
The dual vertices are precisely the mid-points between consecutive
open vertices
on each horizontal line $\{y = n\}$, $n \in{\mathbb Z}$, with each
dual vertex having a unique offspring dual vertex in the negative
direction of the $y$-axis.
Before giving a formal definition, we direct the attention of the
reader to Figure~\ref{GRSDual}.
\begin{figure}[b]
\includegraphics{1134f02.eps}
\caption{The black points are open vertices, the gray points are the
vertices of the dual
process and the gray (dashed) paths are the dual paths.}\label{GRSDual}
\end{figure}
For $ \bu \in{\mathbb Z}^2$, we define
\begin{eqnarray}
\label{DualIncrement} J^{+}_{\bu} &:=& \inf\bigl\{k: k \geq1,
\bigl( \bu(1)+k, \bu(2) \bigr) \in V\bigr\},
\nonumber\\[-8pt]\\[-8pt]\nonumber
J^{-}_{\bu} &:=& \inf\bigl\{k: k\geq1, \bigl( \bu(1) -k,
\bu(2)\bigr) \in V\bigr\}.
\end{eqnarray}
Next, we define $r(\bu):= ( \bu(1)+ J^{+}_{\bu},\bu(2))$ and $l(\bu
):= (\bu(1) - J^{-}_{\bu},\bu(2))$, as the first
open point to the right (\textit{open right neighbour}) and the first open
point to the left (\textit{open left neighbour})
of $\bu$, respectively.
For $(x,t)\in V$, let $\hat{r}(x,t):= (x + J^{+}_{(x,t)}/2,t)$
and $\hat{l}(x,t):= (x -J^{-}_{(x,t)}/2,t)$
denote,\vspace*{1pt} respectively, the right dual neighbour and the left dual
neighbour of $(x,t)$ in the dual vertex set.
Finally, the dual vertex set is given by
\[
\wh{V}:= \bigl\{\hat{r}(x,t),\hat{l}(x,t):(x,t) \in V \bigr\}.
\]
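For instance (purely as an illustration of the notation), if $(x,t) \in V$ and
the nearest open vertices to its right and left on the line $\{y = t\}$ are
$(x+3,t)$ and $(x-2,t)$, then $J^{+}_{(x,t)} = 3$, $J^{-}_{(x,t)} = 2$,
$\hat{r}(x,t) = (x + 3/2, t)$ and $\hat{l}(x,t) = (x - 1, t)$.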
For a vertex $ (u,s)\in\wh{V} $, let $ (v,s-1) \in\wh{V}
$ be such that the straight line segment
joining $ (u,s)$ and $(v,s-1)$ does not cross any edge in $ {\mathcal
G}$. The dual edges are edges joining all
such $ (u,s)$ and $ (v,s-1) $. Formally, for $(u,s)\in\wh{V}$,
we define
\begin{eqnarray}
\label{eqndefalar} a^{l} (u,s) &:=& \sup\bigl\{ z: (z,s-1)\in V,
h(z,s-1) (1) < u \bigr\},
\nonumber\\[-8pt]\\[-8pt]\nonumber
a^{r} (u,s) &:=& \inf\bigl\{z: (z,s-1) \in V, h(z,s-1) (1) > u \bigr\}
\end{eqnarray}
and set $ \hat{h} ( u, s):= ( (a^{l} (u,s)+a^{r} (u,s))/2, s- 1)
$. Note that $ ( a^{r} (u,s), s-1) $ and
$ (a^{l} (u,s), s-1) $ are the nearest vertices in $V$ to the right and
left, respectively,
of the dual vertex $ \hat{h} ( u, s) $.
Finally, the edge set of the dual graph $\widehat{{\mathcal G}}:=
(\wh{V}, \widehat{E})$
is given by
\[
\widehat{E}:= \bigl\{ \bigl\langle(u,s), \hat{h}(u,s) \bigr\rangle: (u,s)
\in \wh{V}\bigr\}.
\]
\begin{remark}
\label{remDualGraph}
Note that the vertex set of the dual graph is a subset of $\frac
{1}{2}{\mathbb Z} \times{\mathbb Z}$.
Before we proceed, we list some properties of the graph $\mathcal G$
and its dual $\widehat{{\mathcal G}}$.
\begin{longlist}[(2)]
\item[(1)] ${\mathcal G}$ uniquely
specifies the dual graph $\widehat{{\mathcal G}}$ and
the dual edges do not intersect the original edges. The construction
ensures that
$\widehat{{\mathcal G}} $ does not contain any circuit.
\item[(2)] For $ (x,t) \in V$, the cluster $C(x,t)$ is enclosed
within the dual paths starting from $\hat{r}(x,t)$ and $\hat{l}(x,t)$.
The boundedness of $C(x,t)$ for every $(x,t)\in V$ implies that
these two dual paths coalesce, thus $\widehat{{\mathcal G}} $ is a
single tree.
\item[(3)] Since paths starting from any two open vertices in the
original graph
coalesce and the dual edges do not cross the original edges, there is
no bi-infinite
path in $\widehat{{\mathcal G}} $.
\end{longlist}
\end{remark}
We now obtain a Markov process from the dual paths.
Fix $(u,s)\in\wh{V}$ and for $k\geq1$, set $\hat{h}^k(u,s)
:= \hat{h}(\hat{h}^{k-1}(u,s))$
where $ \hat{h}^0(u,s):= (u,s)$. Letting $ \widehat
{X}^{(u,s)}_{k}$ denote the first coordinate of $\hat{h}^k(u,s)$,
it may be observed that
$\widehat{X}^{(u,s)}_{k+1}$ is a function of
$ \widehat{X}^{(u,s)}_{k}$ and the collection of random variables
$ \{ (B_{\bu}, U_{\bu}): \bu\in{\mathbb Z}^2, \bu(2) = s -k-1 \} $.
Thus, by the
random mapping representation (see, e.g.,
Levin, Peres, and Wilmer
\cite{LPW08}) we have the following.
\begin{prop}
\label{DualMarkov}
For $(u,s)\in\wh{V}$, the process $\{\widehat{X}^{(u,s)}_k:
k\geq0\}$ is a
time homogeneous Markov process.
\end{prop}
Before we proceed, we make the following observations about the
transition probabilities of the
Markov process. Let $ G $ be a geometric random variable taking values
in $ \{1, 2, \dotsc\}$, that is,
$ \P( G = l ) = p (1-p)^{l-1} $ for $ l \geq1 $. For any $\bu\in
{\mathbb Z} \times{\mathbb Z}$, the random
variables $J^{+}_{\bu}$ and $J^{-}_{\bu}$ are i.i.d.\vspace*{1pt} copies of the
geometric random variable $ G$
independent of $ B_{ \bu} $.
Further, if $ \bu_1, \bu_2 \in{\mathbb Z}^2 $ are such that $ \bu
_1(1) \geq\bu_2 (1) - 1 $ and $ \bu_1 (2)
= \bu_2 (2) $, the random variables $J^{+}_{\bu_1}$
and $ J^{-}_{\bu_2}$ are also independent. Now, for $ u \notin
{\mathbb Z} $ and $ v \in{\mathbb Z}/2 $, we have
\begin{eqnarray}
\label{eqntranprobnonint} \P\bigl( \widehat{X}^{(u,s)}_1 -
\widehat{X}^{(u,s)}_0 = v \mid \widehat{X}^{(u,s)}_0
= u \bigr) & =& \P\bigl( J^{+}_{(u-1/2,s-1)} - J^{-}_{(u+1/2,s-1)}
= 2v \bigr)
\nonumber\\[-8pt]\\[-8pt]\nonumber
& =& \P( G_1 - G_2 = 2v ),
\end{eqnarray}
where $ G_1 $ and $ G_2 $ are i.i.d. copies of $G$ defined above. If $
u \in{\mathbb Z} $ and $ v \in{\mathbb Z}/2 $,
we have, using the same notation,
\begin{eqnarray}
\label{eqntranprobint}
&& \P\bigl( \widehat{X}^{(u,s)}_1 -
\widehat{X}^{(u,s)}_0 = v \mid \widehat{X}^{(u,s)}_0
= u \bigr)
\nonumber\\[-8pt]\\[-8pt]\nonumber
&&\qquad
= (1-p) \P(
G_1 - G_2 = 2v ) + p \P(G = 2v)/2 + p \P(G = - 2v)/2,
\end{eqnarray}
where $ G_1 $ and $ G_2 $ are as above. Thus, the transition
probabilities of $\widehat{X}^{(u,s)}_k$ depend on
whether the present state is an integer or not.
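As an aside, recorded here only as a check, the increment distribution admits a simple
closed form: for $m \geq 0$,
\[
\P( G_1 - G_2 = m ) = \sum_{l \geq 1} p(1-p)^{l-1}\, p (1-p)^{l+m-1} = \frac{p(1-p)^{m}}{2-p},
\]
and by symmetry $\P( G_1 - G_2 = m ) = p(1-p)^{|m|}/(2-p)$ for all $m \in {\mathbb Z}$.
In particular, the increment in (\ref{eqntranprobnonint}) has mean zero; in
(\ref{eqntranprobint}) the terms $p \P(G = 2v)/2$ and $p \P(G = -2v)/2$ balance as well,
so the mean of the increment is again zero. This is the computation behind
Proposition \ref{propMartingale} below.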
From equations (\ref{eqntranprobnonint}) and (\ref
{eqntranprobint}), we state the following.
\begin{prop}
\label{propMartingale}
For any $(u,s) \in\wh{V}$, $ \{ \widehat{X}^{(u,s)}_k: k \geq0
\}$
is an $L^2$-martingale with respect to the filtration $ {\mathcal F}_k
:= \sigma
( \{ B_{\bu}, U_{ \bu}: \bu\in{\mathbb Z}^2, \bu(2) \geq s - k \})$.
\end{prop}
\subsection{Dual Brownian web}\label{SubSecDualBW}
In this section, we briefly describe
the dual Brownian web $\widehat{{\mathcal W}}$ associated with
${\mathcal W}$ and present an
alternate characterization of it.
The Brownian web
(studied extensively by Arratia \cite{A79,A81},
T{\'o}th and Werner \cite{TW98},
Fontes~et al. \cite{FINR04})
may be viewed as a collection
of one-dimensional coalescing Brownian motions starting from every
point in the space time
plane $\R^2$.
We recall relevant details from Fontes~et al. \cite{FINR04}.
Let $\R^{2}_c$ denote the completion of the space time plane $\R^2$ with
respect to the metric
\[
\rho\bigl((x_1,t_1),(x_2,t_2)
\bigr):= \bigl\llvert \tanh(t_1)-\tanh(t_2)\bigr\rrvert
\vee\biggl\llvert \frac{\tanh(x_1)}{1+\llvert t_1\rrvert } -\frac{\tanh(x_2)}{1+\llvert t_2\rrvert } \biggr\rrvert.
\]
As a topological space $\R^{2}_c$ can be identified with the
continuous image of $[-\infty,\infty]^2$ under a map that identifies
the line
$[-\infty,\infty]\times\{\infty\}$ with the point $(\ast,\infty
)$, and the line
$[-\infty,\infty]\times\{-\infty\}$ with the point $(\ast,-\infty)$.
A path $\pi$ in $\R^{2}_c$ with starting time $\sigma_{\pi}\in
[-\infty,\infty]$
is a mapping $\pi:[\sigma_{\pi},\infty]\rightarrow[-\infty,\infty
] \cup\{ \ast\}$ such that
$\pi(\infty)= \ast$ and, when $\sigma_\pi= -\infty$, $\pi
(-\infty)= \ast$.
Also $t \mapsto(\pi(t),t)$ is a continuous
map from $[\sigma_{\pi},\infty]$ to $(\R^{2}_c,\rho)$.
We then define $\Pi$ to be the space of all paths in $\R^{2}_c$ with
all possible starting times in $[-\infty,\infty]$.
The following metric, for $\pi_1,\pi_2\in\Pi$
\begin{eqnarray*}
d_{\Pi} (\pi_1,\pi_2)&:=& \max \biggl\{\bigl
\llvert \tanh(\sigma_{\pi
_1})-\tanh(\sigma_{\pi_2})\bigr\rrvert,
\\
&&{}\sup_{t\geq\sigma_{\pi_1}\wedge
\sigma_{\pi_2}} \biggl\llvert \frac{\tanh(\pi_1(t\vee\sigma_{\pi
_1}))}{1+\llvert t\rrvert }-
\frac{
\tanh(\pi_2(t\vee\sigma_{\pi_2}))}{1+\llvert t\rrvert }\biggr\rrvert \biggr\}
\end{eqnarray*}
makes $\Pi$ a complete, separable metric space.
\begin{remark}
\label{remMetricConv}
Convergence in this metric can be described as locally uniform
convergence of paths as
well as convergence of starting times. Therefore, for any $ \varepsilon>
0$ and $m>0$,
we can choose $ \varepsilon_1 ( = f ( \varepsilon, m)) > 0 $ such that for
$ \pi_1, \pi_2
\in\Pi$ with $\{(\pi_i(t),t):t\in[\sigma_{\pi_i},m]\}\subseteq
[-m,m]\times[-m,m]$ for $i=1,2$,
$ d_{\Pi} ( \pi_1, \pi_2 ) < \varepsilon_1 $ implies that $ \Vert ( \pi
_1 (\sigma_{\pi_1}),
\sigma_{\pi_1} ) - ( \pi_2 (\sigma_{\pi_2}), \sigma_{\pi_2} )
\Vert _2 < \varepsilon$ and $ \sup\{
\llvert \pi_1 (t) - \pi_2 (t) \rrvert: t \in[\max\{\sigma_{\pi_1},\sigma
_{\pi_2}\},m] \} < \varepsilon$.
We will use this observation several times later.
\end{remark}
Let ${\mathcal H}$ be the space of compact subsets of $(\Pi,d_{\Pi})$
equipped with
the Hausdorff metric $d_{{\mathcal H}}$.
The Brownian web ${\mathcal W}$ is a random
variable taking values in the complete separable metric space
$({\mathcal H},d_{{\mathcal H}})$.
Before introducing the dual Brownian web, we require a similar metric space
on the collection of backward paths.
As in the definition of $\Pi$, let $ \wh{\Pi}$ be the
collection of all paths $ \hat{\pi}$
with starting time $\sigma_{\hat{\pi}} \in[-\infty,\infty]$
such that
$\hat{\pi}: [-\infty, \sigma_{\hat{\pi}}] \to [-\infty
,\infty] \cup\{\ast\}$ with
$\hat{\pi} (-\infty)= \ast$ and, when $\sigma_{\hat{\pi
}} = +\infty$, $\hat{\pi}(\infty)= \ast$.
As earlier $t \mapsto(\hat{\pi}(t),t)$ is a continuous
map from $[-\infty, \sigma_{\hat{\pi}} ]$ to $(\R^{2}_c,\rho)$.
We equip $ \wh{\Pi}$ with the metric
\begin{eqnarray*}
d_{\wh{\Pi}} (\hat{\pi}_1,\hat{\pi}_2)&:=& \max \biggl\{\bigl\llvert \tanh(\sigma_{\hat{\pi}_1})-\tanh(
\sigma_{\hat{\pi}_2})\bigr\rrvert,
\\
&&{}\sup_{t\leq\sigma_{\hat{\pi}_1}\vee
\sigma_{\hat{\pi}_2}} \biggl\llvert \frac{\tanh(\hat{\pi
}_1(t\wedge\sigma_{\hat{\pi}_1}))}{1+\llvert t\rrvert }-
\frac{\tanh
(\hat{\pi}_2(t\wedge\sigma_{\hat{\pi}_2}))}{1+\llvert t\rrvert }\biggr\rrvert \biggr\}
\end{eqnarray*}
making $(\wh{\Pi}, d_{\wh{\Pi}})$ a complete, separable
metric space.
The complete separable metric space of compact sets of paths of
$\wh{\Pi}$ is denoted
by $(\widehat{{\mathcal H}}, d_{\widehat{{\mathcal H}}})$, where
$d_{\widehat{{\mathcal H}}}$
is the Hausdorff metric on $\widehat{{\mathcal H}}$, and let $
{\mathcal B}_{\widehat{{\mathcal H}}}$
be the corresponding Borel $\sigma$ field.
\subsection{Properties of $(\mathcal{W},\widehat{\mathcal{W}})$}
The Brownian web and its dual $( {\mathcal W},\widehat{{\mathcal W}})$
is a $({\mathcal H}\times
\widehat{{\mathcal H}}, {\mathcal B}_{{\mathcal H}}\times{\mathcal
B}_{\widehat{{\mathcal H}}})$ valued
random variable such that ${\mathcal W}$ and $\widehat{{\mathcal W}}$
uniquely determine
each other almost surely, with $\widehat{{\mathcal W}}$ distributed as $-{\mathcal W}$,
the Brownian web rotated 180\tsup{o} about the origin. The interaction
between the paths
in ${\mathcal W}$ and $\widehat{\mathcal W}$ is that of Skorohod
reflection (see Soucaliuc, T{\'o}th and Werner
\cite{STW00}).
We introduce some notation to study the sets $ \{ \pi(t+s): \pi\in
{\mathcal W}, \sigma_{\pi} \leq t \} $ and
$ \{ \hat{\pi} (t - s): \hat{\pi} \in\widehat{\mathcal
W}, \sigma_{\hat{\pi}} \geq t \} $.
For a $({\mathcal H},B_{{\mathcal H}})$ valued random variable $K$ and
$t\in\R$,
let $K^{t-}:= \{\pi:\pi\in K$ and $\sigma_{\pi}\leq t\}$.
Similarly, for
a $(\widehat{{\mathcal H}},B_{\widehat{{\mathcal H}}})$ valued random
variable $\widehat{K}$
and $t\in\R$, let $\widehat{K}^{t+}:= \{\hat{\pi}: \hat{\pi}
\in\widehat{K}$ and $\sigma_{\hat{\pi}} \geq t\}$.
For $t_1, t_2 \in\R$, $t_2 > t_1$ and a $({\mathcal H},B_{{\mathcal
H}})$ valued random variable $K$,
define
\begin{eqnarray}
\label{eqnDefNK} {\mathcal M}_{K}(t_1,t_2) &:=& \bigl\{\pi(t_2):\pi\in K^{t_1 -}, \pi (t_2)
\in[0,1]\bigr\};
\nonumber\\[-8pt]\\[-8pt]\nonumber
\xi_{K}(t_1,t_2) &:=&\#{\mathcal
M}_{K}(t_1,t_2),
\end{eqnarray}
that is, $\xi_{K}(t_1,t_2)$ denotes
the number of distinct points in $[0,1]\times\{t_2\}$ which lie on some
path in $K^{t_1-}$.
We note that for $t > 0$, ${\mathcal M}_{\mathcal W}(t_0,t_0 + t) =
{\mathcal N}_{\mathcal W}(t_0,t;0,1)$
as defined in Sun and Swart \cite{SS08}.
It is known that for all $t > 0$
the random variable $\xi_{{\mathcal W}}(t_0,t_0 + t)$ is finite almost surely
(see $(E_1)$ in Theorem 1.3 in Sun and Swart \cite{SS08}) with
\begin{equation}
\label{expEta} \E\bigl(\xi_{{\mathcal W}}(t_0,t_0 + t)
\bigr) = \frac{1}{\sqrt{\pi t}}.
\end{equation}
Moreover, from the known properties of $ ({\mathcal W},\widehat
{{\mathcal W}})$ the proof of the following proposition is
straightforward (for details, see Roy, Saha and Sarkar \cite{RSS15}).
\begin{prop}
\label{lemPropEtaPtset}
For any $t_0 < t_1$, almost surely we have:
\begin{longlist}[(iii)]
\item[(i)] ${\mathcal M}_{{\mathcal W}}(t_0,t_1) \cap\Q= \varnothing$;
\item[(ii)] each point in ${\mathcal M}_{{\mathcal W}}(t_0,t_1)$ is of
type $(1,1)$;
\item[(iii)] for each $x \in{\mathcal M}_{{\mathcal W}}(t_0,t_1)$,
there exists $\pi_1,
\pi_2 \in{\mathcal W}$ with $\sigma_{\pi_1} < t_0$, $\sigma_{\pi
_2} > t_0$ and $\pi_1(t_1)
= \pi_2(t_1)= x$;
\item[(iv)] for each $x \in{\mathcal M}_{{\mathcal W}}(t_0,t_1)$,
there exist exactly
two paths $\hat{\pi}^{(x,t_1)}_r$ and $\hat{\pi
}^{(x,t_1)}_l$ in $\widehat{{\mathcal W}}$
starting from $(x,t_1)$ with $\hat{\pi}^{(x,t_1)}_r(t) > \hat{\pi}^{(x,t_1)}_l(t)$ for all $t \in[t_0,t_1)$.
\end{longlist}
\end{prop}
There are several ways to construct
$\widehat{{\mathcal W}}$ from ${\mathcal W}$. In this paper, we follow
the \textit{wedge characterization}
provided by Sun and Swart \cite{SS08}.
For $\pi^r,\pi^l \in{\mathcal W}$ with coalescing time $t^{\pi
^r,\pi^l}$
and $\pi^r(\max\{\sigma_{\pi^r},\sigma_{\pi^l}\})>
\pi^l(\max\{\sigma_{\pi^r},\sigma_{\pi^l}\})$,
the wedge with right boundary $\pi^r$ and left boundary $\pi^l$
is an open set in $\R^2$ given by
\begin{eqnarray}\label{defwedgenoncpt}
A &=& A\bigl(\pi^r, \pi^l\bigr)
\nonumber\\[-8pt]\\[-8pt]\nonumber
&:=& \bigl
\{(y,s):\max\{\sigma_{\pi^l},\sigma _{\pi^r}\} < s <
t^{\pi^r,\pi^l}, \pi^l(s) < y <\pi^r(s) \bigr\}.
\end{eqnarray}
A path $\hat{\pi} \in\wh{\Pi}$ is said to \textit{enter
the wedge
$A$ from outside} if there exist $t_1$ and $t_2$ with $ \sigma
_{\hat{\pi}} > t_1 > t_2$ such that
$(\hat{\pi}(t_1), t_1) \notin\wb{A}$ and $ (\hat{\pi
}(t_2), t_2) \in A$.
From Theorem 1.9 in Sun and Swart \cite{SS08}, it follows that the dual Brownian
web $\widehat{\mathcal W}$ associated with
the Brownian web ${\mathcal W}$ satisfies the following wedge characterization.
\begin{theorem}
\label{theoremDualwedge}
Let $({\mathcal W}, \widehat{{\mathcal W}})$ be a Brownian web and its
dual. Then almost surely
\[
\widehat{{\mathcal W}} = \{\hat{\pi}: \hat{\pi} \in \wh{\Pi}
\mbox{ and does not enter any wedge in } {\mathcal W} \mbox{ from outside} \}.
\]
\end{theorem}
Because of Theorem \ref{theoremDualwedge}, for a $({\mathcal H}\times
\widehat{\mathcal H}, {\mathcal B}_{{\mathcal H}}
\times{\mathcal B}_{\widehat{\mathcal H}})$ valued random variable
$({\mathcal W}, {\mathcal Z})$ to show
that ${\mathcal Z}= \widehat{\mathcal W}$, it suffices to check that
${\mathcal Z}$ satisfies the wedge condition. Here we present an
alternate condition which
is easier to check.
\begin{prop}
\label{propDualwedgealternate}
Let $({\mathcal W}, {\mathcal Z})$ be a $({\mathcal H}\times\widehat
{\mathcal H}, {\mathcal B}_{{\mathcal H}}
\times{\mathcal B}_{\widehat{\mathcal H}})$ valued random variable
such that:
\begin{longlist}[(2)]
\item[(1)] for any deterministic $ (x,t) \in\R^2$, there exists a
path $\hat{\pi}^{(x,t)}
\in{\mathcal Z}$ starting at $(x,t)$ and going backward in time almost surely;
\item[(2)] paths in ${\mathcal Z}$ do not cross paths in ${\mathcal
W}$ almost surely, that is, almost surely there
do not exist $\pi\in{\mathcal W}$, $\hat{\pi} \in
{\mathcal Z}$ and $t_1,t_2 \in
(\sigma_{\pi}, \sigma_{\hat{\pi}})$ such that $(\hat{\pi
}(t_1)-\pi(t_1))
(\hat{\pi}(t_2)-\pi(t_2))< 0$;
\item[(3)] paths in ${\mathcal Z}$ and paths in ${\mathcal W}$ do
not coincide over any time interval
almost surely, that is, almost surely there do not exist $ \pi\in{\mathcal W}$,
$ \hat{\pi} \in{\mathcal Z}$ and $ t_1
< t_2 $ with $ \sigma_{\pi}
\leq t_1 < t_2 \leq\sigma_{\hat{\pi}} $
such that $ \hat{\pi}(t) = {\pi}(t) $ for all $ t \in[t_1,t_2]$.
\end{longlist}
Then ${\mathcal Z} = \widehat{{\mathcal W}}$ almost surely.
\end{prop}
\begin{pf}
From conditions (2) and (3), we have that $\hat{\pi} \in
{\mathcal Z}$ does not enter any wedge in ${\mathcal W}$ from outside.
Hence, ${\mathcal Z} \subseteq\widehat{\mathcal W}$.
The argument for $\widehat{\mathcal W} \subseteq{\mathcal Z}$ follows from
the \textit{fish-trap} technique introduced in the proof of\vspace*{2pt} Lemma 4.7
of Sun and Swart \cite{SS08}.
It shows that
$\widehat{\mathcal W} \subseteq\widetilde{\mathcal Z}$ almost surely for any
$({\mathcal H}, {\mathcal B}_{{\mathcal H}})$ valued random variable
$\widetilde{\mathcal Z}$ satisfying
(i) paths in $\widetilde{\mathcal Z}$ do not\vspace*{1pt} cross paths in ${\mathcal W}$
and (ii) for any deterministic
countable dense set, there exist paths in $\widetilde{\mathcal Z}$
starting from every point of that dense set (for details, see Roy, Saha and Sarkar \cite{RSS15}).
\end{pf}
\subsection{Convergence to the double Brownian web}
For any\vspace*{1pt} $(x,t) \in V$, the path $\pi^{(x,t)}$ in the random graph
$\mathcal G$ is obtained as the
piecewise linear function $\pi^{(x,t)}: [t, \infty) \to\mathbb R$
with $\pi^{(x,t)}(t+k) = h^{k}(x,t)(1)$ for every $k \geq0$ and
$\pi^{(x,t)}$ being linear in the interval $[t+k,t+k + 1]$.
Similarly, for $(x,t)\in\wh{V}$,
the dual path $\hat{\pi}^{(x,t)}$ is the piecewise linear
function $\hat{\pi}^{(x,t)}: (-\infty, t] \to\mathbb R$
with $\hat{\pi}^{(x,t)}(t-k) = \hat{h}^{k}(x,t)(1)$ for
every $k \geq0$ and
$\hat{\pi}^{(x,t)}$ being linear in the interval $[t-k-1,t-k]$.
Let ${\mathcal X}:= \{\pi^{(x,t)}:(x,t) \in V\}$ and
$\widehat{{\mathcal X}}:= \{\hat{\pi}^{(x,t)}: (x,t)\in
\wh{V}\}$ be the collections of all paths and dual paths
admitted by $\mathcal G$ and $\widehat{\mathcal G}$, respectively.
For a given $ \gamma> 0$ and a path $\pi$ with starting time $
\sigma_{\pi}$, the scaled path
$ \pi_n(\gamma): [ \sigma_{\pi}/n, \infty] \to
[-\infty, \infty]$ is given by $\pi_n(\gamma) (t)=
\pi(n t)/ (\sqrt{n} \gamma)$ for each $n \geq1$. Thus, the starting
time of the scaled path $\pi_n(\gamma)$ is
$\sigma_{\pi_n(\gamma) }= \sigma_{\pi}/ n $.
Similarly, for the backward path $ \hat{\pi} $, the scaled
version is
$ \hat{\pi}_n(\gamma): [ -\infty, \sigma_{\hat{\pi}}/n]
\to
[-\infty, \infty]$ given by $\hat{\pi}_n(\gamma) (t)=
\hat{\pi}(n t)/ (\sqrt{n} \gamma)$ for each $n \geq1$.
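For instance, for $\pi = \pi^{(x,0)} \in {\mathcal X}$ we have $\sigma_{\pi_n(\gamma)} = 0$ and
$\pi_n(\gamma)(k/n) = h^{k}(x,0)(1)/(\sqrt{n}\gamma)$ for every integer $k \geq 0$;
time is thus contracted by a factor $n$ and space by a factor $\sqrt{n}\gamma$
(the usual diffusive scaling).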
For each $ n \geq1 $, let $ {\mathcal X}_n = {\mathcal X}_n(\gamma):=
\{\pi_n^{(x,t)}(\gamma):(x,t) \in V\}$ and $\widehat{{\mathcal X}}_n
= \widehat{{\mathcal X}}_n(\gamma):=
\{ \hat{\pi}_n^{(x,t)}(\gamma):(x,t) \in\wh{V} \}$ be the
collections of all the $n$th order diffusively scaled paths and dual paths, respectively.
The\vspace*{1pt} closure $\wb{\mathcal X}_n(\gamma)$ of
${\mathcal X}_n(\gamma)$ in $(\Pi,d_{\Pi})$ and the closure
$\wb{\widehat{{\mathcal X}}}_n(\gamma)$
of $\wh{\mathcal X}_n(\gamma)$ in $(\wh{\Pi}, d_{\widehat
{\Pi}})$ are
$({\mathcal H},{\mathcal B}_{{\mathcal H}})$ and $ (\widehat{{\mathcal
H}}, {\mathcal B}_{\widehat{{\mathcal H}}})$
valued random variables, respectively. Coletti, Fontes and Dias \cite{CFD09} showed the following.
\begin{theorem}
\label{theoremGRS-BW}
For $\gamma_0:= \gamma_0 (p)$ as in Theorem \ref{clusterheight}, as
$n\rightarrow\infty$,
$\wb{{\mathcal X}}_n(\gamma_0)$ converges weakly to the
standard Brownian web ${\mathcal W}$.
\end{theorem}
Our main result is the joint invariance principle
for $\{(\wb{\mathcal X}_n(\gamma_0),\wb{\widehat
{{\mathcal X}}}_n(\gamma_0)): n \geq1\}$ considered as $({\mathcal
H} \times\widehat{{\mathcal H}}, {\mathcal B}_{{\mathcal H}} \times
{\mathcal B}_{\widehat{{\mathcal H}}})$ valued random variables.
\begin{theorem}
\label{theoremGRSDual-DobleBW}
$\{ (\wb{{\mathcal X}}_n(\gamma_0), \wb{\widehat{{\mathcal X}}}_n(\gamma_0)): n \geq1\}$ converges weakly
to $({\mathcal W}, \widehat{{\mathcal W}})$ as $n\rightarrow\infty$.
\end{theorem}
We require the following results to prove Theorem \ref
{theoremGRSDual-DobleBW}. We say that
$ \{ \widehat{W}^{(x,t)}(u): u \leq t \}$ is a Brownian motion going
\textit{back in time} if
$ \widehat{W}^{(x,t)}(t-s): = W(t+s), s \geq0$ where $\{W(u): u
\geq t\}$ is a
Brownian motion with $W(t) = x$.
\begin{prop}
\label{propDual-Bmotion}
For any deterministic point $(x,t)\in\R^2$, there exists a sequence
of paths $\hat{\theta}^{(x,t)}_n \in\widehat{{\mathcal
X}}_n(\gamma_0)$
which converges in distribution to $ \widehat{W}^{(x,t)}$.
\end{prop}
\begin{pf}
For any $(x,t) \in\R^2$ fix $ t_n
= \lfloor n t \rfloor$ and $ x_n = \max\{ \lfloor\sqrt{n} \gamma_0
x \rfloor+ j: j \leq0,
( \lfloor\sqrt{n} \gamma_0 x \rfloor+ j, t_n ) \in\wh{V} \}
$. Let $ \hat{\theta}^{(x,t)}_n \in
\widehat{{\mathcal X}}_n (\gamma_0) $ be the scaling of the path
\mbox{$\hat{\pi}^{(x_n,t_n)} \in\widehat{{\mathcal X}} $}.
Since $ {\mathcal G}$ is invariant under translation by lattice points
and $\widehat{\mathcal G}$ is uniquely
determined by ${\mathcal G}$, the conditional distribution of $ \{
(x_n,t_n) + \hat{h}^j(0,0): j \geq0 \} $ given
$ (0,0) \in\wh{V} $ is the same as that of $ \{ \hat{h}^j(x_n,t_n ): j \geq0 \} $.
We observe that
$ ( x_n /( \sqrt{n} \gamma_0), t_n / n ) \to(x,t)$ as $n \to\infty
$ almost surely. Hence, it suffices to prove that the scaled
dual path starting from $ (0,0)$ given $ (0,0)\in\wh{V}$
converges in distribution to $ \widehat{W}^{(0,0)}$.
From Proposition \ref{propMartingale}, we see that $ \widehat
{X}_j^{(0,0)} = \hat{h}^j(0,0) (1) $ is
an $L^2$ martingale with respect to the filtration $ \sigma( \{
B_{(z,s)}, U_{ (z,s)}: z \in{\mathbb Z}, s \geq - k \})$.
Let
\[
\eta_n ( u ):= s_n^{-1} \bigl[
\widehat{X}_j^{(0,0)} + \bigl(\widehat {X}_{j+1}^{(0,0)}
- \widehat{X}_j^{(0,0)} \bigr) \bigl(u s_{n}^2
- s_j^2\bigr) / \bigl( s_{j+1}^2 -
s_j^2\bigr) \bigr]
\]
for\vspace*{2pt} $ u \in[0,\infty) $ and $ s_j^2 \leq u s_n^2 < s_{j+1}^2 $,
where $ s_n^2 = \sum_{j=1}^n \E( (\widehat{X}_{j}^{(0,0)} - \widehat
{X}_{j-1}^{(0,0)} )^2 ) $.
We know that $ \eta_n $ converges in distribution
to a standard Brownian motion (see Theorem~3 of~\cite{B71}). Since $ s_n^2 / (n \gamma_0^2) \to1 $,
it can be seen that $ \sup_{u \in[0,M]} \llvert \eta_n (u) - \hat{\theta}_n^{(0,0)} (-u) \rrvert \to0 $ in probability
for any $ M > 0 $.
So by Slutsky's theorem, we conclude that
$\hat{\theta}_n^{(0,0)}$ converges in distribution
to a standard Brownian motion going backward in time.
\end{pf}
The next result helps in estimating the probability that a forward path
and a dual path
stay close to each other over a time interval.
Given $m \in\N$ and $\varepsilon, \delta>0$, we define the event
\begin{eqnarray*}
B^{\varepsilon}_n &= & B^{\varepsilon}_n(\delta, m)
\\
&:=& \bigl\{ \mbox {there exist }\pi_1^n,
\pi_2^n, \pi_3^n \in{\mathcal
X}_n \mbox{ such that } \sigma_{\pi_1^n},
\sigma_{\pi_2^n} \leq 0,
\\
&&{} \sigma_{\pi_3^n} \leq\lfloor n\delta\rfloor/n, \pi_1^n(0)
\in [-m,m], \bigl\llvert \pi_1^n(0) -
\pi_2^n(0)\bigr\rrvert < \varepsilon,\mbox{ with}
\\
&&{} \pi_1^n\bigl(\lfloor n\delta\rfloor/n\bigr) \neq
\pi_2^n\bigl(\lfloor n\delta \rfloor/n\bigr)
\mbox{ and } \bigl\llvert \pi_1^n\bigl( \lfloor n
\delta\rfloor/n\bigr) - \pi _3^n\bigl( \lfloor n\delta
\rfloor/n\bigr)\bigr\rrvert < \varepsilon,\mbox{ with}
\\
&&{}\pi_1^n\bigl( 2 \lfloor n\delta\rfloor/n\bigr) \neq
\pi_3^n\bigl( 2 \lfloor n\delta\rfloor/n\bigr) \bigr\}.
\end{eqnarray*}
\begin{lemma}
\label{lemmacoalescenceGRS}
For any $m \in\N$ and $\varepsilon, \delta> 0$, we have
\[
\P\bigl( B^{\varepsilon}_n (\delta, m) \bigr) \leq C_1
(\delta, m) \varepsilon,
\]
where $ C_1 (\delta, m) $ is a positive constant, depending only on $
\delta$ and $ m $.
\end{lemma}
\begin{pf}
Let $ D^{\varepsilon}_n $ be the unscaled version of the event $
B^{\varepsilon}_n $, that is,
\begin{eqnarray*}
D^{\varepsilon}_n &:=& \bigl\{ \mbox{there exist } (x,0), (y,0),
\bigl(z, \lfloor n\delta\rfloor\bigr) \in V \mbox{ such that}
\\
&&{} x \in[ - m \sqrt{n} \gamma_0, m \sqrt{n} \gamma_0 ],
\llvert x - y \rrvert < \sqrt{n} \varepsilon\gamma_0\mbox{ and }
h^{\lfloor n\delta \rfloor} (x, 0) \neq h^{\lfloor n\delta\rfloor} (y, 0),
\\
&&{} \bigl\llvert h^{\lfloor n\delta\rfloor} (x, 0) (1) - z \bigr\rrvert < \sqrt{n} \varepsilon
\gamma_0, h^{2 \lfloor n\delta\rfloor} (x, 0) \neq h^{\lfloor n\delta\rfloor} \bigl(z,
\lfloor n\delta\rfloor \bigr) \bigr\}.
\end{eqnarray*}
\begin{figure}
\includegraphics{1134f03.eps}
\caption{The vertices $(l,0)$ and $(l+1,0)$ and the corresponding
vertex $(k, \lfloor n\delta\rfloor)$ as required in the proof of
Lemma \protect\ref{lemmacoalescenceGRS}.}\label{lem211}
\end{figure}
On the event $D^{\varepsilon}_n$ there exists $l \in[- m \sqrt{n}
\gamma_0, m \sqrt{n} \gamma_0]
\cap{\mathbb Z}$ such that the unscaled paths starting from $(l,0)$
and $(l+1, 0)$
(as in Figure~\ref{lem211}) do not meet in time $\lfloor n\delta
\rfloor$---an event which occurs with probability at most $C_2/\sqrt{n \delta}$
for some constant $C_2 > 0$ (see Theorem 4 of Coletti, Fontes and Dias \cite{CFD09}).
Supposing $h^{\lfloor n\delta\rfloor} (l, 0)(1) = k$, the event that two unscaled
paths, one starting from a vertex at distance $\lfloor\sqrt{n} \varepsilon\gamma_0
\rfloor$ to the left of~$k$ and the other starting from a
vertex at distance $\lfloor\sqrt{n} \varepsilon\gamma_0 \rfloor$ to the
right of $k$, do not meet in time $\lfloor n\delta\rfloor$ has
probability at most $2 C_2 \sqrt{n} \varepsilon\gamma_0 /\sqrt{n \delta
}$ for all $k \in{\mathbb Z}$.
Thus, summing over all possibilities of $l$ and $k$ and using the Markov
property, we have
\begin{eqnarray*}
\P\bigl(D^{\varepsilon}_n\bigr) &\leq&\P\Biggl(\bigcup
_{l= - 2m \sqrt{n} \gamma_0}^{2m
\sqrt{n} \gamma_0} \bigcup_{k \in{\mathbb Z}}
\bigl\{ h^{\lfloor n\delta
\rfloor} (l, 0) (1) = k \neq h^{\lfloor n\delta\rfloor} (l+1, 0) (1) \mbox{ and}
\\
&&{} h^{\lfloor n\delta\rfloor} \bigl(k - \lfloor\sqrt{n} \varepsilon
\gamma_0 \rfloor, \lfloor n\delta\rfloor\bigr) \neq h^{\lfloor n\delta
\rfloor}
\bigl(k + \lfloor\sqrt{n} \varepsilon\gamma_0 \rfloor, \lfloor n\delta
\rfloor\bigr)\bigr\}\Biggr)
\\
&\leq&\sum_{l= - 2m \sqrt{n} \gamma_0}^{ 2m \sqrt{n} \gamma_0} \frac{2 C_2 \sqrt{n} \varepsilon\gamma_0 }{ \sqrt{n \delta}}
\sum_{k \in{\mathbb Z}} \P\bigl\{ h^{\lfloor n\delta\rfloor} (l, 0) (1) = k
\neq h^{\lfloor
n\delta\rfloor} (l+1, 0) (1)\bigr\}
\\
&\leq&\sum_{l= - 2m \sqrt{n} \gamma_0}^{ 2m \sqrt{n} \gamma_0} \frac{2 C_2 \sqrt{n} \varepsilon\gamma_0 }{ \sqrt{n \delta}} \P
\bigl\{ h^{\lfloor n\delta\rfloor} (l, 0) (1) \neq h^{\lfloor n\delta\rfloor
} (l+1, 0) (1)\bigr\}
\\
&\leq& \sum_{l= - 2m \sqrt{n} \gamma_0}^{ 2m \sqrt{n} \gamma_0} \frac{2 C_2 \sqrt{n} \varepsilon\gamma_0 }{ \sqrt{n \delta}}
\frac
{C_2}{\sqrt{n \delta}}
\\
&\leq& C_1 (\delta, m) \varepsilon.
\end{eqnarray*}\upqed
\end{pf}
\begin{pf*}{Proof of Theorem \ref{theoremGRSDual-DobleBW}}
Since $\widehat{{\mathcal X}}$ consists of noncrossing paths only, Proposition
\ref{propDual-Bmotion} implies the tightness of the family $\{\wb
{\widehat{{\mathcal X}}}_n: n\geq1\}$
(see Proposition B.2 in the Appendix of Fontes~et al. \cite{FINR04}).
The joint family $\{(\wb{{\mathcal X}}_n,\wb{\widehat{{\mathcal X}}}_n): n\geq1\}$
is tight since
each of the two marginal families
is tight. To prove Theorem \ref{theoremGRSDual-DobleBW}, it suffices
to show that for any
subsequential limit $({\mathcal W}, {\mathcal Z})$ of $\{(\wb
{{\mathcal X}}_n,\wb{\widehat{{\mathcal X}}}_n): n\geq1\}$,
the random variable ${\mathcal Z}$ satisfies the conditions given in
Proposition \ref{propDualwedgealternate}.
Consider a convergent subsequence of $\{(\wb{\mathcal X}_n,
\wb{\widehat{{\mathcal X}}}_n): n\geq1\}$ such that $({\mathcal
W},{\mathcal Z})$ is its weak limit
and by Skorohod's representation theorem, we may
assume that the convergence happens almost surely.
For ease of notation, we continue to index the convergent subsequence by $n$.
From Proposition \ref{propDual-Bmotion}, it follows that for any deterministic
$(x,t) \in\R^2$ there exists a path $\hat{\pi} \in{\mathcal
Z}$ starting at $(x,t)$
going backward in time almost surely.
Since $(\wb{\mathcal X}_n,
\wb{\widehat{{\mathcal X}}}_n)$ converges to $({\mathcal
W},{\mathcal Z})$ almost surely, if a dual path
in ${\mathcal Z}$ crosses a path in $\mathcal W$, there exists a dual
path in
$\wh{\mathcal X}_n$ which crosses a path in ${\mathcal X}_n$, for
some $n \geq1$,
yielding a contradiction.
Hence, the paths in ${\mathcal Z}$ do not cross paths in $\mathcal W$
almost surely
(for details, see Roy, Saha and Sarkar \cite{RSS15}).
Now, to prove that condition (3) in Proposition \ref
{propDualwedgealternate} is satisfied,
we define the following event: for $ \delta> 0 $ and positive integer
$ m \geq1 $, let
\begin{eqnarray*}
A (\delta, m ) &:=& \bigl\{\mbox{there exist paths } \pi\in {\mathcal W}
\mbox{ and }\hat{\pi} \in{\mathcal Z} \mbox{ with }
\sigma_{\pi},\sigma_{\hat{\pi}} \in (-m,m),
\\
&&{}\mbox{and there exists } t_0 \mbox{ such that }
\sigma_{\pi
} < t_0 < t_0 + \delta<
\sigma_{\hat{\pi}},
\\
&&{}\mbox{and } {-}m < \pi(t) = \hat{\pi}(t) < m \mbox { for all } t
\in[t_0, t_0+\delta] \bigr\}.
\end{eqnarray*}
It is enough to show that for any fixed $ \delta> 0 $ and for $m \geq1$,
we have $ \P ( A (\delta, m ) ) = 0 $.
We present here the idea of the proof; more details are available in
Roy, Saha and Sarkar \cite{RSS15}.
Fix $\varepsilon> 0$. Since we are in a setup where the scaled paths
converge almost surely,
for all large $n$ there exist $\pi^n_1 \in{\mathcal X}_n$ and
$\hat{\pi}^n
\in\wh{\mathcal X}_n$ within $\varepsilon$ distance of $\pi$ and
$\hat{\pi}$, respectively.
Using the fact that a dual vertex lies in the middle of two open
vertices and the forward paths
cannot cross the dual paths, it follows that for all large $n$
there exist $\pi^n_2, \pi^n_3 \in{\mathcal X}_n$
such that:
\begin{longlist}[(a)]
\item[(a)] $\max\{\llvert \pi^n_1(\sigma_{\pi^n_2})-\pi^n_2(\sigma_{\pi^n_2})\rrvert,
\llvert \pi^n_1(\sigma_{\pi^n_3})-\pi^n_3(\sigma_{\pi^n_3})\rrvert \} <
4\varepsilon$;\vspace*{2pt}
\item[(b)] $\pi^n_1(\sigma_{\pi^n_2}+\delta/3) \neq\pi
^n_2(\sigma_{\pi^n_2}+ \delta/3)$ and
$\pi^n_1(\sigma_{\pi^n_3}+\delta/3) \neq\pi^n_3(\sigma_{\pi
^n_3}+ \delta/3)$.
\end{longlist}
This gives us that
$A (\delta, m ) \subseteq\liminf_{n \to\infty} \bigcup_{j =
1}^{\lfloor6m/\delta\rfloor} B^{4\varepsilon}_n( \delta/3, 2m; j ) $.
\begin{figure}
\includegraphics{1134f04.eps}
\caption{The event $A(\delta,m)$. The bold paths are from $({\mathcal
W}, \widehat{{\mathcal W}})$
and the approximating dashed paths are from $({\mathcal X}_n, \widehat
{\mathcal X}_n)$.}\label{figDoubleBweb}
\end{figure}
Here, $B^{4\varepsilon}_n( \delta/3, 2m; j )$ is a translation
of the event $ B^{4\varepsilon}_n(\delta/3, 2m) $ considered in Lemma
\ref{lemmacoalescenceGRS},
translated so that the starting times of the paths $\pi^n_1$ and $\pi
^n_2$ are shifted by $ -m +
j \lfloor n \delta/ 3 \rfloor/ n $ (see Figure~\ref{figDoubleBweb}).
By\vspace*{1pt} translation invariance of our model and Lemma \ref
{lemmacoalescenceGRS}, for all $ n \geq1$
we have $ \P( B^{4\varepsilon}_n(\delta/3, 2m; j )) \leq4 C_1 ( \delta
/3, 2m) \varepsilon$, so that $\P(A(\delta,m)) \leq \lfloor 6m/\delta\rfloor\, 4 C_1(\delta/3, 2m)\varepsilon$.
Since $\varepsilon > 0$ is arbitrary, $\P(A(\delta,m)) = 0$.
This completes the proof.
\end{pf*}
\section{Proof of Theorem \texorpdfstring{\protect\ref{clusterheight}}{1.2}}\label{ClusterThm}
Let $\xi:= \xi_{{\mathcal W}}(0,1)$ and $\xi_n:=
\xi_{\wb{\mathcal X}_n}(0,1)$ be as defined in (\ref{eqnDefNK}).
The proof of Theorem \ref{clusterheight} follows from the following
proposition.
\begin{prop}
\label{propWeakConv1}
$\E[\xi_n] \to\E[\xi]$ as $n \to\infty$.
\end{prop}
We first complete the proof of Theorem \ref{clusterheight} assuming
Proposition \ref{propWeakConv1}.
\begin{pf*}{Proof of Theorem \ref{clusterheight}}
Using the translation
invariance of our model, we have
\begin{eqnarray*}
\sqrt{n}\gamma_0 \P\bigl(L(0,0)> n\bigr) & =& \sum
_{k=0}^{\lfloor\sqrt
{n}\gamma_0 \rfloor} \E( \mathbf{1}_{\{L(k,n) > n\}} ) \times
\frac{\sqrt{n}\gamma_0}{\lfloor\sqrt{n}\gamma_0 \rfloor
+1}
\\
& =& \E( \xi_n) \times \frac{\sqrt{n}\gamma_0}{\lfloor\sqrt
{n}\gamma_0 \rfloor+1} \to
\E(\xi) = \frac{1}{\sqrt{\pi}}\qquad\mbox{as } n \to\infty.
\end{eqnarray*}
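Here, the final equality $\E(\xi) = 1/\sqrt{\pi}$ is (\ref{expEta}) with $t_0 = 0$ and $t = 1$.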
This proves Theorem \ref{clusterheight}.
\end{pf*}
Proposition \ref{propWeakConv1} will be proved through a sequence of lemmas.
To state the next lemma, we recall from Theorem \ref{theoremGRSDual-DobleBW}
that $(\wb{\mathcal X}_n,\wb{\wh{\mathcal X}}_n) \Rightarrow
({\mathcal W}, \widehat{{\mathcal W}})$ as $n \to\infty$.
Using Skorohod's representation theorem,
we assume that we are working on a probability space where
$d_{{\mathcal H}\times\widehat{{\mathcal H}}}((\wb{\mathcal X}_n,
\wb{\wh{\mathcal X}}_n), ({\mathcal W}, \widehat{{\mathcal
W}})) \to0 $ almost surely as $ n \to\infty$.
\begin{lemma}
\label{lemEtaptsetConv}
For $t_1 > t_0$, we have
\[
\P\bigl(\xi_{\wb{\mathcal X}_n} (t_0,t_1) \neq
\xi_{{\mathcal
W}}(t_0,t_1)\mbox{ for infinitely many }n \bigr) = 0.
\]
\end{lemma}
\begin{pf}
We prove the lemma for $t_0 = 0$ and $t_1 = 1$, that is, for $\xi_n =
\xi_{\wb{\mathcal X}_n} (0,1)$
and $\xi_{{\mathcal W}}(0,1)$, the proof for general $t_0,t_1$ being similar.
First, we show that, for all $k \geq0$,
\begin{equation}
\label{eqnweakconvliminf} \liminf_{n\to\infty} \mathbf{1}_{\{\xi_n \geq k\}} \geq
\mathbf {1}_{\{\xi\geq k\}}\qquad\mbox{almost surely}.
\end{equation}
Indeed, for $k = 0$, both $\mathbf{1}_{\{\xi_n \geq k\}}$ and
$\mathbf{1}_{\{\xi\geq k\}}$
equal $1$.
For $k\geq1$, (\ref{eqnweakconvliminf}) follows from
almost sure convergence of $(\wb{\mathcal X}_n,\wb{\widehat
{\mathcal X}}_n)$ to $
({\mathcal W}, \widehat{{\mathcal W}})$ and from the properties of the
set ${\mathcal M}_{\mathcal W}(0,1)$
as described in Proposition \ref{lemPropEtaPtset}.
To complete the proof, we need to show that $\P(\limsup_{n \to\infty
} \{\xi_n > \xi\}) = 0$.
This is equivalent to showing that $\P(\Omega^{k}_0) = 0$ for all
$k\geq0$,
where
\[
\Omega^{k}_0:= \bigl\{\omega: \xi_n(\omega)
> \xi(\omega) = k\mbox{ for infinitely many }n\bigr\}.
\]
Consider $k=0$ first. From Proposition \ref{lemPropEtaPtset}, it
follows that on the event $\xi=0$,
almost surely we can obtain $\gamma:=\gamma(\omega)>0$ such that
${\mathcal M}_{{\mathcal W}}(0,1)
\cap(-\gamma, 1 + \gamma) = \varnothing$.
From the almost sure convergence of $(\wb{\mathcal X}_n,\wb
{\wh{\mathcal X}}_n)$ to $
({\mathcal W}, \widehat{{\mathcal W}})$,
we have $\P(\Omega^0_0) = 0$.
For $k > 0$, on the event $\Omega^{k}_0$ we show that
a forward path $\pi\in{\mathcal W}$ coincides with a dual path
$\hat{\pi}
\in\widehat{\mathcal W}$ over a time interval of positive length, which leads to a contradiction.
From Proposition \ref{lemPropEtaPtset}, it follows that given $\eta> 0$,
there exist $m_0 \in\N$ and $s_0 \in(1/m_0,1)$ such that $\P(\xi
_{\mathcal W}(1/m_0, 1)
= \xi_{\mathcal W}(1/m_0, s_0) = \xi_{\mathcal W}(0, 1) = k ) > 1-
\eta$,
that is, the paths leading to any single point considered in $
{\mathcal M}_{{\mathcal W}}(0,1) =
{\mathcal M}_{{\mathcal W}}(1/m_0,1)$ have coalesced before time $ s_0$.
Fix $ 0 < \varepsilon< 1/m_0 $ such that
$(x - \varepsilon,x + \varepsilon) \subset(0,1)$ for all $x \in{\mathcal
M}_{{\mathcal W}}(1/m_0, 1) $
and the $\varepsilon$-tubes around the $k$ paths contributing to
${\mathcal M}_{{\mathcal W}}(s_0, 1) $, {viz}., $\pi_1 (t),
\dotsc, \pi_k(t), t \in[s_0, 1]$, given by
\[
T^{i}_{\varepsilon}:= \bigl\{(x,t): \pi_i(t) -
\varepsilon\leq x \leq\pi _i(t) + \varepsilon, s_0 \leq t
\leq1 \bigr\}\qquad\mbox{ for }i=1,\dotsc,k,
\]
are disjoint.
Since we have almost sure convergence on the event $\Omega^k_0$, there
exists $n_0$ such that
one of the $k$ tubes must contain at least two paths, $ \pi_1^{n_0},
\pi_2^{n_0}$ (say)
of ${\mathcal X}_{n_0}$ which do not coalesce by time $1$.
From the construction of dual paths,
it follows that there exists at least one dual
path $\hat{\pi}^{n_0} \in\wb{\widehat{{\mathcal X}}}^{1+}_{n_0}$
lying between $\pi_{1}^{n_0}$ and $\pi_{2}^{n_0}$ for $t\in[s_0,1]$,
and hence
we must have an approximating $\hat{\pi} \in\widehat{\mathcal
W}^{1+}$ close to
$\hat{\pi}^{n_0}$ for $t \in[s_0,1]$.
Since there are only finitely many (namely $k$) disjoint tubes,
taking $\varepsilon\to0$ and using compactness of $\widehat{\mathcal
W}$ we obtain that there
exists $\hat{\pi} \in\widehat{\mathcal W}$ such that
$ \hat{\pi} (t) = \pi_i(t) $ for $ t \in[s_0, 1]$ and for some
$ 1\leq i \leq k$.
This violates the property of the Brownian web and
its dual that they do not spend positive Lebesgue time together.
Hence, $\P(\Omega^{k}_{0}) = 0$ for all $k \geq0$ and this completes
the proof of the lemma.
\end{pf}
Lemma \ref{lemEtaptsetConv} immediately gives the following corollary.
\begin{cor}
\label{corEtaweakConv}
As $n \to\infty$, $\xi_n$ converges in distribution to $\xi$.
\end{cor}
Corollary \ref{corEtaweakConv} along with the following lemma
completes the proof of
Proposition \ref{propWeakConv1}.
\begin{lemma}
\label{lemUI}
The family $\{\xi_n: n \in\N\}$ is uniformly integrable.
\end{lemma}
\begin{pf}
For $m \in\N$, let
\[
K_m = [-m,m]^2 \cap{\mathbb Z}^2\quad
\mbox{and}\quad\Omega_m:= \bigl\{ (0,1),(0,-1),(1,1),(1,-1)\bigr
\}^{K_m}.
\]
We assign the product probability measure $\P^{\prime}$ whose
marginals for $ \bu\in K_m $ are
given by
\[
\P^{\prime}\bigl\{\zeta: \zeta(\bu) = (a,b)\bigr\} = \cases{
\displaystyle\frac{p}{2}, &\quad for $a = 1$ and $b \in\{1,-1\}$,
\vspace*{3pt}\cr
\displaystyle\frac{(1-p)}{2},
&\quad for $a = 0$ and $b \in\{1,-1\}$.}
\]
$\P^{\prime}$ is the measure induced by the random variables $\{
(B_{\bu}, U_{\bu}): \bu\in K_m \}$.
For $\zeta \in\Omega_m$
and for $K \subseteq K_m $,
the $K$ cylinder of $\zeta$ is given by $C(\zeta, K):=\{\zeta
^{\prime}: \zeta^{\prime}(\bu) =
\zeta(\bu)$ for all $\bu\in K\}$. For any two events $A, B \subseteq
\Omega_m$, let
\begin{eqnarray*}
A \Box B &:=& \bigl\{\zeta: \mbox{there exists }K = K(\zeta) \subseteq
K_m
\mbox{ such that } C(\zeta, K) \subseteq A,
\\
&& \mbox{and }C\bigl(\zeta, K^{\prime}\bigr)\subseteq B\mbox{ for
} K^{\prime} = K_m
\setminus K \bigr\}
\end{eqnarray*}
denote the disjoint occurrence of $A$ and $B$.
Note that this definition is associative, that is,
for any $A, B, C \subseteq\Omega_m$
we have $(A \Box B)\Box C = A \Box(B\Box C)$.
Let
\begin{eqnarray*}
F^m_n&:= & \bigl\{\mbox{there exist
}(u_1,n), (u_2, n) \in\wh{V} \mbox{ with } 0 \leq u_1 < u_2 \leq\sqrt{n}\gamma_0\mbox{ and}
\\
&&{} \bigl(v_{1}^l, l\bigr), \bigl(v_{2}^l,
l\bigr) \in V \mbox{ for all } 0 \leq l \leq n\mbox{ such that}
\\
&&{} {-}m \leq v_{1}^l < \hat{h}^l
(u_1,n) (1) < \hat{h}^l (u_2,n) (1) <
v_{2}^l \leq m \bigr\},
\\
E^m_n(k) &:=& \bigl\{\mbox{for } 1 \leq i \neq j\leq k,
\mbox{ there exists } (x_i,0) \in V \mbox{ with}
\\
&&{} h^n(x_i,0) (1) \in[0, \sqrt{n}\gamma_0 ]
, h^n(x_i,0) \neq h^n(x_j,0),
h^l(x_i, 0 ) (1) \in[-m,m]
\\
&&{} \mbox{for all }0\leq l \leq n\bigr\}.
\end{eqnarray*}
We claim that for all $k \geq2$,
\begin{equation}
\label{eqGRSBK} E^m_n(3k)\subseteq\underbrace{F^m_n
\square F^m_n\square \cdots\square F^m_n}_{k~\mathrm{times}}.
\end{equation}
We prove it for $k=2$. For general $k$, the proof is similar.
Let $(u_i, n) \in\wh{V}, 1 \leq i \leq5$ and $ (x_i, 0) \in V,
1 \leq i \leq6$ be
as in Figure~\ref{figUI}. The region explored to obtain the vertex $
\hat{h}^j ( u_i,n)$ for $ 1 \leq j \leq n $
is contained in $ \bigcup_{l=0}^{n-1} [h^l (x_{i},0)(1),\break h^l (x_{i+1},0)
(1) ]\times\{l\} $. Thus, the regions
explored to obtain the dual paths starting from $ (u_1,n), (u_2,n)$ and
the dual paths starting from $ (u_4,n), (u_5,n)$ are disjoint (see
Figure~\ref{figUI}).
Hence, it follows that $ E^m_n(6) \subseteq F^m_n \square F^m_n$.
\begin{figure}
\includegraphics{1134f05.eps}
\caption{The event $E^m_n(6)$.}
\label{figUI}
\end{figure}
Since the event $E^m_n(k)$ is monotonic in $m$, from (\ref{eqGRSBK})
we get
\begin{eqnarray*}
\P(\xi_n \geq3k) & =& \P\Bigl(\lim_{m \to\infty}
E^m_n(3k)\Bigr) = \lim_{m
\to\infty}\P\bigl(
E^m_n(3k)\bigr)
\\
& \leq& \lim_{m \to\infty}\P\bigl( F^m_n \Box
\cdots\Box F^m_n\bigr) = \lim_{m \to\infty}
\P^{\prime}\bigl( F^m_n \Box\cdots\Box
F^m_n\bigr).
\end{eqnarray*}
Applying the BKR inequality (see Reimer \cite{R00}), we get
\begin{equation}
\label{eqUI} \P(\xi_n \geq3k) \leq\lim_{m \to\infty}
\bigl(\P^{\prime}\bigl( F^m_n \bigr)
\bigr)^k = \Bigl(\P\Bigl(\lim_{m \to\infty}
F^m_n \Bigr)\Bigr)^k = \bigl(
\P(F_n)\bigr)^k,
\end{equation}
where $ F_n:= \{$there exist $ (u_1, n), (u_2, n) \in\wh{V} $ with
$ 0\leq u_1 < u_2 \leq\sqrt{n}\gamma_0 $ such that $
\hat{h}^{n} (u_1,n) \neq\hat{h}^{n} (u_2,n) \}$.
For any $(x,t) \in\R^2$ fix $ t_n
= \lfloor n t \rfloor$ and $ x_n = \max\{ \lfloor\sqrt{n} \gamma_0
x \rfloor+ j: j \leq0$,
$( \lfloor\sqrt{n} \gamma_0 x \rfloor+ j, t_n ) \in\wh{V} \}
$. Let
$ \hat{\theta}^{(x,t)}_n \in \widehat{{\mathcal X}}_n (\gamma
_0) $ be the scaling of the path
$\hat{\pi}^{(x_n,t_n)} \in\widehat{{\mathcal X}} $.
Define
\begin{eqnarray*}
F^{\prime}_n&:= & \bigl\{\hat{\theta}^{(0,1)}_n
\mbox{ and } \hat{\theta}^{(1,1)}_n \mbox{ do not
coalesce in time } 1\bigr\}.
\end{eqnarray*}
We observe that $ F_n \subseteq F^{\prime}_n $.
Now $\P(F^{\prime}_n)$ converges to the probability that two
independent Brownian
motions starting at a distance $1$ from each other do not meet
by time $1$, which is strictly less than $1$. Hence, there exist $c < 1$ and $n_0$
such that $\P(F_n) \leq c$ for all $n \geq n_0$, and (\ref{eqUI}) yields the tail bound
$\P(\xi_n \geq 3k) \leq c^k$ uniformly in $n \geq n_0$. Since each $\xi_n$ is a bounded
random variable for fixed $n$,
the family $\{\xi_n:n\in\N\}$ is uniformly integrable.
\end{pf}
\begin{remark}
It is to be noted that Newman, Ravishankar and Sun \cite{NRS05} also used ideas of negative correlation
to establish the weak convergence of ${\mathcal M}_{\wb{\mathcal X}_n}$ as a point process on $\R$
for a more general setup where paths can cross each other.
In our case, the negative correlation ideas enter in a much less
essential manner, only
to establish uniform integrability, since the noncrossing nature of the paths
enables us to
obtain Corollary \ref{corEtaweakConv}.
\end{remark}
\section{Proofs of Theorems \texorpdfstring{\protect\ref{BM}}{1.3} and \texorpdfstring{\protect\ref{BEarea}}{1.4}}\label{BMBEarea}
\begin{figure}[b]
\includegraphics{1134f06.eps}
\caption{The two dual paths $\hat{\pi}^{\hat{l}(x,t)}$ and
$\hat{\pi}^{\hat{r}(x,t)}$ enclose the cluster $C(x,t)$.
After scaling, each of these dual paths is a Brownian path.}
\label{ideafigure}
\end{figure}
In this section, we prove Theorems \ref{BM} and \ref{BEarea}.
The main idea of the proof is that the horizontal distance between the
dual paths $\hat{\pi}^{\hat{r}(x,t)}$ and $\hat{\pi
}^{\hat{l}(x,t)}$ (see Figure~\ref{ideafigure}) forms a Brownian
excursion process after scaling. The cluster $C(x,t)$ being enclosed
between these two paths, its size is related to the area under the
Brownian excursion.
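Heuristically (this back-of-the-envelope computation is only meant to motivate the
normalizations and is not used in the proofs), since $C(x,t)$ lies between the two
dual paths, its size is bounded by, and comparable to, the total number of lattice
points enclosed, roughly the sum over the levels of the horizontal distance between
the two dual paths. If the dual paths survive for about $n$ levels, this distance at
a typical level is of order $\sqrt{n}$, so
\[
\#C(x,t) \approx \sum_{k=0}^{n-1} \bigl(\mbox{distance at level } k\bigr) \approx n\cdot\sqrt{n} = n^{3/2},
\]
which is the source of the normalization $n^{3/2}$ in Corollary \ref{corBEarea}
and of the exponent $2/3$ in Hack's law.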
For a formal proof, we need to introduce some notation.
For $\tau> 0$, let $S^\tau,S^{\tau^+}:C[0,\infty) \to\R$ be
defined by
$S^\tau(f):= \inf\{t \geq0: f(t + s)\geq f(t )$ for all $0\leq s
\leq\tau\}$
and $S^{\tau^+}(f):= \inf\{t \geq0: f(t + s) > f(t )$ for all $0 <
s \leq\tau\}$.
Let $T^{\tau^+}: C[0,\infty) \to C[0,\infty)$ be the map given by
\begin{eqnarray}
\label{eqfMeander} T^{\tau^+}(f) (s):= \cases{ f\bigl(S^{\tau^+} + s
\bigr)-f\bigl(S^{\tau^+}\bigr), &\quad if $S^{\tau^+} < \infty$,
\cr
f(s), &
\quad otherwise.}
\end{eqnarray}
For a Brownian motion $ W $ with $ W(0) = 0$, we define $W^{\tau} =
T^{\tau^+}(W)$.
From Bolthausen \cite{B76}, we have $S^{\tau^+} = S^\tau< \infty$ almost surely
under the measure induced by $W$ on $C[0,\infty)$ and
$W^{1}\mid _{[0,1]} \disteq W^{+}$ where $W^+$ is the standard Brownian
meander process defined in (\ref{eqBMBE}). From the scaling
property of Brownian motion, it follows that
$ \{ W^{\tau} (s): s \in[0, \tau] \}
\disteq\{ \sqrt{\tau}W^{+}(s / \tau): s \in[0, \tau] \} $.
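For completeness, here is the short computation behind this identity (a routine use of
Brownian scaling): writing $\widetilde{W}(s) := W(\tau s)/\sqrt{\tau}$, which is again a
standard Brownian motion, one checks directly from the definitions that
$S^{\tau^+}(W) = \tau S^{1^+}(\widetilde{W})$ and hence
\[
W^{\tau}(s) = W\bigl(S^{\tau^+}(W) + s\bigr) - W\bigl(S^{\tau^+}(W)\bigr)
= \sqrt{\tau}\, T^{1^+}(\widetilde{W})(s/\tau), \qquad s \in [0,\tau],
\]
which has the law of $\sqrt{\tau}\, W^{+}(s/\tau)$ on $[0,\tau]$ by the case $\tau = 1$
quoted above.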
Durrett, Iglehart and Miller \cite{DIM77} (Theorem 2.1) proved that
$ W \mid {\mathbf1}_{ \{\min_{s \in[0,1]}W (s) \geq-\varepsilon\} } \weak
W^{+}$ as $ \varepsilon\downarrow0$. Using this result and the scaling
property of $W^{\tau}$, given above,
straightforward calculations imply the following lemma and its
corollary (for details, see Roy, Saha and Sarkar \cite{RSS15}).
\begin{lemma}
\label{lemRandomMeander}
For $\tau> 0$, considering $W$ as a standard Brownian motion on
$[0,\infty)$ starting from $0$, we have
$W \mid {\mathbf1}_{\{\min_{t\in[0,\tau]}W(t) \geq-1/n\} }\weak
W^{\tau}$ as $n \to\infty$.
\end{lemma}
Define $\widetilde{W}^{\tau}$ as the process on $C[0,\infty)$ given by
\begin{eqnarray*}
\widetilde{W}^{\tau}(t):= \cases{ W^{\tau}(t), &\quad if $0 \leq t
\leq\tau$,
\cr
W^{\tau}(\tau) + \widetilde{W}(t-\tau), &\quad otherwise,}
\end{eqnarray*}
where $\widetilde{W}$ is a Brownian motion on $[0,\infty)$,
independent of $W^{\tau}$, with $\widetilde{W} (0) = 0$.
For $f \in C[0,\infty)$, let $t_f:= \inf\{s > 0: f(s) = 0\}$
with $t_f = \infty$ if $f(s)\neq0$ for all $s > 0$.
Consider the mapping $H:C[0,\infty) \to C[0,\infty)$ given by $
H(f)(t):= \mathbf{1}_{\{t \leq t_f\}}f(t)$.
We define $W^{+,\tau} = H(W^{\tau})$.
An argument similar to that of Lemma \ref{lemRandomMeander} gives
the following corollary.
\begin{cor}
\label{corRandomBetaMeander}
For $\tau> 0$, we have, $W^{\tau} \disteq\widetilde{W}^{\tau}$ and
$W^{+,\tau}
\disteq H(\widetilde{W}^{\tau})$.
\end{cor}
Let $A\subset C[0,\infty)$ be such that
\begin{eqnarray}
\label{defAset} A &:= & \bigl\{f \in C[0,\infty): t_f <\infty\mbox{
and for every } \varepsilon> 0 \mbox{ there exists }
\nonumber\\[-8pt]\\[-8pt]\nonumber
&&{} s\in(t_f, t_f+\varepsilon) \mbox{ with }f(s)<0\bigr\}.
\end{eqnarray}
From Corollary \ref{corRandomBetaMeander}, it follows that
$\P(W^{\tau} \in A) = 1$. Hence, $H$ is continuous almost surely
under the measure induced by $W^\tau$ on $C[0,\infty)$.
Next, we obtain the distribution of $\int_0^\infty W^{+,\tau}(t)\,dt$.
\begin{lemma}
\label{lemmaRandomExcursionArea}
For $\tau, \lambda > 0$, we have
\begin{eqnarray*}
&& \P\biggl(\int_0^\infty W^{+,\tau}(t)\,dt >
\lambda\biggr) = \frac{ \sqrt{\tau}}{2} \int_\tau^{\infty}
t^{-3/2}\wb{F}_{I^+_0} \bigl(\lambda t^{-3/2}\bigr)\,dt.
\end{eqnarray*}
\end{lemma}
\begin{pf}
We give here a straightforward
proof using a random walk.
Let $\{S_n: n \geq0\}$ be a
symmetric random walk with steps of variance $1$ starting at $S_0 = 0$.
Since $\P(W^\tau\in A) = 1$, a minor modification of the argument used
to prove Lemmas 2.4 and
2.5 of Bolthausen \cite{B76} shows that $H\circ T^{\tau^+}$ is almost surely
continuous under
the measure induced by $W$ on $C[0,\infty)$ (for details, see Roy, Saha and Sarkar \cite{RSS15}).
From Donsker's invariance principle and from the
continuous mapping theorem, it follows that
for $\lambda> 0$, a continuity point of the distribution of
$\int_0^\infty W^{+,\tau}(t) \,dt$, we have
\begin{eqnarray*}
&&\P\biggl(\int_0^\infty W^{+,\tau}(t)\,dt >
\lambda\biggr) = \lim_{n \to\infty} \P\biggl(\int_0^\infty
H\bigl(T^{\tau^+}(Y_n)\bigr) (t)\,dt > \lambda\biggr),
\end{eqnarray*}
where
\begin{equation}
\label{eqYn} Y_n (t):= \frac{ S_k }{\sqrt{n} } + \frac{ (nt - [nt]) }{ \sqrt{n}} (
S_{k+1} - S_k )\qquad\mbox{for }\frac{ k}{ n} \leq t <
\frac{ k+1}{n}.
\end{equation}
A
similar argument as in Lemma 3.1 of Bolthausen \cite{B76} gives us that
(for details, see Roy, Saha and Sarkar \cite{RSS15})
\begin{eqnarray*}
&& \P\biggl(\int_0^\infty H\bigl(T^{\tau^+}(Y_n)
\bigr) (t)\,dt > \lambda\biggr)
\\
&&\qquad = \P\biggl(\int_0^\infty H(Y_n)
(t)\,dt > \lambda\Big| \min_{t\in[0,\tau
]}Y_n(t) \geq0,
t_0 > n\tau\biggr),
\end{eqnarray*}
where
$t_0:= \inf\{n>0: S_n =0\}$ is the first return time to $0$ of the
random walk.
Hence, for $\lambda> 0$, a continuity point of the distribution of
$\int_0^\infty W^{+,\tau}(t)\,dt$, we obtain
\begin{eqnarray*}
&& \P\biggl(\int_0^\infty W^{+,\tau}(t)\,dt >
\lambda\biggr)
\\
&&\qquad = \lim_{n \to\infty} \P\biggl(\int_0^\infty
H(Y_n) (t)\,dt > \lambda \Big| \min_{t\in[0,\tau]}Y_n(t)
\geq0, t_0 > n\tau\biggr)
\\
&&\qquad = \lim_{n \to\infty} \sum_{j=1}^{\infty}
\frac{n^{3/2}\P(t_0 = \lfloor n\tau\rfloor+ j)}{n(\sqrt
{n}\P(t_0 > n\tau))}
\\
&&\quad\qquad{} \times\P\biggl(\int_0^\infty
H(Y_n) (t)\,dt > \lambda \Big| \min_{t\in[0,\tau]}Y_n(t)
\geq0, t_0 = \lfloor n\tau\rfloor+ j\biggr)
\\
&&\qquad = \lim_{n \to\infty} \frac{1}{\sqrt{n}\P(t_0 > n \tau)}\int_{\lfloor n\tau
\rfloor/n}^\infty
g_n(t) f_n(t)\,dt,
\end{eqnarray*}
where\vspace*{1pt} for $t\geq\lfloor n\tau\rfloor/n$,
$f_n(t) = \P(\int_0^\infty H(Y_n)(u)\,du > \lambda\mid \min_{s\in
[0,\tau]}Y_n(s) \geq0,
t_0 = \lfloor n t \rfloor+ 1)$ and
$g_n(t) = n^{3/2}\P(t_0 = \lfloor n t \rfloor+ 1)$.
It is known that (see Kaigh \cite{K76})
\[
\lim_{n\to\infty}\sqrt{n}\P(t_0 > n) = \sqrt{
\frac{2}{\pi
}}\quad\mbox{and}\quad \lim_{n\to\infty}n^{3/2}
\P(t_0 = n) = \frac{1}{\sqrt
{2\pi}}.
\]
Hence, from Theorem 2.6 of Kaigh \cite{K76}, together with
the continuous mapping theorem and the
scaling property of Brownian motion, we have
$\P(\int_0^\infty W^{+,\tau}(t)\,dt > \lambda) = \frac{ \sqrt{\tau
}}{2} \int_\tau^{\infty} t^{-3/2}\wb{F}_{I^+_0}(\lambda
t^{-3/2})\,dt$.
Finally, $I^+_0$ being a continuous random variable (see Louchard and Janson \cite
{JL07}), it
follows that the random variable $\int_0^\infty W^{+,\tau}(t)\,dt$ is
continuous. This
completes the proof.
\end{pf}
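As a quick consistency check (not needed later), letting $\lambda \downarrow 0$ in
Lemma \ref{lemmaRandomExcursionArea} and using $\wb{F}_{I^+_0}(0) = 1$ gives
\[
\frac{\sqrt{\tau}}{2}\int_{\tau}^{\infty} t^{-3/2}\,dt = \frac{\sqrt{\tau}}{2}\cdot\frac{2}{\sqrt{\tau}} = 1,
\]
which is consistent with the fact that $\int_0^\infty W^{+,\tau}(t)\,dt > 0$ almost surely.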
\subsection{Proof of Theorem \texorpdfstring{\protect\ref{BM}}{1.3}}
Recall that $\hat{r}(x,t)$ and $\hat{l}(x,t)$ denote
the right and left dual neighbours, respectively, of $(x,t)\in V$. Let
$\hat{D}_k(x,t):=
\hat{h}^k (\hat{r}(x,t))(1) - \hat{h}^k (\hat{l}(x,t))(1)$ where $\hat{h}$ is as defined after (\ref
{eqndefalar}). Consider the continuous
function $\hat{D}^{(x,t)}_n \in C[0,\infty)$ given by
\begin{eqnarray}\label{eqDualDn}
&&\hat{D}_n^{(x,t)} (s):=
\frac{ \hat{D}_k (x,t) }{ \gamma_0
\sqrt{n} } + \frac{ (ns - [ns]) }{ \gamma_0 \sqrt{n} } \bigl( \hat{D}_{k+1} (x,t) -
\hat{D}_k (x,t)\bigr)
\nonumber\\[-8pt]\\[-8pt]
\eqntext{\displaystyle\mbox{for }\frac{ k}{ n} \leq s
\leq\frac{ k+1}{n}.}
\end{eqnarray}
Fix $\tau> 0$.
For an ${\mathcal H}\times\widehat{{\mathcal H}}$ valued random
variable $(K,\widehat{K})$ and
for $x \in{\mathcal M}_{K}(0, \tau)$ let $\hat{\pi}^{(x,\tau
)}_r$ be defined as
\begin{eqnarray*}
\hat{\pi}^{(x,\tau)}_r:= \cases{ \hat{\pi}, &\quad if $
\sigma_{\hat{\pi}} = \tau$ and there is no $\hat{\pi}_1 \in
\widehat{K}^{\tau+}$ with $x < \hat{\pi}_1(\tau)<\hat{\pi}(\tau)$,
\cr
\hat{\pi}_0, &\quad otherwise,}
\end{eqnarray*}
where $\hat{\pi}_0$ denotes the constant zero function with
$\sigma_{\hat{\pi}_0} = \tau$. In other words,
$\hat{\pi}^{(x,\tau)}_r \in\widehat{K}^{\tau+}$ is such that
among all $\hat{\pi}\in\widehat{K}^{\tau+}$,
$\hat{\pi}^{(x,\tau)}_r(\tau)$ is closest to $(x,\tau)$
on the right. Similarly, $\hat{\pi}^{(x,\tau)}_l$ is defined as
the path closest to $(x,\tau)$
on the left.\vspace*{2pt}
For $\hat{\pi}\in\wh{\Pi}$ with $\sigma_{\hat{\pi}}
\geq\tau$, let $g(\hat{\pi}) \in C[0,\infty)$ be given by $
g(\hat{\pi})(t):= \hat{\pi}(\tau-t)$ for $t \geq0$.
Fix $f \in C_b[0,\infty)$ and define
\begin{eqnarray*}
&&\kappa_{ (K, \widehat{K}) }(\tau,f):= \sum_{x \in{\mathcal M}_{K}(0,\tau)} f
\bigl(g\bigl(\hat{\pi}^{(x,\tau
)}_r\bigr) - g\bigl(\hat{\pi}^{(x,\tau)}_l\bigr)\bigr).
\end{eqnarray*}
Let\vspace*{1pt} $\kappa(\tau,f):= \kappa_{({\mathcal W},\widehat{\mathcal
W})}(\tau,f)$,
and $\kappa_n(\tau,f):= \kappa_{(\wb{\mathcal X}_n,\wb{\widehat
{{\mathcal X}}}_n)}(\tau,f) $.
Comparing with the definitions introduced
in (\ref{eqnDefNK}), for $m_f = \sup\{\llvert f(s)\rrvert: s \in[0,\infty)\}$
we have
\begin{equation}
\label{eqnRelationKappa} \kappa(\tau,f) \leq m_f\xi_{{\mathcal W}}(0,\tau)
\quad\mbox{and}\quad
\kappa_n(\tau,f) \leq m_f\xi_{\wb{\mathcal X}_n} (0,\tau)
\qquad\mbox{for all } n \geq1.
\end{equation}
From Proposition \ref{lemPropEtaPtset}, we know that for each
$x \in{\mathcal M}_{\mathcal W}(0,\tau)$, there exist $\hat{\pi
}^{(x, \tau)}_r$,
$\hat{\pi}^{(x, \tau)}_l \in\widehat{\mathcal W}$ both starting
from $ (x, \tau)$
with $\hat{\pi}^{(x, \tau)}_r(0) >
\hat{\pi}^{(x, \tau)}_l(0) $.
The following lemma is the main tool for
establishing Theorem \ref{BM} and Theorem \ref{BEarea}.
\begin{lemma}
\label{lemmaKappanKappaExp}
For $\tau> 0$ and $f \in C_b[0,\infty)$, we have
\begin{equation}
\label{eqnKnLimitK} \lim_{n \to\infty}\E\bigl[\kappa_n(\tau,f)
\bigr] = \E\bigl[\kappa(\tau,f)\bigr].
\end{equation}
\end{lemma}
\begin{pf}
From (\ref{eqnRelationKappa}) and Lemma \ref{lemUI}, it follows that
the family $\{\kappa_n(\tau,f): n \in\N\}$ is uniformly integrable.
Hence, it suffices to show that $\kappa_n(\tau,f)$ converges in
distribution to $\kappa(\tau,f)$ as
$n \to\infty$. We assume\vspace*{1pt} that we are working on
a probability space such that $(\wb{{\mathcal X}}_n, \wb{\widehat
{{\mathcal X}}}_n)$ converges
to $({\mathcal W}, \widehat{{\mathcal W}})$ almost surely in
$({\mathcal H}\times\widehat{\mathcal H},
d_{{\mathcal H}\times\widehat{{\mathcal H}}})$.
From Lemma \ref{lemEtaptsetConv},
we have $\lim_{n \to\infty}\xi_{\wb{{\mathcal X}}_n}(0,\tau) =
\xi_{\mathcal W}(0,\tau)$ almost surely,
and hence, from (\ref{eqnRelationKappa}), on the event $\{ \xi_{\mathcal
W}(0,\tau) = 0\}$ we have $
\kappa_n(\tau,f) = \kappa(\tau,f) = 0$ for all large $n$.
Next, we consider the case $\xi_{\mathcal W}(0,\tau) = k\geq1$.
Suppose ${\mathcal M}_{\mathcal W}(0,\tau) = \{x_1,\ldots, x_k \}$.
From Lemma \ref{lemEtaptsetConv},
we have\vspace*{2pt} that ${\mathcal M}_{\wb{{\mathcal X}}_n}(0,\tau) = \{
x^n_1,\ldots, x^n_k \}$
for all large $n$ and $\lim_{n \to\infty} x^n_i = x_i$ for all $1
\leq i \leq k$.
Fix $T\geq0$. To complete the proof, it is enough to show that
$\sup\{\llvert \hat{\pi}^{(x_i,\tau)}_r(\tau-s) - \hat{\pi
}^{(x^n_i,\tau)}_r(\tau-s)\rrvert
\vee\llvert \hat{\pi}^{(x_i,\tau)}_l(\tau-s) - \hat{\pi
}^{(x^n_i,\tau)}_l(\tau- s)\rrvert:
s \in[0, \tau+ T]\} \to0$ as $n \to\infty$ for all $1 \leq i \leq k$.
We observe that for $y_i \in(\hat{\pi}^{(x_i,\tau)}_l(0),
\hat{\pi}^{(x_i,\tau)}_r(0))
\cap\Q$ there exists $\pi^{(y_i,0)} \in{\mathcal W}$ such that $\pi
^{(y_i,0)}(\tau) = x_i$.
We choose $ \varepsilon= \varepsilon(\omega) > 0 $ so that for all $1 \leq
i \leq k$:
\begin{longlist}[(a)]
\item[(a)] $(x_i- \varepsilon, x_i +\varepsilon) \subset(0,1)$, $(x_i-
2\varepsilon, x_i +2\varepsilon)
\cap{\mathcal M}_{\mathcal W}(0, \tau) = \{x_i\}$ and
\item[(b)] $(\hat{\pi}^{(x_i,\tau)}_r(0)-\pi
^{(y_i,0)}(0))\wedge
(\pi^{(y_i,0)}(0)-\hat{\pi}^{(x_i,\tau)}_l(0)) > 2 \varepsilon$.
\end{longlist}
Let $n_0 = n_0(\omega) $ be such that,
for all $n \geq n_0$:
\begin{longlist}[(ii)]
\item[(i)] $\xi_{\wb{{\mathcal X}}_n}(0,\tau) = \xi_{\mathcal
W}(0,\tau)$
and
\item[(ii)] for all $1 \leq i \leq k$ there exist $\hat{\pi
}^{1,n}_i, \hat{\pi}^{2,n}_i
\in\wb{\wh{\mathcal X}}^{\tau+}_n$ and
$\pi^{n}_i \in\wb{{\mathcal X}}^{0 -}_n$ such that
$\sup\{\llvert \hat{\pi}^{1,n}_i(\tau- s) - \hat{\pi}^{(x_i,\tau
)}_r(\tau- s)\rrvert \vee
\llvert \hat{\pi}^{2,n}_i(\tau- s) - \hat{\pi}^{(x_i,\tau
)}_l(\tau- s)\rrvert \vee
\llvert \pi^{n}_i(\tau- s) - \pi^{(y_i,0)}(\tau- s)\rrvert: s \in[0,\tau+
T]\} < \varepsilon$.
\end{longlist}
The choice of $n_0 $ ensures that ${\mathcal M}_{\wb{{\mathcal X}}_n}(0,\tau) \cap(x_i - \varepsilon,
x_i + \varepsilon) = \{x^{n}_i\}$. Since there exist only two dual paths
starting from $(x_i,\tau)$, the uniqueness of $x_i^n$ in the interval
$(x_i-\varepsilon, x_i+\varepsilon)$ and the noncrossing nature of our
paths imply that
$\hat{\pi}^{(x^n_i,\tau)}_r(\tau-s) = \hat{\pi
}^{1,n}_i(\tau-s)$ and
$\hat{\pi}^{(x^n_i,\tau)}_l(\tau-s) = \hat{\pi
}^{2,n}_i(\tau-s)$ for all
$s \in[0, \tau+T]$ and for all $n \geq n_0$ (for details, see Roy, Saha and Sarkar \cite{RSS15}).
Since $T\geq0$ is chosen arbitrarily, this completes the proof.
\end{pf}
The next lemma calculates $\E[\kappa(\tau,f)]$.
\begin{lemma}
\label{lemmaOpenSetBdry}
For $\tau>0$ and $f \in C_b[0,\infty)$, we have
\[
\E\bigl[\kappa(\tau,f)\bigr] = \E\bigl( f\bigl(\sqrt{2} W^{+,\tau} \bigr)
\bigr) /\sqrt{\pi \tau}.
\]
\end{lemma}
\begin{pf} Let\vspace*{2pt} $I_n \subset\{0,1,\ldots,n-1\}$ be given by
$I_n:=
\{i: 0 \leq i \leq n-1, \hat{\pi}^{(i/n,\tau)},\break \hat{\pi
}^{((i+1)/n,\tau)} \in
\widehat{\mathcal W}$ such that $\hat{\pi}^{(i/n,\tau)}(0) <
\hat{\pi}^{((i+1)/n,\tau)}(0)\}$. We define
\[
{\mathcal R}_n(\tau,f) = \sum_{i \in I_n} f
\bigl(g\bigl(\hat{\pi}^{((i+1)/n,\tau)}\bigr) - g\bigl(\hat{\pi}^{(i/n,\tau)}\bigr)\bigr).
\]
From Proposition \ref{lemPropEtaPtset}, we know ${\mathcal
M}_{{\mathcal W}}(0,\tau) \cap\Q= \varnothing$.
For each $x \in{\mathcal M}_{\mathcal W}(0,\tau)$, set $ l^x_n
=\lfloor nx \rfloor/n $
and $ r^x_n = l^x_n + (1/n)$.
Since there are exactly two dual paths $\hat{\pi}^{(x,\tau)}_r$
and $\hat{\pi}^{(x,\tau)}_l$
starting from $(x,\tau)$ with $\hat{\pi}^{(x,\tau)}_r(0) >
\hat{\pi}^{(x,\tau)}_l(0)$,
from Proposition 3.2(e) of Sun and Swart \cite{SS08} it follows that
$\{\hat{\pi}^{(l^x_n,\tau)}: n \in\N\}$ and $\{\hat{\pi
}^{(r^x_n,\tau)}: n \in\N\}$
converge to $\hat{\pi}^{(x,\tau)}_l$ and $\hat{\pi
}^{(x,\tau)}_r$, respectively,
in $(\wh{\Pi}, d_{\wh{\Pi}})$ as $n \to\infty$.
Hence, $ {\mathcal R}_n(\tau,f) \to\kappa(\tau,f) $ almost surely\vspace*{1pt}
as $n \to\infty$.
For each $i \in I_n$, there exist $y_i \in(\hat{\pi}^{(i/n,\tau)}(0),
\hat{\pi}^{((i+1)/n,\tau)}(0))\cap\Q$ and $\pi^{(y_i,0)}\in
{\mathcal W}$ such that
$\pi^{(y_i,0)}(\tau) \in{\mathcal M}_{\mathcal W}(0,\tau)$.
Hence, for $m_f = \sup\{\llvert f(t)\rrvert: t \geq0\}$ we have
${\mathcal R}_n(\tau,f) \leq m_f\xi_{\mathcal W}(0,\tau)$ for all
$n$. As
$\E[\xi_{\mathcal W}(0,\tau)] < \infty$, the family $\{{\mathcal
R}_n(\tau,f): n \in\N\}$
is uniformly integrable, and hence we have $\lim_{n \to\infty} \E
[{\mathcal R}_n(\tau,f)] =
\E[\kappa(\tau,f)]$. From the fact that $ g( \hat{\pi
}^{((i+1)/n,\tau)})
- g( \hat{\pi}^{(i/n,\tau)}) \disteq H(1/n + \sqrt{2}W) $
where $ W$ denotes the standard Brownian motion on $[0,\infty)$, we have
\begin{eqnarray*}
&& \lim_{n \to\infty} \E\bigl[{\mathcal R}_n(\tau,f)\bigr]
\\
&&\qquad = \lim_{n \to\infty}n \E\Bigl[f\bigl(H(1/n + \sqrt{2}W)\bigr) \big|
1/n + \min_{t\in[0,\tau]}\sqrt{2}W (t) > 0\Bigr]
\\
&&\quad\qquad{} \times\P\Bigl(1/n + \min_{t\in[0,\tau]}\sqrt{2}W (t) > 0
\Bigr)
\\
&&\qquad = \lim_{n \to\infty}\E\Bigl[ f\bigl(H(1/n+ \sqrt{2}W) \bigr) \big|
\min_{t\in[0,\tau]}\sqrt{2}W(t) > -1/n \Bigr] n
\\
&&{}\quad\qquad{}\times \bigl(2\Phi\bigl(1/\bigl(\sqrt{2\tau}\,n\bigr)\bigr) - 1\bigr)
\nonumber
\\
&&\qquad = \E\bigl( f\bigl(\sqrt{2} W^{+,\tau}\bigr) \bigr) /\sqrt{\pi\tau},
\end{eqnarray*}
where the last equality follows from Lemma \ref{lemRandomMeander},
Slutsky's theorem and
continuous mapping theorem.
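For the reader's convenience, we also record the elementary estimate behind the
prefactor: by a first-order Taylor expansion of $\Phi$ at the origin,
\[
n \bigl(2\Phi\bigl(1/\bigl(\sqrt{2\tau}\,n\bigr)\bigr) - 1 \bigr)
= \frac{2n}{\sqrt{2\pi}} \cdot\frac{1}{\sqrt{2\tau}\,n} + o(1)
\to\frac{1}{\sqrt{\pi\tau}} \qquad\mbox{as } n \to\infty.
\]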
This completes the proof.
\end{pf}
Now, to complete the proof of Theorem \ref{BM} we need the following lemmas.
\begin{lemma}
\label{lemDhatBM}
For $\tau> 0$, we have
$\hat{D}^{(0,0)}_n \mid {\mathbf1}_{\{ L(0,0) > n\tau\}}
\weak \sqrt{2} W^{+,\tau}$
as $n \to\infty$.
\end{lemma}
\begin{pf}
Using translation invariance of our model, we have
\[
\E\bigl(f\bigl(\hat{D}^{(0,0)}_n\bigr) \mid {
\mathbf1}_{\{ L(0,0) > n\tau\}}\bigr) = \frac{ \E[\kappa_n(\tau,f)] }{ \E[\xi_{\wb{\mathcal X}_n}(0,\tau)] } \to\frac{ \E[\kappa(\tau,f)] } { \E[\xi_{\mathcal W}(0,\tau)] } = \E
\bigl( f\bigl(\sqrt{2} W^{+,\tau} \bigr)\bigr).
\]
This holds for all $f \in C_b[0, \infty)$ which completes the
proof.
\end{pf}
\begin{lemma}
\label{lemDhatDnKn}
For $\tau> 0$, we have:
\begin{longlist}[(a)]
\item[(a)] $\sup\{\llvert \hat{D}^{(0,0)}_n(s) -
D^{(0,0)}_n(s)\rrvert: s \geq0\}
\mid {\mathbf1}_{\{ L(0,0) > n\tau\}} \prob0$ as $n \to\infty$,
\item[(b)] $\sup\{\llvert K^{(0,0)}_n(s) - pD^{(0,0)}_n(s)\rrvert: s \geq
0\}
\mid {\mathbf1}_{\{ L(0,0) > n\tau\}} \prob0$ as $n \to\infty$.
\end{longlist}
\end{lemma}
\begin{pf}
For part (a), fix $0 < \alpha< 1/2$, $T
\geq0$ and we observe that
\begin{eqnarray*}
&& \P\bigl(\sup\bigl\{\bigl\llvert \hat{D}_k(0,0)-
D_k(0,0)\bigr\rrvert: k \geq0\bigr\} \geq n^{\alpha
}, L(0,0)
> n\tau\bigr)
\\
&&\qquad \leq\P\bigl(\max\bigl\{\bigl\llvert \hat{D}_k(0,0)-
D_k(0,0)\bigr\rrvert: 0 \leq k \leq n(\tau + T) + 1\bigr\} \geq
n^{\alpha},
\\
&&\quad\qquad{} L(0,0) > n\tau\bigr)
+ \P\bigl(L(0,0) > n(\tau+T)\bigr).
\end{eqnarray*}
Because of Theorem \ref{clusterheight}, it is enough to show that
$ \sqrt{n} \P(\max\{\llvert \hat{D}_k(0,0)- D_k(0,0)\rrvert:0\leq k \leq
n(\tau+ T)+1\} \geq
n^{\alpha}, L(0,0) > n\tau)\to0 $ as $ n\to\infty$.
Here, we present the simple idea behind the proof; the details are
available in Roy, Saha and Sarkar \cite{RSS15}.
The distance $d^l_k$ between $l_k(0,0)$ and the closest open vertex to
the left of $l_k(0,0)$ being $n^{\alpha}$ or more has a probability
$(1-p)^{n^{\alpha}}$. Thus, the probability that the maximum such
difference for $0\leq k \leq n(\tau+ T)+1$
is bigger than $n^{\alpha}$ is of the order $n(1-p)^{n^{\alpha}}$.
Similarly, for the distance $d^r_k$ associated with the vertex $r_k(0,0)$.
Since $\llvert \hat{D}_k(0,0)- D_k(0,0)\rrvert \leq d^l_k + d^r_k$, as $ n\to
\infty$, we have that
$ \sqrt{n} \P(\max\{\llvert \hat{D}_k(0,0)- D_k(0,0)\rrvert:0\leq k \leq
n(\tau+ T)+1\} \geq
n^{\alpha}, L(0,0) > n\tau)$ converges to $0$.
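Behind this convergence there is only the elementary estimate, valid for any fixed
$0 < p < 1$ and $\alpha> 0$,
\[
\sqrt{n}\,\bigl(n(\tau+ T)+2\bigr) (1-p)^{n^{\alpha}}
\leq(\tau+ T + 2)\, n^{3/2} e^{ n^{\alpha} \log(1-p) } \to0 \qquad\mbox{as } n \to\infty,
\]
since $\log(1-p) < 0$.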
For part (b) of the lemma, we need ${D}^{(0,0)}_n \mid {\mathbf
1}_{\{ L(0,0) > n\tau\}} \weak \sqrt{2} W^{+,\tau}$
as $n \to\infty$ which follows from part (a) and Lemma \ref
{lemDhatBM}. Hence, $r_k(0,0) - l_k(0,0)$ is of the order $\sqrt{n}$.
Also given $l_k(0,0)$ and $r_k(0,0)$, the number of open vertices lying
between these vertices has a binomial distribution with parameters
$(r_k(0,0) - l_k(0,0) -1)$ and $p$. Since these open vertices together
with $l_k(0,0)$ and $r_k(0,0)$ constitute $C_k(0,0)$, the proof follows
from similar order comparisons as done in (a).
\end{pf}
\begin{pf*}{Proof of Theorem \ref{BM}}
We remarked that $W^{1}\mid _{[0,1]} = W^{+,1}\mid _{[0,1]} \disteq W^{+}$.
The proof of Theorem \ref{BM} follows from Lemmas \ref{lemDhatBM}
and \ref{lemDhatDnKn} and Slutsky's theorem
with the choice of $\tau= 1$.
\end{pf*}
\subsection{Proof of Theorem \texorpdfstring{\protect\ref{BEarea}}{1.4}}
For $\lambda> 0$, let $\bar{\lambda}:= \lambda^{3/2}(\sqrt{2}\gamma_0p)^{-1}$.
We show that:
\begin{lemma}
\label{lemmaLambdaBetaPositiveNonTruncate}
For $ \tau, \lambda > 0 $,
\begin{eqnarray*}
&& \lim_{n \to\infty}\sqrt{n}\P \Biggl( L(0,0) > n\tau, \sum
_{k = 0}^
{\infty}\# C_k(0,0) > (\lambda
n)^{3/2} \Biggr)
\\
&&\qquad = \frac{1}{ \gamma_0 \sqrt{\pi\tau}} \P \biggl( \sqrt{2}\int_{0}^{\infty}
W^{+,\tau}(t)\,dt > \bar{\lambda} \biggr)
\\
&&\qquad = \frac{1}{ 2\gamma_0 \sqrt{\pi}}\int
_\tau ^\infty\wb{F}_{I^+_0} \bigl(\bar{\lambda} t^{-3/2}\bigr)t^{-3/2} \,dt.
\end{eqnarray*}
\end{lemma}
\begin{pf} For $f \in C[0,\infty)$ let
$I(f):= \int_0^\infty H(f)(t)\,dt$. Since $\P(W^\tau\in A) = 1$ where
$A$ is defined
as in (\ref{defAset}), $I$ is almost surely continuous under the
measure induced by $W^\tau$ on
$C[0,\infty)$. The proof follows from
Theorem \ref{BM}(ii) and the continuous mapping theorem.
\end{pf}
From the previous lemma,
we derive the following.
\begin{cor}
\label{corLambdaPositiveNonTruncate}
For $ \lambda> 0 $, we have
\begin{eqnarray*}
&& \lim_{n \to\infty}\sqrt{n}\P \bigl(\# C(0,0) > (\lambda
n)^{3/2} \bigr) = \frac{1}{ 2\gamma_0 \sqrt{\pi}}\int_0^\infty
\wb{F}_{I^+_0} \bigl(\bar{\lambda} t^{-3/2}\bigr)t^{-3/2}
\,dt.
\end{eqnarray*}
\end{cor}
\begin{pf} For any $\tau> 0$, we have
$\P(\# C(0,0) > (n\lambda)^{3/2}) \geq\P(L(0,0)>n\tau,\# C(0,0) >
(n\lambda)^{3/2})$, and hence\vspace*{2pt}
$\liminf_{n \to\infty}\sqrt{n}\P(\# C(0,0) > (n\lambda)^{3/2})
\geq
\frac{1}{ 2\gamma_0 \sqrt{\pi}}\int_0^\infty\wb{F}_{I^+_0}
(\bar{\lambda} t^{-3/2})t^{-3/2} \,dt$.\vspace*{2pt}
We observe that
\begin{eqnarray*}
&& \sqrt{n}\P\bigl(L(0,0)\leq n\tau, \# C(0,0) > (n\lambda)^{3/2}\bigr)
\\
&&\qquad \leq\sqrt{n}\P\Biggl(\sum_{k=0}^{\lfloor n\tau\rfloor}
\hat{D}_k(0,0) > (n\lambda)^{3/2}\Biggr)
\\
&&\qquad \leq\sqrt{n}\E\Biggl[\sum_{k=0}^{\lfloor n\tau\rfloor}
\widehat {D}_k(0,0)\Biggr](n\lambda)^{- 3/2}
\\
&&\qquad = \sqrt{n}\bigl(\lfloor n\tau\rfloor+ 1\bigr)\E\bigl(\widehat
{D}_0(0,0)\bigr) (n\lambda)^{- 3/2},
\end{eqnarray*}
where we have used the fact that $\{\hat{D}_k(0,0) =
\hat{h}^k(\hat{r}(0,0))(1) - \hat{h}^k(\hat{l}(0,0))(1): k \geq0\}$
is a martingale (see Proposition \ref{propMartingale}).
From the earlier discussions, it also follows that $\E(\widehat
{D}_0(0,0))\leq
2\E(G) = 2(1-p)p^{-1}$ where $G$ is a geometric random variable. Thus,
$\limsup_{n\to\infty}\sqrt{n}\P(L(0,0)\leq n\tau, \# C(0,0) >
(n\lambda)^{3/2})
=0 $ as $\tau\to0$,
which completes the proof.
\end{pf}
\begin{pf*}{Proof of Theorem \ref{BEarea}}
We first recall the result Lemma 6.1 of Resnick \cite{R07},
page 174 which states that for nonnegative Radon measures $\mu, \mu
_n, n \geq1$, on
$[0,\infty)^d \setminus\{ \mathbf{0}\}$ we have $\mu_n\stackrel
{v}{\to} \mu$ if and only if $\mu_n ([0,x_1] \times\cdots
\times[0, x_d] )^c \to
\mu ([0,x_1] \times\cdots\times[0, x_d] )^c$ for all
$x_1, \ldots, x_d \geq0$ with $(x_1, \ldots, x_d) \neq\mathbf{0}$.
This result implies that
Lemma \ref{lemmaLambdaBetaPositiveNonTruncate} together
with Corollary \ref{corLambdaPositiveNonTruncate}
and Theorem \ref{clusterheight} prove (\ref{eqnHack}).
Fix $\tau>0$, $\lambda> 0$. For $\alpha< 2/3$, $\delta> 0$ and
for all large $n$, we have
$\P(L(0,0)>n\tau,\#C(0,0)>(n\lambda)^{1/\alpha}) \leq
\P(L(0,0)>n\tau,\#C(0,0)>(n\delta)^{3/2})$. Fix any $\varepsilon> 0$
and choose
$\delta= \delta(\varepsilon) > 0$ so that $\frac{1}{\gamma_0\sqrt
{\pi\tau}}
\P ( \sqrt{2}\int_{0}^{\infty} W^{+,\tau}(t)\,dt > \wb{\delta
} )
< \varepsilon$, where $\wb{\delta} = \delta^{3/2}(\sqrt{2}\gamma_0p)^{-1}$.
From Lemma \ref{lemmaLambdaBetaPositiveNonTruncate}, we have
\[
\limsup_{n \to\infty}\sqrt{n}\P\bigl(L(0,0)>n\tau,\#C(0,0)>(n\lambda
)^{1/\alpha}\bigr) < \varepsilon.
\]
On the other hand,
from the properties of $W^{+}$ and $W^{\tau}$, it follows that
$\P(\int_0^\infty W^{+,\tau}(t)\,dt > 0) = 1$ for $\tau> 0$. Now for
$\alpha> 2/3$ and
$\delta> 0$ we have $\P(L(0,0)>n\tau,\#C(0,0) > (n\lambda
)^{1/\alpha}) \geq
\P(L(0,0)>n\tau,\#C(0,0) > (n\delta)^{3/2})$ for all large $n$.
Again from Lemma \ref{lemmaLambdaBetaPositiveNonTruncate}, we have
\begin{eqnarray*}
&& \liminf_{n \to\infty}\sqrt{n}\P\bigl(L(0,0)>n\tau,\#C(0,0)>(n
\lambda )^{1/\alpha}\bigr)
\\
&&\qquad \geq \frac{1}{\gamma_0\sqrt{\pi\tau}} \P \biggl( \sqrt{2}\int_{0}^{\infty}
W^{+,\tau}(t)\,dt > \wb{\delta } \biggr).
\end{eqnarray*}
Since
\begin{eqnarray*}
&& \limsup_{n \to\infty}\sqrt{n} \P\bigl(L(0,0)>n\tau,\#C(0,0)>(n
\lambda)^{1/\alpha}\bigr)
\\
&&\qquad \leq\lim_{n \to\infty}\sqrt{n}\P\bigl(L(0,0)>n\tau\bigr) =
\frac
{1}{\gamma_0\sqrt{\pi\tau}},
\end{eqnarray*}
letting $\delta\to0$,
we have $\lim_{n \to\infty}\sqrt{n}\P(L(0,0)>n\tau,\#
C(0,0)>(n\lambda)^{1/\alpha}) =
\frac{1}{\gamma_0\sqrt{\pi\tau}}$ for $\alpha> 2/3$.
This completes the proof of (\ref{eqnTrivialCasesHack}).
The argument for $(L(0,0), (D_{\max}(0,0))^{1/2})$, being similar, is
omitted. \end{pf*}
\section*{Acknowledgements}
Kumarjit Saha is grateful to the Indian Statistical Institute for
a fellowship to pursue his Ph.D. The authors also thank the referee for
comments which led to a significant improvement of the paper.
\newcommand{\sectionnew}[1]{\section{#1}\setcounter{equation}{0}\setcounter{theorem}{0}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\begin{eqnarray*}}{\begin{eqnarray*}}
\newcommand{\end{eqnarray*}}{\end{eqnarray*}}
\newcommand{{\mathbb N}}{\hfill\nonumber}
\newcommand{\textrm}{\textrm}
\newcommand \nc {\newcommand}
\nc \proof {\noindent {\em{Proof.\/ }}}
\nc \qed {$\Box$\hfill}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{question}[theorem]{Question}
\nc \bth[1] {\begin{theorem}\label{t#1} }
\nc \ble[1] {\begin{lemma}\label{l#1} }
\nc \bpr[1] {\begin{proposition}\label{p#1} }
\nc \bco[1] {\begin{corollary}\label{c#1} }
\nc \bde[1] {\begin{definition}\label{d#1}\rm }
\nc \bex[1] {\begin{example}\label{e#1}\rm }
\nc \bre[1] {\begin{remark}\label{r#1}\rm }
\nc \bcon[1] {\begin{conjecture}\label{con#1}\rm }
\nc \bque[1] {\begin{question}\label{que#1}\rm }
\nc {\eth} { \end{theorem} }
\nc {\ele} { \end{lemma} }
\nc {\epr}{ \end{proposition} }
\nc {\eco} { \end{corollary} }
\nc {\ede} {\end{definition} }
\nc {\eex} { \end{example} }
\nc {\ere} {\end{remark} }
\nc {\econ} { \end{conjecture} }
\nc {\eque} {\end{question} }
\nc \eqref[1] {{\rm{(\ref{#1})}}}
\nc \thref[1]{Theorem \ref{t#1}}
\nc \leref[1]{Lemma \ref{l#1}}
\nc \prref[1]{Proposition
\ref{p#1}} \nc \coref[1]{Corollary \ref{c#1}}
\nc \deref[1]{Definition \ref{d#1}}
\nc \exref[1]{Example \ref{e#1}}
\nc \reref[1]{Remark \ref{r#1}}
\nc \conref[1]{Conjecture\ref{con#1}}
\newcommand {\normprod}[1]{ {\textrm{:}}{#1}{\textrm{:}} }
\def W_{1+\infty} {W_{1+\infty}}
\def \W(N) {W_{1+\infty}(N)}
\def {\mathcal A} {{\mathcal A}}
\def {\mathcal M} {{\mathcal M}}
\def {\mathcal L} {{\mathcal L}}
\def {\mathcal O} {{\mathcal O}}
\def {\mathbb R} {{\mathcal R}}
\def {\mathcal D} {{\mathcal D}}
\def \gamma {\gamma}
\def {\mathcal B} {{\mathcal B}}
\def b {b}
\newcommand{\hfill\nonumber}{\hfill\nonumber}
\def{\mathrm{\, d}}{{\mathrm{\, d}}}
\def {\mathcal K} {{\mathcal K}}
\def\alpha{\alpha}
\def\beta{\beta}
\def\delta{\delta}
\def\Delta{\Delta}
\def\gamma{\gamma}
\def\Gamma{\Gamma}
\def\epsilon{\epsilon}
\def\varepsilon{\varepsilon}
\def\lambda{\lambda}
\def\Lambda{\Lambda}
\def\varkappa{\varkappa}
\def\omega{\omega}
\def\Omega{\Omega}
\def\varphi{\varphi}
\def\sigma{\sigma}
\def\Sigma{\Sigma}
\def\theta{\theta}
\def\Theta{\Theta}
\def\zeta{\zeta}
\def{\mathrm d}{\partial}
\def {\mathbb R} {{\mathbb R}}
\def {\mathbb C} {{\mathbb C}}
\def {\mathbb Z} {{\mathbb Z}}
\def {\mathbb N} {{\mathbb N}}
\def {\mathbb V} {{\mathbb V}}
\def {\mathbb F} {{\mathbb F}}
\def {\mathbb N} {{\mathbb N}}
\def {\mathbb Z} {{\mathbb Z}}
\def {\mathbb Q} {{\mathbb Q}}
\def {\mathbb R} {{\mathbb R}}
\def {\mathbb C} {{\mathbb C}}
\def {\mathrm{Hom}}{ {\mathrm{Hom}}}
\def {\mathrm{Aut}}{ {\mathrm{Aut}}}
\def {\mathrm{End}}{ {\mathrm{End}}}
\def {\mathrm{Tr}}{ {\mathrm{Tr}}}
\def {\mathrm{Coker}} { {\mathrm{Coker}} }
\def {\mathrm{ord}} { {\mathrm{ord}} }
\def {\mathrm{rank}} { {\mathrm{rank}} }
\def {\mathrm{span}} { {\mathrm{span}} }
\def {\mathrm{const}} { {\mathrm{const}} }
\def {\mathrm{mod}} { {\mathrm{mod}} }
\def {\mathrm{Spec}} { {\mathrm{Spec}} }
\def {\mathrm{diag}} { {\mathrm{diag}} }
\def {\mathrm{deg}} { {\mathrm{deg}} }
\def {\mathrm{mult}} { {\mathrm{mult}} }
\def {\mathrm{Res}} { {\mathrm{Res}} }
\def {\mathrm{ad}} { {\mathrm{ad}} }
\def {\mathrm{Ad}} { {\mathrm{Ad}} }
\def {\mathrm{wt}} { {\mathrm{wt}} }
\def {\mathrm{Psd}} { {\mathrm{Psd}} }
\def {\mathrm{Im}} { {\mathrm{Im}} }
\def {\mathrm{Re}} { {\mathrm{Re}} }
\def {\partial} { {\partial}}
\renewcommand \ker { {\mathrm{Ker}} }
\def \overrightarrow {\overrightarrow }
\nc \Wr {Wr} \nc \GRN { \Gr^{(N)} }
\nc \GRA[1] { \Gr_A^{(#1)} }
\nc \GRAN { \GRA{N} } \nc \GrA[1] { \Gr_A(#1) }\nc \GrAa {
\GrA{\alpha} }
\nc \GRB[1] { \Gr_B^{(#1)} }
\nc \GRBN { \GRB{N} } \nc \GrB[1] { \Gr_B(#1) } \nc \GrBb {
\GrB{\beta} }
\nc \GRMB[1] { \Gr_{MB}^{(#1)} }
\nc \GRMBN { \GRMB{N} } \nc \GrMB[1] { \Gr_{MB}(#1) } \nc \GrMBb {
\GrMB{\beta} }
\def\dfrac#1#2{{\displaystyle\frac{#1}{#2}}}
\begin{document}
\title{{\LARGE\bf{Cohomology of $GL_4({\mathbb Z})$ with Non-trivial
Coefficients}}}
\author{
I. ~Horozov
\thanks{E-mail: ihorozov@brandeis.edu}
\\ \hfill\\ \normalsize \textit{Department of Mathematics,}\\
\normalsize \textit{ Brandeis University, 415 South St.,}\\
\normalsize \textit {MS 050, Waltham, MA 02454 } \\
}
\date{}
\maketitle
\begin{abstract}
In this paper we compute the cohomology groups of $GL_4({\mathbb Z})$ with
coefficients in symmetric powers of the standard representation
twisted by the determinant. This problem arises in Goncharov's
approach to the study of motivic multiple zeta values of depth 4.
The techniques that we use include Kostant's formula for
cohomology groups of nilpotent Lie subalgebras of a reductive Lie
algebra, Borel-Serre compactification, a result of Harder on
Eisenstein cohomology. Finally, we need to show that the ghost
class, which is present in the cohomology of the boundary of the
Borel-Serre compactification, disappears in the Eisenstein
cohomology of $GL_4({\mathbb Z})$. For this we use a computationally
effective version for the homological Euler characteristic of
$GL_4({\mathbb Z})$ with non-trivial coefficients.
\end{abstract}
\tableofcontents
\section{Introduction}
\subsection{Main result and applications}
The main goal of this paper is to present a computation of
cohomology groups
$$H^i(GL_4({\mathbb Z}), S^{n-4} V_4 \otimes det),$$
where $S^{n-4} V_4$ is the $(n-4)$-th
symmetric power of the standard representation $V_4$
and $det$ is the determinant representation.
The above cohomology groups describe certain spaces
of motivic multiple zeta values.
This relation was revealed by Goncharov who suggested to me the problem
of computing the cohomology groups of $GL_4({\mathbb Z})$.
Recall the definition
of multiple zeta values
$$\zeta(k_1,\dots,k_m)=
\sum_{0<n_1<\dots <n_m}\frac{1}{n_1^{k_1}\dots n_m^{k_m}},$$
where $k_1+\dots +k_m$ is called {\it{weight}} and $m$ is
called {\it{depth}}.
Goncharov has described the cases of {\it{depth}}=$2$ \cite{G2}
and of {\it{depth}}=$3$ \cite{G3}. He relates the space of motivic
multiple zeta values of {\it{depth}}=$2$ and {\it{weight}}=$n$ to
the cohomology groups of $GL_2({\mathbb Z})$ with coefficients in the
$(n-2)$-symmetric power of the standard representation $V_2$,
namely, to
$$H^i(GL_2({\mathbb Z}),S^{n-2}V_2).$$
He calls this a misterious relation between the multiple zeta
values of {\it{depth}}=$m$ and the ''modular variety''
$$GL_m({\mathbb Z})\backslash GL_m({\mathbb R})/SO_m({\mathbb R})\times {\mathbb R}^{\times}_{>0}.$$
In the paper \cite{G3}, he relates the spaces of
motivic multiple zeta values
of {\it{depth}}=$3$ and {\it{weight}}=$n$ to the cohomology of
$GL_3({\mathbb Z})$ with
coefficients in the $(n-3)$-symmetric power of
the standard representation $V_3$, namely,
$$H^i(GL_3({\mathbb Z}),S^{n-3}V_3).$$
Goncharov has also related
the case of multiple zeta values of
{\it{depth}}=$4$ and {\it{weight}}=$n$ to the computation of
the cohomology of $GL_4({\mathbb Z})$ with coefficients in the
$(n-4)$-symmetric power
of the standard representation $V_4$ twisted by the determinant
(private communications). That is,
in order to compute the spaces of
motivic multiple zeta values of {\it{depth}}=$4$
and {\it{weight}}=$n$ one has to know
$$H^i(GL_4({\mathbb Z}),S^{n-4}V_4 \otimes det).$$
The main result of this paper is the following. \bth{1.1} The
cohomology groups of $GL_4({\mathbb Z})$ with
coefficients in the symmetric powers of the standard representation
twisted by the determinant are given by
$H^i(GL_4({\mathbb Z}),S^{n-4}V_4 \otimes det)=
\left\{\begin{tabular}{ll}
${\mathbb Q} \oplus H^1_{cusp}(GL_2({\mathbb Z}),S^{n-2}V_2 \otimes det)$ &for $i=3,$\\
$0$ &for $i\neq 3.$
\end{tabular}\right.$
\\
\\
More explicitly,
$dim(H^3(GL_4({\mathbb Z}),S^{12n-4+k}V_4 \otimes det))
=\left\{\begin{tabular}{ll}
$n+1$ & for $k=0,4,6,8,10,$\\
$n$ & for $k=2,$\\
$0$ & for $k$ odd.
\end{tabular}\right.$
\eth
\subsection{Computational methods and notation}
All representations that we consider are finite dimensional
representations of $GL({\mathbb Q})$ defined over ${\mathbb Q}$. However, we shall
consider them as representations of the arithmetic subgroups via
inclusion. We assume that the reader is familiar with group
cohomology. For a good introduction to this subject and to various
Euler characteristics of groups, see \cite{Br}.
We are going to describe briefly various types of cohomology groups of
arithmetic groups, namely, boundary cohomology, cohomology at the
infinity, Eisenstein cohomology, interior cohomology and cusp
cohomology. All of them are based on a compactification of a certain
space, called the Borel-Serre compactification. The reader who is not
familiar with these constructions should not be discouraged. We
have tried to present a piece of ``Calculus'' for cohomology of
arithmetic groups. That is, we give the definitions intuitively
rather than strictly, and describe the computational tools which
we are going to use. The constructions and the proofs of the basic
tools can be found in the cited literature. What we do in the
main part of this paper is to present the desired computation
based on these tools.
We start with the Borel-Serre compactification
\cite{BoSe}. Let $\Gamma$ be a subgroup of $GL_m({\mathbb Q})$ which is
commensurable
to $GL_m({\mathbb Z})$. That is, the intersection $\Gamma \cap GL_m({\mathbb Z})$ is
of finite index both in $\Gamma$ and in $GL_m({\mathbb Z})$.
Let
$$X=GL_m({\mathbb R})/SO_m({\mathbb R})\times{\mathbb R}^{\times}_{>0}.$$
Then $X$ is a contractible topological space on which $\Gamma$ acts on the left.
And let
$$Y_{\Gamma}=\Gamma\backslash X.$$
Then the Borel-Serre compactification of $Y_\Gamma$, denoted by
$\overline{Y}_\Gamma$,
is a compact space, containing $Y_\Gamma$. Moreover, it is of the
same homotopy
type as $Y_\Gamma$. If $V$ is a representation of $\Gamma$ and
$V^\sim$ is the corresponding sheaf then
$$H^i_{top}(\overline{Y}_\Gamma,V^{\sim})=H^i_{group}(\Gamma,V).$$
The space $\overline{Y}_\Gamma$ can be split into
strata, where each stratum corresponds
to a parabolic subgroup $P$ of $GL_{m/{\mathbb Q}}$
and the maximal stratum is $Y_\Gamma$.
Also the closure of a stratum corresponding
to a parabolic subgroup $P$ consists of all strata corresponding
to parabolic
subgroups $Q$ so that $Q\subset P$.
Let $Y_{\Gamma,P}$ be the stratum corresponding to a
parabolic subgroup $P$. Let
$$P({\mathbb Z})=P({\mathbb Q})\cap \Gamma.$$
Then the topological
cohomology of $\overline{Y}_{\Gamma,P}$ coincides with the group cohomology of
$P({\mathbb Z})$. More precisely,
$$H^i_{top}(\overline{Y}_{\Gamma,P},j^*_P V^{\sim})=H^i_{group}(P({\mathbb Z}),V),$$
where $V$ is a representation over the rational numbers and
$V^{\sim}$ the corresponding sheaf on $\overline{Y}_\Gamma$ and
$j_P^*V^\sim$ is its restriction on $\overline{Y}_{\Gamma,P}$.
The boundary of the Borel-Serre compactification is
$${\mathrm d}\overline{Y}_\Gamma=\overline{Y}_\Gamma-Y_\Gamma=\cup_P \overline{Y}_{\Gamma,P}.$$
The inclusion $$j:{\mathrm d}\overline{Y}_\Gamma\subset \overline{Y}_\Gamma$$ induces
$$j^{\#}: H^i_{top}(\overline{Y}_\Gamma,V^{\sim})\rightarrow
H^i_{top}({\mathrm d}\overline{Y}_\Gamma,j^*V^{\sim}).$$
We call the target of the last map $j^\#$ the cohomology of the boundary.
We use the notation
$$H^i_{\mathrm d}(\Gamma,V):=H^i_{top}({\mathrm d}\overline{Y}_\Gamma,j^*V^{\sim}).$$
We warn the reader that it is not a standard notation.
The image of the map $j^\#$ is called the cohomology at infinity
of $\Gamma$. We use the notation
$$H^i_{inf}(\Gamma,V):= {\mathrm{Im}} (j^\#).$$
And the kernel of the map $j^\# $ is called interior cohomology of $\Gamma$.
We use the notation
$$H^i_{!}(\Gamma,V):=\ker(j^\#).$$
For the representations that we will consider, the
cohomology at infinity coincides with the Eisenstein cohomology.
This is used for describing certain maps between cohomology
groups. Also, the interior cohomology coincides with the cusp
cohomology. For the representations which we will consider, we are
going to use that fact in order to show that the interior cohomology
vanishes.
In our problem we have
$$H^i_{cusp}(GL_4({\mathbb Z}),S^{n-4}V_4\otimes det)=0,$$
where $V_m$ is the standard $m$-dimensional representation of
$GL_m({\mathbb Q})$. And $S^{n}$ is the $n$-th symmetric power. The last
equality holds for $n>4$ because the representation
$$S^{n-4}V_4\otimes det$$
is not
self-dual. For $n=4$ it is true because $$H^i_{cusp}(SL_4({\mathbb Z}),{\mathbb Q})=0.$$
Thus, we need to compute only the Eisenstein cohomology.
The highest weight representation will be denoted by $L[a_1, ...
,a_m]$, where the weight $[a_1, ... ,a_m]$ sends $diag[H_1, ...
,H_m]$ to $a_1H_1+ ... +a_mH_m$. Sometimes we shall denote the
weight simply by $\lambda$. At a later stage there will be a number of
cohomologies to consider. In order to make the answer more
readable, sometimes we abbreviate. For example:
$$H^i(L[a_1,\dots,a_d]):=H^i(GL_d({\mathbb Z}), L[a_1,\dots,a_d]).$$
For further abbreviation we set
$$\begin{array}{ll}
&(a_1,a_2|a_3|a_4):=H^1(L[a_1,a_2]) \otimes H^0(L[a_3]) \otimes H^0(L[a_4])\\
&(a_1|a_2,a_3|a_4):=H^0(L[a_1]) \otimes H^1(L[a_2,a_3]) \otimes H^0(L[a_4])\\
&(a_1|a_2|a_3,a_4):= H^0(L[a_1]) \otimes H^0(L[a_2]) \otimes H^1(L[a_3,a_4])\\
&(a_1,a_2|a_3,a_4):=H^1(L[a_1,a_2]) \otimes H^1(L[a_3,a_4])\\
&(a_1|a_2|a_3|a_4):=H^0(L[a_1]) \otimes H^0(L[a_2]) \otimes H^0(L[a_3])
\otimes H^0(L[a_4])
\end{array}$$
We also will use the abbreviation
$$(\overline{a_1,a_2}|a_3|a_4):=H^1_{cusp}(L[a_1,a_2]) \otimes H^0(L[a_3])
\otimes H^0(L[a_4]).$$ We consider the parabolic subgroups of
$GL_4$ that contain a fixed Borel subgroup. We shall consider the
standard representation of
$GL_4$ with the choice of the Borel subgroup $B$ being the upper
triangular matrices. Then the parabolic subgroups can be listed in
the following way: $P_{ij}$ is the smallest parabolic subgroup
containing a non-zero $a_{ji}$-entry. And $P_{12,34}$ is the
smallest parabolic subgroup containing $a_{21} \neq 0$ and $a_{43}
\neq 0$. More precisely: All parabolic subgroups contain $B$ which
is upper triangular. Also, $P_{12}$ has a quotient $GL_2 \times
GL_1 \times GL_1,$ $P_{23}$ has a quotient $GL_1 \times GL_2
\times GL_1,$ $P_{34}$ has a quotient $GL_1 \times GL_1 \times
GL_2,$ $P_{13}$ has a quotient $GL_3 \times GL_1,$ $P_{24}$ has a
quotient $GL_1 \times GL_3,$ and $P_{12,34}$ has a quotient $GL_2
\times GL_2.$
We are going to use Kostant's theorem \cite{K} in order to
obtain information about the parabolic subgroups. To do that we
need to examine carefully the action of the Weyl group $W$ on the
root system of $gl_n$. Also we need the Weyl group $W_P$
associated to the parabolic subalgebra $P$. In order to use Kostant's
theorem, we need to examine the action of the Weyl group $W$ on
the root system of $gl_n$ up to permutation of the root system of
$P$. That is, we need to consider representatives of the quotient
$W_P \backslash W$. We state Kostant's theorem \cite{K}.
\bth{1.1} Let $V$ be a representation of highest
weight $\lambda$. Let $N_P$ be the nilpotent radical of a parabolic subgroup $P$,
and let $\rho$ be half of the sum of the positive roots. Then
$$H^i(N_P, V)= \oplus _{\omega} L_{{\omega}(\lambda + \rho) - \rho},$$
where the sum is taken over the representatives of the quotient
$W_P \backslash W$ with minimal length such that their length is
exactly $i$. In the above notation $L_\lambda$ denotes the
representation of the Levi quotient of $P$ with highest weight $\lambda.$ \eth
Let $[a,b,c,d]$ denote an element of the root lattice (inside
$h^*$) whose value on the diagonal entry
$[H_{11},H_{22},H_{33},H_{44}]$ in $h$ is $a H_{11}+ b H_{22}+ c
H_{33}+ d H_{44}.$ The Weyl group acts on the weight lattice by
permuting the entries of $[a,b,c,d].$ It is well known that the
Weyl group is generated by reflections perpendicular to the
primitive roots. We can choose positivity so that the primitive
roots correspond to the permutation $(12)$, $(23)$ and $(34)$,
(having $sl_4$ in mind; $(12)$ sends $[a,b,c,d]$ to $[b,a,c,d]$.)
Then the length of an element of the Weyl group is precisely the
(minimal) number of successive transpositions, or equivalently,
the (minimal) number of reflections w.r.t. the primitive roots. In
this setting the right quotient $W_P \backslash W$ can be
interpreted as shuffles in the following way: Take for example the
parabolic subalgebra $P_{23}$. Its Levi quotient $M_P =
M_{P_{23}}$ is $gl_1 \times gl_2 \times gl_1.$ Thus, $W_P$ is
generated by $(23).$ Among the representatives of the quotient
$W_P \backslash W$ we can consider the ones that preserve the
order of the subset $\{23\}$ inside $\{1234\}$. Thus, we can
consider all shuffles of $\{1|23|4\}.$ Similarly, if we take the
parabolic subalgebra $P_{12,34}$, we need to consider the shuffles
of $\{12|34\}$ so that the order of $\{12\}$ and the order of $\{34\}$ is preserved.
And for the subalgebra $P_{13}$ we consider the
shuffles of the set $\{123|4\}$, which means permutations of
$\{1234\}$ such that the order $\{123\}$ is preserved.
In order to apply Kostant's theorem, we need to examine the length of
each element $\omega$
in the Weyl group $W,$ and also the resulting weight
$\omega (\lambda + \rho) - \rho$, where
$\lambda=[a,b,c,d]$ is the weight of $V$ and
$\rho$ is half of the sum of the positive roots.
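As a small illustration of this bookkeeping (the analogous tables for $gl_2$ and
$gl_3$ appear in the next sections), for $gl_4$ we have
$\rho=[3/2,1/2,-1/2,-3/2]$, so for example the simple transposition $(12)$, which
has length one, acts on a weight $\lambda=[a,b,c,d]$ by
$$(12)(\lambda+\rho)-\rho=[b-1,a+1,c,d].$$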
After we obtain the cohomology of the parabolic groups we have to
consider a spectral sequence involving these cohomologies in order
to obtain the cohomology of the boundary of the Borel-Serre
compactification. Then we use homological Euler characteristics in order to compute
the cohomology groups of $GL_m({\mathbb Z})$ for $m=2,3,4$.
{\bf Acknowledgments:} I would like to thank Professor Goncharov
for giving me this problem and for computational techniques that I
learned from him. I would like to thank Professor Harder for
teaching me important computational techniques.
This work was initiated at Max-Planck Institute f\"ur Mathematik.
I am very grateful for the stimulating atmosphere, created there,
as well as for the financial support during my stay.
\section {Homological Euler characteristics of $GL_m({\mathbb Z})$}
We call {\it{homological Euler characteristic of a group}} $\Gamma$
the alternating sum of the dimensions of its cohomology groups.
We denote it by
$\chi_h(\Gamma,V),$
where $V$ is a finite dimensional representation of $\Gamma$. More precisely,
$$\chi_h(\Gamma,V)=\sum_i (-1)^i dim H^i(\Gamma,V).$$
In this section we compute the homological Euler characteristics
of $GL_m({\mathbb Z})$ for $m=2,3,4$ with representations which later will
occur in Kostant's formula applied to $GL_4({\mathbb Z})$ with
coefficients in the $(n-4)$-th symmetric power of
the standard representation twisted by the determinant, which is
$L[n-3,1,1,1]$.
The material in this section is in the spirit of the papers \cite{Ho2}
and \cite{Ho1}. Most of the formulas and notations are taken from there.
The only exception is the computation of $\chi_h(GL_3({\mathbb Z}),L[n-3,1,0])$, done here in detail.
We start with $GL_2({\mathbb Z})$.
\bth{3.1}
Let $S^nV_2$ be the $n$-th symmetric power of the
standard representation of $GL_2$. Then\\
$\chi_h(GL_2({\mathbb Z}), S^{12n+k}V_2)= \left\{\begin{tabular}{cl}
$-n+1$ & $k=0$ \\
$-n$ & $k=2,4,6,8$ \\
$-n-1$ & $k=10$ \\
$0$ & $k$ odd, \\
\end{tabular}\right.
$\\
and\\
$\chi_h(GL_2({\mathbb Z}), S^{12n+k}V_2\otimes \det)=
\left\{\begin{tabular}{cl}
$-n$ & $k=0$ \\
$-n-1$ & $k=2,4,6,8$ \\
$-n-2$ & $k=10$ \\
$0$ & $k$ odd. \\
\end{tabular}\right.
$
\\
\eth
For $GL_m({\mathbb Z})$, $m=3 \mbox{ and } 4$, we need to consider the representations\\
\begin{tabular}{l}
$L[n-3,1,0]=\ker(S^{n-3}V_3\otimes V_3 \rightarrow S^{n-2}V_3),$\\
\\
$L[n-2,1,1]=S^{n-3}V_3\otimes det,$\\
\\
$L[n-2,2,2]=S^{n-4}V_3,$ \\
\\
$L[n-3,1,1,1]=S^{n-4}V_4\otimes det.$\\
\end{tabular}
\bth{3.2}
The homological Euler characteristics of $GL_3({\mathbb Z})$ and $GL_4({\mathbb Z})$
with coefficients in the above representation are given by\\
\begin{tabular}{l}
(a) $\chi_h(GL_3({\mathbb Z}),L[n-3,1,0])
=\chi_h(GL_2({\mathbb Z}),S^{n-4}V_2)-\chi_h(GL_2({\mathbb Z}),S^{n-2}V_2)$,\\
\\
(b) $\chi_h(GL_3({\mathbb Z}),L[n-2,1,1])=\chi_h(GL_2({\mathbb Z}),S^{n-2}V_2\otimes det),$\\
\\
(c) $\chi_h(GL_3({\mathbb Z}),L[n-2,2,2])=\chi_h(GL_2({\mathbb Z}),S^{n-4}V_2),$ \\
\\
(d) $\chi_h(GL_4({\mathbb Z}),L[n-3,1,1,1])=\chi_h(GL_2({\mathbb Z}),S^{n-2}V_2\otimes det).$\\
\end{tabular}
\eth The technique that we are going to use involves a substantial
simplification of the trace formula which works when
$\Gamma=GL_m({\mathbb Z})$ or a group commensurable to $GL_m({\mathbb Z})$. The
simplification of the trace formula for $GL_m({\mathbb Z})$ was developed
in \cite{Ho1,Ho2}. Besides the simplification we are going to use
some computations which were done in the above two papers.
Now we present the simplification of the trace formula in the case of
$GL_m({\mathbb Z})$. An arithmetic group $\Gamma$
has also an orbifold Euler characteristic.
We denote it by $\chi(\Gamma)$, without subscript.
It is in fact an Euler characteristic of a certain orbifold.
There is
a more algebraic description. If an arithmetic group $\Gamma$ has no torsion
then the orbifold Euler characteristic coincides with the homological
Euler characteristic with coefficients in the trivial representation.
$$\chi(\Gamma)=\chi_h(\Gamma,{\mathbb Q}).$$
If $\Gamma$ has torsion choose a torsion free finite index subgroup $\Gamma_0$.
Then
$$\chi(\Gamma)=\frac{\chi(\Gamma_0)}{[\Gamma:\Gamma_0]}.$$
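For instance, assuming the standard facts that the principal congruence subgroup
$\Gamma(3)\subset SL_2({\mathbb Z})$ is torsion free of index $|SL_2({\mathbb Z}/3{\mathbb Z})|=24$ and that the
corresponding quotient of the upper half plane is a sphere with four punctures, this
definition gives
$$\chi(SL_2({\mathbb Z}))=\frac{\chi(\Gamma(3))}{24}=\frac{2-4}{24}=-\frac{1}{12}.$$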
Let $C(A)$ denote the centralizer of the element $A$ inside $\Gamma$.
Then the classical trace formula is
$$\chi_h(\Gamma,V)=\sum_{A} \chi(C(A)) {\mathrm{Tr}}(A|V),$$
where the sum is taken over all torsion elements considered up to
conjugation. We remark that in this formula the identity
element is also considered as a torsion element.
For the simplification of the trace formula we need the following
definition. Let $A$ be an element in $GL_m({\mathbb Z})$. Consider it as an
$m\times m$ matrix. Let $f$ be its characteristic polynomial. Let
$$f=f_1^{a_1}\dots f_l^{a_l}$$
be the factorization of $f$ into irreducible over ${\mathbb Q}$
polynomials. Denote by
$$R(g,h)=\prod_{i,j}(\alpha_i-\beta_j)$$ the resultant of the polynomials
$$g=\prod_i(x-\alpha_i) \mbox{ and } h=\prod_j(x-\beta_j).$$
Denote by
$$R(A)=\prod_{i<j}R(f_i^{a_i},f_j^{a_j})$$
\bth{2.10}
Let $V$ be a finite dimensional representation
of $GL_m({\mathbb Q})$. Then the homological Euler characteristic of
$GL_m({\mathbb Z})$ with coefficients in $V$ is given by
$$
\chi_h(GL_m({\mathbb Z}),V)=
\sum_{A} |R(A)|\chi(C(A)) {\mathrm{Tr}}(A|V),
$$
where the sum is taken over torsion matrices $A$ consisting
of square blocks $A_{11},\dots A_{ll}$ on the block-diagonal and zero
blocks off the diagonal. Also the matrices $A_{ii}$ are non-conjugate to
each other. And they are chosen from the set
$\{+1,+I_2,-1,-I_2,T_3,T_4,T_6\},$
where
$$T_3=
\left[
\begin{tabular}{rr}
$0$ & $1$\\
$-1$ & $-1$
\end{tabular}
\right],
\mbox{ }
T_4=
\left[
\begin{tabular}{rr}
$0$ & $1$\\
$-1$ & $0$
\end{tabular}
\right],
\mbox{ }
T_6=
\left[
\begin{tabular}{rr}
$0$ & $-1$\\
$1$ & $1$
\end{tabular}
\right].
$$
The blocks on the diagonal are chosen up to permutation. And the
characteristic polynomial $f_i$ of $A_{ii}$ is a power of an
irreducible polynomial, and $f_i$ and $f_j$ are relatively prime.
\eth
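As a quick illustration of the resultant factor (this example is not used later), take
the block-diagonal matrix $A \in GL_3({\mathbb Z})$ with blocks $T_4$ and $1$, written $[T_4,1]$
below. The characteristic polynomials of the blocks are $x^2+1$ and $x-1$, so
$$R(A)=R\bigl(x^2+1,\, x-1\bigr)=(i-1)(-i-1)=2.$$
Combined with lemma 2.4(i) below, this gives $\chi(C([T_4,1]))=1/8$, in agreement with
the fact that the centralizer $C([T_4,1])$ is a finite group of order $8$.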
{\bf{Remark:}} There is one more simplification that we can make.
In the formula in theorem 2.3 for the homological Euler characteristic
one can do the summation in the following way. If $-I_m$ acts
on $V$ nontrivially then all the cohomologies of $GL_m({\mathbb Z})$ with coefficients
in $V$ vanish and the homological Euler characteristic vanishes.
If $-I_m$ acts trivially on $V$ then $ {\mathrm{Tr}}(-A|V)= {\mathrm{Tr}}(A|V)$. Also, $C(-A)=C(A)$
and $|R(-A)|=|R(A)|$. If $-A$ is not conjugate to $A$ then in the
sum of theorem 2.3 we can compute the invariants for $A$. And for $-A$
they are the same. Note that $-A$ is conjugate to $A$ if and only if
one can obtain $-A$ by permuting the blocks on the diagonal of $A$.
\proof (of theorem 2.2) Parts (b), (c) and (d) are computed in \cite{Ho1}.
We are going to prove part (a). We are going to use the following notation.
Given a matrix $A$ whose blocks on the diagonal are $A_{11},\dots, A_{ll}$
and whose blocks off the diagonal are zero, we write it as
$$A=[A_{11},\dots, A_{ll}].$$ Using this notation and the notation of
theorem 2.3 we quote lemma 4.2 of the paper \cite{Ho1}.
\ble{4.2}For the centralizers and the resultants of the torsion elements in
$GL_3({\mathbb Z})$ we have\\
\begin{tabular}{l}
(c) $|R([I_2,-1])|\chi(C([I_2,-1]))=-\frac{1}{12}$,\\
\\
(e) $|R([T_3,1])|\chi(C([T_3,1]))=\frac{1}{4}$,\\
\\
(f) $|R([T_6,1])|\chi(C([T_6,1]))=\frac{1}{12}$,\\
\\
(i) $|R([T_4,1])|\chi(C([T_4,1]))=\frac{1}{4}$.\\
\\
\end{tabular}
\ele
Also, we are going to use lemma 5.3 from the same paper \cite{Ho1}.
\ble{5.3}
The traces of the torsion
elements in $GL_3({\mathbb Z})$ acting on the symmetric power of the standard
representation are given by:\\
\\
(c) \mbox{ } $ {\mathrm{Tr}}([I_2,-1]|S^{2n+k}V_3)= \left\{\begin{tabular}{cc}
$n+1$ & $k=0$ \\ $n+1$ & $k=1$, \\
\end{tabular}\right.
$ \\
(e) \mbox{ } $ {\mathrm{Tr}}([T_3,1]|S^{3n+k}V_3)=
\left\{\begin{tabular}{cc} $1$ & $k=0$ \\ $0$ & $k=1$ \\ $0$ &
$k=2$, \\
\end{tabular}\right.
$\\
(f) \mbox{ } $ {\mathrm{Tr}}([T_6,1]|S^{6n+k}V_3)
=
\left\{\begin{tabular}{rc} $1$ & $k=0$ \\ $2$ & $k=1$ \\ $2$ &
$k=2$ \\ $1$ & $k=3$ \\ $0$ & $k=4$ \\ $0$ & $k=5$, \\
\end{tabular}\right.
$\\
\\
(i) \mbox{ } $ {\mathrm{Tr}}([T_4,1]|S^{4n+k}V_3)= \left\{\begin{tabular}{cc}
$1$ & $k=0$ \\ $1$ & $k=1$ \\ $0$ & $k=2$ \\ $0$ & $k=3$. \\
\end{tabular}\right.
$\\
\ele
In order to compute $ {\mathrm{Tr}}(A|L[w-3,1,0])$ for torsion elements $A$,
we are going to use
$$L[w-3,1,0]=\ker(S^{w-3}V_3\otimes V_3 \rightarrow S^{w-2}V_3).$$
Also, we are going to use that
$$ {\mathrm{Tr}}(A|V\otimes W)= {\mathrm{Tr}}(A|V) {\mathrm{Tr}}(A|W).$$
Using the above two equalities together with lemma 2.5, we obtain
the following.
\ble{5.3}
The traces of the torsion
elements in $GL_3({\mathbb Z})$ acting on the $L[w-3,1,0]$ are given by:\\
\\
(c) \mbox{ } $ {\mathrm{Tr}}([I_2,-1]|L[2n-1+k,1,0])= \left\{\begin{tabular}{cc}
$-1$ & $k=0$ \\ $0$ & $k=1$, \\
\end{tabular}\right.
$ \\
(e) \mbox{ } $ {\mathrm{Tr}}([T_3,1]|L[3n-1+k,1,0])=
\left\{\begin{tabular}{cc} $-1$ & $k=0$ \\ $0$ & $k=1$ \\ $0$ &
$k=2$, \\
\end{tabular}\right.
$\\
(f) \mbox{ } $ {\mathrm{Tr}}([T_6,1]|L[6n-1+k,1,0])
=
\left\{\begin{tabular}{rc} $-1$ & $k=0$ \\ $0$ & $k=1$ \\ $2$ &
$k=2$ \\ $3$ & $k=3$ \\ $2$ & $k=4$ \\ $0$ & $k=5$, \\
\end{tabular}\right.
$\\
\\
(i) \mbox{ } $ {\mathrm{Tr}}([T_4,1]|L[4n-1+k,1,0])= \left\{\begin{tabular}{cc}
$-1$ & $k=0$ \\ $0$ & $k=1$ \\ $1$ & $k=2$ \\ $0$ & $k=3$. \\
\end{tabular}\right.
$\\
\ele
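As a sample of how these entries follow from lemma 2.5, consider the case $k=0$ of (c):
by lemma 2.5(c) we have $ {\mathrm{Tr}}([I_2,-1]|S^{2n-1}V_3)=n$ and
$ {\mathrm{Tr}}([I_2,-1]|S^{2n}V_3)=n+1$, while $ {\mathrm{Tr}}([I_2,-1]|V_3)=1$, so
$$ {\mathrm{Tr}}([I_2,-1]|L[2n-1,1,0])=n\cdot1-(n+1)=-1.$$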
For each of the torsion elements $A$ in $GL_3({\mathbb Z})$ we have that
$A$ and $-A$ are not conjugate. When we use theorem 2.3 we can
count only four of the torsion elements listed in lemmas 2.4, 2.5
and 2.6 and multiply by two in order to consider the contribution
of the negative of these torsion elements. Thus, using theorem
2.3, lemma 2.4 and
lemma 2.6 we obtain\\
\\
\begin{tabular}{l}
$\chi_h(GL_3({\mathbb Z}),L[12n-1,1,0])=2(\frac{1}{12}-\frac{1}{4}-\frac{1}{12}
-\frac{1}{4})=-1,$\\
\\
$\chi_h(GL_3({\mathbb Z}),L[12n+1,1,0])=2(\frac{1}{12}+0+\frac{2}{12}
+\frac{1}{4})=1,$\\
\\
$\chi_h(GL_3({\mathbb Z}),L[12n+3,1,0])=2(\frac{1}{12}+0+\frac{2}{12}
-\frac{1}{4})=0,$\\
\\
$\chi_h(GL_3({\mathbb Z}),L[12n+5,1,0])=2(\frac{1}{12}-\frac{1}{4}-\frac{1}{12}
+\frac{1}{4})=0,$\\
\\
$\chi_h(GL_3({\mathbb Z}),L[12n+7,1,0])=2(\frac{1}{12}+0+\frac{2}{12}
-\frac{1}{4})=0,$\\
\\
$\chi_h(GL_3({\mathbb Z}),L[12n+9,1,0])=2(\frac{1}{12}+0+\frac{2}{12}
+\frac{1}{4})=1,$\\
\end{tabular}\\
Consider the statement of theorem 2.2 part (a). The above computation
of homological Euler characteristics
gives the left-hand side of part (a).
The right-hand side can be computed directly
from theorem 2.1. They do coincide. Thus, part (a) of theorem 2.2 is
proven.
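For instance, for the first entry $L[12n-1,1,0]$ the right-hand side of part (a) is, by
theorem 2.1,
$$\chi_h(GL_2({\mathbb Z}),S^{12n-2}V_2)-\chi_h(GL_2({\mathbb Z}),S^{12n}V_2)=\bigl(-(n-1)-1\bigr)-(-n+1)=-1,$$
which indeed agrees with the value computed above; the remaining residues are checked in
the same way.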
\section {Cohomology of $GL_2({\mathbb Z})$.}
This section shows how the computational method works for
$GL_2({\mathbb Z})$. All the results are known, but we need them for the
later sections. We are going to compute Eisenstein cohomology and cusp cohomology of
$GL_2({\mathbb Z})$ with coefficients in some representations.
First we are going to compute the cohomology of the boundary using
Kostant's theorem.
Let $L[a,b]$ be the irreducible representation with
highest weight $[a,b]$. The group $GL_2$ has one parabolic subgroup up to conjugation -
the Borel subgroup $B$.
It has a nilpotent radical $N$ and a Levi quotient $GL_1 \times GL_1$.
The Weyl group
has two elements. Also, half of the `sum' of the positive roots
is $\rho=[1/2,-1/2]$. Consider the following table:
$$
\begin{array}{llllllll}
&\omega \in W &length &\omega (\lambda +\rho )-\rho \\
&12 &0 &[a,b]\\
&21 &1 &[b-1,a+1]\\
\end{array}
$$
From Kostant's theorem we obtain that
$$H^n(N, L[a,b])= \left\{
\begin{array}{lll}
&L[a,b] &n=0,\\
&L[b-1,a+1] &n=1.
\end{array}
\right.
$$
The integral points of the
Levi quotient of $B$ are $GL_1({\mathbb Z}) \times GL_1({\mathbb Z})$. Using
the Hochschild-Serre spectral sequence we compute $H^n(B,L[a,b])$.
If both $a$ and $b$ are even then
$H^0(B,L[a,b])=H^0(GL_1({\mathbb Z}), L[a]) \otimes H^0(GL_1({\mathbb Z}), L[b])={\mathbb Q}$,
and the rest of the cohomology groups are trivial.
If both $a$ and $b$ are odd then
$H^1(B,L[a,b])=H^0(GL_1({\mathbb Z}), L[b-1]) \otimes H^0(GL_1({\mathbb Z}), L[a+1])={\mathbb Q}$.
If $a+b$ is odd then $H^n(B,L[a,b])=0$ for all $n$.
There are several cases. If $a+b$ is odd then $-I$ acts non-trivially
on $L[a,b]$. So the cohomology of $GL_2({\mathbb Z})$ vanishes. If $a=b=2k$ then
$L[a,b]$ is the trivial representation of $GL_2({\mathbb Z})$. So
$$H^i(GL_2({\mathbb Z}),L[2k,2k])=H_{Eis}^i(GL_2({\mathbb Z}),L[2k,2k])=
\left\{\begin{tabular}{ll}
${\mathbb Q}$ & $i=0,$\\
$0$ & $i=1,$
\end{tabular}
\right.
$$
and
$$H^i_{cusp}(GL_2({\mathbb Z}),L[2k,2k])=0.$$
If $a=b=2k+1$ then
$$H^i(GL_2({\mathbb Z}),L[2k+1,2k+1])=0.$$
So the Eisenstein and the cusp cohomology also vanish.
The interesting cases are when both $a$ and $b$ are even or when
both $a$ and $b$ are odd. For those cases we do not give a
complete proof, but rather an interpretation of the cohomologies.
It follows from considering modular forms for $GL_2({\mathbb Z})$ of weight $2(a-b)$ or, equivalently, holomorphic modular forms for $SL_2({\mathbb Z})$. The Eisenstein cohomology is generated by the Eisenstein series and the dimension of the cusp cohomology $H^1_{cusp}(GL_2({\mathbb Z}),L[a,b])$ is equal to the dimension of the space of cusp forms of weight $2(a-b)$.
In any of these cases we have
$$H^0(GL_2({\mathbb Z}),L[a,b])=0.$$ Also,
if $a$ and $b$ are both odd, we have that the map
$$H^1(GL_2({\mathbb Z}),L[a,b])\rightarrow H^1(B,L[a,b])={\mathbb Q}$$
is surjective. Then
$$H^1_{Eis}(GL_2({\mathbb Z}),L[a,b])={\mathbb Q},$$
and
$$dimH^1_{cusp}(GL_2({\mathbb Z}),L[a,b])=-1+dimH^1(GL_2({\mathbb Z}),L[a,b]).$$
If the weights $a$ and $b$ are both even, then the Eisenstein cohomology
coincides with the whole group cohomology.
Here is one interpretation of the cohomology of $GL_2({\mathbb Z})$ in
the cases when $a$ and $b$ are both
even or both odd.
We are not going to use the following interpretation, only the above formulas,
but it is nice to keep it in mind.
Let $a$ and $b$ be both odd.
Then
$$\begin{tabular}{ll}
&$H^1(SL_2({\mathbb Z}),L[a,b])=$\\
&$H^1_{cusp}(GL_2({\mathbb Z}),L_{[a+1,b+1]})\oplus H^1_{cusp}(GL_2({\mathbb Z}),L_{[a,b]})
\oplus H_{Eis}^1(GL_2({\mathbb Z}),L_{[a,b]}).$
\end{tabular}
$$
The first direct summand corresponds to holomorphic
cuspidal forms of weight $a-b-2$. The second summand corresponds to
anti-holomorphic cusp forms of weight $a-b-2$. And the last summand
corresponds to the Eisenstein series of weight $a-b-2$ (when bigger than 2).
Keeping in mind the above decompositions one can compute the dimensions of
the cohomology groups (or dimensions of cusp forms) using theorem 2.1.
Note that in theorem 2.1 the homological Euler characteristic is
equal to minus the dimension of the first cohomology group, since the higher
cohomology groups vanish as well as the zeroth.
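For instance, theorem 2.1 with $n=0$ and $k=10$ gives $\chi_h(GL_2({\mathbb Z}),S^{10}V_2)=-1$,
and hence
$$\dim H^1(GL_2({\mathbb Z}),S^{10}V_2)=1.$$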
\sectionnew{Cohomology of $GL_3({\mathbb Z})$}
In this section we compute cohomology groups of $GL_3({\mathbb Z})$ with coefficients in
certain representations which are needed for our main problem.
They arise as representations of the Levi quotients of two of the
maximal parabolic subgroups of $GL_4$, namely, $P_{13}$ and $P_{24}$.
They lead to computation of cohomology groups of $GL_3({\mathbb Z})$
with coefficients
in any of the representations
$L[0,0,0]={\mathbb Q}$, $L[w-3,1,0]$,
$L[w-2,2,2]$ and $L[w-2,1,1]$.
\bth{5.1} The cohomology of $GL_3({\mathbb Z})$ with coefficients
in the above representations are given by
\\
\\
\begin{tabular}{l}
(a) $H^i(GL_3({\mathbb Z}), {\mathbb Q})=\left\{
\begin{array}{lll}
&(0|0|0) &i=0,\\
&0 &i \neq 0.
\end{array}
\right.$
\\
\\
(b) $H^i(GL_3({\mathbb Z}),L[n-3,1,0])=\left\{
\begin{array}{lll}
&(\overline{n-3,-1}|2) &i=2\\
&(-2|\overline{n-2,2}) &i=3\\
&0 &i\neq 2,3
\end{array}
\right.$
\\
\\
(c) $H^i(GL_3({\mathbb Z}), L[n-2,2,2])=\left\{
\begin{array}{lll}
&(0|\overline{n-1,3}) &i=3\\
&0 &i \neq3
\end{array}
\right.$
\\
\\
(d) $H^i(GL_3({\mathbb Z}), L[n-2,1,1])=\left\{
\begin{array}{lll}
&(0|\overline{n-1,1}) &i=2\\
&0 &i \neq 2.
\end{array}
\right.
$
\end{tabular}
\eth
Before proving the above theorem, we examine
the cohomology of $GL_3({\mathbb Z})$ with coefficients in $L[a,b,c]$.
The algebraic group $GL_3$ has three parabolic subgroups: $B$,
$P_{12}$ and $P_{23}$. In order to find their cohomology groups, we
need the explicit action of the Weyl group; more precisely we need
the various $\omega(\lambda+\rho)-\rho$ that enter in Kostant's
theorem. Note that half of the sum of the positive roots is
$\rho=[1,0,-1]$.
$$
\begin{array}{llllllll}
&\omega \in W &\mbox{length of }\omega& &\omega(\lambda) &\omega(\lambda+\rho)-\rho \\
&123 &\:\:0& &[a,b,c] &[a,b,c]\\
&132 &\:\:1& &[a,c,b] &[a,c-1,b+1]\\
&213 &\:\:1& &[b,a,c] &[b-1,a+1,c]\\
&231 &\:\:2& &[b,c,a] &[b-1,c-1,a+2]\\
&312 &\:\:2& &[c,a,b] &[c-2,a+1,b+1]\\
&321 &\:\:3& &[c,b,a] &[c-2,b,a+2]
\end{array}
$$
Using Kostant's theorem we find the cohomology groups
of the nilpotent radicals of the parabolic groups.
$$
H^q(N,L[a,b,c])=\left\{
\begin{array}{lll}
&L[a,b,c] &q=0 \\
&L[a,c-1,b+1] \oplus L[b-1,a+1,c] &q=1 \\
&L[b-1,c-1,a+2] \oplus L[c-2,a+1,b+1] &q=2 \\
&L[c-2,b,a+2] &q=3
\end{array}
\right.
$$
$$
H^q(N_{12},L[a,b,c])=\left\{
\begin{array}{lll}
&L[a,b,c] &q=0\\
&L[a,c-1,b+1] &q=1\\
&L[b-1,c-1,a+2] &q=2
\end{array}
\right.
$$
$$
H^q(N_{23},L[a,b,c])=\left\{
\begin{array}{lll}
&L[a,b,c] &q=0\\
&L[b-1,a+1,c] &q=1\\
&L[c-2,a+1,b+1] &q=2
\end{array}
\right.
$$
In order to pass to cohomologies of the parabolic groups,
we use the Hochschild-Serre spectral sequence relating
the nil radical and the Levi quotient of a parabolic subgroup
to the parabolic subgroup itself; namely the short exact
sequence $N \rightarrow P \rightarrow S$. We recall the notation
$H^n(L[a_1,...,a_k])=H^n(GL_k({\mathbb Z}), L[a_1,...,a_k])$ and
$(a|b|c)= H^0(L[a]) \otimes H^0(L[b]) \otimes H^0(L[c])$.
$$
H^i(B,L[a,b,c])=\left\{
\begin{array}{lll}
&(a|b|c) &i=0\\
&(a|c-1|b+1) \oplus (b-1|a+1|c) &i=1\\
&(b-1|c-1|a+2) \oplus (c-2|a+1|b+1) &i=2\\
&(c-2|b|a+2) &i=3
\end{array}
\right.
$$
$$
E_2^{p,q}(P_{12},L[a,b,c])=\left\{
\begin{array}{lll}
&H^p(L[a,b]) \otimes H^0(L[c]) &q=0\\
&H^p(L[a,c-1]) \otimes H^0(L[b+1]) &q=1\\
&H^p(L[b-1,c-1])\otimes H^0(L[a+2]) &q=2
\end{array}
\right.
$$
$$
E_2^{p,q}(P_{23},L[a,b,c])=\left\{
\begin{array}{lll}
&H^0(L[a]) \otimes H^p(L[b,c]) &q=0\\
&H^0(L[b-1]) \otimes H^p(L[a+1,c]) &q=1\\
&H^0(L[c-2]) \otimes H^p(L[a+1,b+1]) &q=2
\end{array}
\right.
$$
It is true that the above two spectral sequences
stabilize at the $E_2$-level.
However, in any particular case the formulas will be
much simpler, and one can use them to compute
the boundary cohomology.
Let $B$, $P_{12}$, $P_{23}$ be the parabolic subgroups of $GL_3({\mathbb Z})$.
{\bf $H^i(GL_3({\mathbb Z}),{\mathbb Q})$}
For part (a) we have
$$
H^i(B, {\mathbb Q})=\left\{
\begin{array}{lll}
&(0|0|0) &i=0\\
&(-2|0|2) &i=3\\
&0 &i \neq 0,3
\end{array}
\right.
$$
$$H^0(P_{12}, {\mathbb Q})=H^0(GL_2({\mathbb Z}) ,{\mathbb Q}) \otimes H^0(GL_1({\mathbb Z}), {\mathbb Q})$$
$$H^0(P_{23}, {\mathbb Q})=H^0(GL_1({\mathbb Z}) ,{\mathbb Q}) \otimes H^0(GL_2({\mathbb Z}), {\mathbb Q})$$
From Mayer-Vietoris we obtain that the boundary cohomology of $GL_3({\mathbb Z})$ is
$$H^i_{\partial}(GL_3({\mathbb Z}),{\mathbb Q})=\left\{
\begin{array}{lll}
&(0|0|0) &i=0\\
&(-2|0|2) &i=4\\
&0 &i \neq 0,4
\end{array}
\right.
$$
The homological Euler characteristic of $GL_3({\mathbb Z})$ with
trivial coefficients is $1$ (see theorem 2.2 part (c) and theorem 2.1).
That is,
$$\chi_h(GL_3({\mathbb Z}),{\mathbb Q})=1.$$
Then the fourth cohomology of the boundary component disappears in
the Eisenstein cohomology. Therefore,
$$H_{Eis}^i(GL_3({\mathbb Z}),{\mathbb Q})=\left\{
\begin{array}{lll}
&(0|0|0) &i=0\\
&0 &i \neq 0.
\end{array}
\right.
$$
Also, the cusp cohomology of $GL_3({\mathbb Z})$ with trivial coefficients is zero.
Therefore the Eisenstein cohomology coincides with the whole group cohomology.
We proceed to part (b).
{\bf $H^i(B,L[n-3,1,0])$}
Using the computations in the beginning of this section, we obtain
$$H^i(B,L[n-3,1,0])=\left\{
\begin{array}{lll}
&0 &i=0\\
&(0|n-2|0) &i=1\\
&(-2|n-2|2) &i=2\\
&0 &i=3
\end{array}
\right.
$$
$$H^i(P_{12},L[n-3,1,0])=\left\{
\begin{array}{lll}
&(n-3,1|0) &i=1\\
&(n-3,-1|2) &i=2\\
&0 &i\neq 1,2
\end{array}
\right.
$$
$$H^i(P_{23},L[n-3,1,0])=\left\{
\begin{array}{lll}
&(0|n-2,0) &i=2\\
&(-2|n-2,2) &i=3\\
&0 &i\neq2,3\\
\end{array}
\right.
$$
Using Mayer-Vietoris, for the cohomology of the boundary
of the Borel-Serre compactification, we obtain
$$H^i_\partial(GL_3({\mathbb Z}),L[n-3,1,0])=\left\{
\begin{array}{lll}
&(\overline{n-3,1}|0) &i=1\\
&(\overline{n-3,-1}|2)\oplus (0|\overline{n-2,0}) &i=2\\
&(-2|\overline{n-2,2}) &i=3\\
&0 &i\neq 1,2,3
\end{array}
\right.
$$
The representation $L[n-3,1,0]$ is not self dual. So the cohomology of
$GL_3({\mathbb Z})$ with coefficients in $L[n-3,1,0]$ coincides with the
Eisenstein cohomology, which is a subspace of the cohomology
of the boundary. The first cohomology of $GL_3({\mathbb Z})$ with
coefficients in any representation vanishes. For the homological
Euler characteristic of $GL_3({\mathbb Z})$ with coefficients in $L[n-3,1,0]$
(theorem 2.2 part (a)) we have
$$\chi_h(GL_3({\mathbb Z}),L[n-3,1,0])=
\chi_h(GL_2({\mathbb Z}),S^{n-4}V_2)-\chi_h(GL_2({\mathbb Z}),S^{n-2}V_2).$$
We obtain that the dimension of the second cohomology is half of the
dimension of the second cohomology of the boundary of the
Borel-Serre compactification. That is,
$$dim H^2_{Eis}(GL_3({\mathbb Z}),L[n-3,1,0])=
\frac{1}{2} dim H^2_\partial(GL_3({\mathbb Z}),L[n-3,1,0]).$$
Also,
$$dim H^3_{Eis}(GL_3({\mathbb Z}),L[n-3,1,0])=
dim H^3_\partial(GL_3({\mathbb Z}),L[n-3,1,0]).$$
The second cohomology of the boundary is a direct sum of two spaces with
the same dimensions. In order to find out which of the subspaces or
which linear combination of the spaces enters in the Eisenstein cohomology,
we have to consider the central characters of the two parabolic subgroups \cite{Ha}.
For the parabolic subgroup $P_{12}$ we take the central torus
$$\left[
\begin{tabular}{ccc}
$t$ & & \\
& $t$ & \\
& & $t^{-2}$
\end{tabular}
\right].$$
The highest weight induces a character on it, namely $[n-3,-1,2]$,
whose evaluation on the above
element is
$$n-3-1-2\times 2=n-8.$$
For the parabolic subgroup $P_{23}$ we take the central torus
$$\left[
\begin{tabular}{ccc}
$t^2$ & & \\
& $t^{-1}$ & \\
& & $t^{-1}$
\end{tabular}
\right].$$
The highest weight induces a character on it, namely $[0,n-2,0]$,
whose evaluation on the above
element is
$$0-(n-2)=-n+2.$$
Their sum is -6. The space which enters in the Eisenstein cohomology
has higher weight. Thus we need to solve
$$n-8>-n+2.$$
Thus for $n>5$ we have
$$H^i(GL_3({\mathbb Z}),L[n-3,1,0])=\left\{
\begin{array}{lll}
(\overline{n-3,-1}|2) &i=2\\
(-2|\overline{n-2,2}) &i=3\\
0 &i\neq 2,3
\end{array}
\right.
$$
The value of $n$ is always even and greater than or equal to $4$. The other option
for $n$ is $n=4$. Then
$$H^i(GL_3({\mathbb Z}),L[1,1,0])=\left\{
\begin{array}{lll}
(0|\overline{4-2,0}) &i=2\\
(-2|\overline{4-2,2}) &i=3\\
0 &i\neq 2,3
\end{array}
\right.
$$
That is,
$$H^i(GL_3({\mathbb Z}),L[1,1,0])=0.$$
{\bf $H^*(GL_3, L[n-2,2,2])$ when $n$ is even.}
Using the computation in the beginning of section 3 we obtain:
$$H^i(B,L[n-2,2,2])=\left\{
\begin{array}{lll}
&(n-2|2|2) &i=0\\
&0 &i=1\\
&0 &i=2\\
&(0|2|n) &i=3
\end{array}
\right.
$$
$$H^i(P_{12},L[n-2,2,2])=\left\{
\begin{array}{lll}
&(n-2,2|2) &i=1\\
&0 &i \neq1
\end{array}
\right.
$$
$$H^i(P_{23},L[n-2,2,2])=\left\{
\begin{array}{lll}
&(n-2|2|2) &i=0\\
&(0|n-1,3) &i=3\\
&0 &i \neq 0,3
\end{array}
\right.
$$
Using Mayer-Vietoris we obtain
$$H^i_{\mathrm d}(GL_3({\mathbb Z}), L[n-2,2,2])=\left\{
\begin{array}{lll}
&(\overline{n-2,2}|2) &i=1\\
&(0|\overline{n-1,3}) &i=3\\
&0 &i \neq 1,3
\end{array}
\right.
$$
The first cohomology of $GL_3({\mathbb Z})$ vanishes. Therefore,
$$H^i(GL_3, L[n-2,2,2])=\left\{
\begin{array}{lll}
&(0|\overline{n-1,3}) &i=3\\
&0 &i \neq3
\end{array}
\right.
$$
{\bf $H^*(GL_3, L[n-2,1,1])$ when $n$ is even.}
Using the computation in the beginning of section 3 we obtain:
$$H^i(B,L[n-2,1,1])=\left\{
\begin{array}{lll}
&0 &i=0\\
&(n-2|0|2) &i=1\\
&(0|0|n) &i=2\\
&0 &i=3
\end{array}
\right.
$$
$$H^i(P_{12},L[n-2,1,1])=\left\{
\begin{array}{lll}
&(n-2,0|2) \oplus (0|0|n) &i=2\\
&0 &i \neq 2
\end{array}
\right.
$$
$$H^i(P_{23},L[n-2,1,1])=\left\{
\begin{array}{lll}
&(0|n-1,1) &i=2\\
&0 &i \neq 2
\end{array}
\right.
$$
Using Mayer-Vietoris we obtain
$$H^i_{\partial}(GL_3({\mathbb Z}), L[n-2,1,1])=\left\{
\begin{array}{lll}
&(0|0|n)\oplus (n-2,0|2) \oplus (0|\overline{n-1,1}) &i=2\\
&0 &i \neq 2,
\end{array}
\right.
$$
From the homological
Euler characteristic of $GL_3({\mathbb Z})$ with coefficients in $L[n-2,1,1]$
we obtain that
$$dim H^2_{Eis}(GL_3({\mathbb Z}),L[n-2,1,1])=
\frac{1}{2}(-1+ dim H^2_\partial(GL_3({\mathbb Z}),L[n-2,1,1])).$$
For the parabolic subgroup $P_{12}$ we take the central torus
$$\left[
\begin{tabular}{ccc}
$t$ & & \\
& $t$ & \\
& & $t^{-2}$
\end{tabular}
\right].$$
The highest weight induces a character on it, namely $[n-2,0,2]$,
whose evaluation on the above
element is
$$n-2+0-2\times 2=n-6.$$
For the parabolic subgroup $P_{23}$ we take the central torus
$$\left[
\begin{tabular}{ccc}
$t^2$ & & \\
& $t^{-1}$ & \\
& & $t^{-1}$
\end{tabular}
\right].$$
The highest weight induces a character on it, namely $[0,n-1,1]$,
whose evaluation on the above
element is
$$0-(n-1)-1=-n.$$
Their sum is $-6$. The space which enters the Eisenstein cohomology
is the one with the higher weight. Thus we need to solve
$$n-6>-n.$$
Hence for $n>3$, which is always the case, we have
$$H^i(GL_3({\mathbb Z}),L[n-2,1,1])=\left\{
\begin{tabular}{ll}
$(n-2,0|2)\oplus (0|0|n)$ & $i=2$\\
$0$ & $i\neq 2$
\end{tabular}
\right.
$$
\section {Cohomologies of the parabolic subgroups of $GL_4$.}
This section consists of computation of cohomology of the
parabolic subgroups of $GL_4({\mathbb Z})$ with coefficients in the
representation $S^{n-4} V_4 \otimes det$. We use Kostant's theorem
in order to compute these cohomology groups. In the process we reduce
the question to computation of the cohomology groups of the Levi
quotients which have factors $GL_1({\mathbb Z})$, $GL_2({\mathbb Z})$ or/and
$GL_3({\mathbb Z})$. For the last three groups we use the computation from
the sections on cohomology of $GL_2({\mathbb Z})$ and of $GL_3({\mathbb Z})$.
Recall the notation of the parabolic subgroups: We choose the
Borel subgroup $B$ to be the group of upper triangular matrices.
Let $N$ be the unipotent radical of $B$. Let $P_{ij}$ be the
smallest parabolic subgroup containing $B$ and containing a
non-zero $a_{ji}$-entry. Similarly, $P_{12,34}$ is the smallest
(parabolic) subgroup containing $B$ and containing non-zero
$a_{21}$- and $a_{43}$-entries. The unipotent radicals of $P_{ij}$
will be denoted by $N_{ij}$; and the Levi quotient by
$S_{ij}=P_{ij}/N_{ij}$.
\bpr{4.1}(cohomologies of the parabolic subgroups)
Let
\\
$V=S^{n-4} V_4 \otimes det$.
Then
$$H^i(B, V)= \left\{
\begin{array}{llllll}
&(0|n-2|0|2) &i=2 \\
&(0|0|0|n) \oplus (-2|n-2|2|2) &i=3 \\
&(-2|0|2|n) &i=6\\
&0 &i \neq 2,3,6.
\end{array}
\right.$$
$$H^i(P_{12}, V)= \left\{
\begin{array}{llllll}
&(n-3, 1|0|2) &i=2, \\
&(0|0|0|n) \oplus (n-3,-1|2|2) &i=3, \\
&0 &i \neq 2,3.
\end{array}
\right.$$
$$H^i(P_{23},V)=
\left\{\begin{array}{llllll}
&(0|n-2,0|2) \oplus (0|0,0|n) &i=3\\
&(-2|n-2,2|2) &i=4, \\
&0 &i \neq 3,4.
\end{array}
\right.$$
$$H^i(P_{34},V)= \left\{
\begin{array}{llllll}
&(0|0|n-1,1)\oplus (-2|n-2|2|2) &i=3, \\
&(-2|0|n-1,3) &i=6 \\
&0 &i \neq 3,6.
\end{array}
\right.$$
$$H^i(P_{13}, V)= \left\{
\begin{array}{llllll}
&(0|0|0|n)\oplus (\overline{n-3,-1}|2|2) &i=3, \\
&(-2|\overline{n-2,2}|2) &i=4, \\
&0 &i\neq 3,4.
\end{array}
\right.$$
$$H^i(P_{12,34}, V)= \left\{
\begin{array}{llllll}
&(n-3,-1|2|2) \oplus (0|0|n-1,1) &i=3 \\
&0 &i \neq 3.
\end{array}
\right.$$
$$H^i(P_{24}, V)= \left\{
\begin{array}{llllll}
&(0|0|0|n)\oplus (0|n-2,0|2) &i=3, \\
&(-2|0|\overline{n-1,3}) &i=6, \\
&0 &i\neq 3,6.
\end{array}
\right.$$
\epr
The main tool in the proof will be Kostant's theorem
and the Hochschild-Serre spectral sequence.
In terms of highest weights, the representation $S^{n-4}V_4 \otimes det$
is $L[n-3,1,1,1]$. We shall denote the representation $L[n-3,1,1,1]$ simply
by $V$. We identify the Weyl group of $GL_4$ with
the permutation group on four elements.
We also need the length of a permutation,
which we denote by $l$.
$$
\begin{array}{llllllllllll}
\\
&w \: &l \: &w(\lambda+\rho)-\rho \\
&1234 \: &0 \: &[n-3,1,1,1]\\
&1243 \: &1 \: &[n-3,1,0,2]\\
&1324 \: &1 \: &[n-3,0,2,1]\\
&1342 \: &2 \: &[n-3,0,0,3]\\
&1423 \: &2 \: &[n-3,-1,2,2]\\
&1432 \: &3 \: &[n-3,-1,1,3]\\
\\
&2134 \: &1 \: &[0,n-2,1,1]\\
&2143 \: &2 \: &[0,n-2,0,2]\\
&2314 \: &2 \: &[0,0,n-1,1]\\
&2341 \: &3 \: &[0,0,0,n]\\
&2413 \: &3 \: &[0,-1,n-1,2]\\
&2431 \: &4 \: &[0,-1,1,n]\\
\\
&3124 \: &2 \: &[-1,n-2,2,1]\\
&3142 \: &3 \: &[-1,n-2,0,3]\\
&3214 \: &3 \: &[-1,1,n-1,1]\\
&3241 \: &4 \: &[-1,1,0,n]\\
&3412 \: &4 \: &[-1,-1,n-1,3]\\
&3421 \: &5 \: &[-1,-1,2,n]\\
\\
&4123 \: &3 \: &[-2,n-2,2,2]\\
&4132 \: &4 \: &[-2,n-2,1,3]\\
&4213 \: &4 \: &[-2,1,n-1,2]\\
&4231 \: &5 \: &[-2,1,1,n]\\
&4312 \: &5 \: &[-2,0,n-1,3]\\
&4321 \: &6 \: &[-2,0,2,n]\\
\end{array}
$$
Now we can consider a particular parabolic subgroup $P$.
In order to apply Kostant's theorem
we need to find good representatives
of $W_P \backslash W$; more precisely, representatives
of minimal length. This can be done by choosing the elements
of the permutation group
that preserve the order on the subsets corresponding to $P_{ij}$.
For example, when we consider $P_{23}$ the minimal representatives
of $W_{P_{23}}\backslash W$ are the permutations $w$ such that $w(2)<w(3)$.
When we consider $P_{12,34}$ we need the permutations $w$ such that
$w(1)<w(2)$ and $w(3)<w(4)$. And for the group $P_{13}$ the needed permutations
are the ones such that $w(1)<w(2)<w(3)$.
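As a check, for $P_{23}$ one can read these representatives off the table
above: the twelve permutations with $w(2)<w(3)$ are
$$1234;\quad 1243,\ 2134;\quad 1342,\ 2143,\ 3124;\quad
2341,\ 3142,\ 4123;\quad 3241,\ 4132;\quad 4231,$$
grouped here by length $l=0,1,2,3,4,5$. There is one representative of length
$0$, two of length $1$, three each of lengths $2$ and $3$, two of length $4$
and one of length $5$, which matches the number of summands in each degree of
$H^i(N_{23},V)$ listed below.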
Thus, using Kostant's theorem we obtain the following. It is easier to
describe the cohomology $H^*(N,V)$ than to write it down explicitly. One can
think of it in the following way. Consider a row of the above table whose
last column is the weight $[a,b,c,d]$ and whose length is $l$; then
$H^l(N,V)$ contains the representation $L[a,b,c,d]$ (for the unipotent
radical $N$ of a parabolic subgroup $P$ one only takes the rows corresponding
to the minimal length representatives of $W_P\backslash W$ described above).
All components of the cohomology are obtained in this way.
$$H^i(N_{12}, V)= \left\{
\begin{array}{llllll}
&L_{[n-3, 1, 1, 1]}, &i=0,\\
&L_{[n-3, 1,0,2]} \oplus L_{[n-3,0,2,1]}, &i=1,\\
&L_{[n-3,0,0,3]} \oplus L_{[n-3,-1,2,2]} \oplus L_{[0,0,n-1,1]}, &i=2,\\
&L_{[n-3,-1,1,3]} \oplus L_{[0,-1,n-1,2]} \oplus L_{[0,0,0,n]}, &i=3,\\
&L_{[-1,-1,n-1,3]} \oplus L_{[0,-1,1,n]}, &i=4,\\
&L_{[-1,-1,2,n]}, &i=5.
\end{array}
\right.
$$
$$H^i(N_{23}, V)= \left\{
\begin{array}{llllll}
&L_{[n-3, 1, 1, 1]}, &i=0, \\
&L_{[n-3, 1,0,2]} \oplus L_{[0, n-2, 1, 1]}, &i=1, \\
&L_{[n-3,0,0,3]} \oplus L_{[0,n-2,0,2]} \oplus L_{[-1,n-2,2,1]}, &i=2, \\
&L_{[-2,n-2,2,2]} \oplus L_{[-1,n-2,0,3]}\oplus L_{[0,0,0,n]}, &i=3, \\
&L_{[-2,n-2,1,3]} \oplus L_{[-1,1,0,n]}, &i=4, \\
&L_{[-2,1,1,n]}, &i=5.
\end{array}
\right.
$$
$$H^i(N_{34},V)= \left\{
\begin{array}{llllll}
&L_{[n-3, 1, 1, 1]}, &i=0, \\
&L_{[n-3,0,2,1]} \oplus L_{[0, n-2, 1, 1]}, &i=1, \\
&L_{[n-3,-1,2,2]} \oplus L_{[-1,n-2,2,1]}\oplus L_{[0,0,n-1,1]}, &i=2, \\
&L_{[-2,n-2,2,2]} \oplus L_{[0,-1,n-1,2]}\oplus L_{ [-1,1,n-1,1]}, &i=3, \\
&L_{[-2,1,n-1,2]}\oplus L_{[-1,-1,n-1,3]}, &i=4, \\
&L_{[-2,0,n-1,3]}, &i=5.
\end{array}
\right.
$$
$$H^i(N_{13}, V)= \left\{
\begin{array}{llllll}
&L_{[n-3, 1, 1, 1]} , &i=0, \\
&L_{[n-3, 1,0,2]}, &i=1, \\
&L_{[n-3,0,0,3]} , &i=2, \\
&L_{[0,0,0,n]} , &i=3.
\end{array}
\right.
$$
$$H^i(N_{12,34},V)= \left\{
\begin{array}{llllll}
&L_{[n-3, 1, 1, 1]} &i=0 \\
&L_{[n-3,0,2,1]} &i=1 \\
&L_{[n-3, -1,2,2]} \oplus L_{[0,0,n-1,1]} &i=2 \\
&L_{[0,-1,n-1,2]} &i=3 \\
&L_{[-1,-1,n-1,3]} &i=4
\end{array}
\right.
$$
$$H^i(N_{24}, V)= \left\{
\begin{array}{llllll}
&L_{[n-3, 1, 1, 1]}, &i=0, \\
&L_{[0, n-2, 1, 1]}, &i=1, \\
&L_{[-1,n-2,2,1]}, &i=2, \\
&L_{[-2,n-2,2,2]}, &i=3.
\end{array}
\right.
$$
Now we apply the Hochschild-Serre spectral sequence to the exact sequences
$0 \rightarrow N_{ij} \rightarrow P_{ij} \rightarrow S_{ij} \rightarrow 0$.
Thus, the spectral sequence is
of the form $$H^p(S,H^q(N,V)) \Rightarrow H^{p+q}(P,V).$$
In the computation we are going
to use the K\"unneth formula
$H^*(G_1 \times G_2, V_1 \otimes V_2) = H^*(G_1, V_1) \otimes H^*(G_2, V_2)$.
A substantial simplification comes from the fact that
$H^p(GL_m({\mathbb Z}), det)=0$ for $m=1,2,3$.
It can be proven by the Hochschild-Serre spectral sequence
relating $GL_m$ to $SL_m$ and ${\mathbb G}_m$.
We make one more observation about the computation of
the cohomology of the parabolic subgroups.
The cohomology groups of the unipotent radical $H^*(N, V)$ are
representations of the Levi quotient.
For example, $H^0(N_{12}, S^{n-4}V_4 \otimes det)
= L_{[n-3, 1, 1, 1]} = L_{[n-3,1]} \otimes L_{[1]} \otimes L_{[1]}$,
since the Levi quotient is
$S_{12}=GL_2({\mathbb Z}) \times GL_1({\mathbb Z}) \times GL_1({\mathbb Z})$.
We are going to use some abbreviations in the computation that follows.
More precisely,
by $H^p(L[a,b])$ we mean $H^p(GL_2({\mathbb Z}), L_{[a,b]})$; similarly, by
$H^p(L[a])$ we mean $H^p(GL_1({\mathbb Z}), L_{[a]})$,
and by $H^p(L[a,b,c])$ we mean $H^p(GL_3({\mathbb Z}), L_{[a,b,c]})$. Also,
we set $V = S^{n-4}V_4 \otimes det=L[n-3,1,1,1]$.
\subsection{Cohomology of $B$}
The Levi quotient of a Borel subgroup is a Cartan subgroup.
Thus the representations
obtained from Kostan's theorem decompose into
tensor product of one dimensional representations.
$$
E_2 ^{p,q}=H^p(S, H^q(N,V))= \left\{
\begin{array}{llllll}
&H^p(S, L_{[0,n-2,0,2]}) &q=2 \\
&H^p(S, L_{[0,0,0,n]}) \oplus H^p(S, L_{[-2,n-2,2,2]}) &q=3 \\
&H^p(S, L_{[-2,0,2,n]}) &q=6\\
&0 &q \neq 2,3,6.
\end{array}
\right.
$$
All other representations of $S$ do not contribute to the
cohomology of the Borel subgroup because at least one of the
entries of the weight is an odd number. The ones that are left
contain only even coefficients. Thus, they are trivial
representations of $GL_1 ({\mathbb Z})$. Then the $E_2$-terms of the spectral
sequence can be simplified to:
$$
E_2 ^{p,q}= \left\{
\begin{array}{llllll}
&(0|n-2|0|2) &p=0, q=2 \\
&(0|0|0|n) \oplus (-2|n-2|2|2) &p=0, q=3 \\
&(-2|0|2|n) &p=0, q=6\\
&0 &otherwise
\end{array}
\right.
$$
The only non-zero entries of the above spectral sequence occur
when $p=0$. Therefore the sequence degenerates at the
$E_2$-level, and the cohomology of the Borel subgroup is
$$
H^i(B, V)= \left\{
\begin{array}{llllll}
&(0|n-2|0|2) &i=2 \\
&(0|0|0|n) \oplus (-2|n-2|2|2) &i=3 \\
&(-2|0|2|n) &i=6\\
&0 &i \neq 2,3,6
\end{array}
\right.
$$
\subsection{Cohomology of $P_{12}$}
We proceed similarly with the other parabolic subgroups. Recall,
the Levi quotient of $P_{12}$ is $S_{12}=GL_2({\mathbb Z}) \times GL_1({\mathbb Z})
\times GL_1({\mathbb Z})$. Thus, the spectral sequence becomes:
$$E_2 ^{p,q}=H^p(S_{12}, H^q(N_{12}, V))$$
$$E_2 ^{p,q}=\left\{
\begin{array}{llllll}
&H^p(S_{12}, L_{[n-3, 1,0,+2]}) = H^p(S_{12}, L_{[n-3, 1]} \otimes L_{[0]}
\otimes L_{[2]}) &q=1, \\
&H^p(S_{12}, L_{ [n-3,-1,+2,+2]}) =H^p(S_{12}, L_{[n-3,-1]} \otimes L_{[2]}
\otimes L_{[2]}) &q=2, \\
&H^p(S_{12}, L_{ [0,0,0,n]}) = H^p(S_{12}, L_{[0,0]} \otimes L_{[0]}
\otimes L_{[n]}) &q=3, \\
&0 &q \neq 1,2,3.
\end{array}
\right.
$$
The representations of the $GL_1({\mathbb Z})$ quotients that give a contribution
to the cohomology groups are the trivial representations.
Thus,
$$E_2 ^{p,q}= \left\{
\begin{array}{llllll}
&H^p(L[n-3, 1]) \otimes H^0(L[0]) \otimes H^0(L[2]) &q=1, \\
&H^p(L[n-3,-1]) \otimes H^0(L[2])^{\otimes 2} &q=2, \\
&H^p(L[0,0]) \otimes H^0(L[0]) \otimes H^0(L[n]) &q=3, \\
&0 &q \neq 1,2,3.
\end{array}
\right.
$$
We are going to use the fact that $H^p(GL_2({\mathbb Z}), L)=0$ for $p>1$,
for any representation $L$ of $GL_2({\mathbb Q})$.
In particular the differential
$d_2: E_2 ^{p,q} \rightarrow E_2 ^{p+2,q-1}$ is zero, since
$E_2 ^{p,q}$ is non-zero only when $p=0$ or $p=1$.
Therefore, the spectral sequence degenerates at
the $E_2$-level, and the cohomology of $P_{12}$ is the following.
$$H^i(P_{12}, V)= \left\{
\begin{array}{llllll}
&(n-3, 1|0|2) &i=2, \\
&(0|0|0|n) \oplus (n-3,-1|2|2) &i=3, \\
&0 &i \neq 2,3.
\end{array}
\right.
$$
\subsection{Cohomology of $P_{23}$}
Recall that the Levi quotient
$S_{23}$ of $P_{23}$ is $GL_1({\mathbb Z}) \times GL_2({\mathbb Z}) \times GL_1({\mathbb Z})$.
Then
$$E_2 ^{p,q}=H^p(S_{23}, H^q(N_{23},V))= \left\{
\begin{array}{llllll}
&H^p(S_{23}, L_{[0,n-2,0,2]}) &q=2, \\
&H^p(S_{23}, L_{[-2,n-2,2,2]} \oplus L_{[0,0,0,n]}) &q=3, \\
&0 &q \neq 2,3.
\end{array}
\right.
$$
Using similar arguments, we obtain
$$H^i(P_{23},V)=
\left\{\begin{array}{llllll}
&(0|n-2,0|2) \oplus (0|0|0|n) &i=3\\
&(-2|n-2,2|2) &i=4, \\
&0 &i \neq 3,4.
\end{array}
\right.
$$
\subsection{Cohomology of $P_{34}$}
Recall that the Levi quotient $S_{34}$ of $P_{34}$ is
$GL_1({\mathbb Z}) \times GL_1({\mathbb Z}) \times GL_2({\mathbb Z})$.
Then
$$E_2 ^{p,q}=H^p(S_{34}, H^q(N_{34},V))= \left\{
\begin{array}{llllll}
&H^p(S_{34},L_{[0,0,n-1,1]}) &q=2, \\
&H^p(S_{34},L_{[-2,n-2,2,2]}) &q=3, \\
&H^p(S_{34},L_{[-2,0,n-1,3]}) &q=5, \\
&0 &q \neq 2,3,5.
\end{array}
\right.
$$
Similarly, we obtain
$$E_2 ^{p,q} =\left\{
\begin{array}{llllll}
&H^0(L[0]) \otimes H^0(L[0]) \otimes H^p(L[n-1,1]) &q=2,\\
&H^0(L[-2]) \otimes H^0(L[n-2]) \otimes H^p(L[2,2]) &q=3,\\
&H^0(L[-2]) \otimes H^0(L[0]) \otimes H^p(L[n-1,3]) &q=5,\\
&0 &q \neq 2,3,5
\end{array}
\right.
$$
And finally, the cohomology of $P_{34}$ is
$$H^i(P_{34},V)= \left\{
\begin{array}{llllll}
&(0|0|n-1,1)\oplus (-2|n-2|2|2) &i=3, \\
&(-2|0|n-1,3) &i=6 \\
&0 &i \neq 3,6.
\end{array}
\right.
$$
\subsection{Cohomology of $P_{13}$}
Recall that the Levi quotient $S_{13}$ of $P_{13}$ is $GL_3({\mathbb Z}) \times GL_1({\mathbb Z})$.
$$H^p(S_{13}, H^q(N_{13}, V))= \left\{
\begin{array}{llllll}
&0 &q=0, \\
&H^p(S_{13}, L_{[n-3, 1,0,2]}) &q=1, \\
&0 &q=2, \\
&H^p(S_{13}, L_{[0,0,0,n]}) &q=3.
\end{array}
\right.
$$
We can simplify it to
$$H^p(S_{13}, H^q(N_{13},V))= \left\{
\begin{array}{llllll}
&0 &q=0, \\
&H^p(L[n-3, 1,0])\otimes H^0(L[2]) &q=1, \\
&0 &q=2, \\
&H^p(L[0,0,0])\otimes H^0(L[n]) &q=3.
\end{array}
\right.
$$
From the section "Cohomology of $GL_3({\mathbb Z})$" we know that
for $n>5$ we have
$$H^p(GL_3({\mathbb Z}),L[n-3,1,0])=\left\{
\begin{array}{lll}
(\overline{n-3,-1}|2) &p=2\\
(-2|\overline{n-2,2}) &p=3\\
0 &p\neq 2,3
\end{array}
\right.
$$
And for $n=4$
$$H^p(GL_3({\mathbb Z}),L[n-3,1,0])=0.$$
Also
$$H^p(GL_3({\mathbb Z}),{\mathbb Q})=\left\{
\begin{array}{lll}
&(0|0|0) &p=0\\
&0 &p \neq 0.
\end{array}
\right.
$$
Therefore
$$H^p(S_{13}, H^q(N_{13},V))= \left\{
\begin{array}{llllll}
&(0|0|0|n) &p=0\mbox{ and }q=3, \\
&(\overline{n-3,-1}|2|2) &p=2\mbox{ and }q=1, \\
&(-2|\overline{n-2,2}|2) &p=3\mbox{ and }q=1, \\
&0 &\mbox{for all other p and q}.
\end{array}
\right.
$$
The above spectral sequence degenerates at $E_2$ level. Therefore
$$H^i(P_{13},V)= \left\{
\begin{array}{llllll}
&(0|0|0|n) \oplus (\overline{n-3,-1}|2|2) &i=3,\\
&(-2|\overline{n-2,2}|2) &i=4,\\
&0 &i\neq 3,4.
\end{array}
\right.
$$
\subsection{Cohomology of $P_{12,34}$}
For the last parabolic subgroup we can obtain a better answer
in terms of cohomology of $GL_2({\mathbb Z})$.
However, the $d_2$ differential might be non-trivial.
Recall that the Levi quotient $S_{12,34}$ of $P_{12,34}$ is
$GL_2({\mathbb Z}) \times GL_2({\mathbb Z})$.
Then the spectral sequence is
$$E_2 ^{p,q}=H^p(S_{12,34},H^q(N_{12,34},V))$$
$$E_2 ^{p,q}=\left\{
\begin{array}{llllll}
&H^p(S_{12,34}, L_{[n-3, -1,2,2]})\oplus H^p(S_{12,34}, L_{[0,0,n-1,1]})&q=2\\
&0 &q\neq 2
\end{array}
\right.
$$
Therefore
$$E_2 ^{1,2}=
[H^1(L[n-3,-1])\otimes H^0(L[2,2])]
\oplus [H^0(L[0,0])\otimes H^1(L[n-1,1])].$$
And
$$E_2 ^{p,q}=0\mbox{ for } p\neq 1 \mbox{ or }q\neq 2$$
Finally,
$$H^i(P_{12,34},V)= \left\{
\begin{array}{llllll}
&(n-3,-1|2|2)\oplus (0|0|n-1,1) &i=3 \\
&0 &i \neq 3
\end{array}
\right.
$$
\subsection{Cohomology of $P_{24}$}
Recall that the Levi quotient $S_{24}$ of $P_{24}$ is
$GL_1({\mathbb Z}) \times GL_3({\mathbb Z})$.
We have the spectral sequence
$$H^p(S_{24}, H^q(N_{24},V))= \left\{
\begin{array}{llllll}
&0 &q=0, \\
&H^p(S_{24},L_{[0, n-2, 1, 1]}) &q=1, \\
&0 &q=2, \\
&H^p(S_{24},L_{[-2,n-2,2,2]}) &q=3.
\end{array}
\right.
$$
We can simplify it to
$$H^p(S_{24}, H^q(N_{24},V))= \left\{
\begin{array}{llllll}
&0 &q=0, \\
&H^0(L[0])\otimes H^p(L[n-2, 1, 1]) &q=1, \\
&0 &q=2, \\
&H^0(L[-2])\otimes H^p(L[n-2,2,2]) &q=3.
\end{array}
\right.
$$
From the section "Cohomology of $GL_3({\mathbb Z})$" we know that
$$H^i(GL_3({\mathbb Z}),L[n-2,1,1])=\left\{
\begin{tabular}{ll}
$(n-2,0|2)\oplus (0|0|n)$ & $i=2$\\
$0$ & $i\neq 2$
\end{tabular}
\right.
$$
And also
$$H^i(GL_3, L[n-2,2,2])=\left\{
\begin{array}{lll}
&(0|\overline{n-1,3}) &i=3\\
&0 &i \neq3
\end{array}
\right.
$$
Therefore,
$$H^p(S_{24}, H^q(N_{24},V))= \left\{
\begin{tabular}{llllll}
$(0|0|0|n)\oplus (0|n-2,0|2)$ &$p=2,$ $q=1$, \\
$(-2|0|\overline{n-1,3})$ &$p=3,$ $q=3$, \\
$0$ & for all other p and q.
\end{tabular}
\right.
$$
The spectral sequence degenerates. Therefore,
$$H^i(P_{24},V)= \left\{
\begin{tabular}{llllll}
$(0|0|0|n)\oplus (0|n-2,0|2)$ &$i=3,$ \\
$(-2|0|\overline{n-1,3})$ &$i=6,$ \\
$0$ &$i\neq 3,6.$
\end{tabular}
\right.
$$
\sectionnew{Boundary cohomology of $GL_4({\mathbb Z})$}
In this section we compute the cohomology of the boundary of the
Borel-Serre compactification associated to $GL_4({\mathbb Z})$ with coefficients in
$$V=S^{n-4}V_4\otimes det = L[n-3,1,1,1].$$ The Eisenstein cohomology,
which in our case is the whole group cohomology, injects
into the cohomology of the boundary.
We recall briefly several statements about Borel-Serre compactification
associated to $GL_m({\mathbb Z})$. Let
$$X=GL_m({\mathbb R})/(SO_m({\mathbb R})\times{\mathbb R}^{\times}_{>0}).$$
And let
$$Y=GL_m({\mathbb Z})\backslash X.$$
Then the Borel-Serre compactification of $Y$, denoted by $\overline{Y}$,
is a compact space, containing $Y$, and of the same homotopy type.
The space $\overline{Y}$ is obtained by attaching a cell $\sigma_P$ to $X$
for each parabolic subgroup $P$. Denote by $Y_P$ the projection
of $\sigma_P$ to $\overline{Y}$. Let $\overline{Y}_P$ be the closure of
$Y_P$. Then $\overline{Y}_Q \subset\overline{Y}_P$ when $Q\subset P$.
The boundary of $\overline{Y}$ is obtained by gluing together
the spaces $\overline{Y}_P$.
In the following computation we shall denote by $Y_{ij}$ the space
$Y_{P_{ij}}$. For these spaces we have
$$H^i_{top}(\overline{Y}_{ij},\iota^*F_V)=
H^i_{group}(P_{ij},V),$$
for a suitable sheaf $F_V$ on $\overline{Y}$, where $\iota$ is the inclusion
of $\overline{Y}_{ij}$ into $\overline{Y}$. For simplification we will
not write the restriction functor $\iota^*$.
The cohomology of the boundary can be computed using a spectral sequence
of Mayer-Vietoris type.
$$
\xymatrix{
&H^q(\overline{Y}_{13},F_V)
\ar[r] \ar[dr]
&H^q(\overline{Y}_{12},F_V) \ar[dr]\\
E_1^{*.q}: &H^q(\overline{Y}_{12,34},F_V) \ar[ur]
|!{[u];[r]}\hole \ar[dr] |!{[d];[r]}\hole
& H^q(\overline{Y}_{23},F_V) \ar[r]
& H^q(\overline{Y}_B,F_V)\\
&H^q(\overline{Y}_{24},F_V) \ar[r] \ar[ur]
& H^q(\overline{Y}_{34},F_V) \ar[ur]
}
$$
The direct sum of the first column will be $E_1^{0,q}$; the direct
sum of the second column will be $E_1^{1,q}$; and $E_1^{2,q} =
H^q(\overline{Y}_B,F_V) $. We have non-zero terms when $q=2,3,4$
or $6$. Similarly to the Mayer-Vietoris sequence, we want every
square at the $E_1$ level
to be anti-commutative. It can be achieved in the following way.
First, consider the maps induced by the inclusion of the boundary
components. Then the squares will commute. Then change the sign of
every other arrow mapping a subspace of $E_1^{0,q}$ to a subspace
of $E_1^{1,q}$ as it is done in the definition of the spectral
sequence. Then the squares will anti-commute. \bth{6.1} The above
spectral sequence stabilizes at $E_2$ level. It converges to the
cohomology of the boundary of the Borel-Serre compactification
associated to $GL_4({\mathbb Z})$, which is
$$H^i_{\mathrm d}(GL_4({\mathbb Z}),V)=\left\{
\begin{tabular}{lll}
$(0|0|0|n)\oplus (\overline{n-3,-1}|2|2)\oplus (\overline{n-3,1}|0|2)$
& $i=3,$\\
$0$
& $i\neq 3,$
\end{tabular}\right.
$$
where $$(a_1|a_2|\dots |a_k)=\otimes_{i=1}^k H^0(GL_1({\mathbb Z}),L[a_i]),$$
and
$$(\overline{a_1,a_2}|a_3|a_4)=H^1_{cusp}(GL_2({\mathbb Z}),L[a_1,a_2])
\otimes (a_3|a_4).$$
\eth
\proof
We consider all non-vanishing
terms of the spectral sequence at $E_1$ level. The non-vanishing
terms occur
at $q=2,3,4$ and $6$. For a fixed $q$ we have arrows going in
direction of the index $p$ induced by the inclusion of the
parabolic subgroups. We compute
$kernel/image$ for these arrows in order to find the $E_2$ level
of the spectral sequence. As a consequence we find that the
spectral sequence degenerates at $E_2$ level. Then we compute the
cohomology to which it converges, which is the cohomology of the
boundary.
\subsection{Computation of $E_2^{*,2}$}
For the $E_1^{p,2}$-terms the only non-zero cohomologies are
$H^2(P_{12},V)$ and $H^2(B,V)$. We have
$$(n-3,1|0|2) \rightarrow (0|n-2|0|2).$$
Therefore,
$$E_2^{p,2}=\left\{
\begin{tabular}{ll}
$(\overline{n-3,1}|0|2)$ & $p=1$\\
$0$ & $p \neq1$
\end{tabular}
\right.
$$
\subsection{Computation of $E_2^{*,3}$}
First we consider the case $n>5$. Now we describe the $E_1^{*,3}$
terms. Consider the columns of the diagram below. Break each
column into pairs of vector spaces. Each pair comes one parabolic
subgroup. For example $(0|0|0|n)$ and $(\overline{n-3,-1}|2|2)$
come from third cohomology of $P_{13}$. The two vector spaces
below come from the third cohomology of $P_{12,34}$. The maps
correspond to the inclusion of the parabolic subgroups.
$$
\xymatrix{
(0|0|0|n) \ar[ddr] \ar[r] & (0|0|0|n)\ar[ddr]\\
(\overline{n-3,-1}|2|2) \ar[r] & (n-3,-1|2|2)\ar[ddr]\\
(n-3,-1|2|2) \ar[ur] \ar[ddr] & (0|0|0|n) \ar[r] & (0|0|0|n)\\
(0|0|n-1,1) \ar[ddr] & (0|n-2,0|2) & (-2|n-2|2|2)\\
(0|0|0|n) \ar[uur] & (-2|n-2|2|2) \ar[ur]\\
(0|n-2,0|2) \ar[uur] & (0|0|n-1,1) \ar[uuur]
}
$$
There are many cancellations which occur when passing to the $E_2$
level. In order to follow the cancellations one considers the
connected graphs of the above diagram. There are 3 connected
graphs: one containing the space
$(0|0|0|n)$ coming from the 3rd cohomology of the
Borel subgroup, and another containing $(-2|n-2|2|2)$ again from
the 3rd cohomology of the Borel subgroup, and the 3rd containing
$(0|n-2,0|2)$ from the 3rd cohomology of $P_{24}$. Consider the
graph containing $(0|0|0|n)$. The only term that is not cancelled
at $E_2$ level is the vector space $(0|0|0|n)$ which comes from
the parabolic group $P_{24}$. Now consider the second connected
graph, containing $(-2|n-2|2|2)$. After cancelation the only
vector space left is $(\overline{n-3,-1}|2|2)$ coming from
$P_{13}$. For the 3rd connected graph, there are two vertices
corresponding to $(0|n-2,0|2)$. So they cancel and do not
contribute to the $E_2$ level. Thus, for $n>4$ we have
$$E_2^{p,3}=\left\{
\begin{tabular}{ll}
$(0|0|0|n)\oplus (\overline{n-3,-1}|2|2)$ & $p=0$,\\
$0$ & $p\neq 0.$
\end{tabular}
\right.
$$
Now we have to examine the case $n=4$. The vector spaces are all
the same as in the case $n>4$, except that the summand
$(\overline{n-3,-1}|2|2)$ in the 3rd cohomology of $P_{13}$
vanishes. Note also that for $n=4$, we have $(0|n-2,0|2)=0$.
Then the $E_1^{*,3}$ terms form the following anticommutative
diagram:
$$
\xymatrix{
(0|0|0|4) \ar[ddr] \ar[r] & (0|0|0|4)\ar[ddr]\\
0 & (4-3,-1|2|2)\ar[ddr]\\
(4-3,-1|2|2) \ar[ur] \ar[ddr] & (0|0|0|4) \ar[r] & (0|0|0|4)\\
(0|0|4-1,1) \ar[ddr] & 0 & (-2|4-2|2|2)\\
(0|0|0|4) \ar[uur] & (-2|4-2|2|2) \ar[ur]\\
0 & (0|0|4-1,1) \ar[uuur]
}
$$
There are 2 connected graphs in the above diagram. One containing
the vector space $(0|0|0|4)$ coming from the Borel subgroup. The
other containing the vector space $(-2|4-2|2|2)$ again coming from
the Borel subgroup. Consider the graph containing $(0|0|0|4)$. The
only term that is not canceled at the $E_2$ level is the vector space
$(0|0|0|4)$ which comes from the parabolic subgroup $P_{24}$. Now
consider the second connected graph, containing $(-2|4-2|2|2)$.
All the terms of that graph cancel when passing to the $E_2$ level.
Thus, for $n=4$ we have
$$E_2^{p,3}=\left\{
\begin{tabular}{ll}
$(0|0|0|4)$ & $p=0$,\\
$0$ & $p\neq 0.$
\end{tabular}
\right.
$$
\subsection{Computation of $E_2^{*,4}$}
For $q=4$ the only non-zero terms at $E_1$ level come from $P_{13}$ and
$P_{23}$. We have
$$H^4(P_{13},V))\rightarrow H^4(P_{23},V)).$$
From the first theorem (theorem 5.1) in the section "Cohomology
of the parabolic subgroups of $GL_4$" we obtain
$$(-2|n-2,2|2)\rightarrow (-2|n-2,2|2).$$
Therefore,
$$E_2^{*,4}=0.$$
\subsection{Computation of $E_2^{*,6}$}
When $q=6$, for all even $n$, the non-zero terms give
$$E_1^{*,6}: H^6(P_{24},V)\rightarrow H^6(P_{34},V)\rightarrow H^6(B,V),$$
which are isomorphic to
$$(-2|0|\overline{n-1,3}) \rightarrow (-2|0|n-1,3) \rightarrow
(-2|0|2|n)$$
from theorem 5.1.
The above sequence is exact. Therefore,
$$E_2^{*,6} =0.$$
The spectral sequence degenerates at $E_2$ level. Therefore, we can find what
is the cohomology of the boundary of the Borel-Serre compactification
associated to $GL_4({\mathbb Z})$ with coefficients in the sheaf $F_V$ associated to
$$V=S^{n-4}V_4\otimes det.$$
Let us recall the notation that we are going to use.
By $H^i_{\mathrm d}(GL_4({\mathbb Z}),V)$ we mean the cohomology of the boundary of
the Borel-Serre compactification associated to $GL_4({\mathbb Z})$ with coefficients
in the sheaf $F_V$.
For even $n$ greater than $4$ we have
$$H^i_{\mathrm d}(GL_4({\mathbb Z}),S^{n-4}V_4\otimes det)=\left\{
\begin{tabular}{ll}
$(0|0|0|n)\oplus (\overline{n-3,-1}|2|2) \oplus (\overline{n-3,1}|0|2)$
& $i=3$,\\
$0$ & $i\neq 3$.
\end{tabular}
\right.
$$
Note that the first two summands for the 3rd cohomology of the boundary
come from the 3rd cohomology of the maximal parabolic subgroups, and the last
summand comes from the 2nd cohomology of a non-maximal parabolic subgroup.
Since it comes from the second cohomology of a parabolic subgroup
but contributes to the 3rd cohomology of the boundary, it is called a ghost
class.
\sectionnew{Cohomology of $GL_4({\mathbb Z})$}
We are going to show that the ghost class does not enter the
Eisenstein cohomology of $GL_4({\mathbb Z})$, which coincides with the whole
cohomology of $GL_4({\mathbb Z})$.
Since the cohomology of the boundary is concentrated in degree 3,
it is enough to compute the homological Euler characteristic of $GL_4({\mathbb Z})$
with coefficients in $S^{n-4}V_4\otimes det$. Recall that the homological
Euler characteristic of an arithmetic group $\Gamma$ with
coefficients in a finite dimensional representation $V$ is
$$\chi_h(\Gamma,V)=\sum_i (-1)^i\dim H^i(\Gamma,V).$$
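Since the Eisenstein cohomology, which in our case is the whole cohomology,
injects into the boundary cohomology computed above, and the latter is
concentrated in degree $3$, the Euler characteristic determines the
cohomology completely; in other words,
$$\chi_h(GL_4({\mathbb Z}),S^{n-4}V_4\otimes det)=
-dim H^3(GL_4({\mathbb Z}),S^{n-4}V_4\otimes det).$$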
Note that $S^{n-4}V_4\otimes det=L[n-3,1,1,1]$ and
$S^{n-2}V_2\otimes det=L[n-1,1]=L[n-3,-1]$.
From \cite{???} we know that
$$\chi_h(GL_4({\mathbb Z}),S^{n-4}V_4\otimes det)=
\chi_h(GL_2({\mathbb Z}),S^{n-2}V_2\otimes det).$$
Therefore, for even $n$ greater than $4$, we have
$$H^i(GL_4({\mathbb Z}),S^{n-4}V_4\otimes det)=\left\{
\begin{tabular}{ll}
$(0|0|0|n)\oplus (\overline{n-3,-1}|2|2)$
& $i=3$,\\
$0$ & $i\neq 3$.
\end{tabular}
\right.
$$
In the case $n=4$ we use the same argument.
$$H^i_{\mathrm d}(GL_4({\mathbb Z}),det)=\left\{
\begin{tabular}{ll}
$(0|0|0|4)$ & $i=3$,\\
$0$ & $i\neq 3$.
\end{tabular}
\right.
$$
Also, the homological Euler characteristic gives
$$\chi_h(GL_4({\mathbb Z}),det)=-1.$$
Therefore, for $n=4$ the cohomology of the boundary coincides with
the Eisenstein cohomology. And we have
$$H^i_{Eis}(GL_4({\mathbb Z}),det)=\left\{
\begin{tabular}{ll}
$(0|0|0|4)$ & $i=3$,\\
$0$ & $i\neq 3$.
\end{tabular}
\right.
$$
On the other hand,
$$H^i_{cusp}(SL_4({\mathbb Z}),{\mathbb Q})=0.$$
Therefore,
$$H^i_{cusp}(GL_4({\mathbb Z}),det)=0.$$
And we conclude that
$$H^i(GL_4({\mathbb Z}),det)=\left\{
\begin{tabular}{ll}
$(0|0|0|4)$ & $i=3$,\\
$0$ & $i\neq 3$.
\end{tabular}
\right.
$$
\section{Motivation}
\label{sec-intro}
Precise measurement of $\pi^0$ production
when a neutrino scatters coherently off a
target nucleus,
{\boldmath $\nu + {\cal A} \rightarrow \nu + {\cal A} + \pi^0$},
depicted in Figure~\ref{fig-feynman},
is challenging: the cross-section ($\sigma$)
of coherent-$\pi^0$ (\boldmath {Coh$\pi^0$} ) is only about 0.003 of the
inclusive neutrino charged current (CC) cross-section
at $E_\nu \simeq 25$~GeV~\cite{Rein:1982pf};
the single $\pi^0$ is notoriously refractory to accurate
identification in neutrino detectors. Consequently
the past cross-section measurements of \boldmath {Coh$\pi^0$} \
have been poor,
with a precision no better than $\simeq 30\%$
~\cite{EXAP,EXGGM,EXCHARM,EXSKAT,EX15FT};
recently the MiniBooNE experiment
has reported the fraction of \boldmath {Coh$\pi^0$} \ in all exclusive NC $\pi^0$
production~\cite{EXMB}.
This challenge is the primary motivation
for the present analysis. The second motivation is utilitarian.
Since \boldmath {Coh$\pi^0$} \ is almost collinear
with the incident neutrino, in massive neutrino detectors
a \boldmath {Coh$\pi^0$} \ event will manifest itself as a
forward electromagnetic shower
posing a background for the \ne-induced signal.
This is relevant to the long baseline experiments
searching for \ne\ appearance with the
purpose of measuring the mixing angle $\Theta_{13}$.
A precise measurement of \boldmath {Coh$\pi^0$} ,
although conducted at energies higher than those of the
long baseline projects at Fermilab (MINOS/NO$\nu$A),
will constrain the error on a model-prediction of this
background to the \ne\ appearance.
Finally, the study of coherent pion production provides an
insight into the structure of the weak hadronic
current~\cite{Rein:1982pf, Belkov:1986hn}, and offers
a test of the partially conserved axial-vector current hypothesis
(PCAC)~\cite{ADLER}. Ref.~\cite{Kopeliovich:1992ym}
presents an excellent review of these topics.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]
{cohpi0_feynman3.eps}
\caption
{Diagram of the \boldmath {Coh$\pi^0$} \ process,
{\boldmath $\nu + {\cal A} \rightarrow \nu + {\cal A} + \pi^0$}. }
\label{fig-feynman}
\end{center}
\end{figure}
A coherent interaction,
Figure~\ref{fig-feynman},
where no charge or isospin is
exchanged between the $\nu$ and
the target nucleus (${\cal A}$) which
recoils without breakup, leads
to an enhancement in the cross-section. In the \boldmath {Coh$\pi^0$} \ process
the interaction is mediated by a pomeron-like particle
bearing the quantum number of the vacuum. The
cross section is dominated by the axial vector current.
The contribution of the isovector current to the \boldmath {Coh$\pi^0$} \ process
is minimal; in this picture the $Z^0$ can be viewed as a $\rho$ meson which
produces a $\pi^0$ by exchanging an isoscalar $\omega$ with
${\cal A}$. This minimal contribution of the isovector
current to the \boldmath {Coh$\pi^0$} \ arises for two reasons:
(a) the cross section of the isovector $\rho$-${\cal A}$
interaction is zero in the forward direction, the direction preferred by
the nuclear form factor; and (b) the vector component has
a contribution proportional to $(1-2\sin^2\theta_W)^2$
(about 0.3 for $\sin^2\theta_W \simeq 0.23$),
reducing the isovector contribution further,
the net reduction with respect to the axial part being a factor of
3.5. The PCAC hypothesis stipulates
that for zero-momentum transfer ($Q^2=0$, where $Q^2$
is the negative of the square of the four-momentum
transfer from the incident neutrino to the target),
the $\nu$-${\cal A}$ cross section can be
related to the $\pi$-${\cal A}$ cross section.
The $\nu$-${\cal A}$
cross section in the forward direction is related to the strong
$\pi$-${\cal A}$ interaction as follows:
\begin{equation}
\left [ \frac{d^3\sigma (\nu {\cal A} \rightarrow \nu {\cal A} \pi^0)}{dxdydt} \right ]_{Q^2=0}=
\frac{G^2ME_\nu}{\pi^2} \frac{1}{2} f^2_\pi (1-y)
\left [ \frac {d\sigma(\pi {\cal A} \rightarrow \pi {\cal A})}{dt} \right ]_{yE_\nu=E_\pi}
\label{eq-rscohq0}
\end{equation}
\noindent
In Equation~(\ref{eq-rscohq0})
$G$ is the Fermi coupling constant,
$M$ is the nucleon mass, $x=Q^2/2M\nu$ and
$y=\nu/E_\nu$, where $\nu$ is the energy
of the hadronic system in the final state,
are the standard scaling variables,
and $f_\pi=0.93\,m_\pi$ is the pion decay constant.
The variable $t$ quantifies the coherence (forwardness) and is
defined as $t=p^2_T=(q-P_\pi)^2$, i.e. the square of
the four-momentum transfer to the nucleus.
In a neutral current (NC) event since the emergent
neutrino remains invisible, $|t|$ cannot be measured.
Instead the very small transverse momentum expected
in a coherent interaction can be quantified using the variable
$\zeta$ defined as:
$\zeta_{\pi^0}=E_{\pi^0} \left [ 1-\cos(\theta_{\pi^0}) \right ].$
This variable has the property that its
distribution depends weakly on the incident neutrino energy.
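For illustration (the numbers are only indicative), a coherently produced
$\pi^0$ of energy $E_{\pi^0}=5$~GeV emitted at $\theta_{\pi^0}=50$~mrad gives
$\zeta_{\pi^0}=E_{\pi^0}\left[1-\cos(\theta_{\pi^0})\right]\simeq
E_{\pi^0}\,\theta_{\pi^0}^2/2\simeq 0.006$~GeV, illustrating the small
values of $\zeta$ expected in coherent production.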
For low but non-zero $Q^2$ values, the hadron
dominance model~\cite{HDM} provides a guide
to extend the cross section
formula for the \boldmath {Coh$\pi^0$} -like process. The $Z^0$ boson can be
viewed as a superposition of axial vector and vector currents.
These compose the weak hadronic current.
\section{Beam and Detector}
\label{sec-nomad}
The Neutrino Oscillation MAgnetic
Detector (NOMAD) experiment at CERN used a
neutrino beam ~\cite{CERN-BEAM}
produced by
the 450~GeV protons from the
Super Proton Synchrotron (SPS) incident on
a beryllium target and producing
secondary $\pi^{\pm}$, $K^{\pm}$, and $K^0_L$ mesons.
The positively charged mesons were focussed by
two magnetic horns into a
290~m long evacuated decay pipe. Decays of
$\pi^{\pm}$, $K^{\pm}$, and $K^0_L$
produced the SPS neutrino beam.
The average neutrino flight path to
NOMAD was 628~m, the detector being
836~m downstream of the Be-target.
The SPS beamline and the neutrino flux incident
at NOMAD are described in~\cite{NOMAD-FLUX}.
The $\nu$-flux in NOMAD is constrained by the
$\pi^{\pm}$ and $K^{\pm}$ production measurements in
proton-Be collision by the SPY experiment
~\cite{SPY1, SPY2, SPY3} and by an
earlier measurement conducted by
Atherton {\it et al.}~\cite{ATHERTON}.
The $E_\nu$-integrated relative composition of
\nm:\nmb:\nel:\neb\ CC events,
constrained $in$ $situ$ by the
measurement of CC-interactions
of each of the neutrino species, is
$1.00: 0.025: 0.015:0.0015$. Thus, 95\% of $\nu$-events
are due to \nm-interactions with a small \nmb-contamination.
The NOMAD experiment was designed to search for
\mutotau\ oscillations at $\Delta m^2 \geq 5$~eV$^2$,
and in the large $\Delta m^2$ range it set
a stringent limit~\cite{NOMAD-NMNT} on this search,
along with the CHORUS experiment~\cite{CHORUS-NMNT}.
The NOMAD apparatus~\cite{NOMAD-NIM}
was composed of several sub-detectors. The active
target comprised 132 planes of $3 \times 3$~m$^2$ drift chambers (DC)
with an average density similar to that of liquid
hydrogen (0.1~g/cm$^3$).
On average, the equivalent material in the DC
encountered by
particles produced in a $\nu$-interaction
was about half a radiation length
and a quarter of an hadronic interaction length ($\lambda$).
The fiducial mass of the NOMAD DC-target, 2.7 tons, was
composed primarily of carbon (64\%), oxygen (22\%), nitrogen (6\%),
and hydrogen (5\%) yielding an effective atomic number,
\boldmath {${\cal A}$} =12.8, similar to carbon.
Downstream of the DC, there were nine modules of transition radiation
detectors (TRD), followed by a preshower (PRS) and a lead-glass
electromagnetic calorimeter (ECAL).
The ensemble of DC, TRD, and PRS/ECAL was placed within
a dipole magnet providing a 0.4~T magnetic field orthogonal
to the neutrino beam line.
Two planes of scintillation counters, $T_1$ and $T_2$,
positioned upstream and downstream of the TRD,
provided the trigger in combination with an
anti-coincidence signal, ${\overline V}$,
from the veto counter upstream and outside the magnet.
Downstream of the magnet was a hadron calorimeter,
followed by two muon-stations each comprising large area
drift chambers and separated by an iron filter
placed at 8- and 13-$\lambda$'s downstream of
the ECAL, that provided a clean identification of the muons.
The schematic of the detector
in the Y-Z view is shown in Figure~\ref{fig-evtpi01}.
The charged tracks in the DC were measured with an
approximate momentum ($p$) resolution of
$\sigma_p/p = 0.05/\sqrt{L} \oplus 0.008p/\sqrt{L^5}$
($p$ in GeV/$c$ and $L$ in meters)
with unambiguous charge separation in the energy range of interest.
The detailed individual reconstruction
of each charged and neutral track and their
precise momentum vector measurement
enabled a quantitative description of
the event kinematics: the strength and
basis of NOMAD analyses.
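As an indicative example, for a track with $p=5$~GeV/$c$ measured over
$L=1$~m, the momentum resolution quoted above (reading $\oplus$ as addition
in quadrature) is
$\sigma_p/p\simeq\sqrt{(0.05)^2+(0.008\times 5)^2}\simeq 6\%$.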
The experiment recorded over 1.7 million
neutrino interactions in its active drift-chamber (DC) target.
These data are unique in that they constitute the largest
high resolution neutrino data sample with
accurate identifications of \nm, \nmb, \nel, and \neb\ charged
current interactions in the energy range
${\cal O}(1) \leq E_\nu \leq 300$~GeV.
In addition, the experiment recorded over 2 million
$\nu$-interactions in the Al-coil
and over 20 million in the Fe-scintillator calorimeter,
both upstream of the active-DC target.
\newpage
\begin{landscape}
\begin{figure}
\begin{center}
\includegraphics[scale=1.00]
{cohpi-12839-3978.epsi}
\caption{Schematic of the DC tracker and a
coherent $\pi^0$ event candidate in NOMAD where both
photons from the $\pi^0$ decay convert in the DC's.
The red crosses represent drift chamber digitizations that
are used in the track-reconstruction,
whereas the black ones are not.
The upstream (\boldmath {$\gamma 1$} ) and downstream (\boldmath {$\gamma 2$} ) momentum
vectors when extrapolated upstream
intersect within the fiducial volume.
}
\label{fig-evtpi01}
\end{center}
\end{figure}
\end{landscape}
\section{The \boldmath {Coh$\pi^0$} \ Signature and Models}
\label{sec-sig-rsbk}
The signature for \boldmath {Coh$\pi^0$} \ is a single forward $\pi^0$
and nothing else. The \boldmath {$\pi^0 $} \ will promptly decay into
two forward photons ($\gamma$). In massive neutrino
detectors the signal will manifest itself as an electromagnetic
shower, short and compact, with a forward direction.
The accompanying irreducible backgrounds will be
\ne, \ane, and $\nu$-NC events dominated by
$\pi^0$'s. In NOMAD, however, the \boldmath {Coh$\pi^0$} \
signal will reveal two distinct photons.
The photons will either both convert in the DC target,
or one of the photons will convert in the tracker and the other
will be measured in the electromagnetic calorimeter (ECAL),
or both photons will be measured in the ECAL.
In this analysis we focus on the event
sample where both photons convert in the DC target.
Figure~\ref{fig-evtpi01} shows such an event. The momenta
of the associated $e^-$ and $e^+$ are measured in the magnetic field.
Each event thus provides a complete $\pi^0$-momentum vector.
We use the Rein-Sehgal (RS) model~\cite{Rein:1982pf}
to simulate the \boldmath {Coh$\pi^0$} \ interaction in the NOMAD detector.
As a check we also simulated the \boldmath {Coh$\pi^0$} \ interaction following the
Belkov-Kopeliovich (BK)~\cite{Belkov:1986hn} model.
The \boldmath {$\pi^0 $} \ reconstruction efficiency computed using
the BK model is similar to that determined by the RS model.
Recently a set of new \boldmath {Coh$\pi^0$} \ calculations has
been proposed (see~\cite{Singh:2006bm},
\cite{AlvarezRuso:2007it}, and~\cite{Paschos:2005km}).
They focus on \boldmath {Coh$\pi^0$} \ production
in low-energy neutrino interaction (${\cal {O}} (1)$~GeV).
However, the present \boldmath {Coh$\pi^0$} \ measurement at
an average $E_\nu \simeq 25$~GeV, more
precise by about a factor of three than currently available,
could be used to constrain parameters used in
these calculations.
\section{Selection of Exclusive $2$-$\gamma$ Events}
\label{sec-sel}
We select events with two converted
photons in the DC target. The analysis
uses the entire NOMAD data and the
associated Monte Carlo (MC) samples
as described in ~\cite{NOMAD-XSEC}.
The number of fully corrected \nm-CC
in the standard fiducial volume of NOMAD is
$1.44 \times 10^6$ events: the denominator for
the present measurement.
The NC-DIS sample, defined by requiring that
the generated invariant hadronic mass squared ($W^2$)
be $\geq 1.96$~GeV$^2$,
is normalized to $0.53 \times 10^6$
events which corresponds to 0.37 of the \nm-CC. The
NC-Resonance ($W^2 \leq 1.96$~GeV$^2$)
sample is set at 3.5\% of the NC-DIS.
The MC sample specific to this analysis is the
RS \boldmath {Coh$\pi^0$} \ simulation. Motivated by the \nm-induced
coherent-$\pi^+$ cross sections
presented in~\cite{Belkov:1986hn} and the fact
that the NC/CC coherent pion cross section ratio should be (1/2),
the \boldmath {Coh$\pi^0$} \ sample is normalized to 5000 events
with generated $E_{\pi^0} \geq 0.5$~GeV.
The large sample of data and those of the NC and CC
deep inelastic scattering (DIS) MC events are
subjected to a preselection. The preselection includes
the following requirements: (a) the presence of one
converted photon whose reconstructed
conversion point is defined as the event vertex ($X$, $Y$, $Z$);
(b) no identified muons; (c) vertex coordinates
of the converted photon within the fiducial volume,
$|X,(Y-5)|\leq 130$~cm and $Z_{Min} \leq Z \leq 405$~cm where
$Z_{Min}$ depends upon the detector configuration (see
~\cite{NOMAD-XSEC} for detail);
(d) the invariant mass ($M_{ee}$) of
the $e^-$ and $e^+$ less than 100~MeV/$c^2$ which
selects both the converted photons --- the upstream being \boldmath {$\gamma 1$} ,
and the downstream being \boldmath {$\gamma 2$} ---,
with 95\% purity and 97\% efficiency.
The preselection reduces the data and the NC-MC samples by
a factor of about a hundred.
The cuts for the final selection of the \boldmath {Coh$\pi^0$} \ events are set
to maximize the selection efficiency
of two photon conversions in the DC tracker.
The cuts are optimized to reduce the
NC-DIS background while keeping the \boldmath {Coh$\pi^0$} \ signal high.
We also look at about 10\% of the data
to check the efficacy of cuts used
in reducing the background induced
by $\nu$-interactions occurring outside the fiducial
volume --- the outside background (OBG).
The remaining
data have no
influence on the choice of the cuts.
The results presented here include the entire data sample.
Among the generated \boldmath {Coh$\pi^0$} ,
only about 29\% of events trigger the apparatus. The
loss arises from the non-converted photons ($\simeq 50\%$)
and, among the converted photons, from
the $e^-/e^+$ tracks that do not reach
the downstream trigger counters ($\simeq 20\%$).
The final event selection follows the preselection
cuts with more stringent requirements. The $M_{ee}$
cut is tightened to $50$~MeV/$c^2$ which increases
the photon conversion
purity to $\geq 98\%$ while reducing the efficiency
to 93\%. Two additional cuts are imposed to
reduce outside background by requiring
that there be no tracks upstream
of the first photon conversion (\boldmath {$\gamma 1$} ) and that
there be no hits associated with the
tracks composing the \boldmath {$\gamma 1$} \ in the most upstream DC.
The second photon conversion, \boldmath {$\gamma 2$} ,
occurs downstream. The two reconstructed
photon momentum vectors enable one to
determine the $\nu$-interaction vertex by
extrapolating the vectors upstream and finding
the coordinates of their distance of closest approach (DCA).
The procedure defines the DCA-vertex with
coordinates denoted as DCA-X, DCA-Y, and DCA-Z.
The DCA-vertex resolution is well understood
using ordinary $\nu$-interactions where the
primary charged tracks composing the
event vertex are ignored
and the rest of the
event is subjected to the \boldmath {$\gamma 1$} \ and \boldmath {$\gamma 2$} \ reconstruction.
The DCA-X and DCA-Y resolution is
$\simeq 2.5$~cm. However, the DCA-Z resolution is poor,
$\simeq 13$~cm.
This is expected since photons from a \boldmath {Coh$\pi^0$} \ decay
have a small opening angle; consequently,
their intersection in the Z-direction will be poorly determined.
Finally, the angular resolution of the \boldmath {$\gamma 1$} \
and \boldmath {$\gamma 2$} \ vectors is precise ($\simeq 5$~mrad)
but the momentum resolution,
as determined via the curvature of the
$e^-$ and $e^+$ tracks, is poorer ($\simeq 13\%$) due to the
bremsstrahlung losses.
Therefore we have principally relied
upon angular variables to determine the signal.
Table~\ref{tab-sel} summarizes the selection of events in
the MC samples.
The reconstruction efficiency of the \boldmath {Coh$\pi^0$} \ signal
is 7.8\% (the BK model yields 7.7\%).
Table~\ref{tab-sel} also shows that the NC-Resonance
production contributes less than 1\% to the sample. In the following
the resonance contribution is simply added to the NC-DIS
component.
The preselected data are subjected to identical cuts. Having
identified the two photons, and having imposed the DCA-X/Y
cuts, data can be compared with the respective predictions
as shown in the Table~\ref{tab-finalsel}.
Note that the fraction of events failing the DCA-Z
cut is larger in data than in the \boldmath {Coh$\pi^0$} \ and NC-DIS
simulations.
This is due to neutrinos interacting in material just
outside the fiducial volume cut such as the
magnet, coil, etc., which are not simulated
in the MC. Some of these
interactions will also produce events
with DCA-Z $\geq Z_{Min}$. The measurement of this
background and the calibration of the NCDIS and \boldmath {Coh$\pi^0$} \
predictions are presented in the following section.
\begin{table}\centering
{\small{
\begin{tabular}{||c||c||c|c||}
\hline
Cut & \boldmath {Coh$\pi^0$} -RS & NC-DIS & NC-Res \\
\hline \hline
Raw & 1435.4 & 4743.2 & 1132.8 \\
No $\mu$-ID & 1435.4 & 4687.9 & 1125.7 \\ \hline
\boldmath {$\gamma 1$} \ Fid-Cuts & 1373.0 & 4682.3 & 1030.4 \\
\boldmath {$\gamma 1$} \ $M_{ee}\leq50$~MeV & 917.5 & 3664.9 & 27.2 \\
No Upstream Track & 862.2 & 1717.7 & 23.8 \\
No Veto & 858.4 & 1659.5 & 23.7 \\ \hline
\boldmath {$\gamma 2$} \ Fid-Cuts & 128.9 & 311.7 & 1.2 \\
\boldmath {$\gamma 2$} \ $M_{ee}\leq50$~MeV & 117.5 & 236.7 & 1.1 \\ \hline
$E_{\pi^0}\geq 0.5$~GeV & 117.5 & 236.7 & 1.1 \\
DCA-$|X,(Y-5)|\leq 130$~cm & 115.9 & 225.2 & 1.0 \\ \hline
DCA-$Z \geq Z_{Min}$ & 112.6 & 222.5 & 1.0 \\ \hline
DCA-$Z \leq Z_{Min}$ & 3.3 & 2.7 & 0.0 \\
\hline \hline
\end{tabular}
\caption{Selection of Exclusive 2-$\gamma$ Events in the MC Samples:
The MC samples have been normalized as presented in Section~\ref{sec-sel}.}
\label{tab-sel}
}}
\end{table}
\section{Extraction of the \boldmath {Coh$\pi^0$} \ Signal}
\label{sec-signal}
The extraction of the \boldmath {Coh$\pi^0$} \ signal is data driven.
Monte Carlo simulations cannot reliably provide
the normalization of the outside-background, the
normalization of the NC-DIS induced \boldmath {$\pi^0 $} \ production where
nothing else is visible,
or the shape of the $\zeta$ variables.
Distinct control samples in the data
provide a measure of these backgrounds, including
the integral and the shape of the variables
relevant to this analysis.
First we present the measurement of
background induced by $\nu$-interactions outside
the fiducial volume (OBG).
As shown in Table~\ref{tab-sel},
the fraction of MC events
in the fiducial region but with DCA-Z $\leq Z_{Min}$
is negligible. The 169 data events that fail
the DCA-Z cut (see Table~\ref{tab-finalsel}) are dominated by
interactions upstream of the detector ($Z \leq Z_{Min}$);
the events entering from the sides
give only a small contribution
($\leq 2\%$ of the background). This is for two reasons:
first, since the transverse resolution of DCA-vertex is accurate
to $\simeq \pm 3$~cm, the DCA-X and DCA-Y cuts largely eliminate
these events; second, among the events relevant to
the \boldmath {Coh$\pi^0$} \ selection the two photons travel along the beam
while particles entering the detector from the sides
have much larger angles.
The 169 events failing the DCA-Z cut (Table~\ref{tab-finalsel})
are the key to providing the normalization for the
outside-background (OBG).
To determine the OBG a different data sample
is selected in which a
vertex is reconstructed upstream of the detector
($Z \leq Z_{Min}$). In this
control sample the primary tracks are then ignored
and the events are subjected to the \boldmath {Coh$\pi^0$} \ analysis.
A total of 1378 events survive this selection of which
451 (927) events have the DCA vertex
within (outside) the fiducial volume.
Figure~\ref{fig-dca-comp-obg} compares the shape of the
Z-distribution of the DCA of the 169 events that fail the DCA cut in
the \boldmath {Coh$\pi^0$} \ signal sample with the 927 events that fail this cut in the
control sample. The shapes agree well.
We thus measure the normalized OBG prediction to be:
$ \left [ 451/927 \right ] \times 169 = 82.2 \pm 6.9$ events.
The distributions of the OBG variables (vertex position, \boldmath {$\zeta$}, \boldmath {M$_{\gamma \gamma}$} , etc.)
are measured using the two-photon data with
DCA-Z$\leq Z_{Min}$ normalized to 82.2 events.
Table~\ref{tab-finalsel} presents the calibrated OBG background.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]
{zdca_compare_10cm.eps}
\caption
{Comparison of the $Z$-DCA Distributions Failing DCA-Cut:
Shown are $Z$-DCA distributions of the \boldmath {Coh$\pi^0$} \
sample
(solid-black)
and that of events originating from
interactions upstream (open-red).
}
\label{fig-dca-comp-obg}
\end{center}
\end{figure}
Second, we present the measurement of the
NC-DIS background.
The NC-DIS component of the 2-\boldmath {$\gamma $} \ sample is
selected using the kinematic variables. We
use events with $M_{\pi^0} \geq 0.2$~GeV/$c^2$ or
$\zeta_{\gamma 1 / \gamma 2} \geq 0.05$,
where the \boldmath {Coh$\pi^0$} \ contribution
is minimal, to obtain the normalization of the
NC-DIS, 0.86, with a 7.5\% statistical precision.
The distributions of the NC-DIS variables
predicted by the MC are corrected
using the Data-Simulator (DS) technique:
first, NC events with a reconstructed primary
vertex are selected from both data and MC; then,
after removing the primary tracks, these events are subjected
to the \boldmath {Coh$\pi^0$} \ analysis; finally, the ratio Data/MC
provides the DS-correction. This correction is found to
be unity within $\pm 10\%$.
Table~\ref{tab-finalsel} presents the calibrated NC-DIS background.
\begin{table}\centering
{\small{
\begin{tabular}{|||c||c|c|c|c||c|||}
\hline
Cut & \boldmath {Coh$\pi^0$} -RS & NC-DIS & OBG & Total & Data \\
\hline \hline
DCA-$|X,(Y-5)|\leq 130$~cm & 114.2 & 193.7 & 241.9 & 549.8 & 550 \\ \hline
DCA-$Z \geq Z_{Min}$ & 110.9 & 191.4 & 82.2 & 384.5 & 381 \\ \hline
DCA-$Z \leq Z_{Min}$ & 3.3 & 2.3 & 159.7 & 165.3 & 169 \\ \hline \hline
\end{tabular}
\caption{DCA-Cuts and the 2-$\gamma$ Samples:
Data and predictions passing the DCA cuts are shown.
The final calibration of the
\boldmath {Coh$\pi^0$} \ and background predictions are given in
Section~\ref{sec-signal}.
}
\label{tab-finalsel}
}}
\end{table}
Finally, we present the extraction of the \boldmath {Coh$\pi^0$} \ signal which
is based upon three variables: \boldmath {$\zeta_{\gamma 1}$}, \boldmath {$\zeta_{\gamma 2}$}, and \boldmath {$\Theta_{1 2}$},
where \boldmath {$\Theta_{1 2}$}\ is the opening angle between \boldmath {$\gamma 1$} \ and \boldmath {$\gamma 2$} .
The choice of variables is dictated by the resolution.
The variables \boldmath {$\zeta_{\gamma 1}$}\ and \boldmath {$\zeta_{\gamma 2}$}\ are correlated while
\boldmath {$\Theta_{1 2}$}\ displays no correlation with the former variables.
A $\chi^2$ between data and prediction is defined
using two distributions: the two-dimensional \boldmath {$\zeta_{\gamma 1}$}\ and
\boldmath {$\zeta_{\gamma 2}$}\ distribution, and the \boldmath {$\Theta_{1 2}$}\ distribution. The $\chi^2$
between the data and the prediction is minimized with respect to
the \boldmath {Coh$\pi^0$} \ normalization factor, $\alpha$.
The expected numbers of OBG and NC-DIS events are determined
as described above, and are kept fixed, while the simulated
\boldmath {Coh$\pi^0$} \ sample is normalized to 5000 generated events. The
$\chi^2$ is minimized with respect to $\alpha$ which is varied
between 0 and 2 in steps of 0.01. The minimum
$\chi^2$, 45.1 for 44 degrees of freedom (DoF), is obtained for
$\alpha = 0.985 \pm 0.113$. The probability of this fit
is 0.44.
Using the number of \boldmath {Coh$\pi^0$} \ signal (112.6)
in Table~\ref{tab-sel} and $\alpha = 0.985$,
we extract the observed signal: $110.9 \pm 12.5$.
The error is statistical and corresponds to a $\chi^2$
change by one unit.
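The following is a minimal, illustrative sketch of such a one-parameter
$\chi^2$ scan. It is not the analysis code: the binned inputs
(\texttt{data}, \texttt{coh\_template}, \texttt{background}) are
hypothetical placeholders, whereas the actual fit uses the two-dimensional
($\zeta_{\gamma 1}$, $\zeta_{\gamma 2}$) distribution together with the
$\Theta_{12}$ distribution and the backgrounds calibrated above.
\begin{verbatim}
# Illustrative sketch only (hypothetical inputs, not the NOMAD analysis code):
# a one-parameter chi^2 scan for the Coh-pi0 normalization alpha, with the
# background prediction kept fixed.
import numpy as np

def chi2_scan(data, coh_template, background, alphas):
    """Return (best_alpha, best_chi2) from a simple scan over alpha."""
    best_alpha, best_chi2 = None, np.inf
    for a in alphas:
        expected = a * coh_template + background
        mask = expected > 0            # guard against empty bins
        chi2 = np.sum((data[mask] - expected[mask]) ** 2 / expected[mask])
        if chi2 < best_chi2:
            best_alpha, best_chi2 = a, chi2
    return best_alpha, best_chi2

# Hypothetical binned contents (counts per bin):
data         = np.array([40.0, 30.0, 20.0, 12.0,  8.0])
coh_template = np.array([20.0, 12.0,  6.0,  2.0,  1.0])  # signal MC shape
background   = np.array([18.0, 17.0, 15.0, 11.0,  8.0])  # OBG + NC-DIS, fixed

alpha, chi2 = chi2_scan(data, coh_template, background,
                        np.arange(0.0, 2.01, 0.01))
print("best alpha = %.2f, chi2 = %.1f" % (alpha, chi2))
\end{verbatim}
As in the analysis, $\alpha$ is scanned in steps of 0.01 and the statistical
error on $\alpha$ corresponds to a unit change in $\chi^2$ around the minimum.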
To check if the two photon data can be explained using only
OBG and NC-DIS component, we set the \boldmath {Coh$\pi^0$} \ contribution
to zero and fit for the normalization of OBG and NC-DIS ---
their respective distributions being fixed by the data. The
best $\chi^2$ was 80.3 for 43 DoF but neither
the normalization nor any of the data distributions ---
the \boldmath {$\gamma 1$} \ and \boldmath {$\gamma 2$} \ vertex positions,
the DCA-vertex position, energy, $P_T$, \boldmath {$\zeta$}, \boldmath {M$_{\gamma \gamma}$} , etc. ---
are well described by this hypothesis. The probability of
this fit is $\leq$0.001.
Having determined all the components of the
2-\boldmath {$\gamma $} \ sample, Table~\ref{tab-finalsel} compares
the final predictions with the data. Below
we present a comparison of a set of salient variables between
data in symbols and expectation ---
DS-corrected NC-DIS in red-dotted histogram,
OBG in green-histogram, the
\boldmath {Coh$\pi^0$} \ signal in blue-coarsely-hatched histogram,
and the total expectation (MC) in black histogram.
Figure~\ref{fig-epi} and Figure~\ref{fig-ptpi}
compare the
$E_{\gamma \gamma}$, defined as $E_{\gamma 1}+E_{\gamma 2}$,
and $P_{T{\gamma \gamma}}$ distributions.
Figure~\ref{fig-mpi0} compares the
invariant mass distribution computed using the \boldmath {$\gamma 1$} \ and \boldmath {$\gamma 2$} \ vectors.
Figure~\ref{fig-zeta1} and Figure~\ref{fig-zeta2}
compare the \boldmath {$\zeta_{\gamma 1}$}\ and \boldmath {$\zeta_{\gamma 2}$}\ distributions;
and Figure~\ref{fig-theta12} compares the \boldmath {$\Theta_{1 2}$}\ distribution.
The agreement between data and MC for the variables
is satisfactory.
For illustration, in Figure~\ref{fig-mgg-bkg}
we present the comparison
of the \boldmath {M$_{\gamma \gamma}$} \ distribution between data and the best
fitted (OBG+NC-DIS) prediction with \boldmath {Coh$\pi^0$} \ set to zero:
here the Data-vs-MC $\chi^2$ increases by 12 units compared to
the Figure~\ref{fig-mpi0}.
\clearpage \newpage
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]
{2v0s_page07_epi.eps}
\caption
{Comparison of the $E_{\gamma \gamma}$, defined as
$E_{\gamma 1}+E_{\gamma 2}$, between data (symbol) and
MC (\boldmath {Coh$\pi^0$} \ in hatched blue, OBG in dot-dash green, NC-DIS in
dotted red, total in solid histograms).}
\label{fig-epi}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]
{2v0s_page10_ptpi.eps}
\caption
{Data and MC Comparison of the $P_{T \gamma \gamma}$ Distribution.}
\label{fig-ptpi}
\end{center}
\end{figure}
\clearpage \newpage
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]
{mpi0.eps}
\caption{Data and MC Comparison of the $M_{\gamma \gamma}$ Distribution.}
\label{fig-mpi0}
\end{center}
\end{figure}
\clearpage \newpage
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]
{zeta1.eps}
\caption{Data and MC Comparison of the \boldmath {$\zeta_{\gamma 1}$}\ Distribution.}
\label{fig-zeta1}
\end{center}
\end{figure}
\clearpage \newpage
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]
{zeta2.eps}
\caption{Data and MC Comparison of the \boldmath {$\zeta_{\gamma 2}$}\ Distribution.}
\label{fig-zeta2}
\end{center}
\end{figure}
\clearpage \newpage
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]
{theta12.eps}
\caption{Data and MC Comparison of the \boldmath {$\Theta_{1 2}$}\ Distribution.}
\label{fig-theta12}
\end{center}
\end{figure}
\clearpage \newpage
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]
{mpi0_bkg.eps}
\caption{Comparison of the $M_{\gamma \gamma}$ Distribution between
data and the best fitted (OBG+NC-DIS) with \boldmath {Coh$\pi^0$} \ set to zero. }
\label{fig-mgg-bkg}
\end{center}
\end{figure}
\section{Systematic Uncertainties }
\label{sec-syst}
The principal source of systematic error in the measurement
of the \boldmath {Coh$\pi^0$} \ cross section comes from
the error in determining the NC-DIS induced
contribution to the 2-\boldmath {$\gamma $} \ sample.
The 7.5\% error in the NC-DIS contribution
translates to 7.0\% in the signal.
Since the OBG is entirely determined by the
169 events that fail the DCA-cut, the corresponding uncertainty
in the \boldmath {Coh$\pi^0$} \ signal is computed to be 5.4\%.
The error
in the \boldmath {$\pi^0 $} \ reconstruction efficiency is estimated
to be 2.7\%, as determined using $\gamma$-conversions
from standard DIS interactions.
Finally, the error in the absolute flux determination
is 2.5\%, which comes about as
follows: the error is 2.1\% for $E_\nu \geq 30$ GeV,
2.6\% for $10 \leq E_\nu \leq 30$~GeV, and
4.0\% for $2.5 \leq E_\nu \leq 10$~GeV, as
determined in~\cite{NOMAD-XSEC}; these errors
are folded in with the \boldmath {Coh$\pi^0$} \ cross-section as a
function of $E_\nu$, yielding an overall flux normalization
error of 2.5\%.
These errors are summarized in Table~\ref{tab-errors}.
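Assuming the individual sources are uncorrelated, the total is obtained by adding the four contributions listed in Table~\ref{tab-errors} in quadrature, i.e., $\delta_{tot}^2 = \sum_i \delta_i^2$.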
\begin{table}
\begin{center}
\begin{tabular}{|||c||c|||} \hline
Source & Error \\ \hline
NC-DIS & 7.0\% \\
OBG & 5.4\% \\
\boldmath {$\pi^0 $} \ Reconstruction & 2.7\% \\
Absolute Normalization & 2.5\% \\ \hline
Total & 9.5\% \\ \hline
\end{tabular}
\caption{Systematic Uncertainties in the \boldmath {Coh$\pi^0$} \ Cross Section.}
\label{tab-errors}
\end{center}
\end{table}
\section{Result}
\label{sec-final}
Using the RS model, the \boldmath {Coh$\pi^0$} \ reconstruction efficiency is
estimated to be 2.27\%. This value is the product of the
fraction of \boldmath {Coh$\pi^0$} \ events that trigger the apparatus (29.0\%),
and the reconstruction efficiency (7.8\%).
The $\nu$-sample is dominated by the \nm-interactions.
The \boldmath {Coh$\pi^0$} \ sample is corrected for the small
contribution from other neutrino species to
yield a pure \nm-contribution.
The correction factor to account for the
\anm, \ne, and \ane\ contributions to the \boldmath {Coh$\pi^0$} \
interactions is 0.94. The factor takes into account
the different energy spectra for the different $\nu$-flavors
(we assume that the $\nu$ and
$\bar \nu$ induced \boldmath {Coh$\pi^0$} \ cross sections are the
same). The error in the \boldmath {Coh$\pi^0$} \ cross section
due to this 6\% correction is $\leq 0.6\%$ and
is deemed negligible in this analysis.
Thus the number of \nm-induced \boldmath {Coh$\pi^0$} \ events is
{\boldmath $4630 \pm 522 (stat) \pm 426 (syst)$}.
The number of fully corrected \nm-CC in the same fiducial volume
is measured to be $1.44 \times 10^{6}$. Our result is:
\begin{equation}
\frac {\sigma (\nu {\cal A} \rightarrow \nu {\cal A} \pi^0)}
{\sigma (\nu_\mu {\cal A} \rightarrow \mu^- X)} =
\left [ 3.21 \pm 0.36(stat) \pm 0.29(syst) \right ] \times 10^{-3}
\label{eq-ccrat}
\end{equation}
Using the measured inclusive \nm-CC cross-section
from ~\cite{NOMAD-XSEC} as a function of $E_\nu$,
the absolute cross section of \boldmath {Coh$\pi^0$} \ production
for ${\cal A}=12.8$
at the average energy of the neutrino flux $E_\nu = 24.8$~GeV
is determined to be:
\begin{equation}
\sigma (\nu {\cal A} \rightarrow \nu {\cal A} \pi^0) =
\left [ 72.6 \pm 8.1(stat) \pm 6.9(syst) \right ] \times 10^{-40}
cm^2/{nucleus}
\label{eq-sigcohp}
\end{equation}
The measurement agrees with the RS prediction of
$\simeq (78 \times 10^{-40}) cm^2/nucleus $ using
${\cal A}=12.8$ and the CERN-SPS flux.
A comparison of the NOMAD measurement of
the \boldmath {Coh$\pi^0$} \ with other published measurements is summarized
in Table~\ref{tab-cohp-expt-sum}.
To summarize, we have presented an analysis of
the \boldmath {Coh$\pi^0$} \ interaction in the \nm-NC using the
two reconstructed photons in the final state.
This is the most precise measurement of the \boldmath {Coh$\pi^0$} \ process.
\begin{table}
\begin{tabular}{|||c||c|c||c|c|||} \hline
Experiment & ${\cal N}$ucleus & Avg-$E_\nu$ & $\sigma (Coh \pi^0)$ & \boldmath {Coh$\pi^0$} /\nm-CC \\
& & GeV & $10^{-40} cm^2/{\cal N}ucleus$ & $10^{-3}$ \\ \hline
Aachen-Padova ~\cite{EXAP} & 27 & 2 & $(29 \pm 10)$ & \\
Gargamelle ~\cite{EXGGM} & 30 & 2 & $(31 \pm 20)$ & \\
CHARM ~\cite{EXCHARM} & 20 & 30 & $(96 \pm 42)$ & \\
SKAT ~\cite{EXSKAT} & 30 & 7 & $(79 \pm 28)$ &
$(4.3 \pm 1.5)$ \\
15' BC ~\cite{EX15FT} & 20 & 20 & & $(0.20\pm0.04)$ \\
NOMAD & 12.8 & 24.8 & $(72.6\pm 10.6)$ &
$(3.21\pm 0.46)$ \\ \hline \hline
\end{tabular}
\caption{Compilation of \boldmath {Coh$\pi^0$} \ Measurements:
We point out that Ref.~\cite{Kopeliovich:1992ym} cites a value of
$(2.0 \pm 0.4) \times 10^{-3}$ for \boldmath {Coh$\pi^0$} /\nm-CC as attributed to
~\cite{EX15FT}.
}
\label{tab-cohp-expt-sum}
\end{table}
\section*{Acknowledgments}
We gratefully acknowledge the CERN SPS staff for the magnificent
performance of the neutrino beam. The experiment was supported
by the following agencies:
ARC and DIISR of Australia; IN2P3 and CEA of France, BMBF of
Germany, INFN of Italy, JINR and INR of Russia, FNSRS of
Switzerland, DOE, NSF, Sloan, and Cottrell Foundations of
USA, and VP Research Office of the University of South Carolina.
|
1,314,259,995,638 | arxiv | \section{Introduction}
The James Webb Space Telescope (JWST) was launched on 25 December 2021, and after a journey of about a month, JWST arrived at its final destination -- the second Lagrange point (L2) of the Sun-Earth system. The L2 point is located $\sim 1.5 \times 10^6$~km away from the Earth, implying it is possible to measure the parallax of JWST from two distant sites on Earth. For example, two sites separated by 100~km will be able to measure a parallax of $\sim 6.88\arcsec$. Therefore, JWST at L2 provides a great opportunity to demonstrate parallax, an important astronomical concept taught in introductory astronomy courses. In this work, we perform near simultaneous observations of JWST from two sites, and demonstrate that it is possible to measure the parallax, and hence the distance from Earth, of JWST. This will be of great interest for educational purposes.
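For two sites separated by a baseline $b$ observing a target at distance $D$, the expected parallax is $p \simeq (b/2)/D$ (in radians); with $b = 100$~km and $D = 1.5 \times 10^6$~km this gives $p \simeq (50 / 1.5\times10^{6}) \times 206265\arcsec \simeq 6.9\arcsec$, consistent with the value quoted above.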
\section{The Near Simultaneous Observations and the Distance of JWST}
\begin{figure*}
\epsscale{1.2}
\plotone{img.eps}
\caption{Near simultaneous images taken from GIT (left panel) and LOT (right panel). Locations of JWST on these images, based on the information taken from \url{https://theskylive.com/jwst-info}, are marked. The yellow circles on both images represent a randomly chosen reference star to guide the eyes for the relative position of JWST on these images. Both images were reduced in a standard manner (bias and dark subtracted, and flat-fielded). Astrometric refinement was done using the {\tt astrometry.net} package \citep{lang2010}. The measured JWST coordinates are 07h26m40.0s, 09d57m04.6s for GIT and 07h26m03.0s, 09d59m30.0s for LOT.} \label{fig}
\end{figure*}
A coordinated observation was carried out on 08 February 2022 using the Lulin One-meter Telescope (LOT, located in Taiwan; $120^\circ 52\arcmin 25\arcsec E,\ 23^\circ 28\arcmin 07\arcsec N$) and the 0.7-m GROWTH India Telescope (GIT, located in India; $78^\circ 57\arcmin 55.1\arcsec E,\ 32^\circ 46\arcmin 44.1\arcsec N$). A sequence of 17 images was taken from both telescopes starting at UT 14:52, and we identified the pair of images that were closest in time (UT 15:05:38 and 15:05:22 for GIT and LOT, respectively). JWST was clearly detected in both images, as shown in Figure \ref{fig}. We measured the angular separation of JWST between the two images, which was found to be $\sim 566.0\arcsec$, corresponding to a parallax of $\sim 283.0\arcsec$. Given that the distance between GIT and LOT is $\sim 4214.17$~km, our measured parallax of JWST translates to an approximate distance of $\sim 1.5358 \times 10^6$~km.
We refine this calculation by correcting for two factors. First, the straight-line distance between GIT and LOT is shorter than the distance along the surface. We use the \texttt{EarthLocation} feature in astropy to define the two observatory locations, and find that the direct distance between them is 4142~km. Second, the line joining the observatories was not exactly perpendicular to the line of sight to JWST, but made an angle of 79\degr.6 with it.
With these values, the corrected distance is $1.4849\times 10^6$~km. Our astrometric uncertainty of about 0\arcsec.08 yields a distance uncertainty of about 200~km.
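For reference, the corrected value can be recovered from the quoted quantities: with the direct baseline $b = 4142$~km inclined by 79\degr.6 to the line of sight and the measured parallax $p = 283.0\arcsec$ ($1.372 \times 10^{-3}$~rad), one obtains $D = (b/2)\sin(79.6^{\circ})/\tan p \simeq 1.48 \times 10^{6}$~km, in agreement with the value quoted above.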
\section{Conclusion}
In this work, we demonstrated that the parallax of JWST can be measured from two distant sites on Earth, which can be of great interest for teaching the concept of parallax. Some education-friendly animations are available at \url{https://sites.google.com/view/growthindia/outreach/spotting-jwst}. The pair of images is also available at the same URL and can be used for various educational purposes (for example, measuring the position and parallax of JWST, calculating its distance, etc.).
\acknowledgments
We thank the observing staff at Lulin Observatory, C.-S. Lin, H.-Y. Hsiao, and W.-J. Hou, to carry out the requested observations. This publication has made use of data collected at Lulin Observatory, partly supported by MoST grant 109-2112-M-008-001. The GROWTH India Telescope (GIT) is a 70-cm telescope with a 0.7-degree field of view, set up by the Indian Institute of Astrophysics (IIA) and the Indian Institute of Technology Bombay (IITB) with funding from DST-SERB and IUSSTF. It is located at the Indian Astronomical Observatory (Hanle), operated by IIA. We acknowledge funding by the IITB alumni batch of 1994, which partially supports operations of the telescope.
This research made use of Astropy,\footnote{\url{http://www.astropy.org}} a community-developed core Python package for Astronomy \citep{astropy2013, astropy2018}.
\facility{LO:1m}
\software{{\tt astropy} \citep{astropy2013,astropy2018}, {\tt astrometry.net} \citep{lang2010}}
|
1,314,259,995,639 | arxiv | \section{Introduction}
In recent years, the utilization of FPGA accelerator cards has become more common thanks to the availability of integrated tool flows based upon high level synthesis (HLS),
which allow hardware to be generated starting from a code listing, easing the development and testing of designs. Even if several specific programming patterns \cite{Intel:FPGA:programmmingguide}\cite{Intel:FPGA:bestpracticeguide}\cite{de2018transformations} are available for writing codes that turn into efficient HLS designs, none of them can prevent a problem that shows up when many FPGA logic resources are used: a low maximum frequency. This limits the attainable performance, since the maximum frequency directly affects the throughput of memory and floating-point operations. A low maximum frequency is mostly dictated by critical paths, i.e., interconnections between logic resources that have large delays in signal propagation. These delays are due to congestion of the routing fabric, which can be a consequence of poor placement and routing of FPGA logic resources.
\if0
{
High performance computing (HPC) needs devices that are able to provide high-throughput floating-point operations. CPUs and GPUs achieve this goal by using multiple threads working concurrently on different cores/streaming multiprocessors. Nowadays, each core/streaming multiprocessor features units that are capable of increasing even more the throughput of specific vector/tensor floating-point operations.
The configuration capability of FPGAs can have a huge potential in the creation of this kind of units,
that can not only target single vector/tensor operations but also more specialized tasks.
However, floating-point throughput is not enough in order to accelerate codes for HPC.
Having an adequate memory throughput between different memory systems of a device is fundamental in order to sustain high floating-point performance. Main memory is a bottleneck for all devices: CPUs, GPUs and FPGAs.
CPUs and GPUs overcome this bottleneck with different levels of caches. These caches allow to store data coming from a slower source (i.e., the main memory or a slower cache) and reuse them with a higher throughput.
FPGAs provide configurable on-chip memories that can implement cache systems that the designer can tailor to a specific task.
The FPGA, once freed form the routing congestion, can provide a new paradigm for achieving high-throughput by means of complex pipelines that achieve a high degree of concurrency at the level of the FPGA fabric, overcoming the multithreading paradigm.
}
\fi
In order to fix this issue, it is fundamental to investigate algorithms that do not create routing congestion once implemented with HLS.
The past provides interesting examples of \emph{architecture-aware} algorithms, such as the ones for systolic array architectures,
which solve a broad range of problems based on matrix computations \cite{kung1982systolic}\cite{lee1990mapping}.
Several papers implement bi-dimensional systolic array architectures for matrix multiplication on FPGAs \cite{wang2021autosa}\cite{moss2018customizable}\cite{fblas}.
In this paper, we investigate a new connection scheme between the processing elements in order to go beyond their bi-dimensional floorplanning, lowering the granularity of the processing elements while increasing the total resource utilization.
In this regard, it is fundamental to investigate algorithms that, properly implemented in HLS, can produce efficient interconnections (i.e., that do not create critical paths) between the most important logic resources: the DSPs for floating-point arithmetic, the on-chip memories, and the global memory controllers for data provisioning.
This led us to the formulation of a three-dimensional systolic array architecture for matrix multiplication.
The concept of three-dimensional systolic array architectures was already developed to target three-dimensional fabrication processes \cite{lakhani19962d}\cite{linderman1984three}.
Recently, Kung et al. \cite{kung2018mapping} proposed a three-dimensional mapping of systolic arrays into the 2.5-dimensional Xilinx FPGA architecture.
In our investigation, the third dimension is more conceptual than physical. It is a parameter for controlling the data throughput between processing elements.
\if 0
This paper is structured as follows:
Section 2 introduces the characteristics of the hardware platform important for understanding the design choices.
Section 3 defines and implements a three-dimensional systolic array architecture for on-chip matrix multiplication. Its HLS implementation, reported in Listing \ref{lst:systo}, is compact (\(\approx\) 15 lines) and fully configurable, allowing its easy integration within more complex designs.
In the sake of testing this systolic array architecture, Section 4 evaluates an algorithm that integrates it in a off-chip matrix multiplication design, that is able to circumvent the off-chip memory bottlenecks by means of data reuse.
Section 5 describes the hardware implementation of the aforementioned design.
Section 6 evaluates its performance results comparing them with the ones obtained by the best design available for Stratix 10 FPGAs.
\fi
\section{Tool flow and Hardware description}
\if0
FPGA stands for field-programmable gate array and identifies integrated devices made of different \emph{blocks} that can be wired together via re-configurable interconnections. These blocks feature different functionalities: memory blocks (M20K), digital signal processors blocks (DSP), logic array blocks (LAB).
Intel Stratix 10 FPGA architecture has the right characteristics for providing high-throughput performance: DSP blocks with native single-precision floating-point support and a routing fabric that includes registers that allow to improve the maximum frequency attainable.
\fi
In this paper, we consider the \emph{Intel FPGA SDK for OpenCL} tool flow, that lets us integrate OpenCL kernels within the FPGA.
This process is made of distinct automated phases:
\if 0
First, the OpenCL kernel written in a programming language (C99) is translated into a hardware description language (HDL).
Then, Intel Quartus Prime synthesizes and integrates the aforementioned HDL within an FPGA design.
This design is made of the user generated kernel logic and the board support package (BSP) provided by the FPGA accelerator card vendor. The BSP manages the kernel logic within the FPGA accelerator card, e.g., start the kernel execution, manage the data transfer between the host and accelerator, connect the kernel logic with the main memory (DDR). The BSP occupies a predefined set of FPGA logic resources that cannot be exploited by the kernel logic
Different phases within Intel Quartus Prime turn the FPGA design into a ready-to-use bitstream that can be loaded in the FPGA.
\fi
the \emph{synthesis}, which translates the kernel code into logic resources; the \emph{fitter}, which places and routes these logic resources into specific FPGA blocks honoring the timing constraints; and the \emph{timing analysis}, which validates the timing performance, establishing the maximum frequency (\(f_{max}\)) of the design and determining \emph{de facto} its performance.
The only way users can achieve high performance is by writing kernel codes that translate into good designs in these phases.
The Intel HLS tool aims to create pipelined logic circuits (a.k.a. pipelines) starting from loops within the code.
The instructions within the \emph{loop body} are turned into a logic circuit that performs the original operations in a time, measured in clock cycles, that we define as the loop-body latency (\(l_{body}\)). The iterations of the loop are executed in pipeline, i.e., in each clock cycle, different loop iterations are executed concurrently by different stages of the logic circuit.
The main design goal is creating pipelines that can start the execution of a new iteration in each clock cycle, i.e., having an initiation interval (II) equal to one. This is very important since the total latency taken by a loop executed in pipeline is \[l_{tot} = l_{body} + II\ \#it \quad [ \text{cycles} ] \] where \(\#it\) is the number of loop iterations.
In this regard, the HLS tool provides pragmas and reports that let the user adjust the code for the sake of achieving \(II=1\).
In case of an ideal pipeline, in which \(II=1\) and \(\#it >> l_{body}\), the throughput of \(op\)-operations (e.g., floating-point operations) measured in \(op\)-per-second is
\begin{equation} \label{eq:thr}
T_{op} = \mathcal{T}_{op}\ f_{max} \quad {\small \Big[ \tfrac{op}{s}} \Big]
\end{equation}
where \(\mathcal{T}_{op}\) is the \(op\) throughput measured in \([ op / \text{cycle}]\), i.e., the number of \(op\)-operations started in each clock cycle. The last equation shows that the throughput of a given operation is linearly dependent on \(\mathcal{T}_{op}\) and \(f_{max}\).
In case of an ideal pipeline, \(\mathcal{T}_{op}\) is equal to the number of \(op\)-operations present in the loop body, which reflects directly in FPGA resource utilization, not only in terms of blocks but also in terms of required routing fabric, e.g., a floating-point multiplication uses a DSP block and all the wires needed to carry the operands, the result, and other required control signals.
Unfortunately, \(\mathcal{T}_{op}\) and \(f_{max}\) conflict: increasing \(\mathcal{T}_{op}\) increases wire usage, which can cause routing congestion, which in turn creates critical paths that lower \(f_{max}\).
\if 0
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Stratix 10 GX 2800 resources in Bittware 520N accelerator card. BSP based upon Quartus 19.4.0 Build 64 Pro.}
\label{tab:GX2800-Resources}
\centering
\begin{tabular}{cccc}
\hline
\bfseries & Total & BSP & Available \\
\hline\hline
DSP & 5,760 & 1,047 & 4,713 \\
M20K & 11,721 & 2,627 & 9,094 \\
LAB & 93,312 & 22,762 & 70,550 \\
MLAB & & & \(\approx\) 17,637 \\
\hline
\end{tabular}
\end{table}
\fi
\subsection{Global Memory}\label{sec:globalmem}
The term \emph{global memory} refers to the main memory within the accelerator card, outside the FPGA. The host system manages the global memory by means of OpenCL API function calls, that allow the user to allocate, transfer, delete buffers within it.
In this paper, we consider a Bittware 520N accelerator card, which has four 8~GByte~DDR4@2400MT/s memory modules, each of them can provide a peak theoretical throughput of \[B_{ddr} = 19200\ \text{MB/s}\] for a total of \(76800\ \text{MB/s}\).
Each memory module is connected to the FPGA via a dedicated memory controller.
The HLS tool turns the global memory pointers accessed within the kernel code into load-or-store units (LSU) that can read or write a fixed number of bytes per clock cycle depending on the pointed data, e.g., reading or writing a location of a single-precision floating-point array produces a 4-byte LSU. It must be noted, that the HLS tool is only able to create LSUs having a size of power-of-two bytes, e.g. reading or writing three sequential values of a single-precision floating-point array produces a 16-byte LSU.
Global memory accesses can create stalls. A stall is introduced within the pipeline if the memory controller is not able to cope with the transmission rate requested by LSUs, i.e.
\begin{equation}\label{eq:stt}
\mathcal{B}_r\ f_{max} > e\ B_{ddr}
\end{equation}
where \(\mathcal{B}_r\) is the throughput of data requests in \({\small [ \text{bytes} / \text{cycle} ]}\)
and \(e\) is the memory controller efficiency, which is close to \(1\) in case of sequential aligned read-or-write-only accesses \cite{Intel:FPGA:emif}. This kind of access produces aligned burst-coalesced LSUs, which are the best suited for Stratix 10 FPGAs \cite{Intel:FPGA:bestpracticeguide}.
If a stall is present (i.e., \eqref{eq:stt} holds true), the stall rate is evaluated as
\begin{equation*}
stall = 1 - \frac{e\ B_{ddr}}{\mathcal{B}_r\ f_{max}}
\end{equation*}
which corresponds to the fraction of requests that cannot be fulfilled by a memory controller.
A stall does not allow the pipeline to run with \(II=1\) even if the HLS tool is able to generate it.
In case of stalls, the throughput of op-operations within the loop body \eqref{eq:thr} is reformulated as
\begin{equation}\label{eq:stallT}
T_{op} = (\ 1-stall\ )\ \mathcal{T}_{op}\ f_{max} \quad {\small \Big[ \tfrac{op}{s}} \Big]
\end{equation}
This shows the importance of avoiding stalls, since they decrease linearly the throughput of the operations present in the loop body.
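As an illustrative example (with assumed clock frequencies), consider a 64-byte burst-coalesced LSU: at \(f_{max} = 300\)~MHz it requests \(64 \times 300\cdot10^{6} = 19200\)~MB/s, which one memory controller can just sustain, whereas at \(f_{max} = 400\)~MHz the same LSU would request \(25600\)~MB/s; with \(e \simeq 1\), this results in a stall rate of \(1 - 19200/25600 = 0.25\), i.e., a 25\% loss of throughput according to \eqref{eq:stallT}.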
In summary, considering that LSUs have a power-of-two byte size and that the memory controller efficiency (\(e\)) for aligned burst-coalesced accesses is close to \(1\), depending on \(f_{max}\), a global memory LSU can at most request
\begin{equation}\label{eq:globalthr}
\small
\mathcal{B}_{ddr} =
\begin{cases}
64\ {\small \tfrac{\text{bytes}}{\text{cycle}}} = 16 & {\small \tfrac{\text{sp-floats}}{\text{cycle}},\ 150\ \text{MHz} < f_{max} \leq 300\ \text{MHz}} \\
32\ {\small \tfrac{\text{bytes}}{\text{cycle}}} = 8 & {\small \tfrac{\text{sp-floats}}{\text{cycle}},\ 300\ \text{MHz} < f_{max} \leq 600\ \text{MHz}}
\end{cases}
\end{equation}
to a memory controller without creating stalls. For convenience, in the following sections, the data throughput (\(\mathcal{B}\)) is expressed in terms of single-precision floating-point values transferred per clock cycle, i.e., \([ \text{sp-floats} / \text{cycle} ]\).
\subsection{Floating-Point Operations}\label{sec:float}
The Stratix 10 architecture features Variable Precision DSP blocks \cite{Intel:S10:dspguide} that can be configured to perform operations on different data types. Most notably, these DSPs can execute single-precision floating-point operations natively. A DSP block can perform different kinds of operations, such as multiplications, additions, or fused multiply–adds. In the latter configuration, a DSP block is able to perform two floating-point operations per clock cycle. So, the maximum floating-point throughput of a design using \(\#DSP\) blocks in fused multiply–add configuration is
\begin{equation}\label{eq:tpeak}
T_{peak} = 2\ \#DSP \ f_{max} \quad {\small [\text{FLOPS}]}
\end{equation}
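For instance, a design exploiting \(\#DSP = 4704\) blocks (the maximum used by the designs discussed later) at an assumed \(f_{max} = 300\)~MHz would reach \(T_{peak} = 2 \times 4704 \times 300\cdot10^{6} \approx 2.8\)~TFLOPS.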
Variable Precision DSP blocks can also accumulate values produced in successive iterations within an internal register. Unfortunately, it is not possible to exploit this capability in pipelines with \(II=1\).
Multiple DSP blocks can be chained together in order to perform floating-point operations involving more operands, such as a dot product.
The HLS tool is able to recognize a dot product computation and translate it into a \emph{dot product unit},
which adds a scalar \(z\) to the dot product of two vectors \(\{\ v_i,\ w_i\ |\ 0\leq i < d_p\ \}\), i.e.,
\begin{equation}
r\ =\ z\ +\ \sum_{i = 0}^{d_p-1} v_i w_i
\end{equation}
Each dot product unit embeds \(d_p\) DSP blocks.
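As an illustration, a coding pattern of the following form (a minimal sketch, not taken from the actual kernel; \verb|DP_SIZE|, \verb|v|, \verb|w| and \verb|z| are placeholder names) is typically mapped by the HLS tool, once the loop is fully unrolled, onto a chain of \(d_p\) DSP blocks forming one dot product unit:
\begin{lstlisting}[style=customc]
// Sketch of a fully unrolled multiply-accumulate loop that the HLS tool
// can map to a single dot product unit of size DP_SIZE.
float r = z;
#pragma unroll
for(int i = 0; i < DP_SIZE; ++i)
    r += v[i] * w[i];
\end{lstlisting}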
The peak floating-point throughput of a dot product unit in pipeline is
\begin{equation}
\mathcal{T}_{flop} = 2\ d_p \quad {\small \Big[ \tfrac{\text{FLOP}}{\text{cycle}} \Big]}
\end{equation}
which needs to be sustained by the input-data throughput for reading \(z\) and \(d_p\) values of \(v\) and \(w\)
\begin{equation} \label{eq:BIN}
\mathcal{B}_{in} = 2\ d_p\ + 1 \quad {\small \Big[ \tfrac{\text{sp-floats}}{\text{cycle}} \Big]}
\end{equation}
\if 0
and the output-data throughput for writing \(r\)
\begin{equation}
\mathcal{B}_{out} = 1\ \quad {\small \Big[ \tfrac{\text{sp-floats}}{\text{cycle}} \Big]}
\end{equation}
These
\fi
This data throughput needs to be constantly satisfied in order to avoid stalls that can decrease the floating-point throughput, as seen in \eqref{eq:stallT}.
Considering \eqref{eq:globalthr},
the floating-point throughput sustainable using only the global memory is extremely low, around ten GFLOPS, since the available data throughput can feed just a few DSPs without stalls.
The on-chip memory is necessary for exploiting a large number of DSPs.
\if 0
\begin{table}[!t]
\renewcommand{\arraystretch}{1.2}
\caption{Single precision floating-point dot product latency}
\label{tab:dot-prod}
\centering
\begin{tabular}{lcccc}
\hline\hline
dot product unit size (\(d_p\)) & 1 & 2 & 4 & 8 \\
\hline
latency (\(l_{\mathbf{dot}d_p}\)) [clocks] & 6 & 8 & 11 & 15 \\
\hline\hline
\end{tabular}
\end{table}
\fi
\subsection{Local Memory}
The term \emph{local memory} refers to the memory stored on chip within M20Ks or MLABs and is usually generated by arrays declared within the kernel code.
These on-chip memories can implement two kinds of memory systems: \emph{FIFO}, which allows data to be enqueued and dequeued, and \emph{mapped}, which lets data be accessed randomly by address.
Mapped memory systems feature LSUs similar to the global memory ones. However, for local memory, it is possible to avoid stalls by means of \emph{memory partitioning}, i.e., the user can constrain array portions to be allocated in specific parts of the memory system, each having its own independent LSU; these LSUs can work concurrently in order to provide the required data throughput without stalls. The possibility of having many small partitions (i.e., made of just a few M20K/MLAB blocks) is a key aspect, since it allows the fine-grain distribution of the data throughput throughout the FPGA, close to the blocks that need it, in our case the DSPs.
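For illustration, a local array can be split into independent banks with the partitioning attributes offered by the tool; the following sketch (with placeholder sizes) is analogous to what is done for the arrays \texttt{A} and \texttt{B} in Section \ref{sec:systocode}:
\begin{lstlisting}[style=customc]
// Sketch (placeholder sizes): the banks follow the innermost dimension,
// so accesses to different elements of that dimension can be served
// concurrently by independent load/store units.
float buf[256][16] __attribute__((numbanks(16)));
\end{lstlisting}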
\if 0
Another source of on-chip memory are registers, in particular the Hyper-Registers featured by Intel Stratix 10 FPGA architecture. These registers are present in high number in the routing fabric. They can store data between the loop-body instructions introducing one-clock-cycle delay.
Register can be inserted using the \verb|__fpga_reg()| function, which adds one register between its argument and its return values. This function does not belong to the OpenCL kernel language (C99), it is provided by Intel for the specific purpose of inserting registers in order to break the critical paths improving the \(f_{max}\).
This is a key aspect in our investigation, since we use chains of registers for transmitting data from block memory to the DSP blocks.
\fi
\section{Systolic Array Architecture for Matrix Multiplication}
Systolic array architectures are made of a grid of processing elements (PE).
Each PE exchanges data with its neighbouring PEs without a global control logic. A systolic array architecture performs computations by virtue of the operations that each PE applies to the data passing through it. One simple application of a systolic array architecture is matrix multiplication.
\subsection{Classical Systolic Array}
\if 0
\begin{figure}
\centerline{\input{figs/fig4.eps_tex}}
\caption{Connection scheme of a bi-dimensional Cartesian grid of PEs for Okuda-Song systolic array architecture. The \(A\) and \(B\) elements pass through the \(j\) and \(i\) directions of the grid. The result \(c_{ij}\) is accumulated inside the PE.}
\label{fig:2dPE}
\end{figure}
\fi
A classical example of a systolic array architecture for matrix multiplication has been proposed by Okuda-Song \cite{song1994systolic}; it is organized as a bi-dimensional grid of \(d^0_i \times d^0_j\) PEs. During the computation, PE\(_{ij}\) receives and forwards all the elements of the \(i\)-th row of \(A\) and the \(j\)-th column of \(B\), multiply-accumulating them in order to compute \(c_{ij} \in C\).
This kind of systolic array architecture can be defined as follows.
\begin{definition}[Classical Systolic Array Matrix Multiplication] \label{pr:classsystommm}
Given \(A \in (d^0_i \times K)\) and \(B \in (K \times d^0_j)\), \( A B = C \in (d^0_i \times d^0_j)\)
can be computed in pipeline by a bi-dimensional Cartesian grid of \(d^0_i \times d^0_j\) multiply-accumulate units with a total latency of
\begin{equation*}
l_{tot} = d^0_i + d^0_j + K - 1 + l_{\mathbf{MAC}} \quad [\text{cycles}]
\end{equation*}
The input matrices enter the grid by two of its edges,
the \(A\) values enter in \(\{\ \text{PE}_{i0}\ |\ 0 \leq i < d^0_i\ \}\),
whereas the \(B\) values enter in \(\{\ \text{PE}_{0j}\ |\ 0 \leq j < d^0_j\ \}\).
\if0
The values are input in a skewed way, i.e.,
\(a_{ik} \in A\) enters in PE\(_{i0}\) at time
\begin{equation*}
t_{a_{ik}} = i + k \quad [\text{cycle}
\end{equation*}
whereas \(b_{kj} \in B\) enters in PE\(_{0j}\) at time
\begin{equation*}
t_{b_{kj}} = j + k \quad [\text{cycle}]
\end{equation*}
At \(t = i + j + k\), PE\(_{ij}\) receives \(a_{ik} \in A\) from PE\(_{i(j-1)}\) and \(b_{kj} \in B\) from PE\(_{(i-1)j}\), then it starts to multiply-accumulate them with the partial result stored internally. At the same time, PE\(_{ij}\) sends \(a_{i(k-1)} \in A\) and \(b_{(k-1)j} \in B\), received in the previous clock, to PE\(_{i(j+1)}\) and PE\(_{(i+1)j}\). At the end of the computation, PE\(_{ij}\) contains \(c_{ij} \in C\).
\fi
The floating-point throughput of this systolic array architecture is
\begin{equation*}
\mathcal{T}_{flop} = 2\ d^0_i\ d^0_j \quad {\small \Big[ \tfrac{\text{FLOP}}{\text{cycle}} \Big]}
\end{equation*}
whereas the data throughput of \(A\) and \(B\) values entering the grid is
\begin{equation*}
\mathcal{B}_{A} = d^0_i \quad
\mathcal{B}_{B} = d^0_j \quad {\small \Big[\tfrac{\text{sp-floats}}{\text{cycle}}\Big] }
\end{equation*}
\end{definition}
\subsection{Proposed Systolic Array}
\if 0
\begin{figure}
\centerline{\input{figs/fig1.eps_tex}}
\caption{Connection scheme of a three-dimensional Cartesian grid of PEs for the proposed systolic array architecture. \(d_p\) values of \(A\) and \(B\) pass through the \(j\) and \(i\) directions of the grid. The partial result \(\bar{c}_{ij}\) passes through the \(L\) direction.}
\label{fig:3dPE}
\end{figure}
\fi
The systolic array architecture investigated in this paper differs from the one defined in the previous section in two respects. First, its PEs are made of a dot product unit (such as in \cite{fblas} \cite{yinger2017customizable} \cite{hagiescu2019bfloat}) performing more floating-point operations than a single multiply-accumulation.
Second, the grid structure is three-dimensional. The time dimension of the classical systolic array is partially projected onto the third dimension.
In this regard, the investigated architecture can be considered as a stack of bi-dimensional layers, as shown in Figure \ref{fig:gridstack}. The value computed by the dot product unit is no longer stationary within a PE but is sent along the third dimension.
\begin{definition}[Investigated Systolic Array Matrix Multiplication] \label{pr:systommm}
Given \(A \in (d^0_i \times K)\) and \(B \in (K \times d^0_j)\), \(A B = C \in (d^0_i \times d^0_j)\)
can be computed in pipeline by a three-dimensional Cartesian grid of \(d^0_i \times d^0_j \times \frac{d^0_k}{d_p}\) dot-product units of size \(d_p\) with a total latency of
\begin{equation*}
l_{tot} = d^0_i + d^0_j + \frac{K}{d^0_k} - 1 + \frac{d^0_k}{d_p}\ l_{\mathbf{dot}d_p} \quad [\text{cycles}]
\end{equation*}
The matrices enter the grid by two of its faces,
the \(A\) values enter in
\(\{\ \text{PE}_{i0L}\ |\ 0 \leq i < d^0_i,\ 0 \leq L < \tfrac{d^0_k}{d_p}\ \}\),
the \(B\) values in
\(\{\ \text{PE}_{0jL}\ |\ 0 \leq j < d^0_j,\ 0 \leq L < \tfrac{d^0_k}{d_p}\ \}\).
\if0
Let us define
\begin{equation}
\lambda(k) = \Big(\ \lfloor\frac{k}{d_p}\rfloor \bmod \frac{d^0_k}{d_p} \ \Big)\quad \forall\quad 0 \leq k < K
\end{equation}
as the map that for each matrix coordinate \(K\) assigns a layer within the third dimension.
The values enter the grid in a skew way, i.e.,
\(a_{ik} \in A\) enters in PE\(_{i0\lambda(k)}\) at time
\begin{equation}\label{eq:tdatain_a}
t_{a_{ik}} = i + \lfloor\frac{k}{d^0_k}\rfloor + \lambda(k)\ l_{\mathbf{dot}d_p} \quad [\text{cycle}]
\end{equation}
\(b_{kj} \in B\) enters in PE\(_{0j\lambda(k)}\)
\begin{equation}\label{eq:tdatain_b}
t_{b_{kj}} = j + \lfloor{\frac{k}{d^0_k}}\rfloor + \lambda(k)\ l_{\mathbf{dot}d_p} \quad [\text{cycle}]
\end{equation}
Consider \(0 \leq \mathcal{K}< \tfrac{K}{d^0_k} \). At
\(t = i + j + \mathcal{K} + L\ l_{\mathbf{dot}d_p}\)
, PE\(_{ijL}\) receives
\begin{equation}\label{eq:tcomp_a}
\{a_i\}_t\ =\ \{\ a_{ik}\in A \ |\ (\mathcal{K}\ d^0_k + L\ d_p) \leq k < (\mathcal{K}\ d^0_k + L\ (d_p+1))\ \}
\end{equation}
from PE\(_{i(j-1)L}\) and
\begin{equation}\label{eq:tcomp_b}
\{b_j\}_t\ =\ \{\ b_{kj}\in B \ |\ (\mathcal{K}\ d^0_k + L\ d_p) \leq k < (\mathcal{K}\ d^0_k + L\ (d_p+1))\ \}
\end{equation}
from PE\(_{(i-1)jL}\), then it starts a dot product including the partial result already arrived from PE\(_{ij(L-1)}\).
At the same time, PE\(_{ijL}\) sends the \(d_p\) elements of \(A\) and \(B\) received in the previous clock to PE\(_{i(j+1)L}\) and PE\(_{(i+1)jL}\). After \(l_{\mathbf{dot}d_p}\) clocks, PE\(_{ijL}\)sends its partial result to PE\(_{ij(L+1)}\).
\fi
\begin{figure}
\centerline{\scalebox{.7}{\input{figs/fig3.eps_tex}}}
\caption{Three-dimensional systolic array architecture made of 9 PEs distributed on three \(3\times3\) layers. The diagonal dashed lines represent the activation times of the intersected PEs.}
\label{fig:gridstack}
\end{figure}
The floating-point throughput is
\begin{equation}\label{eq:3dfloatT}
\mathcal{T}_{flop} = 2\ d^0_i\ d^0_j\ d^0_k \quad {\small \Big[ \tfrac{\text{FLOP}}{\text{cycle}} \Big]}
\end{equation}
whereas the data throughput of \(A\) and \(B\) values entering the grid is
\begin{equation}
\label{eq:3din} \mathcal{B}_{A} = d^0_i\ d^0_k \quad
\mathcal{B}_{B} = d^0_k\ d^0_j \quad {\small \Big[\tfrac{\text{sp-floats}}{\text{cycle}}\Big] }
\end{equation}
{For the rest of the paper, superscript \(0\) denotes systolic array architecture sizes.}
\end{definition}
It is important to note that \(d^0_k\) linearly affects the throughput of floating-point operations and data. This third dimension can be considered a useful parameter in design space exploration.
\subsection{HLS implementation} \label{sec:systocode}
Listings \ref{lst:dotpfunc} and \ref{lst:systo} show a possible HLS implementation of the three-dimensional systolic array architecture in Definition \ref{pr:systommm}, where
\(\texttt{dim0\_i} = d^0_i\),
\(\texttt{dim0\_j} = d^0_j\),
\(\texttt{dim0\_k} = d^0_k\),
\(\texttt{DP\_SIZE} = d_p \),
and \(\texttt{K} = K\).
The arrays \texttt{A} and \texttt{B} defined in Listing \ref{lst:dotpfunc} contain \(A \in (d^0_i \times K)\) and \(B \in (K \times d^0_j)\) distributed in \(\frac{K}{d^0_k}\) blocks of size \((d^0_i \times d^0_k)\) and \((d^0_k \times d^0_j)\), i.e.,
\begin{equation*}
\begin{cases}
\ \texttt{A[T][}i\texttt{][}k\texttt{]} &=\ A_{i\mathbf{k}} \quad \forall\ 0 \leq i < d^0_i \\
\ \texttt{B[T][}k\texttt{][}j\texttt{]} &=\ B_{\mathbf{k}j} \quad \forall\ 0 \leq j < d^0_j
\end{cases}
\end{equation*}
such that \(\mathbf{k}\ =\ d^0_k\ \texttt{T} + k \quad \forall\ 0 \leq \texttt{T} < \frac{K}{d^0_k} \text{,}\ 0 \leq k < d^0_k\).
Each iteration of the loop at line 7 in Listing \ref{lst:dotpfunc} passes a block of \(\texttt{A}\) and a block of \(\texttt{B}\) to the function \texttt{systolic\_mmm} defined in Listing \ref{lst:systo}. This function multiply-accumulates \(\texttt{A[T]} \in ( d^0_i \times d^0_k )\) and \(\texttt{B[T]} \in ( d^0_k \times d^0_j )\) in \(\texttt{C} \in ( d^0_i \times d^0_j )\) for all \( 0 \leq \texttt{T} < {K}/{d^0_k}\). After the execution of the loop in Listing \ref{lst:dotpfunc}, \texttt{C} contains the solution computed as the inner product between \(\texttt{A}\) and \(\texttt{B}\).
\begin{lstlisting}[caption=Implementation of Definition \ref{pr:systommm}., style=customc, label=lst:dotpfunc]
float A[K/dim0_k][dim0_i][dim0_k] __attribute__((numbanks(dim0_i*dim0_k)));
float B[K/dim0_k][dim0_k][dim0_j] __attribute__((numbanks(dim0_k*dim0_j)));
float C[dim0_i][dim0_j];
// filling of A and B ...
for(int T=0; T<K/dim0_k; ++T)
systolic_mmm(C, A[T], B[T]);
\end{lstlisting}
The fact that the loops at Lines 7, 9, and 11 in Listing \ref{lst:systo} are completely unrolled allows Line 16 to allocate
\begin{equation}\label{eq:numdsp}
\#DSP = d^0_i d^0_j d^0_k
\end{equation}
DSP blocks. Each of them performs \(2\) FLOP per clock cycle providing the floating-point throughput in \eqref{eq:3dfloatT}. These DSP blocks are distributed in
\begin{equation}\label{eq:numpe}
\#PE = d^0_i d^0_j \tfrac{d^0_k}{d_p}
\end{equation}
PEs within a \((d^0_i \times d^0_j \times \tfrac{d^0_k}{d_p} )\) Cartesian grid. Each PE is made of a dot product unit of size \(d_p\). This size can be set by defining \verb|DP_SIZE|; otherwise \(d_p\) is equal to \(d^0_k\), forming a single-layer systolic array architecture. In case of multiple layers (i.e., \({d^0_k}/{d_p}>1\)), Line 21 in Listing \ref{lst:systo} transmits the partial solution to the upper layer in the \(L\) direction.
\begin{lstlisting}[caption=Three-dimensional systolic array architecture., style=customc, label=lst:systo]
void systolic_mmm( float C[dim0_i][dim0_j], float A0[dim0_i][dim0_k], float B0[dim0_k][dim0_j] )
{
float A[dim0_i][dim0_j];
float B[dim0_i][dim0_j];
#pragma unroll
for(int k=0; k<(dim0_i+dim0_j+dim0_k-2); ++k)
#pragma unroll
for(int i=dim0_i-1; i>=0; --i)
#pragma unroll
for(int j=dim0_j-1; j>=0; --j)
if((i+j<=k)&&(k<i+j+dim0_k))
{
A[i][j] = (j) ? __fpga_reg(A[i][j-1]) : __fpga_reg(A0[i][k-i]);
B[i][j] = (i) ? __fpga_reg(B[i-1][j]) : __fpga_reg(B0[k-j][j]);
C[i][j] += A[i][j]*B[i][j];
#ifdef DP_SIZE
const char _k = k-i-j;
if( ((_k+1)%DP_SIZE == 0) && (_k+1 != dim0_k) )
C[i][j] = __fpga_reg(C[i][j]);
#endif
}
}
\end{lstlisting}
The unrolling of Line 14 in Listing \ref{lst:systo} at \(\texttt{j==0}\) produces \(d^0_i d^0_k\)
load units that read the values of \(A\). Each load unit is connected to a partition of \(\texttt{A}\) declared at Line 1 in Listing \ref{lst:dotpfunc}. The same applies to \(\texttt{B}\), where the unrolling of Line 15 in Listing \ref{lst:systo} at \(\texttt{i==0}\) produces
\(d^0_j d^0_k\)
load units for \(B\); each load unit is connected to a partition of \(\texttt{B}\) declared at Line 2 in Listing \ref{lst:dotpfunc}.
These load units read one floating-point value in each clock cycle, producing the input data throughputs in \eqref{eq:3din}.
Moreover, Lines 14 and 15 in Listing \ref{lst:systo} propagate the \(A\) and \(B\) values through the PEs in the \(i\) and \(j\) directions by means of the \verb|__fpga_reg()| function, which provides at least one register between a PE and its neighbor, adding a clock-cycle delay in data propagation.
In particular, Line 14 produces \(d^0_i d^0_k\) chains that are \(d^0_j\) registers long. Each chain is fed by one load unit generated by the \(\texttt{A}\) partitions. The same applies to Line 15, where \(d^0_j d^0_k\) chains that are \(d^0_i\) registers long are fed by the load units generated by the \(\texttt{B}\) partitions.
These register chains are very important for two reasons: first, they can break critical paths between PEs; second, they reduce the fan-out of data passing from load units to DSP blocks.
Setting the sizes of the systolic array architecture changes the number and the length of the register chains; this makes it possible to balance the input-data throughput requirements of the dot product units \eqref{eq:BIN} between the different sources. For example, keeping \(\#DSP\) constant while decreasing \(d^0_k\) lowers the \(\mathcal{B}_A\) and \(\mathcal{B}_B\) coming from the block memories and increases the data throughput provided by the registers, since there are fewer but longer register chains.
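Concretely, with \(\#DSP = 4096\), the configuration \((d^0_i, d^0_j, d^0_k) = (32, 32, 4)\) requires \(\mathcal{B}_A = \mathcal{B}_B = 128\) sp-floats/cycle from the block memories, with register chains of length 32 for both \(A\) and \(B\), whereas \((d^0_i, d^0_j, d^0_k) = (64, 32, 2)\) keeps the same number of DSPs but lowers \(\mathcal{B}_B\) to 64 sp-floats/cycle, with the \(B\) values travelling along longer chains of 64 registers.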
Ideally, the loop in Listing \ref{lst:dotpfunc} could produce a pipeline with \(II=1\) and a loop-body latency of
\begin{equation}\label{eq:sytoloopbody}
l_{body} = d^0_i + d^0_j - 1 + \frac{d^0_k}{d_p}\ l_{\mathbf{dot}d_p} \quad [\text{cycles}]
\end{equation}
executing \(\frac{K}{d^0_k}\) iterations.
Unfortunately, this is not the case, since it is not possible to obtain \(II=1\) with the accumulation in successive iterations. Moreover, the aforementioned loop-body latency does not consider global memory accesses; its real value is higher for pipelines reading and writing the global memory. Nevertheless, \eqref{eq:sytoloopbody} influences the loop-body latency of the pipeline in which it is included, allowing the user to interact with the HLS tool by changing the systolic array architecture sizes in order to explore the design space.
In the following sections, we describe how to integrate the function in Listing \ref{lst:systo} in a design in order to compute off-chip matrix multiplications.
\section{Memory throughput analysis for off-chip matrix multiplication}
In order to test the three-dimensional systolic array architecture described in the previous section, we use it for performing a matrix multiplication in which the operands and the result cannot fit into the FPGA on-chip memory.
\begin{problem}[Off-chip Matrix Multiplication] \label{pr:largemmm}
Given \(A \in (d^2_i \times d^2_k)\) and \(B \in (d^2_k \times d^2_j)\),
compute \(A B = C \in (d^2_i \times d^2_j)\).
where none of the matrices fits entirely into the on-chip memory.
{For the rest of the paper, superscript \(2\) denotes off-chip matrix sizes.}
\end{problem}
So, our investigation must consider that data need to transit from/to the global memory to/from the systolic array architecture.
The systolic array architecture in Definition \ref{pr:systommm} is able to ingest \(\mathcal{B}_A\) and \(\mathcal{B}_B\) floating-point numbers for each clock cycle. As seen in Section \ref{sec:globalmem}, a global memory LSU is able to request \(\mathcal{B}_{ddr}\) floating-point numbers for each clock cycle without stalls.
A systolic array architecture with a large \(\mathcal{T}_{flop}\) implies \(\mathcal{B}_A > \mathcal{B}_{ddr}\) and \(\mathcal{B}_B > \mathcal{B}_{ddr}\).
\if 0
\begin{table}[!t]
\renewcommand{\arraystretch}{2}
\caption{Throughput examples for different systolic array architecture sizes, where the reuse ratios are computed considering \(\mathcal{B}_{gA} = \mathcal{B}_{gB} = \mathcal{B}_{ddr}\).}
\label{tab:th4096}
\centering
\begin{tabular}{ccc|c|ccc|cc}
\hline\hline
\multicolumn{3}{c|}{\emph{sizes}} & \(\mathcal{T}_{flop}\) & \(\mathcal{B}_A\) & \(\mathcal{B}_B\) & \(\mathcal{B}_{ddr}\) & \multicolumn{2}{|c}{\emph{reuse ratios}} \\
\(d^0_i\) & \(d^0_i\) & \(d^0_k\)& \(\Big[\tfrac{\text{FLOP}}{\text{cycle}}\Big]\) &\multicolumn{3}{|c|} { \(\Big[\tfrac{\text{sp-float}}{\text{cycle}}\Big]\) } &\(r_A\) & \(r_B\)\\
\hline
64 & 32 & 2 & 8192 & 128 & 64 & 8 & 16 & 8 \\
32 & 32 & 4 & 8192 & 128 & 128 & 8 & 16 & 16 \\
32 & 16 & 8 & 8192 & 256 & 128 & 8 & 32 & 16 \\
\hline\hline
\end{tabular}
\end{table}
\fi
We can formulate the problem as follows: how can the systolic array architecture be connected to the global memory system, given that a global memory LSU is not able to provide enough data throughput to keep the pipeline from stalling?
The answer to this question involves the utilization of a cache system residing in the on-chip memories. This cache contains values of \(A\) and \(B\) that need to be reused a certain number of times in order to let the global memory feed the systolic array architecture without stalls.
We define the \emph{reuse ratio} \(r\) as the minimal number of times that a datum in the on-chip memory needs to be reused in order to let a global memory LSU cope with \(\mathcal{B}_A\) and \(\mathcal{B}_B\) needed by the systolic array architecture.
The reuse ratios for the element of matrices \(A\) and \(B\) can be computed as
\begin{equation}\label{eq:datareuse}
r_A = \frac{\mathcal{B}_A}{\mathcal{B}_{gA}} \quad\quad r_B = \frac{\mathcal{B}_B}{\mathcal{B}_{gB}}
\end{equation}
where \(\mathcal{B}_{gA} \leq \mathcal{B}_{ddr}\) and \(\mathcal{B}_{gB} \leq \mathcal{B}_{ddr}\) are the numbers of \(A\) and \(B\) elements read from global memory in each clock cycle.
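For example, assuming \(\mathcal{B}_{gA} = \mathcal{B}_{gB} = 8\) sp-floats/cycle, a systolic array architecture with \((d^0_i, d^0_j, d^0_k) = (32, 32, 4)\) needs \(\mathcal{B}_A = \mathcal{B}_B = 128\) sp-floats/cycle and therefore reuse ratios \(r_A = r_B = 16\).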
At this point, it is useful to define a notation for expressing the partition of a matrix into blocks.
\begin{definition}[Block matrix representation]
Given \(M \in (d^2_i \times d^2_j)\), it is possible to represent its partition into \(\tfrac{d^2_i}{d^1_i} \tfrac{d^2_j}{d^1_j}\) blocks of size \((d^1_i \times d^1_j)\), as \(\bar{M}: (d^2_i/d^1_i \times d^2_j/d^1_j) \to ( d^1_i \times d^1_j)\), where
{\small
\begin{equation}
\bar{M}^{Ii}_{Jj}\ =\ M_{\mathbf{i}\mathbf{j}}\ \text{s.t.}
\begin{cases}
\ \mathbf{i}\ =\ d^1_i I + i \quad \forall\ 0 \leq I < \rfrac{d^2_i}{d^1_i} \text{,}\ 0 \leq i < d^1_i\\
\ \mathbf{j}\ =\ d^1_j J + j \quad \forall\ 0 \leq J < \rfrac{d^2_j}{d^1_j} \text{,}\ 0 \leq j < d^1_j
\end{cases}
\end{equation}}
where \(d^2_i\) is a multiple of \(d^1_i\) and \(d^2_j\) of \(d^1_j\). Practically, \(\bar{M}^I_J\) identifies the block in the \(I\)-th row and \(J\)-th column of the partition. This definition can be applied recursively by adding another order of indexes.
\end{definition}
\subsection{Implemented Algorithm}\label{sec:twolevalg}
The algorithm that solves Problem \ref{pr:largemmm} by means of the systolic array architecture discussed in Section \ref{sec:systocode} needs to address two aspects. First, it needs to set the data reuse for \(A\) and \(B\) as parameters in order not to stall the computation. Second, it needs to avoid floating-point accumulation between successive iterations, since the Variable Precision DSP blocks cannot achieve it in pipeline with \(II=1\). To this end, we implement the following algorithm.
\begin{definition}[Two-level blocked Matrix Multiplication]\label{def:twolevalg}
Problem \ref{pr:largemmm} can be solved with a two-level blocked algorithm.
The first level acts on the following partition
\begin{align*}
\bar{A}:\ (d^2_i/d^1_i \times 1)\ \to\ (d^1_i \times d^2_k) \\
\bar{B}:\ (1 \times d^2_j/d^1_j)\ \to\ (d^2_k \times d^1_j) \\
\bar{C}:\ (d^2_i/d^1_i \times d^2_j/d^1_j)\ \to\ (d^1_i \times d^1_j)
\end{align*}
aiming to solve Problem \ref{pr:largemmm} by computing \(\bar{C}\) single blocks as
\begin{equation} \label{eq:second}
\bar{C}^I_J = \bar{A}^I_0\ \bar{B}^0_J \quad \forall\ 0\leq I < d^2_i/d^1_i ,\ \ 0\leq J < d^2_j/d^1_j
\end{equation}
Each block \(\bar{C}^I_J\) is computed by means of a second level partition
\begin{align*}
\bar{\bar{A}}:\ (d^1_i/d^0_i \times d^2_k/d^0_k)\ \to\ (d^0_i \times d^0_k) \\
\bar{\bar{B}}:\ (d^2_k/d^0_k \times d^1_j/d^0_j)\ \to\ (d^0_k \times d^0_j) \\
\bar{\bar{C}}:\ (d^1_i/d^0_i \times d^1_j/d^0_j)\ \to\ (d^0_i \times d^0_j)
\end{align*}
that allows the systolic array architecture implemented in Listing \ref{lst:systo} with a size of \(d^0_i \times d^0_j\times \frac{d^0_k}{d_p}\) to solve
\begin{equation}\label{eq:outer}
\bar{\bar{C}}^{Ii}_{Jj} = \sum_k \bar{\bar{A}}^{Ii}_{0k}\ \bar{\bar{B}}^{0k}_{Jj} \quad \forall\ 0\leq i < d^1_i/d^0_i ,\ \ 0\leq j < d^1_j/d^0_j
\end{equation}
as a cyclical accumulation of outer products between the columns of \(\bar{\bar{A}}\) and the rows of \(\bar{\bar{B}}\) (i.e., \(k\) is the slowest index) in order to avoid the accumulation in successive iterations of the values in \(\bar{\bar{C}}\).
The reuse ratios in \eqref{eq:datareuse} are applied by setting \(d^1_i\) and \(d^1_j\) as
\begin{equation}\label{eq:d1}
d^1_i = r_B\ d^0_i \quad d^1_j = r_A\ d^0_j
\end{equation}
which implies that each element of \(\bar{\bar{A}}\) is reused \(r_A\) times and each element of \(\bar{\bar{B}}\) is reused \(r_B\) times in the computation of the outer product in \eqref{eq:outer}.
\end{definition}
\section{Implementation}\label{sec:implementation}
\begin{figure}
\centerline{\scalebox{.6}{\input{figs/cubes2.eps_tex}}}
\caption{Graphical representation of the connections between the parts of the design. Where \(d^0_i=4\), \(d^0_j=3\), \(d^0_k=3\), \(\mathcal{B}_{gA} = 2\) and \(\mathcal{B}_{gB} = 1\). MMPs stands for the partitions of the memory mapped systems.}
\label{fig:cubes}
\end{figure}
The algorithm in Definition \ref{def:twolevalg} can be implemented as the sequential computation of \(\bar{C}\) blocks done in three phases:
\begin{enumerate}
\item Read \(\bar{A}^I_0\ \) and \(\bar{B}^0_J\) from the global memory, store them into on-chip memory.
\item Compute \(\bar{C}^I_J = \bar{A}^I_0\ \bar{B}^0_J\) as in \eqref{eq:outer}, \(\bar{C}^I_J\) is stored in the on-chip memory.
\item Write \(\bar{C}^I_J\) to the global memory.
\end{enumerate}
Although these three phases can be executed sequentially, we want to avoid this by partially overlapping \emph{Read} and \emph{Compute}. Basically, the goal is to write into the local memory some portions of \(\bar{A}^I_0\) and \(\bar{B}^0_J\) coming from the global memory while, at the same time, the systolic array architecture is reading some other portions of \(\bar{A}^I_0\) and \(\bar{B}^0_J\) from the local memory.
In practice, given \(I\) and \(J\), \(\bar{C}^I_J\) can be computed as in \eqref{eq:second} by four sequential phases:
{\small
\begin{enumerate}
\item \emph{Read} from global memory \(\{\bar{\bar{A}}^{Ii}_{00}\ |\ 0 \leq i < d^1_i / d^0_i \}\) and \(\{\bar{\bar{B}}^{00}_{Jj}\ |\ 0 \leq j < d^1_j / d^0_j \}\) and store them into on-chip memory and \emph{Initialize} \(\bar{C}^I_J\) to zero.
\item For all \(\{\ k\ |\ 0 \leq k< (d^2_k/d^0_k -1)\ \} \)
\begin{enumerate}
\item \emph{Read} from global memory \(\{\bar{\bar{A}}^{Ii}_{0(k+1)}\ |\ 0 \leq i < d^1_i / d^0_i \}\) and \(\{\bar{\bar{B}}^{0(k+1)}_{Jj}\ |\ 0 \leq j < d^1_j / d^0_j \}\).
\item \emph{Compute} \(\bar{\bar{C}}^{Ii}_{Jj}\ += \bar{\bar{A}}^{Ii}_{0k}\ \bar{\bar{B}}^{0k}_{Jj} \ \) for all \(\ 0 \leq i < d^1_i / d^0_i,\ 0 \leq j < d^1_j / d^0_j\).
\end{enumerate}
\item \emph{Compute} \(\bar{\bar{C}}^{Ii}_{Jj}\ += \bar{\bar{A}}^{Ii}_{0(d^2_k/d^0_k -1)}\ \bar{\bar{B}}^{0(d^2_k/d^0_k -1)}_{Jj} \ \) for all \(\ 0 \leq i < d^1_i / d^0_i,\ 0 \leq j < d^1_j / d^0_j\).
\item \emph{Write} \(\bar{C}^I_J\) to the global memory.
\end{enumerate}}
Here, \emph{Read} spans Phases 1 to 2, \emph{Compute} spans Phases 2 to 3 (being completely overlapped with \emph{Read} in Phase 2), and \emph{Write} is executed alone in Phase 4, as shown in Figure \ref{fig:itspaces}.
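The following sketch (with placeholder sizes and empty stub functions, not the actual kernel code) summarizes how these phases are arranged around the systolic array:
\begin{lstlisting}[style=customc]
// Schematic sketch of the phase arrangement (placeholder sizes and empty
// stubs, used only to illustrate the control flow; the actual design is a
// single fused loop inside one kernel).
enum { d2_i = 2048, d2_j = 2048, d2_k = 2048,  /* off-chip matrix sizes   */
       d1_i = 256,  d1_j = 256,                /* first-level block sizes */
       d0_i = 16,   d0_j = 16,   d0_k = 8 };   /* systolic array sizes    */

void read_blocks(int k)           { /* load column k of A-blocks, row k of B-blocks */ }
void init_c_bar()                 { /* set the C-bar block to zero                  */ }
void compute_outer_product(int k) { /* systolic array consumes column/row k         */ }
void write_c_bar(int I, int J)    { /* store the C-bar block to global memory       */ }

void blocked_mmm()
{
  for(int I = 0; I < d2_i/d1_i; ++I)
    for(int J = 0; J < d2_j/d1_j; ++J)
    {
      read_blocks(0);  init_c_bar();           /* Phase 1                */
      for(int k = 0; k < d2_k/d0_k - 1; ++k)
      {
        read_blocks(k + 1);                    /* Phase 2: read ...      */
        compute_outer_product(k);              /* ... overlapped compute */
      }
      compute_outer_product(d2_k/d0_k - 1);    /* Phase 3                */
      write_c_bar(I, J);                       /* Phase 4                */
    }
}
\end{lstlisting}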
The implemented design is made of three main parts: a three-dimensional systolic array, two mapped memory systems, and a FIFO system.
The three dimensional systolic array is the central core of the design. During \emph{Compute}, it constantly multiplies two blocks of \(\bar{\bar{A}}\) and \(\bar{\bar{B}}\) loading their values from the two \emph{mapped memory systems} and accumulating the results in the \emph{FIFO system}.
Overlapping \emph{Read} and \emph{Compute} implies that just two columns of \(\bar{\bar{A}}\) and two rows of \(\bar{\bar{B}}\) need to fit entirely into the mapped memory systems. These memory systems are made, respectively, of \(d^0_i d^0_k\) and \(d^0_j d^0_k\) partitions;
their load units are connected to the systolic array architecture by the register chains, as described in Section \ref{sec:systocode}.
In order to fill these partitions with the values coming from global memory, their store units are connected to two
global memory load units, which read, respectively, \(\mathcal{B}_{gA} \leq \mathcal{B}_{ddr}\) and \(\mathcal{B}_{gB} \leq \mathcal{B}_{ddr}\) floating-point values per clock cycle, in order to avoid stalls.
All accesses are performed by burst-coalesced LSUs in order to achieve a high memory controller efficiency, i.e., \(e\) in \eqref{eq:stt} approaches 1. For this reason, \(A\) is saved in column-major format, since it is accessed by columns, and \(B\) is saved in row-major format, since it is accessed by rows.
Since a \(\bar{C}\) block is accessed entirely during \emph{Compute}, it needs to fit completely into the on-chip memory.
The outer product computation makes it possible to store it in a collection of \(d^0_i d^0_j\) FIFOs.
During \emph{Write}, a global memory store unit writes \(d^0_j\) floating-point values per clock cycle, which could be greater than \(\mathcal{B}_{ddr}\), causing stalls that do not affect the computation, since \emph{Write} happens alone in Phase 4. \(C\) is saved in row-major format, allowing the store unit to be burst-coalesced.
\if 0
\begin{itemize}
\item two mapped memory systems, made of partition in order to contain num values, different depth for accommodating them results in different kind of on chip memory resources. the store port are connected to global memory, the load port to the systolic array.
The two mapped memory systems contains the values of two x of \(\bar{\bar{A}}\) and two y of \(\bar{\bar{B}}\) coming from the global memory directed to the systolic array.
These memory systems are made respectively of \(d^0_i d^0_k\) and \(d^0_j d^0_k\) partitions, each of them contains \(2r_B\) and \(2r_A\) elements.
Once loaded from the global memory, the values of \(A\) and \(B\) are stored in two mapped memory systems, having respectively \(d^0_i d^0_k\) and \(d^0_j d^0_k\) partitions with a depth of
\begin{align*}
d_A &= 2\ r_B \\
d_B &= 2\ r_A
\end{align*}
\note{depth is rounded to the next power of two}
\note{connections of these port | one side the cube | other side the global memory}
\item
The FIFO system is made of \(d^0_i d^0_j\) FIFOs that contain the values of \(\bar{C}\) that are constantly enqueue and dequeue from the systolic array in order to compute the outer product.
a FIFO memory system, made of num FIFO that for each clock cycle enqueue and dequeue a value of bar C
Since \(\bar{C}\) is accessed entirely during \emph{Compute}, it needs to fit completely into the on-chip memory.
The outer product computation lets us implement this memory system as a collection of \(d^0_i d^0_j\) FIFOs of \(d^0_i d^0_j\) FIFOs.
with a depth of
\[d_C = r_A\ r_B\]
\note{load port of the fifo connected to the top | store to the bottom}
\end{itemize}
the FIFOs are connected to the bottom and top face of the systolic array by the dequeue and enqueue port.
Problem \ref{pr:largemmm} operators (i.e. \(A\), \(B\) and \(C\)) are stored in global memory.
The implemented design contains three global memory LSUs, one for each matrix. The \(A\) and \(B\) load units read \(\mathcal{B}_{gA} \leq \mathcal{B}_{ddr}\) and \(\mathcal{B}_{gB} \leq \mathcal{B}_{ddr}\) floating-point values per clock, in order to not create a stall in computation, since \emph{Read} and \emph{Compute} are overlapped in Phase 2. The \(C\) store unit writes \(d^0_j\) floating-point values per clock, which could be greater than \(\mathcal{B}_{ddr}\), causing a stall, since \emph{Write} happens alone in Phase 4.
In order to use efficiently the global memory controllers, all accesses are performed by burst-coalesced LSUs. For this reason the \(B\) and \(C\) are saved in row-major format since they are accessed by rows, \(A\) is saved in column-major since it is accessed by columns. The data is accessed by sequential bursts long \(d^1_j\) floating-point values for \(B\) and \(C\), \(d^1_i\) for \(A\). If \(d^1_j\) and \(d^1_i\) are power of two, the corresponding LSUs are burst-coalesced and aligned, this increase even more the efficiency making the global memory throughput closer to the peak performance of the memory controller, i.e., \(e\) in \eqref{eq:stt} approaches to 1.
\fi
\begin{figure}
\centerline{\scalebox{.75}{\input{figs/itspaces.eps_tex}}}
\caption{The four phases described in Section \ref{sec:implementation} for the computation of a block of \(\bar{C}\).}
\label{fig:itspaces}
\end{figure}
The implementation consists of a single kernel containing a single for-loop, structured so that the HLS tool can produce a single efficient pipeline, as suggested in \cite{Intel:FPGA:bestpracticeguide} for Stratix 10 FPGAs. In order to obtain a single loop, we manually fused all the phases.
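For illustration only, the following Python sketch (ours; it is not part of the synthesized OpenCL kernel, and the phase names and counters are placeholders) shows the index arithmetic behind this manual fusion: a single flat counter is decoded into the phase to execute and its local iteration index, so that the whole computation of a \(\bar{C}\) block is expressed as one loop.
\begin{verbatim}
# Schematic of the manual phase fusion (illustration only; the actual
# kernel is written in OpenCL C).  A single flat counter is decoded into
# (phase, local index): an initial Read of i_star iterations, k_rounds
# overlapped Read+Compute rounds of i_star iterations each, and a final
# Write of i_write iterations.
def fused_iterations(i_star, k_rounds, i_write):
    total = i_star + k_rounds * i_star + i_write
    for it in range(total):      # the single loop the HLS tool pipelines
        if it < i_star:
            yield "read_first_block", it
        elif it < i_star + k_rounds * i_star:
            t = it - i_star
            yield "read_and_compute", (t // i_star, t % i_star)
        else:
            yield "write_C_block", it - i_star - k_rounds * i_star
\end{verbatim}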
\if 0
In fact, given a \(k\), the number of iterations for reading \(\{ \bar{\bar{A}}^{Ii}_{0k}\ |\ 0 \leq i < d^1_i / d^0_i \}\) is equal to the number of its elements divided by \(\mathcal{B}_{gA}\) values read from global memory in each iteration, considering \eqref{eq:d1} and \eqref{eq:datareuse}, it is possible to obtain
\begin{equation*}
i_A = \frac{d^1_i d^0_k}{\mathcal{B}_{gA}} = \frac{r_B d^0_i\ d^0_k}{\mathcal{B}_{gA}} = r_A r_B
\end{equation*}
the same applies for \(\{ \bar{\bar{B}}^{0k}_{Jj} \ |\ 0 \leq j < d^1_j / d^0_j \}\), obtaining
\begin{equation*}
i_B = \frac{d^1_j d^0_k}{\mathcal{B}_{gB}} = \frac{r_A d^0_j\ d^0_k}{\mathcal{B}_{gB}} = r_A r_B
\end{equation*}
For a given \(k\), the number of iterations taken by the systolic array architecture for computing the outer product between \(\{ \bar{\bar{A}}^{Ii}_{0k}\ |\ 0 \leq i < d^1_i / d^0_i \}\) and \(\{ \bar{\bar{B}}^{0k}_{Jj} \ |\ 0 \leq j < d^1_j / d^0_j \}\) are equal to
\begin{equation*}
i_{comp} = \frac{d^1_i}{d^0_i} \frac{d^1_j}{d^0_j} = r_A r_B
\end{equation*}
All these iteration numbers are the same and depend on the reuse ratio defined in \eqref{eq:datareuse}
\begin{equation*}
i_{*} = i_A = i_B = i_{comp} = r_A r_B
\end{equation*}
The number of iterations of \emph{Write} are equal to the number of \(\bar{C}^I_J\) elements divided by \(d^0_j\) values written to global memory in each iteration,
\begin{equation}
i_C = \frac{d^1_i d^1_j}{d^0_j} = r_A r_B\ d^0_i
\end{equation}
since \emph{Write} can stall, the iteration number needs to be adjusted based upon a stall factor defined as
\begin{equation}
s \approx \frac{d^0_j}{\mathcal{B}_{ddr}}
\end{equation}
Considering that \emph{Read} and \emph{Compute} are partially overlapped,
the total number of iterations needed for computing a block of \(\bar{C}\) in \eqref{eq:second} is
\begin{equation}
i_{tot} = i_* + i_* \frac{d^2_k}{d^0_k} + i_C\ s \approx i_* \ \Big(\ 1 + \frac{d^2_k}{d^0_k} + \frac{d^0_i d^0_j}{\mathcal{B}_{ddr}}\ \Big)
\end{equation}
This lets us estimate analytically the DSP efficiency as
\fi
The fraction of iterations in which the dot-product units are computing is
\begin{equation}\label{eq:compper}
c_{\%} = \frac{\#it_{comp}}{\#it_{tot}} \approx \frac{\frac{d^2_k}{d^0_k}}{1 + \frac{d^2_k}{d^0_k} + \frac{d^0_i d^0_j}{\mathcal{B}_{ddr}}}
\end{equation}
where \(\#it_{tot}\) is the total number of iterations taken by all the phases and \(\#it_{comp}\) counts only the \emph{Compute} iterations.
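As a numerical illustration of \eqref{eq:compper} (the sizes below are hypothetical placeholders, not taken from our designs), the following Python sketch makes explicit that a large \(d^2_k\) is needed to amortize the initial \emph{Read} and the non-overlapped \emph{Write}:
\begin{verbatim}
# Evaluation of the compute fraction c_% for illustrative sizes
# d0_i = d0_j = 16, d0_k = 8, d2_k = 4096 and B_ddr = 16 values/cycle.
def compute_fraction(d0_i, d0_j, d0_k, d2_k, B_ddr):
    k_rounds = d2_k / d0_k
    return k_rounds / (1.0 + k_rounds + d0_i * d0_j / B_ddr)

print(compute_fraction(16, 16, 8, 4096, 16))   # ~0.97
\end{verbatim}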
\section{Evaluation}
In our experiments, we used the BittWare 520N Stratix 10 GX2800 accelerator card, which has a board support package (BSP) based upon Quartus 19.4.0 Build 64 Pro; on top of it runs the Intel FPGA SDK for OpenCL version 20.4.0 Build 72. The BSP occupies part of the FPGA resources, so that 4713 of the 5760 Variable Precision DSPs are available for the kernel logic.
Our designs are able to use up to 4704 of them, i.e., 99.8\% of those available. All designs are compiled with \emph{Hyperflex optimization on}, allowing them to reach a higher \(f_{max}\).
\input{result_table}
Table \ref{tab:results} reports the best \(f_{max}\) of the designs for different systolic array architecture sizes. In particular,
\(d^0_i,\ d^0_j,\ d^0_k\) and \(d_p\) are the parameters in Definition \ref{pr:systommm}, \emph{\#PEs} is defined in \eqref{eq:numpe}.
The \emph{DSPs} column shows the number of DSP blocks forming the systolic array architecture, equal to
\(\#DSP\) as defined in \eqref{eq:numdsp};
these values are confirmed by the \verb|report.html| generated by the Intel HLS tool.
The \(f_{max}\) values are taken from the \verb|Kernel fmax| field in the \verb|acl_quartus_report.txt| file within the design directory.
The peak floating-point throughput (\(T_{peak}\)) is computed as \eqref{eq:tpeak}.
Designs A, B, and D fail synthesis, since the \emph{fitter} is not able to place dot product units with a size larger than 1 for the considered architecture sizes.
Tables \ref{tab:pC}--\ref{tab:pGN} show the floating-point throughput and the DSP efficiency of the considered designs for different sizes of the matrices involved in the matrix multiplication.
For reference, we also present the floating-point throughput of an Intel Xeon Gold 6148 CPU and an Nvidia GeForce RTX 2080 Ti GPU performing the same operation with optimized BLAS libraries: MKL version 20.2 for the CPU, CUBLAS version 11.2 for the GPU. In all cases, we report the performance obtained by measuring the actual execution time of the multiplication of matrices stored in the global memory of the devices.
The sizes of the matrices are different between the designs since they depend on \eqref{eq:d1} and \eqref{eq:datareuse}.
The measured floating-point throughput is computed as the total number of single-precision floating-point operations executed for the matrix multiplication
\[\#FLOP = d^2_i d^2_j ( 2 d^2_k - 1 ) \]
divided by the actual kernel execution time measured with OpenCL profiling events, i.e.,
\[T_{flops}=\frac{\#FLOP}{kernel\ execution\ time} \quad [FLOPS] \]
The measured DSP efficiency is computed as the ratio between the obtained and the peak floating-point throughput, i.e.,
\begin{equation*}
e_{D} = {T_{flops}}/{T_{peak}}
\end{equation*}
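The following Python helper (ours, for illustration; the variable names are placeholders) summarizes how \(T_{flops}\) and \(e_D\) are obtained from the matrix sizes, the measured kernel time and the peak throughput:
\begin{verbatim}
# Measured throughput and DSP efficiency from the matrix sizes, the
# kernel execution time (OpenCL profiling events) and the peak
# throughput T_peak of the design.
def measured_throughput(d2_i, d2_j, d2_k, kernel_time_s, t_peak_flops):
    flop = d2_i * d2_j * (2 * d2_k - 1)     # single-precision FLOP count
    t_flops = flop / kernel_time_s          # FLOPS
    return t_flops, t_flops / t_peak_flops  # (T_flops, e_D)

# e.g. measured_throughput(d2_i, d2_j, d2_k, t_kernel, T_peak) with
# t_kernel from the profiling events and T_peak from Eq. (eq:tpeak).
\end{verbatim}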
As expected, the measured DSP efficiencies are close to the estimate given in \eqref{eq:compper}.
\input{performance_table_C}
\input{performance_table_E}
\input{performance_table_F}
\input{performance_table}
In order to compare our results, we selected works that use the Intel Stratix 10 GX2800. The FBLAS library includes a systolic SGEMM function that is able to use 3270 DSPs at 216 MHz \cite{fblas}. Its performance is similar to that of the Cannon matrix multiplication algorithm implemented in \cite{gorlani2019opencl}, which uses 3323 DSPs at 294 MHz. Unfortunately, these designs do not achieve Hyperflex optimization and reach a floating-point throughput just below 1.5 TFLOPS.
Another reference is established by the matrix multiplication example code optimized for Stratix 10 that is shipped within the Intel FPGA SDK for OpenCL.
This is far from being a simple example code: it is a complex design involving multiple kernels connected by channels that multiplies off-chip matrices using a configurable two-dimensional systolic array architecture. Unfortunately, its source code does not allow isolating reusable functions, e.g., an on-chip matrix multiplication function.
The user can specify the grid size by setting the number of PEs in rows (\verb|PE_ROWS|) and columns (\verb|PE_COLS|). Each PE contains a dot product unit of size 4, 8, or 16; other sizes are not possible. An optional flag (\verb|FORCE_DOT_4|) allows these dot product units to be split into multiple units of size 4. In order to guarantee the fairest comparison, we synthesized the Intel SDK example with different grid sizes and seeds; Table \ref{tab:andrei_results} reports the best \(f_{max}\) obtained.
The configuration reported as optimal in the Intel SDK example README consists of a \(32\times14\) grid, in which each PE contains a dot product unit of size 8. The resulting design has 3584 DSPs working at 412 MHz, with a peak floating-point performance of 2953 GFLOPS.
We went further and tried different grid sizes in order to use more DSPs. Many attempts, using 4096 DSPs or more, failed during the \emph{fitter} phase. A \(32\times16\) grid, in which each PE contains two dot product units of size 4, achieves the best result we were able to obtain, producing a design made of 4096 DSPs working at 407 MHz and providing a peak floating-point performance of 3334 GFLOPS.
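The peak figures quoted above are consistent with each DSP block contributing one single-precision multiply--add (two floating-point operations) per cycle; the following short Python check (ours) reproduces them:
\begin{verbatim}
# Peak throughput assuming 2 FLOP (one multiply-add) per DSP per cycle,
# which reproduces the figures quoted for the Intel SDK example designs.
def peak_gflops(n_dsp, fmax_mhz):
    return 2 * n_dsp * fmax_mhz / 1e3

print(peak_gflops(3584, 412))   # ~2953 GFLOPS (32x14 grid, dot size 8)
print(peak_gflops(4096, 407))   # ~3334 GFLOPS (32x16 grid, 2x dot size 4)
\end{verbatim}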
\input{result_table_andrei}
We evaluate the floating-point throughput of these designs in the same way as ours; results are shown in Tables \ref{tab:andrei3214} and \ref{tab:andrei3216}.
The Intel SDK example also has constraints on the matrix sizes. In the case of the \(32\times14\) grid, \(d^2_i\) needs to be a multiple of 1024 and \(d^2_j\) a multiple of 448, whereas for the \(32\times16\) grid, \(d^2_i\) needs to be a multiple of 1024 and \(d^2_j\) a multiple of 512.
The floating-point throughput shows that the DSP efficiency is above 0.9 for \(d^2_k \geq 2048\), whereas our designs reach this efficiency only for \(d^2_k > 4096\). The lower DSP efficiency of our designs is due to the fact that \emph{Write} (i.e., Phase 4) is performed without any overlapping computation.
The higher efficiency of the Intel SDK example comes at the cost that all the matrices need to be reordered by the host in order to be multiplied by the accelerator card.
In fact, considering off-chip matrices in row-major format, \(A\) needs to be reordered block-wise, and \(B\) needs to be transposed and then reordered block-wise. \(C\) must undergo a two-level reverse block-wise reordering in order to end up in row-major format. This implies that the result matrix has a different format from both operands, so it needs to be transferred back to the host and reordered if we want to use it as an operand for the next matrix multiplication on the accelerator card.
In our design, the only transformation that needs to be applied to off-chip matrices in row-major format is the transposition of \(A\), so that it is saved in column-major format. On the other hand, \(C\) has the same row-major format as \(B\). Thus, we can use the multiplication result as an operand for another multiplication without any transfer to the host for reordering.
\if 0
In our opinion, the our design has the following advantages over the one of the SDK matrix multiplication.
It is possible to isolate and reuse the function for on-chip matrix multiplication, that can accept arbitrary parameters that allows it to scale to many logic resources
1. isolate function for on-chip matrix multiplication that can scale, and can be reused
Confronting the synthesis result of this design with the ones obtained for ours, we notice that the possibility to synthesize systolic array architectures with PEs made of smaller and arbitrary large dot product units allows to utilize more DSPs getting higher peak floating-point performance.
\fi
\input{performance_table_andrei3214}
\input{performance_table_andrei3216}
\section{Conclusion}
We presented an HLS design for off-chip matrix multiplication; its main component is a three-dimensional systolic array architecture for on-chip matrix multiplication, expressed by a simple function that can be adapted and reused. Our systolic array architecture allows the user to fine-tune its sizes in order to increase logic resource utilization and explore the design space.
Our investigation does not provide the ultimate answer to matrix multiplication in HPC, because GPUs easily deliver higher performance. The performance of our implementation is in the same range as highly optimized CPU codes.
However, we think that this investigation is useful for suggesting new HLS design methods within the wide theoretical background of systolic array architectures. The possibility of describing these architectures in an analytical way, and the fact that they can be implemented efficiently, could be fundamental for establishing FPGA accelerators in HPC.
In the future, we plan to use the function in Listing \ref{lst:systo} in designs implementing complete numerical solvers entirely within the FPGA logic,
with the aim of achieving a performance improvement over GPUs. The source code is available at \verb|https://github.com/pc2/3d-systo-fpga|.
\IEEEtriggeratref{10}
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{intro}
Fluids of two-dimensional hard anisotropic particles are paradigmatic examples of systems
exhibiting entropy-driven phase transitions to orientationally and positionally ordered phases.
The elucidation of the phase behavior of two-dimensional fluids composed of hard particles
is not an academic study, since hard particles enjoy many experimental realizations.
To cite a representative recent experiment, extreme confinement of three-dimensional lithographically
synthesized prisms with different polygonal cross-sections in quasi-2D geometries has been accomplished
\cite{Zhao,Wang,Rossi,Qi}. The phase behavior of these effectively two-dimensional Brownian particles
was reported, and their tendency to produce chiral phases \cite{Zhao,Rossi} or racemic mixtures of
monomers and dimers was emphasized \cite{Wang}. Also, the phase behavior of colloidal monolayers of
particles with exotic shapes has recently been reported \cite{Qi}. Research on three-dimensional
colloidal particles of different shapes, especially in connection with packing and partial or
complete crystallization, has also been very active (see Ref. \cite{Manoharan} for a recent review).
Several theoretical works have concentrated on the elucidation of the phase behavior of hard polygonal particles
\cite{Frenkel,M-R1,Donev,Avendano,Gantapara,M-R2,Anderson,Shen,Thapar}. The results show that it strongly
depends on the symmetries of particle shapes. Apart from the usual uniaxial nematic phase present in fluids of
elongated rods, other more `exotic' orientational fluid phases, such as triatic, tetratic and hexatic phases,
also exist. For example, hard rectangles may order into uniaxial nematic phases, but also in
tetratic arrangements at low particle aspect ratios \cite{Donev,M-R1}. Different plastic or orientationally
ordered crystals have been classified as a function of particle shape \cite{Anderson,Shen}.
Especially interesting is the case of hard squares because of their
plane-filling properties and the mathematical simplicity of their interaction potential.
Classical work on the numerical calculation of virial coefficients \cite{Hoover1,Hoover2}
demonstrates the importance
of hard squares as a simple model to elucidate important problems in statistical mechanics.
The lattice-gas version of the model has attracted some attention \cite{Lafuente,Ramola,Singh}.
The parallel hard square model has also been investigated \cite{Hoover3,Belli,Pinto}.
Simulations have shown that freely-oriented hard squares present
nematic tetratic and crystal square phases \cite{Frenkel}. Rounded hard squares
have been investigated, and their phase behaviour has been seen to depend
on the degree of roundness \cite{Avendano}. An experimental realization of
this system has been reported, together with evidence for a rich phase
diagram \cite{Zhao1}. Also, demixing transitions in mixtures of hard squares have been
explored by simulation \cite{Buhot}.
The effect of confinement on two-dimensional fluids of rod-like particles in cavities of square,
rectangular or circular geometries has been extensively studied
\cite{Heras2,Heras3,Geigenfeind,Garlea,Lewis,Manyuhina,Heras4,Gonzalez-Pinto}.
When the confining geometry is incompatible with the symmetry of the bulk phase the system usually
responds to the geometric frustration by creating point defects or domain walls in the orientational field.
Hard particles exhibit preferred orientations at the boundary of the confining walls, which are controlled
solely by entropy. These `anchoring' effects are strong enough that creation of defects is unavoidable.
The number and symmetry of the defects strongly depends on the geometry of the confining cavity and on the
symmetries of the bulk phases.
When confinement of 2D hard particles (and also of 3D hard spheres inside a cylindrical pore)
between two hard lines is so extreme that the system is close to the 1D limit,
the partition function can be calculated for nearest-neighbor or next-nearest-neighbor interactions
using the Transfer Matrix Method (TMM).
This method becomes a useful (and potentially exact) theoretical tool to extract information about the structure of the
confined fluid. In essence, the technique calculates, apart from the partition function, the probability density
and pair correlations between particles. The method was successfully applied to the study of hard disks,
squares, rhombuses and rectangles under strong confinement
\cite{Kofke,Gurin5,Gurin4,Godfrey,Gurin2,Gurin1,Gurin3,Hu}. The results can be summarized as follows:
(i) Phase transitions between different spatial structures are ruled out, a confirmation of the general result
that fluids composed of particles interacting via hard-core potentials do not exhibit phase transitions
in $1+\epsilon$ dimensions. (ii) From the behavior of probability densities and pair correlation functions
smooth crossovers between different spatial structures can be shown to exist.
For example, the system may change from
a one-layer structure that behaves approximately as a 1D Tonks gas to a structure consisting of two
highly-correlated layers adsorbed at each wall. Correlation may be different depending on the specific
particle geometry (circular vs. square). (iii) Although phase transitions can be ruled out, the equation
of state (EOS) may exhibit, in a range of packing fraction associated with the structural crossover,
a plateau, and consequently the specific heat exhibits a sharp peak in this range of packing fraction.
The implementation of the TMM to such systems serves as an ideal testbed to study the performance
of available Density Functionals (DF) developed for 2D fluids of: hard disks \cite{Roth},
parallel hard squares (PHS) \cite{cuesta1}, rectangles within the restricted orientation approximation
\cite{cuesta2}, or freely-rotating disco-rectangles \cite{Wittmann}. All of these DF are based on the
Fundamental-Measure Theory (FMT), initially developed for hard spheres and further extended to
anisotropic particles. For reviews of this theoretical tool see Refs. \cite{Roth2,Tarazona}.
Recent work on highly confined PHS and rectangles (in the orientation-restricted or Zwanzig approximation)
in slit geometry using both theories, TMM and FMT, demonstrated the high performance of FMT to predict
changes in the structural properties of the fluid induced by confinement, and also to describe the
anomalous behavior of the EOS at the crossover between different structures \cite{Gurin3,pinto1}.
In the present article we go beyond the one-component fluid studied previously, and focus on
the effect of extreme confinement on the structural and thermodynamical properties of binary mixtures of PHS,
using a FMT-based formalism. Mixtures of small (edge-length equal to $\sigma_1$)
and big (edge-length equal to $\sigma_2$) squares are confined into a channel of width $H$.
The value of $H$ is selected in such a way that at most two layers of small squares can fit into the channel,
whereas only one layer (but not two) of big squares can fit. We analyze two different mixtures characterized by
the ratios $\sigma_2/\sigma_1=1.5$ and 2. We found micro- and macrosegregation
first-order transitions for the first and second mixtures, respectively. In the former case,
different species are preferentially adsorbed at different walls, while in the latter species phase-separate,
with a dividing surface perpendicular to the walls. We explain, using entropic arguments, why these mixtures
segregate. We claim that a TMM applied to these mixtures could confirm the appearance of large clusters of
micro/macrosegregated particles as the packing fraction is increased, despite the fact that an exact theory
should rule out the existence of a true phase transition between different structures.
The paper is organized as follows: In Sec. \ref{model} the model is introduced
and details are provided on the theory. Also, the numerical procedure
used to find the phase behavior of the system is discussed.
Technical details to prove the nonexistence of fluid-fluid demixing at bulk, along with
the method to find the spinodal instability of uniform phases with respect to 1D spatial density
modulations, are relegated to Sec. \ref{uniform} and \ref{app_sf}, respectively.
In Sec. \ref{results} the results are presented. This section is in turn divided into two parts,
where results obtained for mixtures with $\sigma_2/\sigma_1=1.5$
[Sec. \ref{tres_medio_uno}] and $\sigma_2/\sigma_1=2$ [Sec. \ref{dos_uno}] are given.
The end of Sec. \ref{dos_uno} is devoted to describing the
phase behavior of the $\sigma_2/\sigma_1=2$ mixture that results from
imposing periodic boundary conditions,
instead of a confining external potential.
Finally some conclusions are drawn in Sec. \ref{conclusions}.
\begin{figure}[H]
\epsfig{file=Fig1.eps,width=3.5in}
\caption{Schematic of close-packing configurations of binary mixtures of PHS with
$\sigma_2/\sigma_1=1.5$ and molar fractions (a) ${\sf x}>3/5$, and (b) ${\sf x}<3/5$. The small and big
clusters are indicated with blue and red solid lines, respectively.}
\label{fig0_new}
\end{figure}
\section{Model and Theory}
\label{model}
Our model consists of a binary mixture of PHS confined into a channel (or a slit pore)
formed by two parallel hard lines (or walls) with a relative distance between them
of $H$ (the pore width). See Fig. \ref{fig0_new}
for a sketch of the system. Small and large particles have edge-lengths equal to
$\sigma_1$ and $\sigma_2$, respectively. Coordinates parallel and
perpendicular to the walls are chosen as $x$ and $y$, respectively, and the walls are
located at $y=0$ and $y=H$. Our system is described in terms of the
density profile of species $i$, $\rho_i(y)$, which is assumed to depend only on the $y$-coordinate.
The mean density, averaged in the channel, of the $i$th species is defined as
\begin{eqnarray}
\rho_i\equiv\frac{1}{H}\int_0^H dy \rho_i(y),\quad \rho=\rho_1+\rho_2,
\end{eqnarray}
with $\rho$ the total mean density. The mixture composition is described in terms of the mean
molar fraction of small species:
\begin{eqnarray}
{\sf x}\equiv {\sf x}_1=\frac{\rho_1}{\rho},\quad {\sf x}_2=\frac{\rho_2}{\rho}=1-{\sf x},
\quad \sum_i {\sf x}_i=1.
\end{eqnarray}
The mean packing fraction of the mixture is, as usual, calculated as
\begin{eqnarray}
\eta=\sum_{i=1}^2 \eta_i=\sum_{i=1}^2\rho_i\sigma_i^2.
\end{eqnarray}
The theoretical model used is a version of DFT, the so-called FMT, which was formulated
for PHS in the 1990s \cite{cuesta1} and has been extensively tested in several previous studies
\cite{Gurin3,pinto1}. The main assumption of the
theory, adapted to the present system, is that the excess (or interaction) part of the
free-energy density of the PHS fluid only depends on four weighted densities,
\begin{eqnarray}
&&n_0(y)=\frac{1}{2}\sum_i \left[\rho_i(y_i^{-})+\rho_i(y_i^+)\right],\\
&&n_2(y)=\sum_i\sigma_i\int_{y_i^-}^{y_i^+}dy'\rho_i(y'),\\
&&n_{1x}(y)=\frac{1}{2}\sum_i\sigma_i\left[\rho_i(y_i^{-})+\rho_i(y_i^+)\right],\\
&&n_{1y}(y)=\sum_i\int_{y_i^-}^{y_i^+}dy'\rho_i(y'),
\end{eqnarray}
where $y_i^{\pm}=y\pm\sigma_i/2$. The explicit expression for the excess free-energy
density, in reduced thermal units, is \cite{cuesta1}
\begin{eqnarray}
\Phi_{\rm exc}(y)=-n_0(y)\log[1-n_2(y)]+\frac{n_{1x}(y)n_{1y}(y)}{1-n_2(y)},
\end{eqnarray}
while the ideal part, neglecting the thermal areas, is
\begin{eqnarray}
\Phi_{\rm id}(y)=\sum_i \rho_i(y)\left[\log \rho_i(y)-1\right].
\end{eqnarray}
The grand-potential per unit length can then be calculated as
\begin{eqnarray}
\frac{\Omega[\{\rho_i\}]}{L}=\frac{{\cal F}[\{\rho_i\}]}{L}-\sum_i \int_0^H dy
\left(\mu_i-v^{(i)}_{\rm ext}(y)\right)\rho_i(y),
\end{eqnarray}
with ${\cal F}[\{\rho_i\}]$ the Helmholtz free-energy DF,
\begin{eqnarray}
&&\frac{\beta {\cal F}[\{\rho_i\}]}{L}=\frac{\beta {\cal F}_{\rm id}[\{\rho_i\}]}{L}+
\frac{\beta {\cal F}_{\rm exc}[\{\rho_i\}]}{L}\nonumber\\
&&=\int_0^H dy \Phi_{\rm id}(y)+
\int_0^H dy\Phi_{\rm exc}(y),
\end{eqnarray}
with $\beta=(k_B T)^{-1}$ the inverse of temperature,
$\mu_i$ the chemical potential of species $i$ and $L$ the length of the system.
The external potential acting on particle $i$
is defined as
\begin{eqnarray}
\beta v_{\rm ext}^{(i)}(y)=\left\{
\begin{matrix}
0, & \displaystyle{\frac{\sigma_i}{2}\leq y\leq H-\frac{\sigma_i}{2}},\\
\infty, & \text{otherwise.}
\end{matrix}
\right.
\end{eqnarray}
By minimizing the grand potential with respect to $\rho_i(y)$, i.e.
$\displaystyle{\frac{\delta \beta \Omega[\{\rho_i\}]}{\delta \rho_i(y)}=0}$,
we obtain
\begin{eqnarray}
\rho_i(y)=\left\{
\begin{matrix}
e^{-\Psi_i(y)+\beta\mu_i}, & \displaystyle{\frac{\sigma_i}{2}\leq y\leq H-\frac{\sigma_i}{2}},\\
0, & \text{otherwise,}
\end{matrix}
\right.
\label{solve1}
\end{eqnarray}
where we have used the short-hand notation
\begin{eqnarray}
&&\Psi_i(y)\equiv \frac{\delta \beta {\cal F}_{\rm exc}[\{\rho_i\}]/L}{\delta \rho_i(y)}
\end{eqnarray}
The longitudinal pressure inside the channel can be calculated as
\begin{eqnarray}
\beta p&=&\frac{1}{H}\left\{\sum_i\left[\int_0^H dy \rho_i(y)\left(1+\Psi_i(y)\right)\right]
-\frac{\beta {\cal F}_{\rm exc}}{L}\right\}\nonumber\\
&=&\frac{1}{H}
\int_0^Hdy\left[\frac{n_0(y)}{1-n_2(y)}+\frac{n_{1x}(y)n_{1y}(y)}{(1-n_2(y))^2}\right].
\end{eqnarray}
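For concreteness, the weighted densities and the longitudinal pressure can be evaluated numerically from given density profiles sampled on a uniform grid. The following Python sketch is ours and is only meant as an illustration: the grid, the rectangle-rule window integrals and the function names are implementation choices, not part of the theory.
\begin{verbatim}
import numpy as np

def weighted_densities(y, profiles, sigmas):
    """FMT weighted densities n0, n1x, n1y, n2 on a uniform grid y,
    for density profiles rho_i(y) of PHS with edge-lengths sigma_i."""
    dy = y[1] - y[0]
    n0 = np.zeros_like(y); n1x = np.zeros_like(y)
    n1y = np.zeros_like(y); n2 = np.zeros_like(y)
    for rho, sig in zip(profiles, sigmas):
        # point evaluations rho_i(y -/+ sigma_i/2); zero outside the walls
        rm = np.interp(y - 0.5*sig, y, rho, left=0.0, right=0.0)
        rp = np.interp(y + 0.5*sig, y, rho, left=0.0, right=0.0)
        # window integral over [y - sigma_i/2, y + sigma_i/2]
        cum = np.cumsum(rho) * dy               # rectangle rule
        win = (np.interp(y + 0.5*sig, y, cum)
               - np.interp(y - 0.5*sig, y, cum))
        n0  += 0.5 * (rm + rp)
        n1x += 0.5 * sig * (rm + rp)
        n1y += win
        n2  += sig * win
    return n0, n1x, n1y, n2

def beta_pressure(y, profiles, sigmas):
    """Longitudinal pressure beta*p of the confined mixture."""
    n0, n1x, n1y, n2 = weighted_densities(y, profiles, sigmas)
    integrand = n0/(1.0 - n2) + n1x*n1y/(1.0 - n2)**2
    return np.trapz(integrand, y) / (y[-1] - y[0])
\end{verbatim}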
By fixing the values of both mean packing fractions $\eta_i$ inside the channel,
the constrained minimization of the free-energy,
$\beta {\cal F}[\{\rho_i\}]$, with respect to $\rho_i(y)$ leads to
\begin{eqnarray}
\rho_i(y)=\frac{\eta_ie^{-\Psi_i(y)}}
{\sigma_i^2H^{-1}\int_0^H dy' e^{-\Psi_i(y')}},
\label{solve2}
\end{eqnarray}
for $\sigma_i/2\leq y\leq H-\sigma_i/2$, and zero otherwise. Obviously the two routes, (i) fixing
the chemical potentials $\mu_i$ and (ii) fixing the packing fractions $\eta_i$, are equivalent.
Using the second route to calculate the equilibrium density profiles,
the chemical potentials can be calculated as
\begin{eqnarray}
\beta\mu_i=\log\left[\frac{\eta_i}{\sigma_i^2H^{-1}\int_0^Hdy e^{-\Psi_i(y)}}\right].
\label{chepocon}
\end{eqnarray}
To study the thermodynamics of the confined fluid mixture, which is necessary to calculate
possible phase transitions, it is more convenient
to use the Gibbs free-energy per-particle in reduced thermal units, defined as
\begin{eqnarray}
g\equiv \frac{\beta}{\rho}\left(\frac{{\cal F}}{LH}+p_0\right).
\label{elgibbs}
\end{eqnarray}
Here the pressure of the confined mixture is fixed,
\begin{eqnarray}
p\left({\sf x},\rho\right)=p_0,
\label{solve3}
\end{eqnarray}
and $\rho$ can be numerically calculated as a function of the mixture composition
${\sf x}$ once the equilibrium
density profiles, $\{\rho_i^{(\rm eq)}(y)\}$ are obtained from Eq. (\ref{solve2}).
The function $g({\sf x})$ can then be obtained. In case of first-order phase transitions
a double-tangent construction on $g({\sf x})$ allows us to calculate the coexisting
values of molar and packing fractions. For convenience we will use a dimensionless
pressure $p_0^*\equiv \beta p_0\sigma_1^2$.
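As a sketch of this numerical procedure (ours, and only illustrative: it assumes that $g$ has already been tabulated on a composition grid at fixed $p_0^*$, taking at each ${\sf x}$ the lower of the two branches whenever several solutions exist), the double-tangent construction can be implemented through the lower convex hull of the points $({\sf x},g)$: hull segments that skip tabulated points are the tie lines, and their endpoints give the coexisting compositions.
\begin{verbatim}
def coexistence_from_gibbs(xs, gs, tol=1e-10):
    """Double-tangent construction on tabulated g(x) data: points above
    the lower convex hull of (x, g) are metastable or unstable, and hull
    segments that skip data points give the coexistence gaps in x."""
    pts = sorted(zip(xs, gs))
    hull = []                              # lower hull (monotone chain)
    for p in pts:
        while len(hull) >= 2:
            (x1, g1), (x2, g2) = hull[-2], hull[-1]
            cross = (x2 - x1)*(p[1] - g1) - (p[0] - x1)*(g2 - g1)
            if cross < tol:                # hull[-1] lies above the chord
                hull.pop()
            else:
                break
        hull.append(p)
    xset = {x for x, _ in pts}
    gaps = []
    for (xa, _), (xb, _) in zip(hull, hull[1:]):
        if any(xa < x < xb for x in xset): # tie line skipping data points
            gaps.append((xa, xb))
    return gaps

# e.g. gaps = coexistence_from_gibbs(x_grid, g_values)  # [(x1, x2), ...]
\end{verbatim}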
Sec. \ref{uniform} of the Appendix presents a proof that the uniform mixture of PHS
is always stable at bulk, i.e. no demixing is possible. In Sec. \ref{app_sf} of the same
Appendix the spinodal instability of uniform phases with respect to one-dimensional periodic
inhomogeneities is discussed by means of a bifurcation analysis.
\section{Results}
\label{results}
This section is devoted to presenting the results obtained from the numerical solutions
of Eqs. (\ref{solve2}) and (\ref{solve3}), which provide the equilibrium density profiles
$\rho_i(y)$ for fixed pressure $p_0^*$, and for a given
mixture composition ${\sf x}$. Varying ${\sf x}$ inside a given set of values
$\{{\sf x}_i=i/N_{\sf x},\ i=0,\dots,N_{\sf x}, \ N_{\sf x}\sim 100\}$ allows us to obtain
a sufficiently accurate Gibbs free-energy per particle, $g({\sf x})$ [from Eq. (\ref{elgibbs})]
to search for possible phase transitions and
calculate the phase diagrams. This section is divided into two parts. Sec. \ref{tres_medio_uno}
is concerned with a confined binary mixture of PHS with $\sigma_2/\sigma_1=1.5$ and several values
of pore width $H/\sigma_1$. Values of $H$ were chosen to ensure that only two small squares
(but not three) or one big plus one small square can fit
inside the channel along its transverse direction, whereas only one big square (but not two)
is allowed to fit (i.e. $2.5=1+\sigma_2/\sigma_1<H/\sigma_1<2\sigma_2/\sigma_1=3$). In Sec.
\ref{dos_uno} a mixture with $\sigma_2/\sigma_1=2$ is studied. This time, configurations
where one big plus one small square, or again two big squares, fit across the channel are both forbidden, which is
expressed by the condition $2<H/\sigma_1<1+\sigma_2/\sigma_1=3$.
\begin{figure}
\epsfig{file=Fig2.eps,width=2.5in}
\caption{Packing fraction $\eta$ vs. molar fraction ${\sf x}$ for SYM (red) and ASYM (blue)
configurations of a confined binary mixture of PHS with $\sigma_2/\sigma_1=1.5$,
$H/\sigma_1=2.6$ and $p_0^*=4$. The green curve
corresponds to the packing fraction for the close-packed configuration $\eta_{\rm cp}$
(see the text).
Note that the maxima of the curves $\eta({\sf x})$ in both
SYM and ASYM configurations are located close to ${\sf x}=0.6$, the maximum close-packing
value. Red and blue circles indicate the SYM and ASYM coexisting states, respectively.
Thus the red curve between red circles corresponds to metastable states. These states also occur
in the blue curve at left and right of the blue circles.}
\label{fig3}
\end{figure}
\subsection{The $\sigma_2/\sigma_1=3/2$ mixture}
\label{tres_medio_uno}
First we analyze the close-packing properties of the mixture.
For composition ${\sf x}\geq 3/5$ the close-packing configuration can be reached
by adding up along the channel two kinds of clusters in close contact. Big clusters,
${\cal N}_{\rm b}$ in number, consist of groups of five particles: two big squares
joined in the direction along the channel and in contact with one wall, and three small squares,
also joined along the channel, located on top of (or below) the big squares and occupying
the same length, parallel to the walls, as the big squares.
The other, smaller clusters, ${\cal N}_{\rm s}$ in number, are made of small squares grouped together
in dimers and consist of two squares, perfectly aligned along the transverse direction,
each one in contact with opposite walls. See Fig. \ref{fig0_new} for a sketch of a
possible close-packing configuration. The number of clusters should fulfill
the relation $3{\cal N}_{\rm b}+2{\cal N}_{\rm s}={\sf x} {\cal N}$ and $2{\cal N}_{\rm b}=(1-{\sf x}){\cal N}$
(and thus ${\cal N}_{\rm b}=(1-{\sf x}){\cal N}/2$
and ${\cal N}_{\rm s}=(5{\sf x}-3){\cal N}/4$) with ${\cal N}$ the total number of particles. The
packing fraction at close packing can be calculated as the ratio between the total area
occupied by all clusters divided by the total area, i.e.
\begin{eqnarray}
\eta_{\rm cp}=\frac{{\cal N}_{\rm b}(2\sigma_2^2+3\sigma_1^2)+2{\cal N}_{\rm s}\sigma_1^2}
{\left[3{\cal N}_{\rm b}\sigma_1+{\cal N}_{\rm s}\sigma_1\right]H}
=\frac{9-5{\sf x}}{3-{\sf x}}\times\frac{\sigma_1}{H},
\label{cp1}
\end{eqnarray}
for $\displaystyle{{\sf x}\geq 3/5}$.
For the case ${\sf x}\leq 3/5$ the close packing configuration can be reached by adding in close contact
along the channel the same big clusters as defined previously, with a total amount of ${\cal N}_{\rm b}$,
and small clusters, with a total number of ${\cal N}_{\rm s}$, this time formed by a single big square
in any position along the transverse direction. The
numbers $\{{\cal N}_{\rm b},{\cal N}_{\rm s}\}$ fulfill
$3{\cal N}_{\rm b}={\sf x} {\cal N}$ and $2{\cal N}_{\rm b}+{\cal N}_{\rm s}=(1-{\sf x}){\cal N}$ (and
consequently ${\cal N}_{\rm b}={\sf x} {\cal N}/3$ and ${\cal N}_{\rm s}=(3-5{\sf x}){\cal N}/3$).
Then the packing fraction at close packing can be calculated for $\displaystyle{{\sf x}\leq 3/5}$ as
\begin{eqnarray}
\eta_{\rm cp}=\frac{{\cal N}_{\rm b}(2\sigma_2^2+3\sigma_1^2)+{\cal N}_{\rm s}\sigma_2^2}
{\left(3{\cal N}_{\rm b}\sigma_1+{\cal N}_{\rm s}\sigma_2\right)H}=\frac{9-5{\sf x}}{6(1-{\sf x})}
\times \frac{\sigma_1}{H}.
\label{cp2}
\end{eqnarray}
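A quick numerical check of Eqs. (\ref{cp1}) and (\ref{cp2}) (Python sketch, ours):
\begin{verbatim}
def eta_cp(x, H_over_s1):
    """Close-packing fraction, Eqs. (cp1)-(cp2), for sigma2/sigma1 = 3/2."""
    if x >= 0.6:
        return (9 - 5*x) / (3 - x) / H_over_s1
    return (9 - 5*x) / (6*(1 - x)) / H_over_s1

# maximum at x = 3/5 and the two one-component limits, for H/sigma1 = 2.6
print(eta_cp(0.6, 2.6), eta_cp(0.0, 2.6), eta_cp(1.0, 2.6))
# -> 0.9615... (= 5 sigma1/2H), 0.5769... (= 3 sigma1/2H), 0.7692... (= 2 sigma1/H)
\end{verbatim}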
\begin{figure}
\epsfig{file=Fig3a.eps,width=2.5in}
\epsfig{file=Fig3b.eps,width=2.5in}
\caption{
Gibbs free-energy per particle in reduced thermal units minus a straight line
vs. mean molar fraction ${\sf x}$.
(a) $g^*\equiv g-17.856+13.184 {\sf x}$ and (b)
$g^*\equiv g-15.959+8.873 {\sf x}$. Two different intervals of ${\sf x}$ are shown,
located where transitions from SYM (red curve) to ASYM (blue
curve) [shown in (a)] and from ASYM to SYM [shown in (b)] phases take place.
Results correspond to a confined binary mixture of PHS with $\sigma_2/\sigma_1=1.5$,
$H/\sigma_1=2.6$ and $p_0^*=4$. Solid curves are least-square
polynomial fits to the red and blue symbols, which represent the calculated points.
The coexisting points are indicated with black symbols joined by dashed lines.}
\label{fig2}
\end{figure}
\begin{figure*}
\epsfig{file=Fig4a.eps,width=1.6in}
\epsfig{file=Fig4b.eps,width=1.6in}
\epsfig{file=Fig4c.eps,width=1.6in}
\epsfig{file=Fig4d.eps,width=1.6in}
\caption{Coexistence density profiles corresponding to the two first-order phase transitions shown
in Fig. \ref{fig2} for $\sigma_2/\sigma_1=1.5$, $H/\sigma_1=2.6$ and $p_0^*=4$. (a) and (b) correspond
to the low molar fraction region, whereas (c) and (d) refer to the high molar fraction region.
(a) and (c) are SYM phases, while (b) and (d) are ASYM phases. Blue and red curves
correspond to scaled density profiles $\rho_1(y)\sigma_1^2$ and $\rho_2(y)\sigma_2^2$ of
small and big species, respectively.}
\label{fig4}
\end{figure*}
The function $\eta_{\rm cp}({\sf x})$, given by Eqs. (\ref{cp1}) and (\ref{cp2}) for
the case $H/\sigma_1=2.6$, is plotted in green in Fig. \ref{fig3}. The maximum packing fraction
obviously corresponds to ${\sf x}=3/5$ with
$\displaystyle{\eta_{\rm cp}^{(\rm max)}=\eta_{\rm cp}(3/5)=5\sigma_1/2H}$.
The packing fractions for the one-component fluids composed of big and small
particles are respectively $\displaystyle{\eta_{\rm cp}(0)= 3\sigma_1/2H}$ and
$\displaystyle{\eta_{\rm cp}(1)=2\sigma_1/H}$. In the same figure we plot the results
from DFT calculations for the same pore-width $H/\sigma_1=2.6$
at a fixed pressure $p_0^*=4$. Two different solutions are obtained, corresponding
to two different local minima of the Gibbs free-energy per particle. The red line represents the so-called
symmetric (SYM) solution, which has density profiles symmetric with respect to a line parallel to the
$x$ axis that passes through the middle of the channel, i.e. $\rho_i(y)=\rho_i(H-y)$.
The blue line, in contrast, represents an asymmetric (ASYM) solution, with $\rho_i(y)\neq \rho_i(H-y)$.
Note that the ASYM solution only exists in a particular interval of molar fractions,
whereas the SYM profile exists for all ${\sf x}$. The former gives a higher value of mean packing
fraction $\eta$ (since the blue curve is above the red one). Both curves
have their maxima located close to ${\sf x}\approx 3/5$, where the maximum value at close-packing
is reached. The confined mixture exhibits two first-order phase transitions that take place as the molar
fraction is increased from 0 to 1.
Both the SYM-ASYM and ASYM-SYM transitions are correspondingly labeled in Fig. \ref{fig3}, with
the coexisting values shown by two pairs of red and blue circles.
The ASYM-phase
is stable in an interval of ${\sf x}$ between the blue circles of Fig. \ref{fig3}. This can be concluded from
Fig. \ref{fig2}, where we plot the Gibbs free-energy per particle
for two different ranges of ${\sf x}$ [(a) and (b)] located close to both
phase transitions. In both cases straight lines have been subtracted to improve visualization.
The circles correspond to values of ${\sf x}$ where DFT calculations were performed, and the
pressure was fixed to $p_0^*=4$. Red and blue curves are polynomial fits of the SYM and ASYM
solutions respectively, which were used to calculate coexistence through a double-tangent construction.
We checked that the energy $g$ of the ASYM-phase is always below that of the SYM-phase in the interval
${\sf x}\in[0.2,0.5]$. The four coexisting density profiles are shown in Fig. \ref{fig4}. Their symmetric
or asymmetric character is quite apparent. In the ASYM-phase big and small squares are preferentially
adsorbed at different walls, a type of microsegregation transition. In contrast, in the SYM phase
both species are equally adsorbed at both walls.
\begin{figure}
\epsfig{file=Fig5a.eps,width=2.5in}
\epsfig{file=Fig5b.eps,width=2.48in}
\caption{Phase diagrams of a confined binary mixture of PHS with $\sigma_2/\sigma_1=1.5$ and
$H/\sigma_1=2.6$.
(a) $p_0^*$ vs. ${\sf x}$. (b) $\eta$ vs. ${\sf x}$. Red and blue filled circles represent the coexisting
values of the SYM and ASYM states, respectively. Open circle represents the left-tricritical
point separating the coexisting binodals (solid lines) from the continuous phase-transition curve
(dashed line). The open triangle represents the right-tricritical point.
}
\label{fig5}
\end{figure}
The driving force for microsegregation is entropy. It is clear that, at close packing,
two possible configurations of big clusters in the SYM phase are equally represented,
i.e. big clusters containing big squares in contact with different walls are equally likely.
In contrast, in the ASYM-phase this symmetry is broken, with one of the configurations overrepresented with
respect to the other. Close packing can be attained by both ASYM and SYM phases, but the latter
is more disordered in terms of mixing entropy and consequently has a lower free energy. However, far from
close packing, when pressure is not too large (e.g. $p_0^*=4$), the situation can be different.
Since big squares are alternately adsorbed at both walls in the SYM phase, while
the space between big squares in contact with the same wall is moderately filled with small squares,
it is clear that big squares cannot pass each other: the motion
of small squares along the $x$-axis is severely restricted due to the jammed configuration of large particles.
Thus the configurational entropy, related to the total number of allowed particle configurations,
drops and consequently the free-energy increases as compared to that of the quasi-perfect ASYM-phase.
In the latter big squares are not jammed (since most of them are adsorbed at the same wall) and
therefore particles can move along the channel with much more freedom (the only constraint being
hard-core interactions with the lateral neighbors).
Of course particles can also move along the $y$-axis, but they have similar
freedom in both phases.
\begin{figure}
\epsfig{file=Fig6a.eps,width=2.5in}
\epsfig{file=Fig6b.eps,width=2.5in}
\caption{Phase diagrams for the confined binary mixture of PHS with $\sigma_2/\sigma_1=1.5$
and $H/\sigma_1=2.8$. (a) $p_0^*$ vs. ${\sf x}$, and (b) $\eta$ vs. ${\sf x}$.
Red and blue circles represent the coexisting values corresponding to
SYM and ASYM states, respectively. The open circle indicates the azeotropic point.}
\label{fig6}
\end{figure}
\begin{figure*}
\epsfig{file=Fig7a.eps,width=1.6in}
\epsfig{file=Fig7b.eps,width=1.6in}
\epsfig{file=Fig7c.eps,width=1.6in}
\epsfig{file=Fig7d.eps,width=1.6in}
\caption{Coexistence density profiles corresponding to the two first-order phase transitions shown
in Fig. \ref{fig6} for $\sigma_2/\sigma_1=1.5$, $H/\sigma_1=2.8$ and $p_0^*=4$. (a) and (b) correspond
to the low molar fraction region, whereas (c) and (d) refer to the high molar fraction region.
(a) and (c) are SYM phases, while (b) and (d) are ASYM phases. Blue and red curves
correspond to scaled density profiles $\rho_1(y)\sigma_1^2$ and $\rho_2(y)\sigma_2^2$ of
small and big species, respectively.}
\label{fig7}
\end{figure*}
We performed coexistence calculations for several values of pressure to construct a phase diagram for
$H/\sigma_1=2.6$. This is shown in Fig. \ref{fig5}, in the $p^*-{\sf x}$ and $\eta-{\sf x}$ planes.
We see that the ASYM stability region is laterally bounded (in the ${\sf x}$ direction)
by first-order SYM-ASYM and ASYM-SYM transition lines.
At low pressures the SYM-ASYM transition terminates in a
left-tricritical point (open circle). From this point the
transition becomes continuous. This line meets the
binodals of the ASYM-SYM transition at the right-tricritical point
(open triangle). Note the strong fractionation
of the SYM-ASYM transition: the compositions of the coexisting phases
are much more different than
those of the ASYM-SYM transition. As more packed configurations are
reached by increasing the amount of small squares, the phase diagram in
the $\eta-{\sf x}$ plane [panel (b)] becomes highly asymmetric, i.e.
there is a large difference in packing fraction values of the coexistence
binodals to the left and right of the end-critical point.
The phase diagram for a wider pore width of $H/\sigma_1=2.8$, shown
in Fig. \ref{fig6}, was also calculated.
In wider pores the entropically-driven microsegregation, resulting from
particle-motion restrictions in jammed SYM-configurations, still operates,
but to a lesser extent because the transverse spatial
freedom of particles increases with $H/\sigma_1$. Thus the ratio
between the gain in lateral free length (resulting from microsegregation)
and the transverse free length is lower. We should remind ourselves that
configurational entropy competes with mixing entropy (which prevents
microsegregated states). As a final result the region of ASYM-phase
stability in the $p_0^*-{\sf x}$ phase diagram shrinks with
$H/\sigma_1$, a fact that can be confirmed by looking at Fig. \ref{fig6} (a).
It can be seen that for the highest pressure used ($p^*=6.2$)
the stability interval in ${\sf x}$ of the ASYM-phase is now
$\sim [0.23,0.65]$, smaller than $[0.15,0.75]$ (which corresponds
to the $H/\sigma_1=2.6$-case). Another interesting feature of the
phase diagram is the weaker character of the first-order SYM-ASYM transition
at the left of the azeotropic point (open circle). The azeotropic
character of the latter can be inferred from panel (b), which demonstrates
the existence of a coexisting gap in $\eta$ at this point,
despite the fact that the composition of the coexisting phases is the same.
The binodals are monotonically-increasing functions of ${\sf x}$, showing the
higher packing inside the pore resulting from
an increase in the number of small squares.
Fig. \ref{fig7} depicts the
four coexisting density profiles
for this pore-width and with pressure fixed to $p_0^*=4$.
The following results can be extracted: (i) density profiles are broadened
compared to those for the thinner pore, and (ii) adsorption of big squares
at the walls is increased: the heights of the central plateau in the density
profile $\rho_2(y)$ [see (a) and (b)] are lower than those of
Fig. \ref{fig4}(a) and (b). This effect can be understood in terms of
the low values of coexisting compositions for the SYM-ASYM transition
in the thin pore: there exists a large number of big squares that do not
contribute to the formation of the big clusters and freely
fluctuate between both walls.
\begin{figure}
\epsfig{file=Fig8a.eps,width=2.5in}
\epsfig{file=Fig8b.eps,width=2.5in}
\caption{(a) Differences in coexisting molar and packing
fractions of ASYM phases, $\Delta {\sf x}\equiv {\sf x}^{(a,2)}-{\sf x}^{(a,1)}$ (blue) and
$\Delta\eta\equiv \eta^{(a,2)}-\eta^{(a,1)}$ (red), as a function of the scaled free length
$(H-\sigma_1-\sigma_2)/\sigma_1$, for a binary mixture with $\sigma_2/\sigma_1=1.5$
and pressure $p_0^*=5$.
(b) Equation of state (EOS) of a confined binary mixture with
$\sigma_2/\sigma_1=1.5$ and $H/\sigma_1=2.8$. The
red and blue solid lines correspond to SYM and ASYM states, with compositions fixed to
their corresponding coexisting values at $p_0^*=4$
(i.e. ${\sf x}=0.36611$ and ${\sf x}=0.38352$ respectively).
Coexisting states are indicated by empty circles.
The EOS corresponding to a SYM phase with ${\sf x}=0.38352$ is shown by a dashed red curve.
Note that this curve intersects the blue line at high pressures, indicating that the ASYM states will
become unstable and an upper azeotropic point probably exists in the phase diagram.
}
\label{fig8}
\end{figure}
The decrease in ASYM phase stability with pore-width $H/\sigma_1$
at a fixed pressure (in particular for $p_0^*=4$)
is confirmed in Fig. \ref{fig8} (a), where we plot the interval $\Delta {\sf x}={\sf x}^{a,2}-{\sf x}^{a,1}$ (with
${\sf x}^{a,i}$ the coexisting values of the left ($i=1$) and right ($i=2$) ASYM-binodals)
in which stable ASYM solutions are found vs. the free transversal length
$(H-\sigma_1-\sigma_2)/\sigma_1$. Also plotted are the difference in
packing fractions
at these points ($\Delta \eta=\eta^{(a,2)}-\eta^{(a,1)}$) which does not
change much but exhibits a maximum.
As mentioned before, the entropic mechanism that drives microsegregation
at finite pressure does not operate at close packing because in this case
(infinite pressure) the lateral free length that allows particle motion
is absent, while mixing entropy favors the formation of SYM states.
Therefore, at very high pressure, an upper azeotropic point is expected
in the phase diagram: we conjecture the existence of a finite region
in the phase diagram where a reentrant ASYM-phase is stable. An indication
that this could certainly be the case can be seen in
Fig. \ref{fig8} (b), where we plot the EOS of the
SYM (solid-red) and ASYM (solid-blue) phases for fixed values of compositions,
${\sf x}=0.36611$ and ${\sf x}=0.38352$ respectively. These are the
coexisting values of the SYM-ASYM transition at $p_0^*=4$.
The SYM and ASYM phases are stable in the intervals
$0<p_0^*<4$ and $4<p_0^*\lesssim 9$, respectively. Note how the EOS of
a SYM phase with a fixed composition
${\sf x}=0.38352$ (dashed red line) intersects the blue solid curve, which
indicates that for pressures $p^*\gtrsim 9$ the ASYM phase
might lose stability with respect to the SYM phase.
Unfortunately our numerical scheme to implement the DF minimization
becomes unstable at these high pressures, and an alternative method, such
as a density-profile parameterization, is needed to validate this conjecture.
\begin{figure}
\epsfig{file=Fig9a.eps,width=2.5in}
\epsfig{file=Fig9b.eps,width=2.5in}
\caption{(a) Packing fraction vs. molar fraction for a confined binary mixture of PHS with
$\sigma_2/\sigma_1=2$, $p_0^*=5$ and $H/\sigma_1=2.2$ (solid black) $2.4$ (solid red), and
$2.6$ (solid blue).
Close-packing values, $\eta_{\rm cp}$, for the same values of $H/\sigma_1$, are shown by dashed lines.
(b) Scaled Gibbs free-energy per particle minus a straight line,
$g^*\equiv \beta g-27.232+18.124 {\sf x}$, vs.
molar fraction for the same mixture and for $H/\sigma_1=2.4$.
The solid circles joined with a dashed line indicate the coexistence values of ${\sf x}$.}
\label{fig9}
\end{figure}
\subsection{The $\sigma_2/\sigma_1=2$ mixture}
\label{dos_uno}
\begin{figure}
\epsfig{file=Fig10a.eps,width=2.5in}
\epsfig{file=Fig10b.eps,width=2.5in}
\caption{Phase diagrams in the (a) pressure-composition, and (b) packing fraction-composition
planes of a confined binary mixture of PHS with $\sigma_2/\sigma_1=2$, and $H=2.4$. In (b)
four isobars, for $p_0^*=6$ (blue), 5 (red), 4 (green), and $3.2$ (orange) are shown
with dashed lines.}
\label{fig10}
\end{figure}
\begin{figure}
\epsfig{file=Fig11a.eps,width=2.5in}
\epsfig{file=Fig11b.eps,width=2.5in}
\caption{Density profiles of a confined binary mixture with
$\sigma_2/\sigma_1=2$, $H/\sigma_1=2.4$ and $p_0^*=5$ corresponding to the coexisting phases
with (a) low and (b) high molar fractions.}
\label{fig11}
\end{figure}
\begin{figure}
\epsfig{file=Fig12.eps,width=2.5in}
\caption{Differences in the coexisting molar fraction,
$\Delta {\sf x}={\sf x}^{(2)}-{\sf x}^{(1)}$ (black), and packing
fraction, $\Delta \eta=\eta^{(2)}-\eta^{(1)}$ (red), of the demixed phases as
a function of the free length,
$(H-\sigma_2)/\sigma_1$, for a binary mixture of confined PHS with $\sigma_2/\sigma_1=2$ and $p_0^*=5$.}
\label{fig12}
\end{figure}
To find the close-packing configurations for $\sigma_2/\sigma_1=2$
we apply the same reasoning as before: The close-packed
limit can be reached by joining ${\cal N}_{\rm b}$
big clusters (constituted by a single big square) with ${\cal N}_{\rm s}$
small clusters (formed by dimers of small squares perfectly
aligned along $y$). The total area occupied by both clusters
is ${\cal N}_{\rm b}\sigma_2^2+2{\cal N}_{\rm s}\sigma_1^2$, whereas
the total occupied length along the channel is
${\cal N}_{\rm b}\sigma_2+{\cal N}_{\rm s}\sigma_1$.
As the numbers $\{{\cal N}_{\rm b},{\cal N}_{\rm s}\}$
fulfill the condition ${\cal N}_{\rm b}=(1-{\sf x}){\cal N}$ and ${\cal N}_{\rm s}={\sf x}{\cal N}/2$, we arrive at
\begin{eqnarray}
\eta_{\rm cp}&=&\frac{{\cal N}_{\rm b}\sigma_2^2+2{\cal N}_{\rm s}\sigma_1^2}{\left
({\cal N}_{\rm b}\sigma_2+{\cal N}_{\rm s}
\sigma_1\right)H}=\frac{(1-{\sf x})\left(\sigma_2/\sigma_1\right)^2+{\sf x}}
{2(1-{\sf x})\sigma_2/\sigma_1+{\sf x}}\times \frac{2\sigma_1}{H}\nonumber\\&=&\frac{2\sigma_1}{H}.
\end{eqnarray}
The close-packing value does not depend on composition.
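A short numerical check (Python sketch, ours) that the previous expression is indeed independent of ${\sf x}$ for $\sigma_2/\sigma_1=2$:
\begin{verbatim}
def eta_cp(x, H_over_s1, r=2.0):    # r = sigma2/sigma1
    return ((1 - x)*r**2 + x) / (2*(1 - x)*r + x) * 2.0 / H_over_s1

# every composition gives 2 sigma1/H (= 0.8333... for H/sigma1 = 2.4)
print([round(eta_cp(x, 2.4), 6) for x in (0.0, 0.25, 0.5, 0.75, 1.0)])
\end{verbatim}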
In Fig. \ref{fig9} (a) these limits are shown for $H/\sigma_1=2.2$,
2.4 and 2.6. Also the functions $\eta({\sf x})$ are plotted
for the same values of
pore widths as obtained from the DF minimization, by fixing the pressure
to $p_0^*=5$. Clearly the packing fractions, monotonically decreasing functions
of ${\sf x}$, do not change too much with composition as compared to the
case $\sigma_2/\sigma_1=1.5$. The intervals of ${\sf x}$ between the solid
circles represent the instability region in mixture composition with
respect to demixing transitions. This behaviour can be confirmed by plotting
the Gibbs free-energy per particle (minus a straight line)
$g^*$ vs. ${\sf x}$, as we do in panel (b) for $H/\sigma_1=2.4$ and $p_0^*=5$.
Strong demixing between two confined phases, each one rich in one of the species,
is confirmed. The phase separation has a clear lateral symmetry,
i.e. both phases are separated along the channel with a
Gibbs-dividing interface perpendicular to the channel. This is a kind of
macrosegregation, completely different to the microsegregation obtained
before for the case $\sigma_2/\sigma_1=1.5$. However, the density profiles are
now always symmetric.
\begin{figure}
\epsfig{file=Fig13a.eps,width=2.5in}
\epsfig{file=Fig13b.eps,width=2.5in}
\caption{(a) Phase diagram $p_0^*-{\sf x}$
of a binary mixture of PHS with $\sigma_2/\sigma_1=2$,
$H/\sigma_1=2.4$ and PBC.
Solid lines represent the coexisting binodals, whereas dashed lines
indicate continuous phase transitions.
Regions of stability of fluid (F) and different columnar $\rm{C}^{(\alpha)}_{\it{i}}$ are
correspondingly shown. (b) The same phase diagram as in (a), but
in the $\eta-{\sf x}$ plane. Filled circles:
calculated binodal points. Open circle: critical point. Open square: tricritical point.}
\label{fig13}
\end{figure}
\begin{figure*}
\epsfig{file=Fig14a.eps,width=1.6in}
\epsfig{file=Fig14b.eps,width=1.6in}
\epsfig{file=Fig14c.eps,width=1.6in}
\epsfig{file=Fig14d.eps,width=1.6in}
\caption{Coexisting density profiles of a binary mixture with
$\sigma_2/\sigma_1=2$, $H/\sigma_1=2.4$, $p_0^*=4$ and PBC.
From (a) to (d) density profiles correspond to
${\sf x}^{(1)}>{\sf x}^{(2)}>{\sf x}^{(3)}>{\sf x}^{(4)}$,
i.e. the coexistence values of molar fractions for both demixing
transitions found in the phase diagram of Fig. \ref{fig13} at
the corresponding pressure.}
\label{fig14}
\end{figure*}
To find the phase diagram, we have calculated the coexisting values of
${\sf x}$ and $\eta$ for a set of different values of $p_0^*$,
and for a fixed pore width $H/\sigma_1=2.4$,
via the double-tangent construction of $g({\sf x})$. Phase diagrams
in $p^*-{\sf x}$ and $\eta-{\sf x}$ coordinates are plotted
in Fig. \ref{fig10} (a) and (b) respectively.
Dashed lines in (b) correspond to different isobars inside the demixed region.
The phase separation ends in a critical point (white circle), below which
the mixture is stable. As pressure is increased from that point,
the coexisting phases become more similar to the confined one-component
fluids. As an example of coexisting phases, Fig. \ref{fig11} shows
the density profiles of small and big squares for the (a) low-${\sf x}$ and
(b) large-${\sf x}$ coexisting phases, with $p_0^*=4$ and $H/\sigma_1=2.4$.
Note that in panel (a) the density profile of small species is not visible at
the scale of the figure, demonstrating the quasi-one-component character of the mixture.
We can see in panel (b) that, while big squares always fluctuate close to
the center of the pore, small squares are strongly adsorbed at both walls.
The phase separation once again is related to entropy.
When both species are mixed,
e.g. when clusters formed by dimers of small squares are surrounded by big
squares, lateral motion of small particles is strongly restricted because
small and big species cannot pass each other. Also,
if one dimer of small particles is located between two big squares,
motion of these highly constrained small squares entails the
breaking of dimers, with a lowering in the local packing fraction.
When the mixture is well separated, small squares can move in the lateral
direction much more freely because the presence of other small particles
in front does not constrain their motion. Thus, the dimers can be continuously
formed and destroyed without altering the local packing of particles.
\begin{figure}
\epsfig{file=Fig15.eps,width=3.in}
\caption{Schematic of particle configurations of two different columnar phases found in the phase
diagram with PBC: (a) ${\rm C}_1$, and (b) ${\rm C}_2^{(a,b)}$.}
\label{fig15}
\end{figure}
An interesting issue is how the demixing transition depends
on pore width $H$. The answer to this question is given by Fig. \ref{fig12},
where the demixing gaps in ${\sf x}$ (black) and $\eta$ (red) are plotted
as a function of the free length $(H-\sigma_2)/\sigma_1$ for a fixed pressure
$p_0^*=5$. As the pore becomes wider the demixing, in terms of
fractionation, is stronger ($\Delta {\sf x}\equiv
{\sf x}^{(2)}-{\sf x}^{(1)}$ is an increasing function of $H$),
while the gap in packing fraction decreases.
The latter result is expected because packing of squares inside the pore
is less effective as the pore becomes wider, so the two
coexistence values of $\eta$ decrease, to such an extent that the difference
$\Delta\eta\equiv \eta^{(2)}-\eta^{(1)}$ is a monotonically
decreasing function of $H$. However, the relative gap,
$\Delta\eta/\eta^{(1)}$, turns out to be constant with a value close to $0.07$.
This interesting trend, namely a stronger demixing as $H$ increases,
is opposite to that obtained for the $\sigma_2/\sigma_1=1.5$ mixture.
As shown in the previous section, the microsegregation transition
is enhanced when the pore becomes narrower.
To end this section, we comment on the relation between the
phase behavior of the confined system and that of a similar system
subject to periodic boundary conditions (PBC). To investigate this,
we have imposed the conditions $\rho_i(y+H)=\rho_i(y)$ on the density
profiles, focusing on the binary mixture with $\sigma_2/\sigma_1=2$.
The period $H$ was chosen to be equal to the pore width of one of the
mixtures analysed previously, i.e. $H/\sigma_1=2.4$.
We did not minimize the DF with respect to $H$, with the aim of making
the comparison of the two results meaningful. Consequently the phase diagram
presented below is not the bulk one. Note that PBC are normally used to
mimic bulk phase behavior when the system is infinite along the $y$
direction, with the inhomogeneous phase being periodic along the same
direction. But the same condition can also describe a finite system of
dimension $H$ in the $y$ direction. Unfortunately the DF is unable to
distinguish both situations, which is a strong drawback of this
theoretical tool. Obviously, if a DF based on the two-body probability density,
instead of the one-body density, could be constructed, it would certainly
contain information on the finiteness of the system along $y$. Therefore,
at present, results from the (one-body density-based) DF and the TMM
applied to the study of systems with PBC cannot be compared \cite{Gurin7}.
Fig. \ref{fig13} shows the phase diagram as obtained from DF minimization.
The dashed lines represent continuous transitions
between a fluid of PHS and
a periodic columnar phase with period $H/\sigma_1=2.4$.
The latter was
calculated by searching for the divergence of the
structure-factor inverse matrix, as described in Sec. \ref{app_sf}. The solid lines
(which join the calculated points) are the coexisting binodals of the
demixing transitions.
For relatively high composition, ${\sf x}\gtrsim 0.7$, and fixed pressure $p_0^*=4$, we find that the stable phase is the so-called
${\rm C}_1$ columnar phase, formed by two layers of small squares (of period $d\equiv H/2=1.2\sigma_1$) where
the centers of mass of big squares occupy interstitial
positions between the layers and the density
profiles are out of phase: $\rho_2(y)=\rho_1(y+d/2)$.
See Fig. \ref{fig14} (a), where these density profiles are
plotted, and Fig. \ref{fig15} (a) for a sketch of particle configurations. Note that big squares
intersect the two adjacent layers formed by small squares.
As ${\sf x}$ is decreased this phase loses stability at
${\sf x}\sim 0.7$, and the
system exhibits strong demixing to the so-called ${\rm C}_2^{(b)}$-columnar phase,
with a composition ${\sf x}\sim 0.3$ and formed by layers of big squares with small squares mostly microsegregated
at the interstitials [see Fig. \ref{fig14} (b) for the density profiles and Fig. \ref{fig15} (b) for a
sketch of particle configurations]. Now the periodicity is $d=H$. By further decreasing ${\sf x}$ it is found that this
phase is stable up to ${\sf x}\sim 0.2$, where a new phase transition takes place to the so-called ${\rm C}_2^{(a)}$
phase. This is very similar in structure to the ${\rm C}_2^{(b)}$ phase, but the former exhibits a domed-like
density profile for the big squares
[see Fig. \ref{fig14} (c)], while the latter has the usual
sharply-peaked form
[see Fig. \ref{fig14} (d)], with a small amount of small
squares located at the interstitials. The
${\rm C}_2^{(a)}-{\rm C}_2^{(b)}$ and ${\rm C}_1-{\rm C}_2^{(b)}$ transitions end in critical and tricritical points,
respectively. The main conclusion drawn from these results is that both scenarios,
strong demixing and microsegregation,
are also present in a PHS fluid subject to PBC. This is an indication that
the bulk phase diagram will also contain these two features.
Crystalline phases (where both density profiles depend on both spatial coordinates)
were not included in our study. At sufficiently high pressure, crystals will certainly become
more stable than the exotic one-dimensional profiles we have found in the region of
stability of $C_2^{(b)}$ (not shown here).
\section{Discussion and Conclusions}
\label{conclusions}
We have used the DF formalism, based on the FMT, to study the packing properties of extremely
confined mixtures of PHS in a slit pore. Two types of mixtures have been analysed in detail
by appropriately choosing particle sizes and pore width. In a first study, parameters were tuned
to avoid configurations where two big squares are located opposite to each other while dimers of
one big and one small or two small squares, but not three of them, can fit into the channel.
In a second study, parameters were arranged so as to prevent dimers formed by one big and one small
square from fitting into the channel, while two small squares can still fit.
We have shown that the theory predicts micro- and macrosegregation phase transitions for the first
and second mixture, respectively.
Using the Gibbs free energy potential for a set of fixed pressures,
coexisting packing and molar fractions were calculated via a double-tangent construction.
Thus the first-order character of phase transitions at most pressures was identified,
and boundaries of stability regions for mixed and micro (macro)-demixed states were traced out.
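For reference, and with notation introduced only for this remark, write $g({\sf x})$ for the Gibbs free energy per particle at fixed pressure. The double-tangent construction then determines the coexisting compositions ${\sf x}^{(1)}$ and ${\sf x}^{(2)}$ from
\begin{align*}
g'\big({\sf x}^{(1)}\big)=g'\big({\sf x}^{(2)}\big)=\frac{g\big({\sf x}^{(2)}\big)-g\big({\sf x}^{(1)}\big)}{{\sf x}^{(2)}-{\sf x}^{(1)}},
\end{align*}
which is equivalent to the equality of the chemical potential of each species in the two coexisting phases.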
All phase transitions found have an entropic character, which is ultimately related to the
jammed configurations of particles. These configurations arise when two big
squares are close to each other and, at the same time and for the first mixture, they are
symmetrically adsorbed at both walls.
The jammed configurations severely restrict lateral motion of small particles (thus decreasing the
configurational entropy), and these can explore a limited space as compared to that in macro- or
microsegregated mixtures. Finally, by imposing PBC, we showed that
demixing transitions between different columnar phases
also take place in systems without external potentials restricting particle positions.
In this case all demixed phases found also have a microsegregated structure, with one of the
species forming the main columns and the others occupying the interstitial regions.
A comment on the real nature of phase transitions obtained here for the confined system
is in order. Exact calculations using the TMM for one-component hard disks, squares, rectangles
or rhombuses confined in a slit geometry, with at most two layers of particles, show that these
$1+\epsilon$-dimensional systems do not exhibit true phase transitions \cite{Gurin5,Gurin4,Gurin2,Gurin1}. However their structural properties can dramatically change as pressure is increased.
This behaviour is usually associated with a peculiar shape of the EOS which,
under certain conditions, contains a plateau-like segment, with a corresponding sharp
peak visible in the heat capacity. With these results in mind,
our analysis based on the (mean-field) DFT suggests that changes in particle configurations,
driven by entropic forces, are adequately described by the theory, while the corresponding
phase transitions are not. Our claim is that, for high enough pressures, the two confined mixtures
studied here will contain an important number of large micro- and macrosegregated clusters, respectively.
Although these clusters can symmetrically adopt two configurations in the case of the first mixture
studied, their presence can be confirmed by calculating the two-body particle correlation function
using TMM.
The most important result from our mean-field model, as applied to the second confined mixture
(with $\sigma_2/\sigma_1=2$), consists in the prediction of a lateral demixing transition
between two phases, each one rich in one species, at high enough pressures.
We should bear in mind that, once monomers or dimers of small particles become located between
two big squares, they will not be able to escape from the cage formed by big particles,
since small and big squares cannot pass each other. Thus, if an equimolar
mixture is initially prepared in a configuration where particles are randomly positioned
(and consequently there is a high probability to find many small particles between the large ones)
at high enough packing fraction, the mixture will become thermodynamically unstable with respect
to phase segregation. However, the system will be unable to reach equilibrium (with two phases
laterally segregated), as predicted from the thermodynamical analysis, due to severe particle jamming.
These equilibrium states are not accessible in our strictly two-dimensional system.
However our system could still approximately describe an experimental realization consisting of a
colloidal binary mixture of hard cubes sedimented in a container with
a nano-sculpted bottom surface under micro-gravity conditions. The surface could be nano-structured
with quasi-2D channels, such that monolayers of sedimented cubes inside each channel were in contact
with a ``bath'' of particles. The fact that particles can now enter or escape from the channel
avoids jamming effects, and the mixture could reach a final state with two segregated ``phases''
inside the channels. Note that the system would not be strictly two-dimensional as the channels
would interact with the bulk regions in a nontrivial manner.
\section{Weighted floating bodies and polytopal approximation in $\R^n$}\label{sec:weighted}
Let $\cK({\R^{n}})$ denote the set of convex bodies (that is, compact convex sets) in ${\R^{n}}$ with non-empty interior.
For $K\in\cK({\R^{n}})$ and $\phi,\psi:K\to(0,\infty)$ integrable, define, for $A\subset K$ measurable, the measure $\Phi$ by $\Phi(A)=\int_A \phi$ and the measure $\Psi$ by $\Psi(A)=\int_A \psi$. If $\int_{K} \phi=1$, then $\Phi$ is a probability measure and we write $\E_\Phi$ for the expectation with respect to $\Phi$.
\goodbreak
\subsection{Weighted floating bodies}\label{weight_float}
For $\delta >0$, the weighted floating body $K^\phi_\delta$ is the intersection of all
closed half-spaces
whose defining hyperplanes $H$ cut off sets of $\Phi$-measure less than
or equal to $\delta$ from $K$, that is,
\begin{equation}\label{eqn:wfb}
K^\phi_{\delta}=\bigcap \big\{ H^{\scriptscriptstyle -} : \Phi(K \cap H^{\scriptscriptstyle +})\le \delta\big\},
\end{equation}
where $H^\pm$ are the closed half-spaces bounded by the hyperplane $H$.
For $\phi\equiv 1$, we obtain (convex) floating bodies, which were introduced (independently) in \cite{BL:1988, SW:1990} as a generalization of the classical {\em floating bodies} (see \cite[Chapter 10.6]{Schneider:2014} for more information). Weighted floating bodies were introduced in \cite{Werner:2002} and generalizations of \eqnref{eqn:floatingbody} were established there.
The following result generalizes those results from volume to a general measure $\Psi$.
\begin{theorem}\label{thm:wfb}
For $K\in\cK({\R^{n}})$ and $\phi, \psi: K\to (0,\infty)$ continuous,
\begin{align}\label{eqn:limit}
\lim_{\delta \to 0} \frac{\Psi(K) - \Psi(K^\phi_\delta)}{\delta^\frac{2}{n+1}}
&= \alpha_n \int_{\partial K} H_{n-1}(K,x) ^\frac{1}{n+1} \phi(x)^{-\frac2{n+1}} \psi(x) \,dx,
\end{align}
where
\begin{align}\label{eqn:const2}
\alpha_n:=\frac{1}{2} \bigg(\frac{n+1}{v_{n-1}}\bigg)^{\frac{2}{n+1}}
\end{align}
and $v_{n-1}$ is the $(n-1)$-dimensional volume of the $(n-1)$-dimensional unit ball.
\end{theorem}
\noindent
The proof is given in Section \ref{sec:proof}.
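To put the constant into perspective, note that in the plane $v_1=2$ and hence $\alpha_2=\tfrac12\big(\tfrac32\big)^{2/3}$. Moreover, for $\phi=\psi\equiv1$ the integral on the right-hand side of \eqnref{eqn:limit} is just the affine surface area $\int_{\partial K}H_{n-1}(K,x)^{\frac1{n+1}}\,dx$ of $K$, so that Theorem \ref{thm:wfb} recovers the classical formula \eqnref{eqn:floatingbody} for the convex floating body.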
\subsection{Random polytopes}\label{random_polytope}
For $K\in\cK(\R^n)$, let $\phi: K\to (0,\infty)$ be a probability density and $K^\Phi_m$ the convex hull of $m$ independent random points chosen according to $\Phi$. The following generalization of (\ref{eqn:schuett}) was established by B\"or\"oczky, Fodor, and Hug \cite[Theorem~3.1]{BFH:2010}.
\begin{theorem}[\! \cite{BFH:2010}]\label{thm:wr}
Let $K\in\cK({\R^{n}})$ and $ \psi: K\to(0,\infty)$ be continuous.
If $\phi: K\to (0,\infty)$ is a continuous probability density and the random polytope $K^\Phi_m$ is the convex hull of $m$ independent random points chosen according to $\Phi$, then
\begin{align}\label{eqn:randomapprox}
\lim_{m\to \infty} \E_{\Phi}\big(\Psi(K) - \Psi(K^\Phi_m)\big) \,{m^\frac{2}{n+1}}
&= \beta_n \, \int_{\partial K} H_{n-1}(K,x) ^\frac{1}{n+1} \phi(x)^{-\frac2{n+1}} \psi(x) \, dx,
\end{align}
where
\begin{align}\label{eqn:const}
\beta_n
:= \frac{(n^2+n+2)(n^2+1)}{2(n+3)\cdot (n+1)!}\, \Gamma\bigg(\frac{n^2+1}{n+1}\bigg)\,\bigg(\frac{n+1}{v_{n-1}}\bigg)^{\frac{2}{n+1}}.
\end{align}
\end{theorem}
Efron showed that from the expected volume of a random polytope, the expected number of vertices $f_0(K_m)$ can be easily obtained. The same argument applies here and
\begin{align*}
\E_{\Phi} f_0(K^\Phi_m) = m \big(1-\E_\Phi \Phi(K_{m-1}^\Phi)\big),
\end{align*}
(cf.\ \cite{Hug:2013}).
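For the reader's convenience, here is a sketch of the argument (not needed in the sequel). Write $X_1,\dots,X_m$ for the sample, so that $K^\Phi_m=\mathrm{conv}\{X_1,\dots,X_m\}$. A point $X_i$ is a vertex of $K^\Phi_m$ if and only if $X_i\notin\mathrm{conv}\{X_j: j\neq i\}$. Taking expectations of the corresponding indicator functions and using exchangeability,
\begin{align*}
\E_\Phi f_0(K^\Phi_m)
= m\,\E_\Phi\,\mathbf{1}\big[X_m\notin K^\Phi_{m-1}\big]
= m\big(1-\E_\Phi \Phi(K^\Phi_{m-1})\big),
\end{align*}
where the last equality follows by conditioning on $X_1,\dots,X_{m-1}$.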
B\"or\"oczky, Fodor, and Hug \cite[Corollary 3.2]{BFH:2010} deduced the following result.
\begin{corollary}[\!\! \cite{BFH:2010}]\label{cor:wr}
Let $K\in\cK({\R^{n}})$.
If $\phi: K\to (0,\infty)$ is a continuous probability density and the random polytope $K^\Phi_m$ is the convex hull of $m$ independent random points chosen in $K$ according to the probability measure $\Phi$, then
\begin{align*}
\lim_{m\to \infty} \E_{\Phi}f_0(K^\Phi_m) \,m^{-\frac{n-1}{n+1}}
= \beta_n \, \int_{\partial K} H_{n-1}(K,x)^\frac{1}{n+1} \phi(x)^{\frac{n-1}{n+1}}\, dx,
\end{align*}
where $\beta_n$ is the constant defined in \eqnref{eqn:const}.
\end{corollary}
\goodbreak
\subsection{Random polyhedral sets}
Another model for random polytopes, which was also suggested by R\'enyi and Sulanke \cite{RS:1968}
and which can be considered dual to the above, is the following:
Given a convex body $K$ in $\R^n$, choose $m$ random closed half-spaces that contain $K$ in a way that is described below
and denote their intersection by $K^m$.
The \emph{random polyhedral set} $K^m$ may be unbounded and therefore one usually considers $K^m$
intersected with a bounded neighborhood of $K$.
The classical choice is the parallel body $K+\mathbb B^n$ of $K$, where $\mathbb B^n$ is the closed Euclidean unit ball,
that is, $K+\mathbb B^n$ is the set of all points of distance at most $1$ from $K$.
To describe our choice of random half-spaces, we first consider the set $\mathcal{H}$ of all closed half-spaces in $\R^n$. We parametrize a closed half-space $H^-(u,t)$ by its outer normal $u\in\S^{n-1}$ and the signed distance $t$ of its boundary hyperplane from the origin, i.e.,
\begin{align*}
H^-(u,t) := \{ x\in\R^n : x\cdot u \leq t\}.
\end{align*}
The \emph{support function} $h_K$ of $K$ is defined, for $u\in\R^n$, by $h_K(u) = \max \{u\cdot x : x\in K\}$.
For $u\in\S^{n-1}$, the support function measures the signed distance between the origin and the hyperplane with outer normal $u$ that touches $K$, and the width of $K$ in direction $u$ is given by $h_K(u)+h_K(-u)$. The average width $W(K)$, also known as the \emph{mean width} of $K$, is
\begin{align}\label{mean_width}
W(K) = \frac{1}{n v_n} \int_{\S^{n-1}} \big(h_K(u)+h_K(-u)\big)\, du
= \frac{2}{n v_n} \int_{\S^{n-1}} h_K(u)\, du.
\end{align}
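For example, for the Euclidean unit ball one has $h_{\mathbb B^n}\equiv 1$ on $\S^{n-1}$ and hence
\begin{align*}
W(\mathbb B^n)=\frac{2}{nv_n}\int_{\S^{n-1}}1\,du = 2,
\end{align*}
since $\S^{n-1}$ has surface area $nv_n$; indeed, every width of $\mathbb B^n$ equals $2$.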
On $\mathcal{H}$, there is a uniquely determined rigid motion invariant Borel measure $\mu$ such that
\begin{align*}
\mu\big(\{H^-\in \mathcal{H} : 0 < V_n(K\cap H^-)/V_n(K) < 1 \}\big) = W(K).
\end{align*}
For a Borel subset $A$ of $\mathcal{H}$, it is defined by
\begin{align*}
\mu(A) = \frac{1}{n v_n} \int_{\S^{n-1}} \int_{\R} \mathbf{1}\big[H^-(u,t)\in A\big] \, dt \, du,
\end{align*}
where $\mathbf{1}[P]$ is the indicator function of the proposition $P$, that is, $\mathbf{1}[P]=1$ if $P$ holds and $\mathbf{1}[P]=0$ otherwise. For $K\in \cK({\R^{n}})$, we consider the set of all half-spaces that contain $K$ and whose boundary hyperplanes meet $K+\mathbb B^n$, i.e.,
\begin{align*}
\mathcal{H}_K=\big\{H^-(u,t) : u\in\S^{n-1}, h_K(u)\leq t \leq h_K(u)+1\big\}.
\end{align*}
This yields $\mu(\mathcal{H}_K) = 1$ and therefore the restriction $\mu_K$ of $\mu$ to $\mathcal{H}_K$ is a probability measure. Write $\E_{\mu_K}$ for the expectation with respect to $\mu_K$.
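As a quick verification of these normalizations (not needed in what follows), fix $u\in\S^{n-1}$. The hyperplane bounding $H^-(u,t)$ meets the interior of $K$ precisely for $t\in\big(-h_K(-u),h_K(u)\big)$, an interval of length $h_K(u)+h_K(-u)$, while $K\subseteq H^-(u,t)$ with the bounding hyperplane meeting $K+\mathbb B^n$ precisely for $t\in\big[h_K(u),h_K(u)+1\big]$. Hence
\begin{align*}
\mu\big(\{H^-\in\mathcal{H}: 0<V_n(K\cap H^-)/V_n(K)<1\}\big)
&=\frac{1}{nv_n}\int_{\S^{n-1}}\big(h_K(u)+h_K(-u)\big)\,du = W(K),\\
\mu(\mathcal{H}_K)
&=\frac{1}{nv_n}\int_{\S^{n-1}}\int_{h_K(u)}^{h_K(u)+1}dt\,du=1,
\end{align*}
by \eqnref{mean_width} and since $\S^{n-1}$ has surface area $nv_n$.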
B\"or\"oczky, Fodor, and Hug \cite{BFH:2010} obtained the following result, which can be seen as dual to Theorem \ref{thm:wr} and Corollary \ref{cor:wr}.
\begin{theorem}[\!\! \cite{BFH:2010}]\label{thm:wdr}
Let $K\in\cK({\R^{n}})$. If the random polyhedral set $K^m$ is the intersection of $m$ independent random half-spaces chosen from $\mathcal{H}_K$ according to $\mu_K$, then
\begin{align*}
\lim_{m\to \infty} \E_{\mu_K}\Big(W\big(K^m\cap (K+\mathbb B^n)\big)-W(K)\Big)\, m^{\frac{2}{n+1}}
&= 2 \beta_n\, (nv_n)^{-\frac{n-1}{n+1}} \int_{\partial K} H_{n-1}(K,x)^{\frac{n}{n+1}}\, dx,\\
\intertext{and}
\lim_{m\to\infty} \E_{\mu_K} f_{n-1}(K^m)\, m^{-\frac{n-1}{n+1}}
&= \beta_n\, (nv_n)^{-\frac{n-1}{n+1}} \int_{\partial K} H_{n-1}(K,x)^{\frac{n}{n+1}}\, dx,
\end{align*}
where $f_{n-1}(K^m)$ is the number of facets of $K^m$ and $\beta_n$ is the constant from \eqnref{eqn:const}.
\end{theorem}
\goodbreak
\subsection{Weighted best approximation}
Problems of asymptotic best approximation have been extensively studied since the 1940's (cf.\ \cite{Gruber:1993a}). We restrict our attention to two problems and just remark that further notions of distance and approximation by inscribed and circumscribed polytopes with a given number of faces have also been studied (cf.\ \cite{Gruber:1993a}).
For $K,P\subset {\R^{n}}$, write $K\triangle P$ for the symmetric difference of $K$ and $P$. Set
\begin{align*}
\operatorname{dist}_\Psi\!\big(K,\cP_m\big)
&=\inf\big\{\Psi(K\triangle P): P \text{ polytope with at most $m$ vertices}\big\}\\
\intertext{and}
\operatorname{dist}_\Psi\!\big(K,\cP_{(m)}\big)
&=\inf\big\{\Psi(K\triangle P): P \text{ polytope with at most $m$ facets}\big\}.
\end{align*}
Extending results by L. Fejes T\'oth \cite{FejesToth:1948} and Gruber \cite{Gruber:1993b}, the following asymptotic result was established in \cite{Ludwig:1998} for convex bodies with positive curvature; in \cite{Boroczky:2000} the curvature condition was dropped.
\begin{theorem}[\!\!\cite{Ludwig:1998, Boroczky:2000}]\label{thm:wb}
For $K\in\cK({\R^{n}})$ with $C^2$ boundary and $\psi:K\to(0,\infty)$ continuous,
\begin{align}\label{eqn:bestapprox}
\lim_{m\to \infty} \operatorname{dist}_\Psi\!\big(K, \cP_m\big) \,{m^\frac{2}{n-1}}
&=\frac12 \operatorname{ldel}_{n-1} \bigg(\int_{\partial K} H_{n-1}(K,x) ^\frac{1}{n+1} \psi(x) ^{\frac{n-1}{n+1}} \, dx\bigg)^{\frac{n+1}{n-1}},\\
\intertext{and}\notag
\lim_{m\to \infty} \operatorname{dist}_\Psi\!\big(K, \cP_{(m)}\big) \,{m^\frac{2}{n-1}}
&=\frac12 \operatorname{ldiv}_{n-1} \bigg(\int_{\partial K} H_{n-1}(K,x) ^\frac{1}{n+1} \psi(x) ^{\frac{n-1}{n+1}} \, dx\bigg)^{\frac{n+1}{n-1}},
\end{align}
where $\operatorname{ldel}_{n-1}$ and $\operatorname{ldiv}_{n-1}$ are positive constants.
\end{theorem}
\noindent
The exact values of $\operatorname{ldel}_{n-1}$ and $\operatorname{ldiv}_{n-1}$ are only known for $n=2$ and $n=3$ (see \cite{BL:1999}). Weighted best approximation was first considered by Glasauer (see \cite{GG:1997}).
\goodbreak
\section{Spherical space}\label{sec:sphere}
Let $\S^n$ denote the unit sphere in $\R^{n+1}$. A set $K\subset \S^n$ is a proper convex body, if it is closed, contained in an open hemisphere and its positive hull $\mathrm{pos}\, K=\{\lambda x: x\in K, \lambda \geq 0\}$ is a convex set in $\R^{n+1}$.
Let $\cK(\S^n)$ denote the set of proper convex bodies in $\S^n$ with non-empty interior.
A hypersphere in $\S^n$ is a set $H=\{x\in\S^n: x\cdot e=0\}$ with $e\in\S^n$, where $\lq\lq\cdot"$ is the inner product in $\R^{n+1}$. Let $H^{\scriptscriptstyle \pm}$ be the closed hemispheres bounded by $H$.
For $\delta >0$, the spherical floating body $K_\delta$ was introduced in \cite{BW:2015} by
\begin{equation}\label{eqn:sfb}
K_{\delta}=\bigcap \big\{ H^{\scriptscriptstyle -} : \operatorname{vol}_n(K\cap H^{\scriptscriptstyle +})\le \delta\big\},
\end{equation}
where $\operatorname{vol}_n$ is spherical volume, that is, the $n$-dimensional Hausdorff measure on $\S^n$.
Without loss of generality, we may restrict our attention to convex bodies contained in the hemisphere $\S^n_{\scriptscriptstyle +}=\{x\in\S^n: x\cdot e_{n+1}>0\}$, where $e_{n+1}$ is a vector of an orthonormal basis of $\R^{n+1}$. The gnomonic (or central) projection $g:\S^n_{\scriptscriptstyle +}\to \R^n$ is defined by
\begin{align*}
g(x) = \frac x{x\cdot e_{n+1}} -e_{n+1},
\end{align*}
where we identify $\R^n$ with $\{x\in\R^{n+1}: x\cdot e_{n+1}=0\}$ (cf.~\cite[Sec.~4]{BS:2016}). We write $\bar x=g(x)$ and $\bar K =g(K)$.
Note that $g^{-1}: \R^n \to \S^n$ maps the point $\bar x$ to $(1+\|\bar x\|^2)^{-1/2} (\bar x+e_{n+1})$ and has therefore the Jacobian
$(1+\|\bar x\|^2)^{-(n+1)/2}$ (cf.\ \cite[Proposition 4.2]{BW:2015}). Thus
the pushforward of $\operatorname{vol}_n$ under $g$ is the measure $\Psi_n$ with density $\psi_n(\bar x)= (1+\|\bar x\|^2)^{-(n+1)/2}$. For the spherical Gauss-Kronecker curvature, we have
\begin{equation*}\label{sgk}
H_{n-1}^{\scriptscriptstyle\S^{\scriptscriptstyle n}}(K, x)
= H_{n-1}(\bar K, \bar x) \bigg(\frac{1+\|\bar x\|^2}{1+(\bar x\cdot n_{\bar K}(\bar x))^2}\bigg)^{\frac{n+1}2}
\end{equation*}
(cf.\ \cite[Lemma 4.4]{BW:2015}), where $n_{\bar K}(\bar x)$ is the outer unit normal vector to $\bar K$ at $\bar x$,
and consequently
\begin{equation}\label{eqn:area}
\int_{\partial K} H_{n-1}^{\scriptscriptstyle\S^{\scriptscriptstyle n}}(K,x) ^\frac{1}{n+1} \ dx
= \int_{\partial \bar K} H_{n-1}(\bar K,\bar x) ^\frac{1}{n+1} (1+\|\bar x\|^2)^{-\frac{n-1}2} \, d\bar x
\end{equation}
(cf.\ \cite[p.\ 897]{BW:2015}).
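As a quick consistency check of this density (for $n=1$), the gnomonic image of the point of $\S^1_{\scriptscriptstyle +}$ at spherical distance $\rho<\pi/2$ from $e_2$ is $\bar x=\tan\rho$, and
\begin{align*}
\int_0^{\tan\rho}\frac{dt}{1+t^2}=\rho,
\qquad
\int_{\R}\frac{dt}{1+t^2}=\pi=\operatorname{vol}_1\big(\S^1_{\scriptscriptstyle +}\big),
\end{align*}
so integrating $\psi_1$ over the gnomonic image indeed recovers spherical arc length.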
These transformation rules allow us to translate the results from Section~\ref{sec:weighted} to spherical space.
The following result is a corollary to Theorem \ref{thm:wfb} and was first established in \cite{BW:2015}.
\begin{theorem}[\!\! \cite{BW:2015}]\label{thms:wfb}
For $K\in\cK(\S^n)$,
\begin{align*}
\lim_{\delta \to 0} \frac{\operatorname{vol}_n(K) - \operatorname{vol}_n(K_\delta)}{\delta^\frac{2}{n+1}}
= \alpha_n \int_{\partial K} H_{n-1}^{\scriptscriptstyle\S^{\scriptscriptstyle n}}(K,x) ^\frac{1}{n+1} dx,
\end{align*}
where $\alpha_n$ is the constant from \eqnref{eqn:const2}.
\end{theorem}
\begin{proof}
Since $g(K_\delta)= g(K)^{\psi_n}_\delta$, we have
\begin{align*}
\operatorname{vol}_n(K)- \operatorname{vol}_n(K_\delta)=\int_{g(K)\backslash g(K)^{\psi_n}_\delta} \psi_n.
\end{align*}
Hence Theorem \ref{thm:wfb} with $\phi=\psi=\psi_n$ shows that
\begin{align*}
\lim_{\delta \to 0} \frac{\operatorname{vol}_n(K) - \operatorname{vol}_n(K_\delta)} {\delta^\frac{2}{n+1}}
= \alpha_n \int_{\partial \bar K} H_{n-1}(\bar K,\bar x) ^\frac{1}{n+1} (1+\|\bar x\|^2)^{-\frac{n-1}2} \ d\bar x.
\end{align*}
By \eqnref{eqn:area}, this completes the proof.
\end{proof}
Next, we consider random polytopes that are the spherical convex hull of points chosen uniformly according to $\operatorname{vol}_n$ in $K\in\cK(\S^n)$. In the following, the expectation $\E_K$ is with respect to the probability density $\operatorname{vol}_n/\operatorname{vol}_n(K)$.
\begin{theorem}\label{thms:wr}
Let $K\in\cK(\S^n)$.
If $K_m$ is the spherical convex hull of $m$ random points chosen uniformly in $K$, then
\begin{align*}
\lim_{m\to \infty} \E_K\big(\!\operatorname{vol}_n(K) - \operatorname{vol}_n(K_m)\big) \,{m^\frac{2}{n+1}}
&= \beta_n \, \operatorname{vol}_n(K)^{\frac{2}{n+1}} \int_{\partial K} H_{n-1}^{\scriptscriptstyle\S^{\scriptscriptstyle n}}(K,x)^\frac{1}{n+1} \, dx,
\end{align*}
where $\beta_n$ is the constant from \eqnref{eqn:const}.
\end{theorem}
\begin{proof}
Set $\Phi_n=\Psi_n/\Psi_n(g(K))$. Since $g(K_m)=g(K)^{\Phi_n}_m$, we have
\begin{align*}
\E_K\big(\!\operatorname{vol}_n(K) - \operatorname{vol}_n(K_m)\big )
& = \E_{\Phi_n} \big( \Psi_n(g(K))- \Psi_n(g(K)^{\Phi_n}_m)\big).
\end{align*}
Thus the statement follows from Theorem \ref{thm:wr} with $\psi=\psi_n$ and \eqnref{eqn:area}.
\end{proof}
\noindent Theorem \ref{thms:wr} complements a recent result by B\'ar\'any, Hug, Reitzner and Schneider \cite{BHRS:2016} for random polytopes in hemispheres.
\goodbreak
As a consequence of Corollary \ref{cor:wr}, we obtain the following result.
\begin{corollary}\label{cors:wr}
Let $K\in\cK(\S^n)$.
If $K_m$ is the spherical convex hull of $m$ random points chosen uniformly in $K$, then
\begin{align*}
\lim_{m\to \infty} \E_K f_0(K_m) \,m^{-\frac{n-1}{n+1}}
= \beta_n\, \operatorname{vol}_n(K)^{-\frac{n-1}{n+1}} \int_{\partial K} H^{\scriptscriptstyle \S^n}_{n-1}(K,x) ^\frac{1}{n+1} \, dx,
\end{align*}
where $\beta_n$ is the constant from \eqnref{eqn:const}.
\end{corollary}
Finally, we consider best approximation.
Let
\begin{align*}
\operatorname{dist}_n\!\big(K,\cP^{\scriptscriptstyle \S^n}_m\big)&=\inf\big\{\!\operatorname{vol}_n(K\triangle P): P \text{ spherical polytope with at most $m$ vertices}\big\},\\
\intertext{and}
\operatorname{dist}_n\!\big(K,\cP^{\scriptscriptstyle \S^n}_{(m)}\big)&=\inf\big\{\!\operatorname{vol}_n(K\triangle P): P \text{ spherical polytope with at most $m$ facets}\big\}.
\end{align*}
We obtain the following result.
\begin{theorem}\label{thms:wb}
For $K\in\cK(\S^n)$ with $C^2$ boundary,
\begin{align*}
\lim_{m\to \infty} \operatorname{dist}_n\!\big(K, \cP^{\scriptscriptstyle \S^n}_m\big) \,{m^\frac{2}{n-1}}
&= \frac12 \operatorname{ldel}_{n-1} \bigg(\int_{\partial K} H_{n-1}^{\scriptscriptstyle\S^{\scriptscriptstyle n}}(K,x)^\frac{1}{n+1} \ dx\bigg)^{\frac{n+1}{n-1}},\\
\intertext{and}
\lim_{m\to \infty} \operatorname{dist}_n\!\big(K, \cP^{\scriptscriptstyle \S^n}_{(m)}\big) \,{m^\frac{2}{n-1}}
&= \frac12 \operatorname{ldiv}_{n-1} \bigg(\int_{\partial K} H_{n-1}^{\scriptscriptstyle\S^{\scriptscriptstyle n}}(K,x)^\frac{1}{n+1} \ dx\bigg)^{\frac{n+1}{n-1}},
\end{align*}
where $\operatorname{ldel}_{n-1}$ and $\operatorname{ldiv}_{n-1}$ are the constants from Theorem \ref{thm:wb}.
\end{theorem}
\begin{proof}
Note that
$\operatorname{dist}_n(K,\cP^{\scriptscriptstyle \S^n}_m)= \operatorname{dist}_{\Psi_n}(g(K), \cP_m)$ and $\operatorname{dist}_n(K,\cP^{\scriptscriptstyle \S^n}_{(m)})= \operatorname{dist}_{\Psi_n}(g(K), \cP_{(m)})$. Thus the statement follows directly from Theorem~\ref{thm:wb} with $\psi=\psi_n$ and \eqnref{eqn:area}.
\end{proof}
\subsection{Duality principle}
Let $K$ be a proper spherical convex body. Instead of random polytopes $K_m$ contained in $K$ we now consider random polytopes $K^m$ containing $K$. The space of closed hemispheres $\mathcal{H}$ of $\S^n$ has a uniquely determined rotation invariant probability measure $\mu$.
For each point $x\in\S^n$ there is a uniquely determined hemisphere $H^-(x)=\{y\in\S^n:x\cdot y\leq 0\}$ and for a Borel subset $A$ of $\mathcal{H}$ we have
\begin{align*}
\mu(A) = \frac{1}{\operatorname{vol}_n(\S^n)} \int_{\S^n} \mathbf{1}\big[H^-(x) \in A\big] \, dx.
\end{align*}
A random polytope $K^m$ is obtained as intersection of $m$ closed hemispheres chosen from $\mathcal{H}_K:=\{H^-\in \mathcal{H}: K\subseteq H^-\}$ independently and according to $\mu_K:=\mu/\mu(\mathcal{H}_K)$.
For $K\in\cK(\S^n)$, define the \emph{polar body} $K^\circ$ by
\begin{align*}
K^\circ = \{y\in\S^n : x\cdot y \leq 0 \text{ for all $x\in K$}\} =\bigcap_{x\in K} H^-(x)
\end{align*}
(cf.~\cite[Sec.~6.5]{SW:2008}).
Since $K^{\circ\circ} = K$, a hemisphere $H^-(y)$ contains $K$ if and only if $y\in K^\circ$. Thus we have $\mathcal{H}_K = \{H^-(y):y\in K^\circ\}$ and $\mu(\mathcal{H}_K) = \operatorname{vol}_n(K^\circ)$.
Let $K^m$ be the intersection of $m$ randomly chosen closed hemispheres in $\mathcal{H}_K$, that is, there are $x_i \in K^\circ$, $i=1,\ldots,m$, such that $K^m = \bigcap_{i=1}^m H^-(x_i)$. We have
\begin{align*}
K^m = \big(\mathrm{conv}\{x_1,\ldots,x_m\}\big)^\circ = \big(K^\circ_m\big)^\circ,
\end{align*}
where $K^\circ_m := (K^\circ)_m$. This means that the polar of a random polytope that contains $K$ is a polytope inside $K^\circ$. In this way we can transfer results about $K^{\circ}_m$ to $(K^m)^{\circ}$.
\begin{theorem}\label{thm:dual}
If $\mathcal{F}$ is a non-negative measurable functional on spherical convex polyhedral sets, then
\begin{align*}
\E_{\mu_K} \mathcal{F}(K^m) = \E_{K^\circ} \mathcal{F}\big(\left(K^\circ_m\right)^\circ\big).
\end{align*}
\end{theorem}
\noindent
In the Euclidean setting, a similar result was obtained in \cite[Prop.~5.1]{BFH:2010}.
As an application of this theorem we consider the \emph{spherical mean width} $U_1(K)$ of a spherical convex body $K$, which is defined by
\begin{align*}
U_1(K) = \frac{1}{2} \int_{G(n+1,n)} \chi(K\cap H)\, d\nu(H),
\end{align*}
where $\chi$ is the Euler characteristic, $G(n+1,n)$ is the Grassmannian of all $n$-dimensional linear subspaces in $\R^{n+1}$ and $\nu$ denotes the invariant probability measure on $G(n+1,n)$. The probability that a random hypersphere hits $K$ is equal to $2U_1(K)$. The name {\em spherical mean width} corresponds to the Euclidean notion of mean width $W(\bar{K})$ for $\bar{K}\in \cK(\R^n)$, which can be defined via the (suitably normalized) rigid motion invariant measure of affine hyperplanes hitting $\bar{K}$. Equivalently, $W(\bar{K})$ is given by (\ref{mean_width}), which, however, does not have a natural analog in the spherical setting.
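As an illustration for $n=1$, let $K\subset\S^1$ be an arc of angular radius $\rho<\pi/2$, that is, of length $2\rho$. A subspace $H\in G(2,1)$ meets $\S^1$ in two antipodal points, and it meets $K$ (in exactly one point) with $\nu$-measure $2\rho/\pi$. Hence $U_1(K)=\rho/\pi$, and $2U_1(K)=2\rho/\pi$ is indeed the probability that a random hypersphere (here, a pair of antipodal points) hits $K$.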
\begin{corollary}\label{cors:meanwidth}
Let $K\in \cK(\S^n)$. If $K^m$ is the intersection of $m$ random hemispheres containing $K$ and chosen uniformly according $\mu_K$, then
\begin{align*}
\lim_{m\to \infty} \E_{\mu_K}\big(U_1(K^m)-U_1(K)\big)\, m^{\frac{2}{n+1}}
&= \frac{\beta_n}{\operatorname{vol}_n(\S^n)}\, \operatorname{vol}_n(K^\circ)^{\frac{2}{n+1}} \int_{\partial K} H_{n-1}^{\S^n}(K,x)^{\frac{n}{n+1}}\, dx,\\
\intertext{and}
\lim_{m\to \infty} \E_{\mu_K} f_{n-1}(K^m)\, m^{-\frac{n-1}{n+1}}
&= \beta_n\, \operatorname{vol}_n(K^\circ)^{-\frac{n-1}{n+1}} \int_{\partial K} H_{n-1}^{\S^n}(K,x)^{\frac{n}{n+1}} \, dx,
\end{align*}
where $\beta_n$ is the constant from \eqnref{eqn:const}.
\end{corollary}
\begin{proof}
By \cite[Eqn.~(20)]{GHS:2002}, we have
\begin{align*}
U_1(K) = \frac{1}{2} - \frac{\operatorname{vol}_n(K^\circ)}{\operatorname{vol}_n(\S^n)}.
\end{align*}
Also, the facets of $K^m$ correspond to the vertices of $(K^m)^\circ = K^\circ_m$. Thus $f_{n-1}(K^m) = f_0(K_m^\circ)$.
Hence, by Theorem \ref{thm:dual}, we find
\begin{align*}
\E_{\mu_K}\big(U_1(K^m)-U_1(K)\big)
= \frac{\E_{K^\circ}\big(\!\operatorname{vol}_n(K^\circ) - \operatorname{vol}_n(K^\circ_m)\big)}{\operatorname{vol}_n(\S^n)},
\end{align*}
and
\begin{align*}
\E_{\mu_K} f_{n-1}(K^m) = \E_{K^\circ} f_0(K_m^\circ).
\end{align*}
Applying Theorem \ref{thms:wr} and Corollary \ref{cors:wr} on $K^\circ$ we obtain
\begin{align*}
\lim_{m\to \infty} \E_{\mu_K}\big(U_1(K^m)-U_1(K)\big)\, m^{\frac{2}{n+1}}
&= \frac{\beta_n}{\operatorname{vol}_n(\S^n)}\, \operatorname{vol}_n(K^\circ)^{\frac{2}{n+1}} \int_{\partial K^\circ} H_{n-1}^{\S^n}(K^\circ,x)^{\frac{1}{n+1}}\, dx,
\end{align*}
and
\begin{align*}
\lim_{m\to \infty} \E_{\mu_K} f_{n-1}(K^m)\, m^{-\frac{n-1}{n+1}}
= \beta_n\, \operatorname{vol}_n(K^\circ)^{-\frac{n-1}{n+1}} \int_{\partial K^\circ} H_{n-1}^{\S^n}(K^\circ,x)^{\frac{1}{n+1}} \, dx.
\end{align*}
By \cite[Thm.~7.4]{BW:2015}, we have
\begin{align*}
\int_{\partial K^\circ} H_{n-1}^{\S^n}(K^\circ,x)^{\frac{1}{n+1}}\, dx =
\int_{\partial K} H_{n-1}^{\S^n}(K,x)^{\frac{n}{n+1}}\, dx,
\end{align*}
which concludes the proof.
\end{proof}
\goodbreak
\section{Hyperbolic space}\label{sec:hyperbolic}
Let $\R^{n,1}$ denote the Lorentz-Minkowski space of dimension $n+1$, that is, $\R^{n+1}$ with the indefinite inner product ``$\circ$'' defined by
\begin{align*}
x\circ x
= x_1^2+\dots+ x_n^2 - x_{n+1}^2.
\end{align*}
Then the hyperboloid model of hyperbolic space is given by
\begin{align*}
\H^n
=\big\{x\in\R^{n,1} : x\circ x = -1 \text{ and } x_{n+1} > 0 \big\}.
\end{align*}
The hyperbolic distance $d_H$ between two points $x,y\in\H^n$ is determined by $\cosh\, d_H(x,y) = -x\circ y$.
A set $K\subset \H^n$ is a convex body, if it is compact and the positive hull is a convex set in $\R^{n+1}$. Let $\cK(\H^n)$ denote the set of convex bodies in $\H^n$ with non-empty interior.
For a hyperplane $H$ let $H^{{\scriptscriptstyle \pm}}$ be the closed half-spaces bounded by $H$. For $\delta>0$, the hyperbolic floating body $K_\delta$ was introduced in \cite{BW:2016} by
\begin{align*}
K_{\delta} = \bigcap \big\{H^{\scriptscriptstyle -}:\operatorname{vol}_n(K\cap H^{\scriptscriptstyle +})\le \delta\big\},
\end{align*}
where $\operatorname{vol}_n$ is the hyperbolic volume on $\H^n$.
We fix a Lorentz-orthonormal basis $e_1,e_2,\ldots, e_{n+1}$ in $\R^{n,1}$ such that $e_{n+1}$ is in $\H^n$. The gnomonic (or central) projection $g\colon \H^n\to \R^n$ is defined by
\begin{align*}
g(x) = -\frac{x}{x\circ e_{n+1}} - e_{n+1} = \frac{x}{x_{n+1}} - e_{n+1},
\end{align*}
where we identify $\R^n$ with $\{x\in\R^{n,1}:x\circ e_{n+1} = 0\}$. We write $\bar{x} = g(x)$ and $\bar{K}=g(K)$.
Since
\begin{align*}
\|\bar{x}\|^2 = 1-(x\circ e_{n+1})^{-2} = \tanh^2\, d_H(x,e_{n+1}),
\end{align*}
we have $\|\bar{x}\|\in [0,1)$.
Therefore the gnomonic projection maps $\H^n$ into the open unit ball $\operatorname{int} \mathbb B^n\subset \R^n$.
Note that $g^{-1}\colon \operatorname{int} \mathbb B^n\to \H^n$ maps the point $\bar{x}$ to $(1-\|\bar{x}\|^2)^{-1/2}(\bar{x}+e_{n+1})$. The gnomonic projection is an isometry between the hyperboloid model $\H^n$ and the projective model (or Beltrami--Cayley--Klein model) $\operatorname{int} \mathbb B^n$. Thus the pushforward of $\operatorname{vol}_n$ under $g$ is the measure $\Psi_n$ with density $\psi_n(\bar{x}) = (1-\|\bar{x}\|^2)^{-(n+1)/2}$. For the hyperbolic Gauss--Kronecker curvature, we have
\begin{align*}
H_{n-1}^{\scriptscriptstyle\H^{\scriptscriptstyle n}}(K,x) = H_{n-1}(\bar{K},\bar{x})\left(\frac{1-\|\bar{x}\|^2}{1-(\bar{x}\cdot n_{\bar{K}}(\bar{x}))^2}\right)^{\frac{n+1}{2}}
\end{align*}
(cf.\ \cite[Cor.\ 3.16]{BW:2016}), and furthermore
\begin{equation}\label{eqn:areah}
\int_{\partial K} H_{n-1}^{\scriptscriptstyle\H^{\scriptscriptstyle n}}(K,x)^{\frac{1}{n+1}}\, dx = \int_{\partial \bar{K}} H_{n-1}(\bar{K},\bar{x})^{\frac{1}{n+1}} (1-\|\bar{x}\|^2)^{-\frac{n-1}{2}}\, d\bar{x}
\end{equation}
(cf.\ \cite[(3.12)]{BW:2016}). So again, these transformation rules allow us to translate the results from Section~\ref{sec:weighted} to hyperbolic space. The proofs are identical to those in spherical space (just replace (\ref{eqn:area}) by (\ref{eqn:areah})) and are therefore omitted.
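As in the spherical case, here is a quick check for $n=1$: the gnomonic image of the point of $\H^1$ at hyperbolic distance $r$ from $e_2$ is $\bar x=\tanh r$, and
\begin{align*}
\int_0^{\tanh r}\frac{dt}{1-t^2}=\operatorname{artanh}(\tanh r)=r,
\end{align*}
so integrating $\psi_1(\bar x)=(1-\bar x^2)^{-1}$ over the projective model recovers hyperbolic length.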
As a corollary to Theorem \ref{thm:wfb} we obtain the existence of floating area for hyperbolic space, which was originally established in \cite{BW:2016}.
\begin{theorem}[\!\! \cite{BW:2016}]\label{thmh:wfb}
For $K\in\cK(\H^n)$,
\begin{align*}
\lim_{\delta \to 0} \frac{\operatorname{vol}_n(K) - \operatorname{vol}_n(K_\delta)} {\delta^\frac{2}{n+1}}
= \alpha_n \int_{\partial K} H_{n-1}^{\scriptscriptstyle\H^{\scriptscriptstyle n}}(K,x) ^\frac{1}{n+1} dx,
\end{align*}
where $\alpha_n$ is defined in Theorem \ref{thm:wfb}.
\end{theorem}
Next, we consider random polytopes that are the hyperbolic convex hull of points chosen uniformly according to $\operatorname{vol}_n$ in $K\in\cK(\H^n)$. In the following, the expectation $\E_K$ is with respect to the density $\operatorname{vol}_n/\operatorname{vol}_n(K)$.
\begin{theorem}\label{thmh:wr}
Let $K\in\cK(\H^n)$.
If $K_m$ is the hyperbolic convex hull of $m$ random points chosen uniformly in $K$, then
\begin{align*}
\lim_{m\to \infty} \E_K\big(\!\operatorname{vol}_n(K) - \operatorname{vol}_n(K_m)\big) \,{m^\frac{2}{n+1}}
&= \beta_n \, \operatorname{vol}_n(K)^{\frac{2}{n+1}} \int_{\partial K} H_{n-1}^{\scriptscriptstyle\H^{\scriptscriptstyle n}}(K,x)^\frac{1}{n+1} \ dx,
\end{align*}
where $\beta_n$ is the constant from \eqnref{eqn:const}.
\end{theorem}
As a consequence, we obtain the following result.
\begin{corollary}\label{corh:wr}
Let $K\in\cK(\H^n)$.
If $K_m$ is the hyperbolic convex hull of $m$ random points chosen uniformly in $K$, then
\begin{align*}
\lim_{m\to \infty} \E_K f_0(K_m) \,m^{-\frac{n-1}{n+1}}
&= \beta_n \, \operatorname{vol}_n(K)^{-\frac{n-1}{n+1}} \int_{\partial K} H^{\scriptscriptstyle \H^n}_{n-1}(K,x) ^\frac{1}{n+1} \, dx,
\end{align*}
where $\beta_n$ is the constant from \eqnref{eqn:const}.
\end{corollary}
Finally, we consider best approximation.
Let
\begin{align*}
\operatorname{dist}_n\big(K,\cP^{\scriptscriptstyle \H^n}_m\big)&=\inf\big\{\!\operatorname{vol}_n(K\triangle P): P \text{ hyperbolic polytope with at most $m$ vertices}\big\},\\
\intertext{and}
\operatorname{dist}_n\big(K,\cP^{\scriptscriptstyle \H^n}_{(m)}\big)&=\inf\big\{\!\operatorname{vol}_n(K\triangle P): P \text{ hyperbolic polytope with at most $m$ facets}\big\}.
\end{align*}
We obtain the following result.
\begin{theorem}\label{thmh:wb}
For $K\in\cK(\H^n)$ with $C^2$ boundary,
\begin{align*}
\lim_{m\to \infty} \operatorname{dist}_n\big(K, \cP^{\scriptscriptstyle \H^n}_m\big) \,{m^\frac{2}{n-1}}
&= \frac12 \operatorname{ldel}_{n-1} \bigg(\int_{\partial K} H_{n-1}^{\scriptscriptstyle\H^{\scriptscriptstyle n}}(K,x)^\frac{1}{n+1} \ dx\bigg)^{\frac{n+1}{n-1}},\\
\intertext{and}
\lim_{m\to \infty} \operatorname{dist}_n\big(K, \cP^{\scriptscriptstyle \H^n}_{(m)}\big) \,{m^\frac{2}{n-1}}
&= \frac12 \operatorname{ldiv}_{n-1} \bigg(\int_{\partial K} H_{n-1}^{\scriptscriptstyle\H^{\scriptscriptstyle n}}(K,x)^\frac{1}{n+1} \ dx\bigg)^{\frac{n+1}{n-1}},
\end{align*}
where $\operatorname{ldel}_{n-1}$ and $\operatorname{ldiv}_{n-1}$ are the constants from Theorem \ref{thm:wb}.
\end{theorem}
\goodbreak
\section{Hilbert geometries}\label{sec:hilbert}
Hilbert's Fourth Problem asks for a characterization of metric geometries
whose geodesics are straight lines. Hilbert constructed a special class of
examples, now called Hilbert geometries (see \cite[Ch.~15]{HandbookHilbert:2014} for more information).
A Hilbert geometry $(C,d_C)$ is defined on the interior of a convex body $C\in \cK({\R^{n}})$
in the following way:
For distinct points $x, y \in \operatorname{int} C$, the line passing through $x$ and $y$ meets $\partial C$ at two points $p$ and $q$, say, such that one has $p, x, y, q$ in that order on the line. Define the Hilbert distance of $x$ and $y$ by
\begin{align*}
d_C(x, y) = \frac12 \log [p,x, y, q],
\end{align*}
where $[p,x,y, q]$ is the cross ratio of $p,x, y, q$, that is,
\begin{align*}
[p,x, y, q] = \frac{\| y - p\|}{\|x - p\|} \, \frac{\| x - q\|}{\|y - q\|}.
\end{align*}
Note that the invariance of the cross ratio by projective maps implies the
projective invariance of $d_C$.
Unbounded closed convex sets with nonempty interiors and not containing a
straight line are projectively equivalent to convex bodies. Hence the definition of Hilbert geometry naturally extends to the interiors of such convex sets. If $C$ is an ellipsoid, then the Hilbert geometry on $\operatorname{int} C$ is isometric to hyperbolic space.
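As a simple illustration, let $n=1$ and $C=[-1,1]$. For $-1<x<y<1$ one has $p=-1$ and $q=1$, so that
\begin{align*}
d_C(x,y)=\frac12\log\frac{(1+y)(1-x)}{(1+x)(1-y)}=\operatorname{artanh}(y)-\operatorname{artanh}(x),
\end{align*}
which is the hyperbolic distance in the one-dimensional projective model; this is the simplest instance of the preceding remark on ellipsoids.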
Straight lines are geodesics in a Hilbert geometry $(C,d_C)$ and if $C$ is strictly convex,
then the affine segment between two distinct points is the unique geodesic joining them (see e.g. \cite[p.~60]{HandbookHilbert:2014}).
Hence, if $C$ is strictly convex, then hyperplanes are the totally geodesic submanifolds of co-dimension $1$.
A convex body $K\in\mathcal{K}({\R^{n}})$ that is contained in $\operatorname{int} C$
is therefore also a convex body of the Hilbert geometry $(C,d_C)$ and polytopes are an intrinsic notion of $(C, d_C)$.
Thus we may consider polytopal approximation in
a Hilbert geometry $(C,d_C)$ for a strictly convex body $C$. In the following $\mathcal{K}(C)$ denotes the space of convex bodies $K\subset \operatorname{int} C$.
The Hilbert metric $d_C$ is induced by a weak Finsler structure in the following way:
For $x\in \operatorname{int} C$ define a (weak) Minkowski norm $\|.\|_x$ by
\begin{align*}
\|v\|_{x}= \frac12 \Big( \frac1{t^{\scriptscriptstyle +}} +\frac1{t^{\scriptscriptstyle -}}\Big),
\end{align*}
for $v\in\R^n\setminus\{0\}$, where $t^{\scriptscriptstyle \pm}>0$ are determined by $x\pm t^{\scriptscriptstyle \pm} v\in \partial C$, and $\|0\|_x=0$.
If we identify $\R^n$ with the tangent space $T_x\,\R^n$,
then $\|\cdot\|_x$ defines a Minkowski norm on $T_x\,\R^n$ for every $x\in\operatorname{int} C$.
The map $F_C:x \mapsto \|\cdot\|_x$ defines a (weak) Finsler structure on $\operatorname{int} C$. The length of a $C^1$ curve $\gamma : [a,b]\to \operatorname{int} C$ is defined by
\begin{align*}
\ell(\gamma) = \int_a^b \|\dot{\gamma}(t)\|_{\gamma(t)}\, dt,
\end{align*}
and the Hilbert metric between two distinct points $x,y\in\operatorname{int} C$ is just the minimal length of a $C^1$ curve joining them.
In particular, if $C$ is $C^2_+$, that is, the boundary of $C$ is a $C^2$ manifold
with positive curvature, then $(\operatorname{int} C, F_C)$
defines a Finsler manifold in the classical sense.
The unit ball of the Minkowski norm $\|\cdot\|_x$ is
$I^C_{x}=\{v\in {\R^{n}}: \| v \|_{x}\le 1\}$.
Recall that the polar body $K^*$ of a convex body $K$ is defined by
$K^*=\{y\in\R^n: x\cdot y \leq 1 \text{ for all $x\in K$}\}$
and the difference body $D\,K$ is defined by $D(K)=\frac{1}{2}(K-K) =\frac12\{x-y: x,y\in K\}$.
For a fixed $x\in \operatorname{int} C$ we find
\begin{align*}
\|v\|_{x} = h\big(D (C-x)^*,v\big) \text{ and }
I^C_{x} = \big(D (C-x)^*\big)^*.
\end{align*}
Hence $I^C_x$ is the \emph{harmonic symmetrization} of $C$ in $x$ (see \cite{PT:2009}).
\goodbreak
There are several good choices for volume $\operatorname{vol}_C$ in $(C,d_C)$
which give a projective invariant notion of volume;
for example, the Busemann volume or the Holmes-Thompson volume of the associated Finsler manifold.
The Busemann volume is the $n$-dimensional Hausdorff volume of the metric space $(C,d_C)$.
Its density function with respect to Lebesgue measure $\lambda_n$ is given by
$v_n/\lambda_n(I^C_x)$. The Holmes-Thompson volume has density
$\lambda_n((I^C_x)^*)/v_n$.
Both the Busemann and the Holmes-Thompson volume
have the property that the density $\sigma_C$ is positive and continuous.
This allows us to directly apply the results from Section~\ref{sec:weighted}
to Hilbert geometries with these volume densities.
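As a one-dimensional sanity check, take $C=[-1,1]$ and $x\in(-1,1)$. For $v=1$ one finds $t^{\scriptscriptstyle +}=1-x$ and $t^{\scriptscriptstyle -}=1+x$, hence $\|1\|_x=\tfrac12\big(\tfrac{1}{1-x}+\tfrac{1}{1+x}\big)=(1-x^2)^{-1}$ and $I^C_x=[-(1-x^2),1-x^2]$. Both the Busemann density $v_1/\lambda_1(I^C_x)$ and the Holmes-Thompson density $\lambda_1\big((I^C_x)^*\big)/v_1$ equal $(1-x^2)^{-1}$, which is the hyperbolic density $\psi_1$ of Section~\ref{sec:hyperbolic}, in accordance with the fact that the Hilbert geometry of an ellipsoid is hyperbolic space.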
First, we consider random polytopes that are the convex hull of
points chosen uniformly according to $\operatorname{vol}_C$ in $K\in\cK(C)$.
In the following, the expectation $\E_K$ is with respect to the density $\operatorname{vol}_C/\operatorname{vol}_C(K)$.
\begin{theorem}\label{thmhg:wr}
Let $K\in\cK(C)$.
If $K_m$ is the convex hull of $m$ random points chosen uniformly in $K$ with respect to $\operatorname{vol}_C$, then
\begin{align*}
\lim_{m\to \infty} \E_K\big(\!\operatorname{vol}_C(K) - \operatorname{vol}_C(K_m)\big) \,{m^\frac{2}{n+1}}
&= \beta_n \, \operatorname{vol}_C(K)^{\frac{2}{n+1}} \int_{\partial K} H_{n-1}(K,x) ^\frac{1}{n+1} \sigma_C(x)^{\frac{n-1}{n+1}}\, dx,
\end{align*}
where $\beta_n$ is the constant from \eqnref{eqn:const}.
\end{theorem}
As a consequence of Corollary \ref{cor:wr}, we obtain the following result.
\begin{corollary}\label{corhg:wr}
Let $K\in\cK(C)$.
If $K_m$ is the convex hull of $m$ random points chosen uniformly in $K$ with respect to $\operatorname{vol}_C$, then
\begin{align*}
\lim_{m\to \infty} \E_K f_0(K_m) \,m^{-\frac{n-1}{n+1}}
&= \beta_n \, \operatorname{vol}_C(K)^{-\frac{n-1}{n+1}} \int_{\partial K} H_{n-1}(K,x) ^\frac{1}{n+1} \sigma_C(x)^{\frac{n-1}{n+1}}\, dx,
\end{align*}
where $\beta_n$ is the constant from \eqnref{eqn:const}.
\end{corollary}
Next, we consider best approximation.
Let
\begin{align*}
\operatorname{dist}_C\big(K,\cP^C_m\big)
&=\inf\big\{\!\operatorname{vol}_C(K\triangle P): P\subset \operatorname{int} C \text{ polytope with at most $m$ vertices}\big\},\\
\intertext{and}
\operatorname{dist}_C\big(K,\cP^C_{(m)}\big)
&=\inf\big\{\!\operatorname{vol}_C(K\triangle P): P\subset \operatorname{int} C \text{ polytope with at most $m$ facets}\big\}.
\end{align*}
We obtain the following result.
\begin{theorem}\label{thmhg:wb}
For $K\in\cK(C)$ with $C^2$ boundary,
\begin{align*}
\lim_{m\to \infty} \operatorname{dist}_C\big(K, \cP^C_m\big) \,{m^\frac{2}{n-1}}
&= \frac12 \operatorname{ldel}_{n-1} \bigg(\int_{\partial K} H_{n-1}(K,x) ^\frac{1}{n+1} \sigma_C(x) ^{\frac{n-1}{n+1}} \, dx\bigg)^{\frac{n+1}{n-1}},\\
\intertext{and}
\lim_{m\to \infty} \operatorname{dist}_C\big(K, \cP^C_{(m)}\big) \,{m^\frac{2}{n-1}}
&= \frac12 \operatorname{ldiv}_{n-1} \bigg(\int_{\partial K} H_{n-1}(K,x) ^\frac{1}{n+1} \sigma_C(x) ^{\frac{n-1}{n+1}} \, dx\bigg)^{\frac{n+1}{n-1}},
\end{align*}
where $\operatorname{ldel}_{n-1}$ and $\operatorname{ldiv}_{n-1}$ are the constants from Theorem \ref{thm:wb}.
\end{theorem}
Finally, we obtain the following result for the weighted floating body $K^{\sigma_C}_\delta$.
\begin{theorem}\label{thmhg:wfb}
For $K\in\cK(C)$,
\begin{align*}
\lim_{\delta \to 0} \frac{\operatorname{vol}_C(K) - \operatorname{vol}_C(K^{\sigma_C}_\delta)} {\delta^\frac{2}{n+1}}
&= \alpha_n \int_{\partial K} H_{n-1}(K,x) ^\frac{1}{n+1} \sigma_C(x)^{\frac{n-1}{n+1}} \,dx,
\end{align*}
where $\alpha_n$ is defined in Theorem \ref{thm:wfb}.
\end{theorem}
Note that the floating area
\begin{align*}
\Omega_C(K)=\int_{\partial K} H_{n-1}(K,x) ^\frac{1}{n+1} \sigma_C(x)^{\frac{n-1}{n+1}} \,dx,
\end{align*}
depends on the Hilbert geometry $(C,d_C)$ and the choice of the volume density $\sigma_C$.
Let $\cK_{(0)}({\R^{n}})$ be the set of convex bodies in ${\R^{n}}$ containing the origin in their interiors.
For $C\in \cK_{(0)}({\R^{n}})$ and $0<\lambda <1$, the floating area
$\Omega_C(\lambda C)$ is a centro-affine (or $\operatorname{GL}(n)$) invariant by the definition of floating area and
the projective invariance of the volume $\operatorname{vol}_C$ (however, note that $\Omega_C(\lambda C)$ is not a projective invariant).
For the limiting case $\lambda\to1$ and the Busemann floating area, we obtain the following result.
The proof is based on results by Berck, Bernig, and Vernicos \cite{BBV:2010},
who studied the limiting behavior of the volume entropy of $\lambda C$.
\begin{theorem}\label{thm:omegacp}
For $C\in\cK_{(0)}({\R^{n}})$ with $C^{1,1}$ boundary,
\begin{align*}
\Omega_{n}(C)
&= 2^{\frac{n-1}2} \lim_{\lambda \to 1-} \Omega_C(\lambda C) (1-\lambda)^{\frac{n-1}2},
\end{align*}
where $\Omega_C$ is the Busemann floating area.
\end{theorem}
\noindent
Here $\Omega_{n}(C)$ is the classical {\em centro-affine surface area} of $C$ which is defined as
\begin{align*}
\Omega_{n}(C)
&=\int_{\partial C} \frac{ H_{n-1}(C,x)^\frac{1}{2}}{ \big(x\cdot n_C(x)\big)^{\frac{n-1}2}} \,dx.
\end{align*}
Centro-affine surface area is an upper semicontinuous and $\operatorname{GL}(n)$ invariant valuation on $\cK_{(0)}({\R^{n}})$. Moreover, it is basically the only such functional (see \cite{LR:2010}). For more information on centro-affine surface area, which is also called $L^n$-affine surface area, see \cite{Lutwak:1996, SW:2004, MW:2000}.
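For instance, for the Euclidean unit ball one has $H_{n-1}(\mathbb B^n,x)=1$ and $x\cdot n_{\mathbb B^n}(x)=1$ for all $x\in\S^{n-1}$, so that $\Omega_n(\mathbb B^n)=\int_{\S^{n-1}}dx=nv_n$, the surface area of the unit sphere.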
\begin{proof}
Berck, Bernig, and Vernicos \cite[Proposition 2.8]{BBV:2010} obtained that
\begin{equation}\label{eqn:berck}
\lim_{\lambda \to 1-} \sigma_C(\lambda x) (1-\lambda)^{\frac{n+1}2}
= \frac{H_{n-1}(C, x)^{\frac12}}{\big(2\,x\cdot n_C(x)\big)^\frac{n+1}2},
\end{equation}
for $x\in\partial C$. Using a version of Blaschke's rolling theorem, they also showed in \cite[Proposition 2.10]{BBV:2010} that
\begin{equation}\label{eqn:roll}
\sigma_C(\lambda x) \le c (1-\lambda)^{-\frac{n+1}2},
\end{equation}
where the constant $c$ does not depend on $x$ and $\lambda$.
Thus,
\begin{align*}
\lim_{\lambda \to 1-} \Omega_C(\lambda C) (1-\lambda)^{\frac{n-1}2}
&= \lim_{\lambda \to 1-} \lambda^{n\frac{n-1}{n+1}}
\int_{\partial C} H_{n-1}(C,x)^\frac{1}{n+1}
\Big((1-\lambda)^{\frac{n+1}2} \sigma_C(\lambda x)\Big)^\frac{n-1}{n+1} \,dx \\
&= \int_{\partial C} H_{n-1}(C,x)^\frac{1}{n+1}
\Bigg( \frac{H_{n-1}(C, x)^{\frac12}}{\big(2\,x\cdot n_C(x)\big)^\frac{n+1}2}\Bigg)^\frac{n-1}{n+1} \,dx\\
&= 2^{-\frac{n-1}2} \Omega_n(C),
\end{align*}
where the second equality follows from Lebesgue's Dominated Convergence Theorem, together with \eqnref{eqn:berck} and the bound \eqnref{eqn:roll}.
\end{proof}
Theorem \ref{thm:omegacp} holds true not only for the Busemann volume, but also for other notions of volume. This follows since, according to Berck, Bernig, and Vernicos \cite{BBV:2010}, equation \eqnref{eqn:berck} holds true for the volume densities of all volumes that satisfy the following very general assumptions:
\begin{itemize}
\item The volume measure $\operatorname{vol}_C$ is a Borel measure on $\operatorname{int} C$ and absolutely continuous with respect to the Lebesgue measure.
\item If $A\subset C \subset C'$ where $C,C'\in\cK(\R^n)$, then $\operatorname{vol}_C(A)\geq \operatorname{vol}_{C'}(A)$.
\item If $C$ is an ellipsoid, then $\operatorname{vol}_C$ is the hyperbolic volume.
\end{itemize}
All volume measures that satisfy these conditions are equivalent, i.e., if $\sigma_C$ and $\bar{\sigma}_C$ are the volume densities of two volume measures $\operatorname{vol}_C$ and $\bar{\operatorname{vol}}_C$, then there exist positive real constants $a,b$ such that
\begin{align*}
a \sigma_C(x) \leq \bar{\sigma}_C(x) \leq b \sigma_C(x),
\end{align*}
see e.g.\ \cite[p.~249]{HandbookHilbert:2014}.
Hence, by \eqnref{eqn:roll}, we conclude that
\begin{align*}
\bar{\sigma}_C(\lambda x) \le bc (1-\lambda)^{-\frac{n+1}{2}}.
\end{align*}
Therefore Theorem \ref{thm:omegacp} also holds for any volume measure that satisfies these conditions and
in particular for the Holmes-Thompson volume.
\section{Proof of Theorem \ref{thm:wfb}}\label{sec:proof}
The first step of the proof is the following disintegration result, which follows easily from the area formula (see e.g.\ \cite[Prop.~3.7]{BW:2016} or \cite[Lem.~4.2]{BFH:2010} for related results).
\begin{lemma}\label{lem:coneformula}
Let $K,L$ be convex bodies such that $L\subseteq K$ and $0\in\operatorname{int} L$. Then
\begin{align*}
\Psi(K)-\Psi(L)
&= \int_{\partial K} n_K(x)\cdot (x\|x\|^{-n}) \int_{\|x_L\|}^{\|x\|} \psi(tx\|x\|^{-1})t^{n-1}\, dt\, dx,
\end{align*}
where $\{x_L\} = \partial L \cap [0,x]$.
\end{lemma}
The next step is to give upper and lower bounds of the weighted floating body $K^\phi_\delta$ by a re-parametrized Euclidean floating body. To be more precise, we find $\delta_1=\delta_1(\delta)$ and $\delta_2=\delta_2(\delta)$ such that $0<\delta_1\leq \delta_2$ and $K_{\delta_2} \subseteq K_{\delta}^\phi \subseteq K_{\delta_1}$. Before we go into the details of this proof, we need to fix a few notions.
For $v\in\S^{n-1}$ and $t\in \R$, define, as before, the closed halfspaces
$H^-(v,t):=\{y\in\R^n : y\cdot v \leq t\}$ and $H^+(v,t) := H^-(-v,-t)$.
The weighted floating body $K_\delta^\phi$ can be expressed as
\begin{align}\label{eqn:floating_par}
K_\delta^\phi = \bigcap \Big\{H^-\big(v,t_{\delta}(v)\big) : v\in\S^{n-1}\Big\},
\end{align}
where $t_\delta(v)=t(K,\phi,\delta,v)$ is determined implicitly by
\begin{align}\label{eqn:floating_par_t}
\delta
= \Phi\Big(K\cap H^+\big(v,t_{\delta}(v)\big)\Big)
= \int_{t_\delta(v)}^{h_K(v)} \int_{K\,\cap\,H(v,s)} \phi(x) \,d\lambda_{H(v,s)}(x)\, ds.
\end{align}
Here $\lambda_{H(v,s)}$ is the Lebesgue measure in the affine hyperplane $H(v,s)=\{y\in\R^n:y\cdot v = s\}$.
Note that there exists $\delta_0>0$ such that the function $t_\delta(v)$ is continuous for $(\delta,v)\in [0,\delta_0)\times \S^{n-1}$ and $t_0(v) = h_K(v)$.
\begin{lemma}
Let $K\in\cK(\R^n)$ and $\varepsilon\in(0,\min_{\partial K} \phi)$.
For
\begin{align}\label{eqn:par_bound}
\alpha &:= \min_{\partial K} \phi-\varepsilon,
&
\beta &:= \max_{\partial K} \phi+\varepsilon,
\end{align}
there exists $\delta_0 =\delta_0(\varepsilon)>0$ such that for all $\delta\in(0,\delta_0)$, we have
\begin{align*}
K_{\delta/\alpha} \subseteq K_\delta^\phi \subseteq K_{\delta/\beta}.
\end{align*}
\end{lemma}
\begin{proof}
Note that by our assumptions $\phi$ is continuous and positive on $\partial K$ and therefore $\min_{\partial K}\phi>0$.
First we show that there is $\delta_1 = \delta_1(\varepsilon)>0$ such that for all $\delta \in(0,\delta_1)$ and $v\in\S^{n-1}$ we have
\begin{equation}\label{eqn:proofbound1}
K\cap H^+\big(v,t_\delta(v)\big) \subseteq \big\{ x\in K : \phi(x)\leq \beta\big\}.
\end{equation}
Assume the opposite. Then there are sequences $\delta_k\to 0^+$, $v_k\in\S^{n-1}$ and $y_k\in K$ such that $\phi(y_k) \geq \beta$ and
$y_k\cdot v_k\geq t_{\delta_k}(v_k)$.
By compactness there are converging subsequences with limits $v_0\in\S^{n-1}$ and $y_0\in K$ such that $\phi(y_0)\geq \beta$ and
$y_0\cdot v_0 \geq t_0(v_0) = h_K(v_0)$.
Thus $y_0\in\partial K$ and therefore $\phi(y_0) \leq \max_{\partial K} \phi < \beta \leq \phi(y_0)$ -- a contradiction.
By (\ref{eqn:proofbound1}), we have that
\begin{align*}
\delta
&= \Phi\Big(K\cap H^+\big(v, t_\delta(v)\big)\Big)
\leq \beta \lambda_n\Big(K\cap H^+\big(v,t_\delta(v)\big)\Big),
\end{align*}
which yields $t(K,1,\delta/\beta,v)\geq t(K,\phi,\delta,v)$. Thus, by \eqnref{eqn:floating_par} and \eqnref{eqn:floating_par_t}, $K^\phi_\delta\subseteq K_{\delta/\beta}$.
Conversely, there is $\delta_2 = \delta_2(\varepsilon)>0$ such that for all $\delta\in(0,\delta_2)$ and $v\in\S^{n-1}$ we have
\begin{align*}
K\cap H^+\big(v,t(K,\phi,\delta,v)\big) \subseteq \big\{x\in K: \phi(x) \geq \alpha\big\}.
\end{align*}
Similar to the above we first have
\begin{align*}
\delta
&= \Phi\Big(K\cap H^+\big(v,t_\delta(v)\big)\Big)
\geq \alpha \lambda_n\Big(K\cap H^+\big(v,t_\delta(v)\big)\Big),
\end{align*}
and therefore $K_{\delta/\alpha} \subseteq K^\phi_{\delta}$. Setting $\delta_0 = \min\{\delta_1,\delta_2\}$ concludes the proof.
\end{proof}
For two distinct points $x,y\in\R^n$ the affine segment joining $x$ and $y$ is denoted by $[x,y]$. The previous lemma immediately implies the following result.
\begin{corollary}\label{cor:bound1}
Let $K\in\cK(\R^n)$, let $\alpha,\beta$ be as in \eqnref{eqn:par_bound}, and let $z\in\operatorname{int}\, K$. For $x\in\partial K$ we set
\begin{align*}
\big\{x_{\delta/\alpha}\big\} &= \partial K_{\delta/\alpha} \cap [x,z],&
\big\{x_{\delta/\beta}\big\} &= \partial K_{\delta/\beta} \cap [x,z],&
\big\{x_{\delta}^{\phi}\big\} &= \partial K_{\delta}^\phi \cap [x,z].
\end{align*}
Then for $\delta>0$ sufficiently small, we have
\begin{align*}
\big\|x_{\delta/\alpha}-z\big\| \leq \big\|x_\delta^\phi-z\big\| \leq \big\|x_{\delta/\beta}-z\big\|.
\end{align*}
\end{corollary}
To complete the proof, we proceed as follows: the left-hand side of \eqnref{eqn:limit} can be written as an integral over $\partial K$ by Lemma \ref{lem:coneformula}. Theorem \ref{thm:wfb} then follows by applying Lebesgue's Dominated Convergence Theorem and calculating the point-wise limit of the integrand. To do so, we need to bound the integrand from above by an integrable function.
We denote by $r_K(x)$, for $x\in\partial K$, the maximal radius of a Euclidean ball that contains $x$ and is contained in $K$. It was proven in \cite{SW:1990} that for every $s>-1$ we have
\begin{align*}
\int_{\partial K} r_K(x)^{s}\,dx <+\infty.
\end{align*}
Hence, in particular, $r_K^{-\frac{n-1}{n+1}}$ is integrable on $\partial K$; it was already used as an upper bound for the integrand in the case of the Euclidean floating body. The following upper bound for the weighted floating body follows from the Euclidean results obtained in \cite{SW:1990}.
\begin{lemma}\label{lem:asym2}
Let $K\in\cK(\R^n)$ with $0\in\operatorname{int}\, K$. There exists $C>0$ such that for $\delta>0$ sufficiently small
\begin{align*}
\frac{x\cdot n_K(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_\delta^\phi\|}^{\|x\|} t^{n-1} \psi(tx/\|x\|)\, dt
&\leq C\, \big(\max_{K} \psi\big)\, r_K(x)^{-\frac{n-1}{n+1}},
\end{align*}
for almost all $x\in \partial K$.
\end{lemma}
\begin{proof}
Since $0\in\operatorname{int} K$, by Corollary \ref{cor:bound1} we have $\|x_{\delta}^\phi\| \geq \|x_{\delta/\alpha}\|$. We conclude
\begin{align*}
\frac{x\cdot n_K(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_\delta^\phi\|}^{\|x\|} t^{n-1} \psi(tx/\|x\|)\, dt
&\leq \big(\max_K \psi\big) \, \frac{x\cdot n_K(x)}{\delta^{2/(n+1)}\|x\|^n}
\int_{\|x_{\delta/\alpha}\|}^{\|x\|} t^{n-1} \, dt.
\end{align*}
Furthermore,
\begin{align*}
\frac{x\cdot n_K(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_{\delta/\alpha}\|}^{\|x\|} t^{n-1} \, dt
&\leq \, \frac{x\cdot n_K(x)}{\|x\|} \frac{\big\|x-x_{\delta/\alpha}\big\|}{\delta^{2/(n+1)}}
\leq C r_K(x)^{-\frac{n-1}{n+1}},
\end{align*}
where the last inequality is the Euclidean result established in \cite[Lemma 6]{SW:1990}.
\end{proof}
To calculate the point-wise limit of the integrand, we also use the Euclidean result to obtain the result for the weighted floating body. We recall some notions for boundary points of a convex body (see, for example, \cite[Section 2.2, Section 2.5]{Schneider:2014}).
A boundary point $x$ of $K$ is called {\em regular} if there is a unique outer unit normal $n_K(x)$ to $K$ at $x$. Almost all boundary points are regular.
Recall that for a convex body $K$ the boundary $\partial K$ is $C^2$ almost everywhere in the following sense:
If $x$ is a regular boundary point, there is $\varepsilon>0$ and an open neighborhood $U$ of $x$ such that $U\cap \partial K$ can be described as
\begin{align*}
U\cap \partial K = \big\{x+v-f(v)\,n_K(x):v\in n_K(x)^\bot \cap \varepsilon \,\mathbb B^n\big\},
\end{align*}
where $f\colon n_K(x)^\perp\cap \varepsilon\,\mathbb B^n\to \R$ is a convex function which satisfies $f\geq 0$, $f(0)=0$ and $n_K(x)^\perp=\{y\in\R^n: y\cdot n_K(x)=0\}$. A regular boundary point $x\in\partial K$ is \emph{normal} (or second order differentiable), if $f$ is twice differentiable at $0$ in the following sense: $f$ is differentiable at $0$ and there exists a symmetric linear map $A\colon \R^n\to \R^n$ such that for $v,w\in n_K(x)^\perp$,
\begin{align*}
f(w) = f(v)+ \nabla f(v)\cdot (w-v) + \frac{1}{2} A(w-v)\cdot (w-v) + o\big(\|w-v\|^2\big),
\end{align*}
as $\| w-v\|\to 0$.
Note that almost all boundary points are normal (see \cite[Thm.~2.5.5]{Schneider:2014}), and the (generalized) Gauss--Kronecker curvature $H_{n-1}(K,x)=\det(A)$ exists for normal boundary points.
\begin{lemma}\label{lem:lim2}
Let $K \in \cK(\R^n)$. If $x\in\partial K$ is a normal boundary point, then
\begin{align*}
\lim_{\delta\to 0^+} \frac{x\cdot n_K(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_\delta^\phi\|}^{\|x\|} t^{n-1} \psi(tx/\|x\|)\, dt
&= \alpha_n\, H_{n-1}(K,x)^{\frac{1}{n+1}}\, \phi(x)^{-\frac{2}{n+1}}\, \psi(x).
\end{align*}
\end{lemma}
In the proof of Lemma \ref{lem:lim2} we will use the following two results.
\begin{lemma}[\!\! {\cite[Lemma 2.9]{BW:2016}}]
Let $K\in\cK(\R^n)$ with $0\in\operatorname{int}\, K$ and $\varepsilon>0$.
If $x\in\partial K$ is a normal boundary point such that $H_{n-1}(K,x)>0$, then
there is $\delta_0=\delta_0(\varepsilon)$ such that for all $\delta\in(0,\delta_0)$,
\begin{align*}
[x,0]\cap L_\delta^\phi = [x,0] \cap K_\delta^\phi,
\end{align*}
where $L=K\cap (x+ \varepsilon\, \mathbb B^n)$.
In particular, if we set $\{x_\delta^{\phi,K}\} = \partial K_\delta^\phi\cap [x,0]$ and $\{x_\delta^{\phi,L}\} = \partial L_\delta^\phi\cap [x,0]$,
then $x^{\phi,K}_\delta=x^{\phi,L}_\delta$.
\end{lemma}
\begin{lemma}[\!\!{\cite{SW:1990}}]\label{lem:lim1}
Let $K \in\cK(\R^n)$. If $x\in\partial K$ is a normal boundary point, then
\begin{align*}
\lim_{\delta\to 0^+} \frac{x\cdot n_K(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_\delta\|}^{\|x\|} t^{n-1} \, dt
&= \alpha_n\, H_{n-1}(K,x)^{\frac{1}{n+1}}.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:lim2}]
Since $x$ is normal, $H_{n-1}(K,x)$ exists. First, if $H_{n-1}(K,x)=0$, then
\begin{align*}
\frac{x\cdot n_K(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_\delta^\phi\|}^{\|x\|} t^{n-1} \psi(tx/\|x\|)\, dt
&\leq \big(\max_K \psi\big)\,
\frac{x\cdot n_K(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_{\delta/\alpha}\|}^{\|x\|} t^{n-1}\, dt.
\end{align*}
By Lemma \ref{lem:lim1}, we conclude
\begin{align*}
\limsup_{\delta\to 0^+} \frac{x\cdot n_K(x)}{\delta^{\frac{2}{n+1}}\|x\|^n} \int_{\|x_\delta^\phi\|}^{\|x\|} t^{n-1} \psi\Big(\frac{tx}{\|x\|}\Big)\, dt
& \leq \frac{\max_K \psi}{\alpha^{\frac{2}{n+1}}}
\limsup_{\delta\to 0^+} \frac{x\cdot n_K(x)}{(\delta/\alpha)^{\frac{2}{n+1}}\|x\|^n}
\int_{\|x_{\delta/\alpha}\|}^{\|x\|} t^{n-1}\, dt
=0.
\end{align*}
Now assume $H_{n-1}(K,x)>0$ and let $\varepsilon>0$ be arbitrary. Set $L = K\cap (x+\varepsilon\, \mathbb B^n)$. Then $H_{n-1}(L,x) = H_{n-1}(K,x)$ and $n_K(x)=n_L(x)$. Furthermore, for $\delta$ small enough, we have
\begin{align*}
\frac{x\cdot n_K(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_\delta^{\phi,K}\|}^{\|x\|} t^{n-1} \psi(tx/\|x\|)\, dt
&= \frac{x\cdot n_L(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_\delta^{\phi,L}\|}^{\|x\|} t^{n-1} \psi(tx/\|x\|)\, dt.
\end{align*}
Let $z\in (\operatorname{int} L)\cap [0,x]$. We apply Corollary \ref{cor:bound1} on $L$ with $\varepsilon$ and obtain, for $y\in\partial L$,
\begin{align*}
\|y_{\delta/\beta}-z\| \geq \|y_{\delta}^{\phi,L}-z\| \geq \|y_{\delta/\alpha}-z\|,
\end{align*}
where $\beta = \max_{\partial L} \phi +\varepsilon $ and $\alpha = \min_{\partial L} \phi - \varepsilon$.
Since $\|x\| = \|z\|+\|x-z\|$, this yields $\|x_{\delta/\beta}\| \geq \|x_{\delta}^{\phi,L}\| \geq \|x_{\delta/\alpha}\|$. We conclude
\begin{align*}
\frac{x\cdot n_L(x)}{\delta^{\frac{2}{n+1}}\|x\|^n} \int_{\|x_\delta^{\phi,L}\|}^{\|x\|} t^{n-1} \psi\Big(\frac{tx}{\|x\|}\Big)\, dt
&\leq \frac{x\cdot n_L(x)}{\delta^{\frac{2}{n+1}}\|x\|^n} \,
\Bigg(\max_{t\in \big[\|x_{\delta/\alpha}^L\|,\|x\|\big]} \psi\Big(\frac{tx}{\|x\|}\Big)\Bigg)
\int_{\|x_{\delta/\alpha}^L\|}^{\|x\|} t^{n-1} \, dt,
\end{align*}
and therefore
\begin{align*}
\limsup_{\delta\to 0^+} \frac{x\cdot n_L(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_\delta^{\phi,L}\|}^{\|x\|} t^{n-1} \psi(tx/\|x\|)\, dt
&\leq \alpha_n \, H_{n-1}(L,x)^{\frac{1}{n+1}}\, \frac{\psi(x)}{\alpha^{2/(n+1)}}.
\end{align*}
Conversely, we have
\begin{align*}
\frac{x\cdot n_L(x)}{\delta^{\frac{2}{n+1}}\|x\|^n} \int_{\|x_\delta^{\phi,L}\|}^{\|x\|} t^{n-1} \psi\Big(\frac{tx}{\|x\|}\Big)\, dt
&\geq \frac{x\cdot n_L(x)}{\delta^{\frac{2}{n+1}}\|x\|^n}\,
\Bigg(\min_{t\in \big[\|x_{\delta/\beta}^L\|,\|x\|\big]} \psi\Big(\frac{tx}{\|x\|}\Big)\Bigg)
\int_{\|x_{\delta/\beta}^L\|}^{\|x\|} t^{n-1} \, dt,
\end{align*}
and hence
\begin{align*}
\liminf_{\delta\to 0^+} \frac{x\cdot n_L(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_\delta^{\phi,L}\|}^{\|x\|} t^{n-1} \psi(tx/\|x\|)\, dt
&\geq \alpha_n\, H_{n-1}(L,x)^{\frac{1}{n+1}} \frac{\psi(x)}{\beta^{2/(n+1)}}.
\end{align*}
Since $\varepsilon>0$ can be chosen arbitrarily small and $\beta,\alpha \to \phi(x)$ for $\varepsilon\to 0$, we conclude
\begin{align*}
\lim_{\delta\to 0^+} \frac{x\cdot n_L(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_\delta^{\phi,L}\|}^{\|x\|} t^{n-1} \psi(tx/\|x\|)\, dt
&= \alpha_n\, H_{n-1}(L,x)^{\frac{1}{n+1}}\, \phi(x)^{-\frac{2}{n+1}}\, \psi(x).
\end{align*}
This finishes the proof, as, for $\delta>0$ sufficiently small, we have $n_L(x) = n_K(x)$, $H_{n-1}(L,x) = H_{n-1}(K,x)$ and $x_\delta^{\phi,L} = x_\delta^{\phi,K}$.
\end{proof}
The proof of \eqnref{eqn:limit} is now straightforward. By Lemma \ref{lem:coneformula} we have
\begin{align*}
\frac{\Psi(K)-\Psi(K^\phi_\delta)}{\delta^{\frac{2}{n+1}}} = \int_{\partial K} \frac{x\cdot n_K(x)}{\delta^{2/(n+1)}\|x\|^n} \int_{\|x_\delta^\phi\|}^{\|x\|} t^{n-1} \psi(tx/\|x\|)\, dt\, dx.
\end{align*}
By Corollary \ref{cor:bound1}, there is $\delta_0>0$ such that the integrand is bounded by an integrable function for all $\delta<\delta_0$. By Lebesgue's Dominated Convergence Theorem and Lemma \ref{lem:lim2}, we conclude
\begin{align*}
\lim_{\delta\to 0^+} \frac{\Psi(K)-\Psi(K^\phi_\delta)}{\delta^{\frac{2}{n+1}}} = \alpha_n \int_{\partial K} H_{n-1}(K,x)^{\frac{1}{n+1}} \phi(x)^{-\frac{2}{n+1}} \psi(x)\, dx.
\end{align*}
\subsection*{Acknowledgments}
The authors thank Juan Carlos Alvarez Paiva and Matthias Reitzner for helpful discussions.
The work of Monika Ludwig was supported, in part, by Austrian Science Fund (FWF) Project P25515-N25. Elisabeth Werner was partially supported by NSF grant 1504701.
\goodbreak
|
1,314,259,995,642 | arxiv | \section{Introduce a modified integration}
Nearly twenty years ago Kloeden and Platen~\cite{Kloeden92} described schemes for numerically integrating stochastic differential equations (\sde{}s). Intervening research led to recent developments of useful Runge--Kutta like methods for It\^o \sde{}s by Andreas Rossler~\cite{Rossler2010, Rossler2009} and for Stratonovich \sde{}s by Yoshio Komori~\cite{Komori2007c, Komori2007b, Komori2007a}.
These numerical integration schemes for \sde{}s are quite complicated, and typically do not easily reduce to accurate deterministic schemes.
This short article introduces a Runge--Kutta scheme for \sde{}s that does straightforwardly reduce to a well known deterministic scheme---the variously called Improved Euler, Heun, or Runge--Kutta~2 scheme.
As well as being a novel practical scheme for the numerical integration of \sde{}s, because of the strong connection to a well known deterministic integration scheme, the scheme proposed here serves as an entry level scheme for teaching stochastic dynamics. One could use this scheme together with Higham's~\cite{Higham01} introduction to the numerical simulation of \sde{}s. Section~\ref{sec:edet} on the method applied to examples assumes a background knowledge of basic numerical methods for ordinary differential equations and deterministic calculus as typically taught in early years at university. Section~\ref{sec:peg} on the underlying theory assumes knowledge of stochastic processes such as continuous time Markov Chains, and, although not essential, preferably at least a formal introduction to stochastic differential equations (such as the book~\cite{Roberts08g} or article~\cite{Higham01} with material that is successfully taught at second\slash third year university).
Consider the vector stochastic process~$\vec X(t)\in \mathbb R^n$ that satisfies the general It\^o \sde
\begin{equation}
d\vec X=\vec a(t,\vec X)\,dt+\vec b(t,\vec X)\,dW,
\label{eq:sde1ab}
\end{equation}
where drift~$\vec a$ and volatility~$\vec b$ are sufficiently smooth functions of their arguments. The noise is represented by the differential~$dW$ which symbolically denotes infinitesimal increments of the random walk of a Wiener process~$W(t,\omega)$. The symbolic form of the \sde~\eqref{eq:sde1ab} follows from the most basic approximation to an evolving system with noise that over a time step~$\Delta t_k$ the change in the dependent variable is
\begin{equation*}
\Delta\vec X_k\approx \vec a(t_k,\vec X_k)\Delta t_k
+\vec b(t_k,\vec X_k){\Delta W}_k
\end{equation*}
where ${\Delta W}_k=W(t_{k+1},\omega)-W(t_k,\omega)$ symbolises some `random' effect. This basic approximation is low accuracy and needs improving for practical applications, but it does form a basis for theory, and it introduces the noise process~$W(t,\omega)$, called a Wiener process. We use~$\omega$ to denote the realisation of the noise. Such a Wiener process is defined by $W(0,\omega)=0$ and that the increment $W(t,\omega)-W(s,\omega)$ is distributed as a zero-mean, normal variable, with variance~$t-s$\,, and independent of earlier times. Consequently, crudely put, $dW/dt$~then is a `white noise' with a flat power spectrum. The \sde~\eqref{eq:sde1ab} may then be interpreted as a dynamical system affected by white noise.
The proposed modified Runge--Kutta scheme for the general \sde~\eqref{eq:sde1ab} is the following. Given time step~$h$, and given the value $\vec X(t_k)=\vec X_k$\,, estimate $\vec X(t_{k+1})$ by~$\vec X_{k+1}$ for time $t_{k+1}=t_k+h$ via
\begin{align}
&\vec K_1=h\vec a(t_k,\vec X_k)+({\Delta W}_k-S_k\sqrt h)\vec b(t_k,\vec X_k),
\nonumber\\&\vec K_2=h\vec a(t_{k+1},\vec X_k+\vec K_1)+({\Delta W}_k+S_k\sqrt h)\vec b(t_{k+1},\vec X_k+\vec K_1),
\nonumber\\&\vec X_{k+1}=\vec X_k+\rat12(\vec K_1+\vec K_2),
\label{eq:ieuabj}
\end{align}
\begin{itemize}
\item where ${\Delta W}_k=\sqrt hZ_k$ for normal random $Z_k\sim N(0,1)$;
\item and where $S_k=\pm1$\,, each alternative chosen with probability~$1/2$.
\end{itemize}
The above describes only one time step.
Repeat this time step $(t_m-t_0)/h$~times in order to integrate an \sde~\eqref{eq:sde1ab} from time $t=t_0$ to $t=t_m$\,.
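For concreteness, the following Python sketch implements one step of the scheme~\eqref{eq:ieuabj} for a single noise source; the function and variable names are my own choices for illustration, not part of the scheme's definition.
\begin{verbatim}
import numpy as np

def ieu_step(a, b, t, X, h, rng, dW=None):
    """One step of the stochastic Improved Euler scheme for dX = a dt + b dW.
    a(t, X) and b(t, X) return arrays shaped like X (single Wiener process).
    If dW is None the increment is drawn here; supplying it lets the caller
    reuse one Brownian path at several step sizes."""
    if dW is None:
        dW = np.sqrt(h) * rng.standard_normal()   # Delta W_k ~ N(0, h)
    S = rng.choice([-1.0, 1.0])                   # S_k = +-1, probability 1/2 each
    K1 = h * a(t, X) + (dW - S * np.sqrt(h)) * b(t, X)
    K2 = h * a(t + h, X + K1) + (dW + S * np.sqrt(h)) * b(t + h, X + K1)
    return X + 0.5 * (K1 + K2)
\end{verbatim}
Calling this function repeatedly, as described above, carries the solution from $t_0$ to $t_m$\,.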
The appeal of the scheme~\eqref{eq:ieuabj} as an entry to stochastic integrators is its close connection to deterministic integration schemes.
When the stochastic component vanishes, $\vec b=\vec 0$\,, the integration step~\eqref{eq:ieuabj} is precisely the Improved Euler, Heun, or Runge--Kutta~2 scheme that most engineering, science and mathematics students learn in undergraduate coursework.
This connection has another useful consequence in application: for systems with small noise we expect that the integration error of the \sde\ is only a little worse than that of the deterministic system. Although Section~\ref{sec:peg} proves the typical \Ord{h}~error of the stochastic scheme~\eqref{eq:ieuabj}, when the noise is small expect the error in practice to be better than this order suggests, as demonstrated in the examples of the next Section~\ref{sec:edet}.
Section~\ref{sec:peg} also proves that the scheme~\eqref{eq:ieuabj} integrates Stratonovich \sde{}s to~\Ord{h} provided one sets $S_k=0$ throughout (instead of choosing~$\pm 1$).
An outstanding challenge is to generalise this method~\eqref{eq:ieuabj} to multiple noise sources.
\section{Examples demonstrate \Ord{h} error is typical}
\label{sec:edet}
This section applies the scheme~\eqref{eq:ieuabj} to three example \sde{}s for which, for comparison, we know the analytic solution from Kloeden and Platen~\cite{Kloeden92}. Two of the examples exhibit errors~\Ord{h}, as is typical, whereas the third exhibits an error~\Ord{h^2}, which occurs for both deterministic \ode{}s and a class of \sde{}s. These errors are `pathwise' errors which means that for any one given realisation~$\omega$ of the noise process~$W(t,\omega)$ we refer to the order of error as the time step~$h\to0$ for a fixed realisation~$\omega$.
\subsection{Autonomous example}
\label{sec:aeg}
Consider the `autonomous' \sde
\begin{equation}
dX=\left[\rat12X+\sqrt{1+X^2}\right]dt +\sqrt{1+X^2}\,dW
\qtq{with}X(0)=0\,,
\label{eq:aeg}
\end{equation}
for some Wiener process~$W(t,\omega)$. The \sde\ is not strictly autonomous because the noise~$dW$ introduces time dependence; we use the term `autonomous' to indicate the drift~$a$ and volatility~$b$ are independent of time. For the \sde~\eqref{eq:aeg}, Kloeden and Platen~\cite{Kloeden92} list the analytic solution as $X(t,\omega)=\sinh\big[t+W(t,\omega)\big]$.
Such analytic solutions are straightforwardly checked via the basic version of It\^o's formula,
\begin{equation}
\text{if }X=f(t,w)\text{ for }w=W(t,\omega),\text{ then }
dX=\left(\D tf+\frac12\DD wf\right)\,dt +\D wf\,dW,
\label{eq:ito}
\end{equation}
which may be understood as the usual deterministic derivative rule $dX=(\D tf)\,dt +(\D wf)\,dW$ with the extra term~$\frac12(\DD wf)\,dt$ arising from a formal multi-variable Taylor series in the infinitesimals $dt$~and~$dw$, recognising formally that $dW^2=dt$ in effect, and all remaining infinitesimal products negligible~\cite[e.g.]{Higham01, Roberts08g}.
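For example, the claimed solution $X=\sinh\big[t+W(t,\omega)\big]$ of the \sde~\eqref{eq:aeg} is checked as follows: with $f(t,w)=\sinh(t+w)$, It\^o's formula~\eqref{eq:ito} gives
\begin{equation*}
dX=\left[\cosh(t+W)+\rat12\sinh(t+W)\right]dt+\cosh(t+W)\,dW
=\left[\rat12X+\sqrt{1+X^2}\right]dt+\sqrt{1+X^2}\,dW,
\end{equation*}
upon using $\cosh(t+W)=\sqrt{1+\sinh^2(t+W)}=\sqrt{1+X^2}$, which is precisely the \sde~\eqref{eq:aeg}.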
\begin{figure}
\centering
\begin{tabular}{c@{}cc}
\rotatebox{90}{\hspace{20ex}$X(t)$} &
\includegraphics[scale=1.2]{sde1absim2}\\
& time $t$
\end{tabular}
\caption{As the time step is successively halved, $n=16,32,64,128,256$ time steps over $0\leq t\leq1$\,, the numerical solutions of the \sde~\eqref{eq:aeg} via the method~\eqref{eq:ieuabj} appear to converge.}
\label{fig:sde1absim2}
\end{figure}
The proposed numerical scheme~\eqref{eq:ieuabj} was applied to integrate the \sde~\eqref{eq:aeg} from $t=0$ to end time $t=1$ with a time step of $h=1/n$ for $n=2^{16},2^{15},\ldots,2^4$ steps. For each of $700$~realisations of the noise~$W(t,\omega)$, the Wiener increments, ${\Delta W}\sim N(0,2^{-16})$, were generated on the finest time step, and subsequently aggregated to the corresponding increments for each realisation on the coarser time steps. Figure~\ref{fig:sde1absim2} plots the predicted~$X(t,\omega)$ obtained from the numerical scheme~\eqref{eq:ieuabj} for just one realisation~$\omega$ using different time steps. The predictions do appear to converge to a well defined stochastic process as the step size is repeatedly halved.
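A minimal Python sketch of this refinement of the noise (illustrative only; the names are mine) generates the finest increments once and then sums consecutive pairs to obtain the increments of the same realisation on each coarser grid.
\begin{verbatim}
import numpy as np

def coarsen(dW, factor=2):
    """Sum consecutive Wiener increments to obtain the increments of the
    same realisation on a grid that is `factor` times coarser."""
    return dW.reshape(-1, factor).sum(axis=1)

rng = np.random.default_rng(1)
n = 2**16
increments = {n: np.sqrt(1.0 / n) * rng.standard_normal(n)}  # Delta W ~ N(0, 2^-16)
while n > 2**4:
    increments[n // 2] = coarsen(increments[n])
    n //= 2
# increments[m] now holds the m increments for time step h = 1/m, all
# consistent with the one underlying realisation W(t, omega).
\end{verbatim}
The step function sketched in the opening section is then called with these pre-computed increments (via its \texttt{dW} argument), so that every step size sees the same Brownian path while the signs~$S_k$ are still drawn afresh at each step.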
\begin{figure}
\centering
\begin{tabular}{cc}
\rotatebox{90}{\hspace{15ex}\textsc{rms} error} &
\includegraphics[scale=1.2]{sde1abll2}\\
& time step~$h$
\end{tabular}
\caption{Average over $700$ realisations at each of 13 different step sizes for the \sde~\eqref{eq:aeg}: at $t=1$\,, the \textsc{rms} error in the predicted~$X(1,\omega)$ decreases linearly in time step~$h$.}
\label{fig:sde1abll2}
\end{figure}
For each size of time step, Figure~\ref{fig:sde1abll2} uses the analytic solution to find the \textsc{rms} error of the predicted~$X(1,\omega)$, averaged over $700$~realisations~$\omega$. This \textsc{rms} error estimates the square-root of the expectation $E[(X_m-X(1,\omega))^2]$. Figure~\ref{fig:sde1abll2} uses a log-log plot to show that the \textsc{rms} error decreases linearly with time step size~$h$ (over four orders of magnitude in time step). That is, empirically we see the scheme~\eqref{eq:ieuabj} has \textsc{rms} error~\Ord{h}.
\subsection{Non-autonomous example}
Consider the `non-autonomous' \sde
\begin{equation}
dX=\left[\frac{X}{1+t}-\frac32X\left(1-\frac{X^2}{(1+t)^2}\right)^2\right]dt
+(1+t)\left(1-\frac{X^2}{(1+t)^2}\right)^{3/2}dW,
\label{eq:naeg}
\end{equation}
with initial condition that $X(0)=0$\,,
for some Wiener process~$W(t,\omega)$. Here both the drift~$a$ and the volatility~$b$ have explicit time dependence. It\^o's formula~\eqref{eq:ito} confirms that the analytic solution to this \sde~\eqref{eq:naeg} is $X(t,\omega)=(1+t)W(t,\omega)/\sqrt{1+W(t,\omega)^2}$.
\begin{figure}
\centering
\begin{tabular}{cc}
\rotatebox{90}{\hspace{15ex}\textsc{rms} error} &
\includegraphics[scale=1.2]{sde1abll8}\\[-1ex]
& time step~$h$
\end{tabular}
\caption{Average over $700$ realisations at each of 13~different step sizes for the \sde~\eqref{eq:naeg}: at $t=1$\,, the \textsc{rms} error in the predicted~$X(1,\omega)$ decreases linearly in time step~$h$.}
\label{fig:sde1abll8}
\end{figure}
To determine the order of error of the scheme~\eqref{eq:ieuabj}, the same approach was adopted here as described in the previous
Section~\ref{sec:aeg}. The slope of the log-log plot in Figure~\ref{fig:sde1abll8} shows that again the \textsc{rms} error of the predicted~$X(1,\omega)$ is~\Ord{h} for time step~$h$ over four orders of magnitude in~$h$.
\subsection{Example with second order error}
\label{sec:esoe}
Consider the following \sde\ linear in~$X$:
\begin{equation}
dX=\left[\frac{2X}{1+t}+(1+t)^2\right]dt +(1+t)^2dW
\quad\text{with }X(0)=1\,,
\label{eq:qeg}
\end{equation}
for some Wiener process~$W(t,\omega)$. It\^o's formula~\eqref{eq:ito} confirms that the analytic solution to this \sde~\eqref{eq:qeg} is $X(t,\omega)=(1+t)^2\big[1+t+W(t,\omega)\big]$.
\begin{figure}
\centering
\begin{tabular}{cc}
\rotatebox{90}{\hspace{15ex}\textsc{rms} error} &
\includegraphics[scale=1.2]{sde1abll1}\\
& time step~$h$
\end{tabular}
\caption{Averaging over $700$ realisations at each of 13 different step sizes for the linear \sde~\eqref{eq:qeg}: at $t=1$\,, the \textsc{rms} error in the predicted~$X(1,\omega)$ decreases quadratically, like~$h^{2}$.}
\label{fig:sde1abll1}
\end{figure}
To determine the order of error of the scheme~\eqref{eq:ieuabj}, the same approach was adopted here as described in
Section~\ref{sec:aeg}. The difference is that the slope of the log-log plot in Figure~\ref{fig:sde1abll1} shows that here the \textsc{rms} error of the predicted~$X(1,\omega)$ is~\Ord{h^2}. There appear to be some \sde{}s for which the error of the scheme~\eqref{eq:ieuabj} is quadratic in the time step~$h$ rather than linear.
\begin{exercise}
Use It\^o's formula~\eqref{eq:ito} to confirm the solutions given below satisfy the corresponding given \sde.
Apply the scheme~\eqref{eq:ieuabj} to some of the following \sde{}s and compare the predictions, for different time step sizes, to the given analytic solution. Perhaps adapt some of the code given by Higham~\cite[Listing~6]{Higham01}, or the short driver sketched just after this exercise.
\begin{enumerate}
\item $dX=\rat12(X-t)\,dt+(X-t-2)\,dW$, $X(0)=3$; solution $X=2+t+\exp[W(t)]$.
\item $dX=X\,dW$, $X(0)=1$; solution $X=\exp[W(t)-t/2]$.
\item $dX=-X(1-X^2)\,dt+(1-X^2)\,dW$, $X(0)=0$; solution $X=\tanh[W(t)]$.
\item $dX=-X\,dt+e^{-t}dW$, $X(0)=0$; solution $X=e^{-t}W(t)$.
\item $dX=-\rat32X(1-X^2)^2dt+(1-X^2)^{3/2}dW$, $X(0)=0$; solution $X=W(t)/\sqrt{1+W(t)^2}$.
\end{enumerate}
For which \sde{}s is the error~\Ord{h^2}?
\end{exercise}
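As a concrete starting point for this exercise, the following Python driver (a sketch only, reusing the \texttt{ieu\_step} function sketched in the opening section) simulates item~2 along a recorded Brownian path and compares the result with the exact solution.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
n = 2**10
h = 1.0 / n
a = lambda t, X: 0.0 * X        # item 2, dX = X dW, has zero drift
b = lambda t, X: X
X, t, W = 1.0, 0.0, 0.0
for k in range(n):
    dW = np.sqrt(h) * rng.standard_normal()
    X = ieu_step(a, b, t, X, h, rng, dW=dW)
    W += dW
    t += h
print(X, np.exp(W - t / 2))     # numerical X(1) versus exact exp[W(1) - 1/2]
\end{verbatim}
As the step~$h$ is refined, the discrepancy between the two printed values should shrink roughly in proportion to~$h$.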
\section{Prove \Ord{h} global error in general}
\label{sec:peg}
This section uses stochastic integration to establish the general order of accuracy of the proposed numerical integration scheme.
Proofs that numerical schemes do indeed approximate \sde\ solutions are often complex. My plan here is to elaborate three successively more complicated cases, with the aim that you develop a feel for the analysis before it gets too complex. Lemma~\ref{thm:rkbdw} first proves that the Runge--Kutta like scheme~\eqref{eq:ieuabj} approximates the simplest It\^o integrals $X=\int_a^b b(t)\,dW$ to first order in the time step. Second, section~\ref{sec:elsan} identifies a class of linear \sde{}s with additive noise when the scheme~\eqref{eq:ieuabj} is of second order. Third, section~\ref{sec:gegs} proves the first order global error of scheme~\eqref{eq:ieuabj} when applied to general \sde{}s. Those familiar with stochastic It\^o integration could proceed directly to the third section~\ref{sec:gegs}.
One outcome of this section is to precisely `nail down' the requisite properties of the choice of signs~$S_j$ in the scheme~\eqref{eq:ieuabj}.
\subsection{Error for It\^o integrals}
This subsection establishes the order of error in computing the It\^o integral $X=\int_a^b b(t)\,dW$ if one were to invoke the scheme~\eqref{eq:ieuabj} on the scalar \sde\ $dX=b(t)\,dW$. Before proceeding, recall that two fundamental properties of the expectation and variance of It\^o integrals are widely useful~\cite[p.2]{Kloeden01} \cite[pp.101--3]{Roberts08g}:
\begin{align}
&\text{martingale property},
&&\operatorname{E}\left[\int_a^b f(t,\omega)\,dW\right]=0\,;
\label{eq:marty}
\\&\text{It\^o isometry},
&&\operatorname{E}\left[\left(\int_a^b f(t,\omega)\,dW\right)^2\right]
=\int_a^b \operatorname{E}\left[f(t,\omega)^2\right]\,dt\,.
\label{eq:isom}
\end{align}
These empower us to quantify errors in the integrals that approximate solutions of \sde{}s as in the following lemma.
\begin{lemma} \label{thm:rkbdw}
The Runge--Kutta like scheme~\eqref{eq:ieuabj} has \Ord{h}~global error when applied to $dX=b(t)\,dW$ for functions~$b(t)$ twice differentiable.
\end{lemma}
\begin{proof}
Without loss of generality, start with the time step from $t_0=0$ to $t_1=t_0+h=h$\,. Applied to the very simple \sde\ $dX=b(t)\,dW$ one step of the scheme~\eqref{eq:ieuabj} computes
\begin{equation*}
K_1=({\Delta W}-S\sqrt h)b_0\,,\quad
K_2=({\Delta W}+S\sqrt h)b_1\,,
\end{equation*}
and then estimates the change in~$X$ as
\begin{equation}
\Delta\hat X=\rat12(b_0+b_1){\Delta W}+\rat12(b_1-b_0)S\sqrt h\,,
\label{eq:dxbdw}
\end{equation}
where the integrand values $b_0=b(0)$ and $b_1=b(h)$.
The classic polynomial approximation theorem~\cite[p.800, e.g.]{Kreyszig9} relates this estimate~\eqref{eq:dxbdw} to the exact integral. Here write the integrand as the linear interpolant with remainder:
\begin{equation*}
b(t)=\rat12(b_1+b_0)+\rat1h(b_1-b_0)(t-h/2)
+\rat12t(t-h)b''(\tau)
\end{equation*}
for some $0\leq\tau(t)\leq h$\,.
Then the exact change in~$X(t)$ is
\begin{align}
\Delta X=\int_0^hb(t)\,dW
={}&\rat12(b_1+b_0){\Delta W}
+\rat1h(b_1-b_0)\int_0^h(t-h/2)\,dW
\nonumber\\&{}
+\rat12\int_0^ht(t-h)b''(\tau)\,dW.
\label{eq:nmbdwa}
\end{align}
The error in one step of the scheme~\eqref{eq:ieuabj} is the difference between the changes~\eqref{eq:dxbdw} and~\eqref{eq:nmbdwa}.
That is, the true integral change $\Delta X=\Delta\hat X+\epsilon_0$ where the error
\begin{equation}
\epsilon_0=\frac{b_1-b_0}h\left[-\rat12S h^{3/2}+\int_0^h(t-h/2)\,dW\right]
+\rat12\int_0^ht(t-h)b''(\tau)\,dW.
\label{eq:oserrbdw}
\end{equation}
How big is this error? First take expectations, invoke the martingale property~\eqref{eq:marty} for the two stochastic integrals, and see that $\operatorname{E}[\epsilon_0]=0$ provided $\operatorname{E}[S]=0$\,. \emph{Thus the signs~$S$ must be chosen with mean zero.}
Second compute the variance of the error~$\epsilon_0$ to see the size of the fluctuations in the error. Since the expectation $\operatorname{E}[\epsilon_0]=0$\,, the variance $\operatorname{Var}[\epsilon_0]=\operatorname{E}[\epsilon_0^2]$.
Look at various contributions in turn. The first term in the error~\eqref{eq:oserrbdw} has variance $\operatorname{E}[(Sh^{3/2})^2]=h^3\operatorname{E}[S^2]=\Ord{h^3}$ provided the signs~$S$ have bounded variance. Choosing the signs~$S$ independently of the noise~$W$ there are then no correlations between the $S$~terms and the other two terms. The second term in the error~\eqref{eq:oserrbdw} has variance
\begin{align*}
\operatorname{E}\left[\left(\int_0^h(t-h/2)\,dW\right)^2\right]
&{}=\int_0^h (t-h/2)^2 dt\quad\text{by It\^o isometry~\eqref{eq:isom}}
\\&{}=\rat1{12}h^3=\Ord{h^3}.
\end{align*}
The third term in the error~\eqref{eq:oserrbdw}, by the It\^o isometry~\eqref{eq:isom}, has variance
\begin{align}
\operatorname{E}\left[\left(\int_0^ht(t-h)b''(\tau)\,dW\right)^2\right]
&{}=\int_0^h t^2(t-h)^2b''(\tau)^2 dt
\nonumber\\&{}\leq B_2^2\int_0^h t^2(t-h)^2 dt =\frac1{30}B_2^2h^5,
\label{eq:bdwvar}
\end{align}
when the second derivative is bounded, $|b''(t)|\leq B_2$\,.
Lastly, the correlation between these previous two integrals is small as, by a slightly more general version of the It\^o isometry~\eqref{eq:isom},
\begin{align*}
&\left|\operatorname{E}\left[\int_0^h(t-h/2)\,dW\int_0^ht(t-h)b''(\tau)\,dW\right] \right|
\\&{}=\left|\int_0^h (t-h/2)t(t-h)b''(\tau)\,dt\right|
\\&{}\leq B_2\int_0^h \left|(t-h/2)t(t-h)\right|dt
=\Ord{h^4}.
\end{align*}
Hence the local, one step, error is dominated by the first two contributions and has variance $\operatorname{Var}[\epsilon_0]=\Ord{h^3}$.
To estimate the global integral, $\int_a^b b(t)\,dW$, we take $n=\Ord{1/h}$ time steps. With $n$~steps the global error is the sum of $n$~local errors: the scheme~\eqref{eq:ieuabj} approximates the correct solution with global error
$\epsilon=\sum_{j=0}^{n-1}\epsilon_j$\,. Firstly, $\operatorname{E}[\epsilon]=0$ as $\operatorname{E}[\epsilon_j]=0$ for all time steps. Secondly, as the errors on each time step are independent, the variance
\begin{align*}
\operatorname{Var}[\epsilon]&{}=\sum_{j=0}^{n-1}\operatorname{Var}[\epsilon_j]
= n\operatorname{Var}[\epsilon_0]=\Ord{nh^3}=\Ord{h^2}.
\end{align*}
Thus, for the \sde\ $dX=b(t)\,dW$, the scheme~\eqref{eq:ieuabj} has global error of size~\Ord{h}.
\end{proof}
\subsection{Error for linear SDEs with additive noise}
\label{sec:elsan}
This second lemma addresses somewhat more general scalar \sde{}s. It not only serves as a `stepping stone' to a full theorem, but illustrates two other interesting properties. Firstly, we identify a class of \sde{}s for which the scheme~\eqref{eq:ieuabj} is second order accurate in the time step as seen in Example~\ref{sec:esoe}. Secondly, the proof suggests that the sign~$S$ in the scheme~\eqref{eq:ieuabj} relates to sub-step properties of the noise~$W$ that are independent of the increment~${\Delta W}$.
\begin{lemma} \label{thm:rkaxbdw}
The Runge--Kutta like scheme~\eqref{eq:ieuabj} has global error~\Ord{h} when applied to the additive noise, linear \sde\ $dX=a(t)X\,dt+b(t)\,dW$ for functions $a$~and~$b$ twice differentiable. Further, in the exact differential case when $ab=db/dt$ (a solution to the \sde\ is then $X=b(t)W$) the global error is~\Ord{h^2}.
\end{lemma}
\begin{proof}
In this case, straightforward algebra shows the first step in the scheme~\eqref{eq:ieuabj} predicts the change
\begin{align}
\Delta X={}&h\rat12(a_0+a_1)X_0
+\rat12h^2a_0a_1X_0
+\rat12(b_0+b_1){\Delta W}
\nonumber\\&{}
+\rat12a_1b_0h({\Delta W}-S\sqrt h)
+\rat12S\sqrt h\Delta b\,,
\label{eq:nmanls}
\end{align}
where the coefficient values $a_0=a(0)$, $a_1=a(h)$, $b_0=b(0)$ and $b_1=b(h)$.
We compare this approximate change over the time step~$h$ with the true change using iterated integrals. For simplicity we also use subscripts to denote dependence upon `time' variables~$t$, $s$~and~$r$. Start by writing the \sde\ $dX=a_tX_t\,dt+b_t\,dW$ as an integral over the first time step:
\begin{align}
\Delta X=
{}&\int_0^ha_tX_t\,dt +\int_0^hb_t\,dW_t
\nonumber\\&\text{[substituting $X_t=X_0+\Delta X$ inside the first integral]}
\nonumber\\={}& \int_0^ha_t\left[X_0+\int_0^ta_sX_s\,ds +\int_0^tb_s\,dW_s\right]\,dt +\int_0^hb_t\,dW_t
\nonumber\\={}& X_0\int_0^ha_t\,dt+\int_0^ha_t\int_0^ta_sX_s\,ds\,dt
\nonumber\\&{}
+\int_0^ha_t\int_0^tb_s\,dW_s\,dt +\int_0^hb_t\,dW_t
\nonumber\\&\text{[substituting $X_s=X_0+\Delta X$ inside the second integral]}
\nonumber\\={}& X_0\int_0^ha_t\,dt
+\int_0^ha_t\int_0^ta_s\left[X_0+\int_0^sa_rX_r\,dr +\int_0^sb_r\,dW_r\right]\,ds\,dt
\nonumber\\&{}
+\int_0^ha_t\int_0^tb_s\,dW_s\,dt
+\int_0^hb_t\,dW_t
\nonumber\\={}&
X_0\int_0^ha_t\,dt
+X_0\int_0^ha_t\int_0^ta_s\,ds\,dt
+\int_0^ha_t\int_0^ta_s\int_0^sa_rX_r\,dr\,ds\,dt
\nonumber\\&{}
+\int_0^ha_t\int_0^ta_s\int_0^sb_r\,dW_r\,ds\,dt
+\int_0^ha_t\int_0^tb_s\,dW_s\,dt
+\int_0^hb_t\,dW_t\,.
\label{eq:nmanli}
\end{align}
For the last part of the lemma on the case of higher order error, we need to expand to this level of detail in six integrals.
Of these six integrals, some significantly match the components of the numerical step~\eqref{eq:nmanls} and some just contribute to the error. Recall that the proof of Lemma~\ref{thm:rkbdw} identified that errors had both mean and variance. To cater for these two characteristics of errors, and with perhaps some abuse of notation, I introduce the notation~\Ord{h^p,h^q} to denote quantities with mean~\Ord{h^p} and variance~\Ord{h^q}. For example, \Ord{h^p,0} classifies deterministic quantities~\Ord{h^p}, whereas \Ord{0,h^q}~characterises zero mean stochastic quantities of standard deviation scaling like~$h^{q/2}$. The previous proof looked closely at the variances of error terms; here we simplify by focussing only upon their order of magnitude. In particular, let's show that the six integrals in~\eqref{eq:nmanli} match the numerical step~\eqref{eq:nmanls} to an error~$\Ord{h^3,h^5}$.
Consider separately the integrals in~\eqref{eq:nmanli}.
\begin{itemize}
\item Firstly, $X_0\int_0^ha_t\,dt=X_0h\rat12(a_0+a_1)+\Ord{h^3}$ by the classic trapezoidal rule. This matches the first component in the numerical~\eqref{eq:nmanls}.
\item Secondly, using the linear interpolation $a_t=a_0+\frac{\Delta a}ht+\Ord{t^2}$, where as usual $\Delta a=a_1-a_0$\,, the double integral
\begin{align*}
\int_0^ha_t\int_0^ta_s\,ds\,dt
&{}=\int_0^h \Big(a_0+\frac{\Delta a}ht\Big)\Big(a_0t+\frac{\Delta a}{2h}t^2\Big)+\Ord{t^3}\,dt
\\&{}=\int_0^h a_0^2t+a_0\frac{3\Delta a}{2h}t^2 +\Ord{t^3}\,dt
\\&{}= \rat12a_0^2h^2+a_0\frac{\Delta a}{2}h^2 +\Ord{h^4}
\\&{}= \rat12a_0a_1h^2 +\Ord{h^4}.
\end{align*}
Multiplied by~$X_0$, this double integral matches the second term in the numerical~\eqref{eq:nmanls}.
\item Thirdly, the triple integral
\begin{equation*}
\int_0^ha_t\int_0^ta_s\int_0^sa_rX_r\,dr\,ds\,dt =\Ord{h^3}
\end{equation*}
because, as seen in the previous two items, each ordinary integration over a time of~$\Ord{h}$ multiplies the order of the term by a power of~$h$.
\item Fourthly, look at the single stochastic integral in~\eqref{eq:nmanli}, the last term. From the proof of the previous lemma, equations~\eqref{eq:nmbdwa} and~\eqref{eq:bdwvar} give
\begin{equation}
\int_0^hb_t\,dW_t
=\rat12(b_1+b_0){\Delta W}
+\frac{\Delta b}h\int_0^h\big(t-\rat h2\big)\,dW_t
+\Ord{0,h^5}.
\label{eq:nmaxbbdw}
\end{equation}
The first term here matches the third term in the numerical~\eqref{eq:nmanls}. The second term on the right-hand side is an integral remainder that will be dealt with after the next two items.
\item Fifthly, change the order of integration in the double integral
\begin{align*}
&\int_0^ha_t\int_0^tb_s\,dW_s\,dt
\\&{}=\int_0^hb_s\int_s^ha_t\,dt\,dW_s
\\&{}=\int_0^hb_s\int_s^ha_1+\Ord{h-t}\,dt\,dW_s
\\&{}=\int_0^hb_sa_1(h-s)+\Ord{(h-s)^2}\,dW_s
\\&{}=\int_0^hb_0a_1(h-t)+\Ord{h^2}\,dW_t
\\&\qquad\text{[by the martingale property~\eqref{eq:marty} and It\^o isometry~\eqref{eq:isom}]}
\\&{}=\int_0^hb_0a_1(h-t)\,dW_t+\Ord{0,h^5}
\\&{}=\int_0^h\rat h2b_0a_1+b_0a_1\big(\rat h2-t\big)\,dW_t+\Ord{0,h^5}
\\&{}=\rat12hb_0a_1{\Delta W}-b_0a_1\int_0^h\big(t-\rat h2\big)\,dW_t+\Ord{0,h^5}
\end{align*}
The first term here matches the first part of the fourth term in the numerical~\eqref{eq:nmanls}. The second term on the right-hand side is an integral remainder that will be dealt with after the last item.
\item Lastly, the triple integral
\begin{equation*}
\int_0^ha_t\int_0^ta_s\int_0^sb_r\,dW_r\,ds\,dt=\Ord{0,h^5}
\end{equation*}
because, as in the last item, changing the order of integration to do the stochastic integral last, the integral transforms to $\int_0^h\Ord{h^2}\,dW$ which by the martingale~\eqref{eq:marty} and It\^o isometry~\eqref{eq:isom} is~$\Ord{0,h^5}$.
\end{itemize}
Hence we now identify that the difference between the Runge--Kutta like step~\eqref{eq:nmanls} and the change~\eqref{eq:nmanli} in the true solution is the error
\begin{align}
\epsilon_0={}&{}-\rat12a_1b_0h^{3/2}S+\rat12S\sqrt h\Delta b
+b_0a_1\int_0^h\big(t-\rat h2\big)\,dW_t
\nonumber\\&{}
-\frac{\Delta b}h\int_0^h\big(t-\rat h2\big)\,dW_t
+\Ord{h^3}+\Ord{0,h^5}
\nonumber\\={}&
\left[\rat12Sh^{3/2}-\int_0^h\big(t-\rat h2\big)\,dW_t\right]
\left\{-a_1b_0+\frac{\Delta b}h\right\}
+\Ord{h^3,h^{5}}.
\label{eq:nmaxbdwd}
\end{align}
Two cases arise corresponding to the main and the provisional parts of lemma~\ref{thm:rkaxbdw}.
\begin{itemize}
\item In the general case, the factor in square brackets,~$[\cdot]$, in~\eqref{eq:nmaxbdwd} determines the order of error. Choosing the signs~$S$ randomly with mean zero then $Sh^{3/2}=\Ord{0,h^3}$. Recall the integral $\int_0^h\big(t-\rat h2\big)\,dW_t=\Ord{0,h^3}$ also. Thus the leading error is then~\Ord{h^3,h^3}. This is the local one step error. Summing over \Ord{1/h} time steps gives that the global error is~\Ord{h^2,h^2}. That is, the error due to the noise dominates, variance~\Ord{h^2}, and is generally first order in~$h$ as the standard deviation of the error is of order~$h$.
But as the noise decreases to zero, $b\to0$, the factor in curly braces,~$\{\cdot\}$, goes to zero. In this decrease the order of error~\eqref{eq:nmaxbdwd} transitions smoothly to the deterministic case of local error~\Ord{h^3} and hence global error~\Ord{h^2}.
\item The second case is when the factor in braces in~\eqref{eq:nmaxbdwd} is small: this occurs for the integrable case $ab=db/dt$ as then the term in braces is~\Ord{h} so that the whole error~\eqref{eq:nmaxbdwd} becomes~\Ord{h^3,h^5}. Again this is the local one step error. Summing over \Ord{1/h}~time steps gives that the global error is~\Ord{h^2,h^4}. That is, in this case the error is of second order in time step~$h$, both through the deterministic error and the variance of the stochastic errors. Figure~\ref{fig:sde1abll1} shows another case when the error is second order.
\end{itemize}
This concludes the proof.
\end{proof}
Interestingly, we would decrease the size of the factor in brackets in the error~\eqref{eq:nmaxbdwd} by choosing the sign~$S$ to cancel as much as possible the integral $\int_0^h\big(t-\rat h2\big)\,dW_t$\,. This sub-step integral is one characteristic of the sub-step structure of the noise, and is independent of~${\Delta W}$. If we knew this integral, then we could choose the sign~$S$ to cause some error cancellation; however, generally we do not know the sub-step integral. But this connection between the signs~$S$ and the integral $\int_0^h\big(t-\rat h2\big)\,dW_t$ does suggest that the sign~$S$ relates to sub-step characteristics of the noise process~$W$.
For example, if one used \idx{Brownian bridge}s to successively refine the numerical approximations for smaller and smaller time steps, then it may be preferable to construct a Brownian bridge compatible with the signs~$S$ used on the immediately coarser step size.\footnote{The Brownian bridge stochastically interpolates a Wiener process to half-steps in time if all one knows is the increment~${\Delta W}$ over a time step~$h$. The Brownian bridge asserts that the change over half the time step,~$h/2$, is $\rat12{\Delta W}-\rat12\sqrt hZ$ for some $Z\sim N(0,1)$; the change over the second half of the time step is correspondingly $\rat12{\Delta W}+\rat12\sqrt hZ$\,. Factoring out the half, these sub-steps are $\rat12({\Delta W}\mp Z\sqrt h)$ which match the factors $({\Delta W} \mp S\sqrt h)$ used by the scheme~\eqref{eq:ieuabj}: the discrete signs~$S=\mp1$ have mean zero and variance one just like the normally distributed~$Z$ of the Brownian bridge.}
\subsection{Global error for general SDEs}
\label{sec:gegs}
The previous section~\ref{sec:elsan} established the order of error for a special class of linear \sde{}s. The procedure is to repeatedly substitute integral expressions for the unknown wherever it appears (analogous to Picard iteration). In section~\ref{sec:elsan} each substitution increased the number of integrals in the expression by two. For general \sde{}s, this subsection employs the same procedure, but now the number of integrals doubles in each substitution. The rapid increase in the number of integrals is a major complication, so we only consider the integrals necessary to establish that the global error is~\Ord{h}.
Further, the following theorem is proven for vector \sde{}s in~$\mathbb R^n$, whereas the previous two subsections only considered special scalar \sde{}s.
\begin{theorem} \label{thm:nm1ito}
The Runge--Kutta like numerical scheme~\eqref{eq:ieuabj} generally has global error~\Ord{h} when applied to the \sde~\eqref{eq:sde1ab} for sufficiently smooth drift and volatility functions $\vec a(t,x)$~and~$\vec b(t,x)$.
\end{theorem}
\begin{proof}
The proof has two parts: the first is the well known, standard, expansion of the solution of the general \sde~\eqref{eq:sde1ab} by iterated stochastic integrals leading to the Milstein scheme~\cite[e.g.]{Higham01, Kloeden01}; the second shows how the scheme~\eqref{eq:ieuabj} matches the integrals to an order of error.
First look at the repeated integrals for one time step; without loss of generality, start with a time step from $t_0=0$ to $t_1=t_0+h=h$ as the analysis for all other time steps is identical with minor shifts in the times of evaluation and integration.
The stochastic `Taylor series' analysis starts from the integral form of It\^o formula~\eqref{eq:ito}: for a stochastic process~$\vec X(t)$ satisfying the general It\^o \sde~\eqref{eq:sde1ab}, for operators~$L_t^k$, any smooth function~$f(t,\vec x)$ of the process satisfies
\begin{align}&
f(t,\vec X_t)=f(0,\vec X_0)+\int_0^t L_s^0f(s,\vec X_s)\,ds+\int_0^tL^1_sf(s,\vec X_s)\,dW_s\,,
\label{eq:itol}
\\\text{where}\quad&
L^0_s=\left[\D t{}+a_i\D {x_i}{}+\frac12b_ib_j\Dx{x_i}{x_j}{}\right]_{t=s},
\quad
L^1_s=\left[b_i\D {x_i}{}\right]_{t=s}.
\nonumber
\end{align}
For conciseness we use subscripts~$0$, $t$, $s$ and~$r$ to denote evaluation at these times, and similarly $f_t=f(t,\vec X_t)$, and use subscripts~$i$ and~$j$ to denote components of a vector, with the Einstein summation convention for repeated indices.
As you would expect, when stochastic effects are absent, $\vec b=\vec 0$\,, the integral formula~\eqref{eq:itol} reduces, through the first two components of~$L^0_s$, to an integral version of the well known deterministic chain rule: $f(t,\vec X_t)=f(0,\vec X_0)+\int_0^t \big[\partial_tf(s,\vec X_s) +a_i\partial_{x_i}f(s,\vec X_s)\big]\,ds$\,.
Now turn to the \sde~\eqref{eq:sde1ab} itself: it is a differential version of an integral equation which over the first time step gives
\begin{align}
\Delta \vec X={}& \vec X(h,\omega)-\vec X(0,\omega)
=\int_0^h d\vec X
\nonumber\\={}&
\int_0^h\vec a(t,\vec X_t)\,dt+\int_0^h\vec b(t,\vec X_t)\,dW_t
\nonumber\\&[\text{apply the It\^o formula~\eqref{eq:itol} to both }\vec a(t,\vec X_t)\text{ and }\vec b(t,\vec X_t)]
\nonumber\\={}&
\int_0^h \left[ \vec a_0+\int_0^t L_s^0\vec a_s\,ds+\int_0^tL^1_s\vec a_s\,dW_s \right]\,dt
\nonumber\\&{}
+\int_0^h \left[ \vec b_0+\int_0^t L_s^0\vec b_s\,ds+\int_0^tL^1_s\vec b_s\,dW_s\right]\,dW_t
\nonumber\\&[\text{apply the It\^o formula~\eqref{eq:itol} to }L^1_s\vec b_s]
\nonumber\\={}&
\int_0^h \vec a_0\,dt
+\int_0^h\int_0^t L_s^0\vec a_s\,ds\,dt
+\int_0^h\int_0^tL^1_s\vec a_s\,dW_s\,dt
\nonumber\\&{}
+\int_0^h \vec b_0\,dW_t
+\int_0^h\int_0^t L_s^0\vec b_s\,ds\,dW_t
\nonumber\\&{}
+\int_0^h\int_0^t\left[ L^1_0\vec b_0 +\int_0^s L_r^0L_r^1\vec b_r\,dr+\int_0^sL^1_rL_r^1\vec b_r\,dW_r\right]\,dW_s\,dW_t
\nonumber\\&[\text{now rearrange these eight integrals in order of magnitude}]
\nonumber\\={}&
\vec a_0\int_0^h dt
+\vec b_0\int_0^h dW_t
+L^1_0\vec b_0\int_0^h\int_0^tdW_s\,dW_t
\nonumber\\&{}
+\left[
\int_0^h\int_0^tL^1_s\vec a_s\,dW_s\,dt
+\int_0^h\int_0^t L_s^0\vec b_s\,ds\,dW_t
\right.\nonumber\\&\left.\qquad{}
+\int_0^h\int_0^t\int_0^sL^1_rL_r^1\vec b_r\,dW_r\,dW_s\,dW_t
\right]
\nonumber\\&{}
+\left\{
\int_0^h\int_0^t L_s^0\vec a_s\,ds\,dt
+\int_0^h\int_0^t\int_0^s L_r^0L_r^1\vec b_r\,dr\,dW_s\,dW_t
\right\}
\label{eq:dvie}
\end{align}
\begin{itemize}
\item Simplify the first line in this last expression~\eqref{eq:dvie} for~$\Delta\vec X$ using the well known integrals $\int_0^h dt=h$\,, $\int_0^hdW_t={\Delta W}$ and $\int_0^h\int_0^tdW_s\,dW_t=\int_0^hW_t\,dW_t=\rat12({\Delta W}^2-h)$ \cite[(3.6), e.g.]{Higham01}. The last of these three integrals follows from It\^o's formula applied to $F(t,W_t)=\rat12W_t^2$ to deduce $dF=\rat12\,dt+W_t\,dW_t$\,, and integrating a rearrangement gives $\int W_t\,dW_t=\int dF-\int\rat12\,dt=\rat12W_t^2-\rat12t$\,. Also simplify the first line by defining the matrix $\vec b'_0=\begin{bmatrix} \D{x_j}{b_i} \end{bmatrix}_{t=0}$ so that $L^1_0\vec b_0=\vec b'_0\vec b_0$\,.
\item The three integrals above in square brackets in expression~\eqref{eq:dvie} all have expectation zero and variance~\Ord{h^3}. Recall that with two arguments \Ord{h^p,h^q}~denotes quantities with mean~\Ord{h^p} and variance~\Ord{h^q}. Thus these three integrals in square brackets are~\Ord{0,h^3}.
\item The two integrals above in curly braces in expression~\eqref{eq:dvie} are all~\Ord{h^2} in magnitude and hence are~\Ord{h^2,h^4}.
\end{itemize}
Combining all these leads to the well established Milstein scheme for the change in~$\vec X$ over one time step from~$t_0$ to~$t_1$ as
\begin{equation}
\Delta \vec X=\vec a_0h
+\vec b_0{\Delta W}
+\vec b'_0\vec b_0\rat12({\Delta W}^2-h)
+\Ord{h^2,h^3}.
\label{eq:mils}
\end{equation}
Second, we proceed to show that the scheme~\eqref{eq:ieuabj} matches this Milstein scheme~\eqref{eq:mils}.
Note $\vec K_1=h\vec a_0+({\Delta W}-S\sqrt h)\vec b_0=\Ord{h,h}$ so the product $\vec K_1\vec K_1=\Ord{h,h^2}$ and so on. Hence, by Taylor series in the arguments of the smooth drift~$\vec a$ and volatility~$\vec b$,
\begin{align*}
\vec K_2={}& h\left[\vec a_0+\vec a'_0\vec K_1+\Ord{h,h^2}\right]
\\&{}+({\Delta W}+S\sqrt h)\left[\vec b_0+h\dot{\vec b}_0+\vec b'_0\vec K_1
+\rat12\vec b''_0\vec K_1\vec K_1 +\Ord{h^2,h^3} \right]
\end{align*}
where $\vec b''\vec K_1\vec K_1$ denotes the tensorial double sum~$\Dx{x_i}{x_j}{\vec b}K_{1i}K_{1j}$, and where the overdot denotes the partial derivative with respect to time,~$\dot{\vec b}=\D t{\vec b}$.
Combining $\vec K_1$~and~$\vec K_2$, the corresponding first step in the scheme~\eqref{eq:ieuabj} predicts the change
\begin{align}
\Delta \vec X={}&\vec a_0h+ \vec b_0{\Delta W} +\rat12\vec b'_0\vec b_0({\Delta W}^2-S^2h)
\nonumber\\&{}
+\rat12({\Delta W}-S\sqrt h)\left[ h\vec a'_0\vec b_0+\rat12({\Delta W}^2-S^2h)\vec b''_0\vec b_0\vec b_0\right]
\nonumber\\&{}
+\rat12h({\Delta W}+S\sqrt h)(\dot {\vec b}_0+\vec b'_0\vec a_0)
+\Ord{h^2,h^4}\,.
\label{eq:nmrkab}
\end{align}
Provided $S^2=1+\Ord{h,h}$ the first lines match to~\Ord{h^2,h^3}: normally $S^2=1$ as specified in~\eqref{eq:ieuabj}. Other terms detailed in~\eqref{eq:nmrkab} are~\Ord{0,h^3} provided $\operatorname{E}(S)=\Ord{0,h}$: normally set to be zero as specified in~\eqref{eq:ieuabj}. Hence one step of the scheme~\eqref{eq:ieuabj} matches the solution to~\Ord{h^2,h^3}. The local error over one step of~\Ord{h^2,h^3} leads to, over \Ord{1/h}~steps, a global error of \Ord{h,h^2}.
\end{proof}
This proof confirms the order of error seen in the earlier examples. Further, because we can readily transform between It\^o and Stratonovich \sde{}s, we now prove that a minor variation of the numerical scheme applies to Stratonovich \sde{}s.
\begin{corollary}[Stratonovich SDEs] \label{cor:nm1strat}
The Runge--Kutta like scheme~\eqref{eq:ieuabj}, but setting $S=0$\,, has errors~\Ord{h} when the \sde~\eqref{eq:sde1ab} is to be interpreted in the Stratonovich sense.
\end{corollary}
\begin{proof}
Interpreting the \sde~\eqref{eq:sde1ab} in the Stratonovich sense implies solutions are the same as the solutions of the It\^o \sde
\begin{equation*}
d\vec X=(\vec a+\rat12\vec b'\vec b)\,dt+\vec b\,dW.
\end{equation*}
Apply the scheme~\eqref{eq:ieuabj} (with $S=\pm1$ as appropriate to an It\^o \sde), or the analysis of the previous proof, to this It\^o \sde. Then, for example, the one step change~\eqref{eq:nmrkab} becomes
\begin{equation*}
\Delta \vec X=(\vec a_0+\rat12\vec b'_0\vec b_0)h+\vec b_0{\Delta W}+\rat12\vec b'_0\vec b_0({\Delta W}^2-h)+\Ord{h^2,h^3}.
\end{equation*}
The deterministic drift terms involving~$\vec b'_0\vec b_0$ cancel, leaving, in terms of the coefficient functions of the Stratonovich \sde,
\begin{equation}
\Delta \vec X=\vec a_0h+\vec b_0{\Delta W}+\rat12\vec b'_0\vec b_0{\Delta W}^2 +\Ord{h^2,h^3}.
\label{eq:nm1strat}
\end{equation}
Now apply the scheme~\eqref{eq:ieuabj} with $S=0$ to the Stratonovich \sde:
Taylor series expansions obtain the one step numerical prediction as~\eqref{eq:nmrkab} upon setting $S=0$\,. This one step numerical prediction is the same as~\eqref{eq:nm1strat} to the same order of errors. Thus the scheme~\eqref{eq:ieuabj} with $S=0$ solves the Stratonovich interpretation of the \sde~\eqref{eq:sde1ab}.
\end{proof}
\begin{exercise}[iterated integrals]
Consider the scalar \sde\ $dX=X\,dW$. This \sde\ is shorthand for the It\^o integral $X_t=X_0+\intw s{X_s}$\,.
Over a small time interval~$\Delta t=h$ this integral gives $X_h=X_0+\intw t{X_t}$\,. Use this as the start of an iteration to provide successively more accurate approximations to~$X_h$: successive approximations are successive truncations of
\begin{equation*}
X_h\approx X_0+X_0\, {\Delta W}
+X_0\left[\rat12({\Delta W} )^2-\rat12h\right]
+X_0\left[\rat16({\Delta W} )^3-\rat12h{\Delta W} \right].
\end{equation*}
Determine the integral remainders for each of the approximations.
\end{exercise}
\begin{exercise}[quadratic convergence]
Adapt the proof of Lemma~\ref{thm:rkaxbdw} to prove that in the specific case when the drift~$\vec a=\vec\alpha(t)+\beta(t)\vec X$ and the volatility, independent of~$x$, satisfies $\dot {\vec b}=\beta \vec b$, then the scheme has local error~$\Ord{h^3,h^5}$ and hence global error~$\Ord{h^2,h^4}$, as seen in Figure~\ref{fig:sde1abll1}.
\end{exercise}
\section{Conclusion}
A good basic numerical scheme for integrating It\^o \sde{}s is the Runge--Kutta like scheme~\eqref{eq:ieuabj} (set $S_k=0$ to integrate Stratonovich \sde{}s). A teacher could introduce it in the context of the introduction to numerical \sde{}s outlined by Higham~\cite{Higham01}.
One of the appealing features of the scheme~\eqref{eq:ieuabj} is that it reduces, for small noise, to a well known scheme for deterministic \ode{}s. Consequently, we expect the global error \Ord{\|\vec a\|h^2+\|\vec b\|h} for some norms of the drift and volatility. Such more general expressions of the error should be useful in multiscale simulations where the strength of the noise depends upon the macroscale time step, such as in the modelling of a stochastic Hopf bifurcation~\cite[\S5.4.2]{Roberts06k}.
One required extension of the scheme~\eqref{eq:ieuabj} is to generalise it, if possible, to the case of multiple independent noises. I am not aware of an attractive generalisation to this practically important case.
\bibliographystyle{plain}
|
1,314,259,995,643 | arxiv | \section{Introduction}
\IEEEPARstart{I}{n} dexterous reach-to-lift-and-grasp tasks without vision, non-amputee individuals use tactile sensations from their hand and fingers to localize and form their hand to the contours of the object upon contact in order to securely yet economically grasp it \cite{Karl2013NonvisualReach,Winges2003TheGrasp,Gentilucci1997TactileMovements}. Haptic (tactile, kinesthetic, and proprioceptive) cues are then used to inform motor coordination to successfully lift the object \cite{Johansson1996SensoryHumans}. In addition, reflexive control is induced in response to cutaneous signals indicating tactile events such as slippage or unanticipated deformation of the object. These reactive sensorimotor controllers compensate for grip-force errors by facilitating rapid grip-force adjustments \cite{Johansson1992Sensory-MotorActions,Nakajima2006Location-specificMuscles} and thus serve to complement volitional motor control.
Amputees using clinical myoelectric prostheses lack the haptic sensation that is essential for dexterous sensorimotor control. Instead, they must rely heavily on vision to complete activities of daily living \cite{Atkins1996EpidemiologicPriorities,Sobuh2014}. This visual dependency is not only cognitively burdensome \cite{Thomas2020}, but it also significantly limits manipulation abilities in activities where vision is constrained or unavailable (e.g., watching a screen, searching for an object in the dark).
Thus, in order for an amputee to dexterously accomplish a reach-to-grasp-and-lift task with their prosthesis in the absence of vision, the prosthesis must support volitional and reflexive control in a manner consonant with the intact sensorimotor system. In particular, it should include support for: (1) tactile sensing capable of detecting contact location; (2) haptic feedback mechanisms for conveying contact information to the amputee; (3) an ability to reflexively react to adverse events, like slip or excessive grasping force, which could unintentionally deform or break objects.
Various approaches for sensing contact location for robotic hands and fingers have been discussed in the literature. One common approach is to create individual tactile sensing elements arranged in an array or matrix \cite{Ponraj2019ActiveGrippers,Osborn2014}. While this taxel-based approach is capable of measuring pressure and contact location, it requires many sensing elements to cover a large area, and the sensed location is discrete. Furthermore, increasing the resolution of the system requires reducing the size of the taxels as well as the distance between them, which can be difficult to construct. Electronic skins that do not require as many sensing elements as a typical tactile array can provide continuous, multi-site contact location but generally require substantial computational power to compute accurate measures \cite{Lee2021Piezo}. Finally, there are some commercial sensors like the BioTac \cite{Jimenez2014} that have been used for sensing in prosthetics, but they provide tactile data only for the fingertip, which may be insufficient for ensuring a stable whole-hand grasp, and they are typically expensive and delicate.
Considerable research has also focused on the challenge of providing haptic feedback of contact location \cite{Antfolk2013b,Hartmann2014TowardsEmbodiment,Shehata2020MechanotactileUse}. Often, these approaches have been limited to discrete location feedback and involve the use of multiple mechanotactile actuators. Researchers have, for example, used both servo motors and vibrotactile actuators mounted on the forearm to portray contact location from each of the five fingers of a prosthesis \cite{Antfolk2013b, Antfolk2013a}. These methods, however, cannot provide continuous feedback of contact location and also tend to be bulky. Electrotactile feedback has also been investigated to provide discretized contact location and force \cite{Hartmann2014TowardsEmbodiment,Ward2018Multi-channelHand, Scott1980Sensory-feedbackControl}. While it may be more compact than mechanical actuators, the stimulation from electrotactile feedback has been shown to interfere with EMG signals and elicit sensations that can be perceived as unpleasant \cite{Antfolk2013SensoryProsthetics}.
Paralleling advancements in haptic sensing and haptic feedback technologies for upper-limb prostheses are investigations into the efficacy and utility of autonomous control approaches for prosthetic hands \cite{Salisbury1967APrehension,Chappell1987ControlHand,Nightingale1985MicroprocessorArm}. Researchers have shown that slip prevention and compliant grasping controllers reduce object slips and breaks
\cite{Osborn2016, Matulevich2013UtilityControl}. Similar controllers have even been implemented in commercial hands such as the Ottobock SensorHand Speed \cite{OttobockSensorHand}. In addition to improving functional performance, these controllers alleviate some of amputees' mental burden, as they operate without the user in the control loop. However, these controllers can still fail because of false negatives or false positives. In the former scenario, the controller misses an adverse event, while in the latter, an unwanted reaction is generated (e.g., increasing grip force during a fragile object transfer). Both these failure modes could cause the user to distrust the system because they are unaware of the contexts or reasons for failure.
In an effort to overcome these limitations on contact-location sensing, contact-location feedback, and autonomous control, we previously developed a sensorimotor-inspired prosthesis system featuring a novel contact-location sensor and vibrotactile feedback with anti-slip and anti-overgrasping reflex controllers \cite{Thomas2021Sensorimotor-inspiredVision}. Our contact-location sensor uses only three electrical leads and provides continuous, single-site contact location over the outer and inner surfaces of the fingers. Additionally, we provided continuously amplitude- and frequency-modulated vibrotactile feedback to convey continuous contact location and the presence of an object in the grasp, as sensed by the thumb-mounted pressure sensor. We showed that the combination of vibrotactile feedback and reflexive control in a myoelectric prosthesis improved performance consistency of a reach-to-pick-and-place task without direct vision \cite{Thomas2021Sensorimotor-inspiredVision} compared to performance with a standard myoelectric prosthesis.
While our prior work demonstrated the potential utility of a sensorimotor-inspired prosthesis control system, there are still many knowledge gaps that need to be addressed to advance such a system towards clinical viability. First, the sensitivity and resolution of the contact-location sensor need to be thoroughly characterized to inform future research. Second, it should be determined whether the addition of haptic feedback offers a significant improvement over pure reflexive control. Third, a modality-matched haptic feedback approach should be investigated to see whether it offers a significant improvement over non-modality-matched vibrotactile display. Indeed, modality-matched feedback is thought to be more intuitive and feel more natural to the average user \cite{Kim2010OnProsthetics}. Thus, we advance our prior work here with: (1) a characterization of the contact-location sensor, (2) an additional experimental condition consisting of the myoelectric prosthesis with only tactile reflexes, and (3) an additional experimental condition of the myoelectric prosthesis with tactile reflexes and modality-matched distributed pressure feedback of contact location. We hypothesize that the particular combination of reflex controllers and distributed pressure feedback would result in the largest improvement over the standard prosthesis, due to the modality-matched contact-location feedback. With this work, we aim to provide additional contexts for how a hybrid approach may work to improve prosthesis performance.
\begin{figure}[t]
\centering
\vspace{1em}
\includegraphics[width=\columnwidth]{Figures/setup_NoDAQ_V4.pdf}
\caption{The experiment involves picking up and moving a cylindrical object using a myoelectric prosthesis fitted with two custom sensors. Furthermore, participants had to fixate on a visual target in front of them, rather than on the interaction of the prosthetic hand and the object. Peripheral vision was not occluded. In addition to receiving aid from reflex controllers in the prosthesis, some participants also received haptic feedback in the form of either vibrotactile feedback (C-2 tactor) or distributed pressure (Bellowband).}
\label{fig:setup}
\end{figure}
\section{Methods}
\subsection{Participants}
Under the MPI-IS Haptic Intelligence Department's framework agreement with the Max Planck Ethics Council (protocol number F005D, approved in February of 2021), we recruited 31 new participants
to perform a reach-to-pick-and-place task using a myoelectric prosthesis in a between-subjects study with four experimental conditions. Data from 17 participants from our previous study \cite{Thomas2021Sensorimotor-inspiredVision} were re-used in the analysis for the present study; 8 were in the Standard condition, and 9 were in the condition with both vibrotactile feedback and tactile reflexes. In total, there were 48 participants (13 female, 35 male, age 31.4 $\pm$ 6.68 years).
Participants were randomly assigned to one of four conditions that were balanced for gender and handedness; the (self-reported) five left-handed participants and one ambidextrous participant all did the study with the right hand. The experiment lasted approximately one hour, and participants not employed by the Max Planck Society received 8 euros per hour as compensation.
\subsection{Experimental Task}
Participants used the myoelectric prosthesis to grasp and relocate a cylindrical aluminum object (12 cm long, 2 cm diameter) from one fixed bin ($3.8 \times 3.8 \times 7.6$ cm) to another stationary bin ($3.8 \times 3.8 \times 5.1$ cm) that was 17.5 cm away (see Fig.~\ref{fig:setup}). This object roughly resembles the size and shape of a thick highlighter pen or an electric toothbrush. Additionally, participants were required to complete the task without looking directly at the prosthetic hand or object. Rather, they had to fixate on a visual target 3\,m away on the wall in front of them (peripheral vision was not occluded). The eye-tracking glasses provided a measure of the participants' exact gaze direction. This visual constraint mimics multitasking situations where visual attention is diverted away from a dexterous task, such as when grabbing a cup of tea while focusing on a video presentation.
The absence of direct vision rendered this reach-to-pick-and-place task especially difficult in two important ways. First, due to the slim profile of the cylindrical object and the geometry of the hand, the object must be grasped at an appropriate location and with the correct orientation. Second, excessive grasping force causes the object to slide out of the hand's grasp. Thus, it was hypothesized that haptic feedback of contact location and of the presence of the object in the grasp, in combination with autonomous grasping controllers, would assist with these two challenges.
\subsection{Experimental Hardware and Software}
The measurement devices in the system included a seven-camera Vicon Vantage motion-capture system, a custom-built three-axis force plate, and Tobii Pro 2 eye-tracking glasses to identify the participant's gaze direction. A Delsys Bagnoli surface electromyography (sEMG) system was used for proportional myoelectric control of the Ottobock SensorHand Speed prosthesis using two sEMG electrodes on the wrist flexor and extensor muscle groups. Two custom-built tactile sensors were placed on the thumb and finger of the prosthesis (see Fig. \ref{fig:hand}). Control was implemented through an NI myRIO DAQ and Simulink with QUARC Real-Time software at a 1000\,Hz sampling rate. Complete details of our measurement hardware are presented in \cite{Thomas2021Sensorimotor-inspiredVision}.
The two haptic feedback displays were a C-2 tactor to provide vibrotactile feedback (driven by NI myRIO and a linear current amplifier) and an eight-tactor Bellowband pneumatic display \cite{Young2019Bellowband:Vibration} to provide distributed pressure feedback. The Bellowband was programmed in C and controlled with an NI cDAQ-9174 housing an analog input module (NI-9205) and an analog output module (NI-9264) at a 250\,Hz sample rate.
The 1-DoF Ottobock SensorHand Speed myoelectric prosthesis was worn by able-bodied individuals using a 3D-printed adaptor attached to a wrist brace. A counterweight pulley system was implemented to offset 80\% of the prosthesis's mass (500\,g) to replicate the load typically experienced by amputees.
The entire setup is shown in Fig. \ref{fig:setup}.
\subsection{Tactile Sensors}
Two custom-built fabric-based sensors were used to separately obtain pressure and contact-location information. The piezoresistive pressure sensor was similar to the one developed by Osborn et al. \cite{Osborn2016} and was placed on the prosthesis thumb. It operates on the same principles as a force-sensitive resistor sensor. Our novel contact-location sensor \cite{Thomas2021Sensorimotor-inspiredVision} was wrapped around the fingers, covering both palmar and dorsal regions. The contact-location sensor consists of two layers, both of which are fixed separately within a silicone frame. The bottom layer consists of a long piece of piezoresistive fabric, while the top layer consists of a long piece of conductive fabric. When a voltage is applied across the length of the piezoresistive fabric, a voltage gradient is created; when the top conductive layer contacts the bottom at a specific point, a distinct output voltage is generated, similar to a potentiometer. For a depiction of the sensor layers, refer to Fig. 3 in \cite{Thomas2021Sensorimotor-inspiredVision}.
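To make the potentiometer analogy concrete, the following minimal Python sketch computes the idealized output voltage for a contact at position $x$ along a strip; the strip length and supply voltage are illustrative placeholders, not measured values from our sensor.
\begin{verbatim}
# Minimal sketch of the potentiometer-like behavior of the contact-location
# sensor. All numbers are illustrative placeholders, not measured values.

def ideal_contact_voltage(x_mm, length_mm=140.0, v_supply=5.0):
    """Idealized output: the contact point taps the voltage gradient across
    the piezoresistive strip, like a potentiometer wiper."""
    x_mm = max(0.0, min(x_mm, length_mm))   # clamp to the strip
    return v_supply * x_mm / length_mm      # linear voltage divider

# In practice the measured mapping is nonlinear (see the characterization
# figure) because of finger curvature and the finite contact area of the probe.
print(ideal_contact_voltage(70.0))          # mid-strip -> half the supply voltage
\end{verbatim}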
The relationship between contact location along the length of the sensor and output voltage is shown in Fig. \ref{fig:characterization}a: the sensor's response was characterized for both a 2.6\,mm by 0.6\,mm point probe and a cylindrical object whose dimensions matched the test object in our experiments. The cylindrical object was oriented perpendicular to the sensor, as though it was being grasped. The mapping exhibits nonlinear behavior due to the concavity in the inner region of the prosthesis fingers. When the contact-location sensor was not pressed, the baseline voltage reading was around 0.4\,V. This is the lower limit for contact with the point probe, which occurs around 105\,mm. In contrast, the cylindrical probe is able to elicit lower voltages because it makes contact with a larger area of the sensor. To determine the minimum force required to activate the sensor, we used an ATI Nano 17 force/torque sensor with both the cylindrical probe and a 17\,mm circular flat probe to press at 29 evenly distributed locations. The average activation force when using the cylindrical probe was 1.5 $\pm$ 0.55 N, and it was 2.2 $\pm$ 0.56 N for the flat probe.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/contact-loc_sensor_characterization.pdf}
\caption{The voltage output of the contact-location sensor when pressed with a point probe and a cylindrical probe the size of the test object. 0\,mm is the most proximal dorsal location, while 140\,mm is the most proximal palmar location.}
\label{fig:characterization}
\end{figure}
\subsection{Haptic Feedback Systems}
The haptic feedback is designed to provide two kinds of information that are important for grasping accuracy: 1) where contact occurs on the prosthesis fingers, and 2) whether an object is in the grasp of the prosthesis.
\subsubsection{Vibrotactile Feedback}
\indent The C-2 tactor was worn just above the elbow on the biceps muscle. As in \cite{Thomas2021Sensorimotor-inspiredVision}, it provided amplitude-modulated feedback of contact location and frequency-modulated binary feedback of grasping pressure.
The contact-location sensor's signals were first normalized between 0 (proximal) and 1 (distal), regardless of whether contact was on the dorsal or palmar region. Contact on the dorsal side was mapped to a constant vibration, while contact on the palmar side was mapped to a pulsed vibration. The mapping equation for current input to the C-2 tactor was
\begin{equation}
I = \begin{cases}
A(x) \cdot \sin{(W \cdot 250\,\textrm{Hz} \cdot t)} &, \, \text{dorsal}\\
E_v(t) \cdot f(x) \cdot \sin{(W \cdot f \cdot t)} &, \, \text{palmar}\\
\end{cases}
\label{Eqn:VibMapping}
\end{equation}
\noindent where $x$ is the normalized contact-location sensor signal, $A(x)$ is $0.5\, \textrm{A} \cdot \sqrt{1-x}$, $W$ is $2 \pi\,\frac{\textrm{rad}}{\textrm{cycle}}$, $E_v(t)$ is an envelope function denoted by $|\sin{(W \cdot 4.75\,\textrm{Hz} \cdot t)}|$, $f$ is the frequency in Hz, and $t$ is the time in seconds. Example signals are shown in Fig. \ref{fig:sensor}. When the pressure sensor on the prosthesis thumb exceeds a heuristically determined threshold ($p_g>0.2$\,V signifying object contact), the frequency of the vibration stimulus decreases linearly from 250\,Hz to 150\,Hz over a 2\,s period, as shown in Fig.~\ref{fig:sensor}(e).\\
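For concreteness, the following minimal Python sketch implements one reading of this mapping. Treating the palmar amplitude as $A(x)$ and applying the frequency ramp to the active vibration are assumptions of the sketch, not a description of the exact real-time implementation.
\begin{verbatim}
import numpy as np

def tactor_current(x, t, palmar, t_since_grasp=None):
    """x: normalized contact location (0 proximal, 1 distal); t: time in s."""
    A = 0.5 * np.sqrt(1.0 - x)                 # location-coded amplitude [A]
    f = 250.0                                  # base vibration frequency [Hz]
    if t_since_grasp is not None:              # object detected in the grasp
        ramp = min(t_since_grasp, 2.0) / 2.0
        f = 250.0 - 100.0 * ramp               # linear drop to 150 Hz over 2 s
    if palmar:
        envelope = np.abs(np.sin(2 * np.pi * 4.75 * t))   # pulsed vibration
        return envelope * A * np.sin(2 * np.pi * f * t)
    return A * np.sin(2 * np.pi * f * t)       # constant vibration (dorsal side)

print(tactor_current(0.3, 0.01, palmar=True))
\end{verbatim}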
\begin{figure}[t]
\centering
\vspace{1em}
\includegraphics[width=\columnwidth]{Figures/contactLocationSensorV10.pdf}
\caption{
Left: Example voltages measured at different points on the contact-location sensor. (a) A proximal dorsal location. (b) A distal dorsal location. (c) A distal palmar location. (d) A proximal palmar location. (e) An object grasped by the prosthesis; it touches the same distal palmar point contacted in (c). Right: Feedback signals for the vibrotactile and pneumatic pressure actuators corresponding to the four example contact locations (a--d); the depicted vibration signals are 1\,s long, and (e) starts 1\,s after object contact. As shown in (e), the haptic feedback is altered distinctively when both the pressure sensor and the contact-location sensor are activated, since this condition indicates an object is most likely within the grasp of the prosthesis.}
\label{fig:sensor}
\end{figure}
\subsubsection{Pneumatic Pressure Feedback}
Each bellow of the Bellowband represented a different region of the contact-location sensor. The Bellowband was worn on the upper arm, with the bellow representing the fingertip located at the posterior part of the arm, above the elbow. This orientation of the Bellowband was chosen because the two-point discrimination threshold is smaller on the posterior part of the upper arm compared to the anterior part \cite{Nolan1982Two-PointWomen,Koo2016}, which we confirmed in pilot studies. At most, two neighboring bellows were activated to indicate transitions between regions.
\begin{figure}[t]
\centering
\vspace{1em}
\includegraphics[width=\columnwidth]{Figures/traces3.pdf}
\caption{(a) Excerpt of time-series traces from a representative participant's trial in the Reflex-Vib condition as they found, grasped, picked up, moved, and set down the object. The vertical dashed line indicates the time point when the participant successfully placed the object into the end bin. The traces shown are the closing command $u_c$, the grip aperture $a$, the pressure sensor signal $p$, the contact-location sensor signal $V_x$, the C-2 tactor signal $I$, and the object's displacement $D$ from the end bin and height $H$ above the force plate as measured by the motion-capture system. The participant first attempts to localize the prosthesis hand on the object, as shown by the contact-location signal and C-2 current traces. Next, the participant activates their EMG, which is modulated once the pressure sensor signal ramps up. One fast slip event is also detected from the pressure sensor signal and compensated for. (b) Activation signals for four of the bellows on the pneumatically actuated pressure band as they would be driven by the pressure sensor and contact-location sensor signals from (a). Bellows 5, 6, 7, and 8 (not shown) would be completely deflated at a constant 0.1\,V because all contacts occurred on the interior surface of the fingers in this trial.}
\label{fig:Traces}
\vspace{-1em}
\end{figure}
To map the detected contact location to the bellows $B_1$, $B_2$, $\cdots$, $B_8$ of the Bellowband, the sensor output voltage was divided into eight regions. Defining voltages $\nu_1$, $\nu_2$, $\cdots$, $\nu_9$ demarcate the boundaries between these regions, such that any voltage $V_x$ elicited by the sensor falls between two consecutive defining voltages $\nu_i$ and $\nu_{i+1}$ ($i \in \{1,\dots,8\}$), which corresponds to bellow $B_i$. We command the pressure profile $P_i(t)$ to this bellow as follows:
\begin{equation}
P_i(t) = E_p(t) \cdot \left [(p_{\max}-p_{\min}) \cdot \gamma_i + p_{\min} \right]
\label{Eqn: pressure bband}
\end{equation}
\noindent where $E_p(t)={0.25\cdot\sin{(2\pi\frac{\textrm{rad}}{\textrm{cycle}} \cdot 3\,\textrm{Hz} \cdot t)}+0.75}$ is an envelope function that was used to prevent sensory adaptation to the pressure stimuli, $p_{\max}$ is the maximum allowable pressure, $\gamma_i$ is a proportional gain (defined in Eqn. \ref{eqn_gamma}) that represents where $V_x$ falls between $\nu_i$ and $\nu_{i+1}$, and $p_{\min}=0.1$\,V is the minimum pressure (completely deflated bellow). We set $p_{\max}=0.8$\,V (partially inflated) when providing only contact location feedback from the contact location sensor. Alternatively, when the pressure sensor on the prosthesis thumb exceeds a heuristically determined threshold ($p_g>0.2$\,V signifying object grasp), we set $p_{\max}=1.5$\,V (completely inflated). This larger maximum pressure value differentiates the simultaneous activation of the pressure sensor and contact-location sensor from just the contact-location sensor alone, similar to the change of the vibration stimulus frequency.
Pilot testing showed that discrete jumps between neighboring bellows were difficult to interpret, so we developed a method for distributing actuation between two neighboring bellows. For each sensor region, we defined a threshold voltage $\tau_{i}$, where $\nu_i < \tau_{i} < \nu_{i+1}$. If $\nu_i < V_x < \tau_{i}$, neighboring bellow $B_{i-1}$ was activated in addition to $B_i$. Otherwise, neighboring bellow $B_{i+1}$ was activated. The proportion that each bellow is actuated depends on the gain $\gamma_i$, which is calculated as
\begin{equation}
\gamma_i =
\begin{cases}
0.5 + 0.5 \frac{V_x - \tau_{i}}{\nu_{i+1}-\tau_i}, & i=1,\cdots,7 \ \text{and} \ V_x \geq \tau_{i}\\[0.4em]
0.5 + 0.5 \frac{V_x - \nu_{i}}{\tau_i - \nu_i}, & i=2,\cdots,8 \ \text{and} \ V_x < \tau_{i}\\[0.4em]
1, & i=1,\ V_x < \tau_{1} \ \text{or} \ i=8,\ V_x \geq \tau_{8}
\end{cases}
\label{eqn_gamma}
\end{equation}
\noindent The pressure $P_{i\pm1}$ of the closer neighboring bellow $B_{i\pm1}$ is set to
\begin{equation}
P_{i\pm1} = E_p(t) \cdot \left [(p_{\max}-p_{\min}) \cdot \gamma_{i\pm1} + p_{\min} \right]
\end{equation}
\noindent where $\gamma_{i+1}$ is equal to $1-\gamma_i$ for $i = 1,...,7 \text{ \& } V_x \geq \tau_{i}$, and $\gamma_{i-1}$ is equal to $1-\gamma_i$ for $i=2,...,8 \text{ \& } V_x < \tau_{i}$.
Fig. \ref{fig:sensor} shows the location of each bellow relative to the upper arm, as well as example actuation outputs.
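A minimal Python sketch of this two-bellow interpolation is given below. The region boundaries and thresholds are placeholders (the real values came from the sensor characterization), and the structure is illustrative rather than a copy of the deployed controller.
\begin{verbatim}
import numpy as np

nu = np.linspace(0.4, 4.4, 9)           # hypothetical region boundaries nu_1..nu_9
tau = 0.5 * (nu[:-1] + nu[1:])          # hypothetical mid-region thresholds tau_i

def bellow_pressures(V_x, t, grasped=False, p_min=0.1):
    p_max = 1.5 if grasped else 0.8                      # inflate more once grasped
    E_p = 0.25 * np.sin(2 * np.pi * 3.0 * t) + 0.75      # anti-adaptation envelope
    i = int(np.clip(np.searchsorted(nu, V_x) - 1, 0, 7)) # region index (0-based)
    P = np.full(8, p_min)                                # all bellows deflated
    if V_x >= tau[i]:                                    # share with distal neighbor
        gamma = 1.0 if i == 7 else 0.5 + 0.5 * (V_x - tau[i]) / (nu[i + 1] - tau[i])
        neighbor = min(i + 1, 7)
    else:                                                # share with proximal neighbor
        gamma = 1.0 if i == 0 else 0.5 + 0.5 * (V_x - nu[i]) / (tau[i] - nu[i])
        neighbor = max(i - 1, 0)
    P[i] = E_p * ((p_max - p_min) * gamma + p_min)
    if neighbor != i:
        P[neighbor] = E_p * ((p_max - p_min) * (1.0 - gamma) + p_min)
    return P

print(bellow_pressures(2.0, t=0.1))
\end{verbatim}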
\subsection{Reflex System}
The reflex system consisted of three autonomous controllers designed to prevent common grasp errors: over-grasping of the object, high-speed slips, and slow-speed slips. These controllers build on work done by Osborn et al. \cite{Osborn2016} and rely on the pressure signal from the piezoresistive pressure sensor on the prosthetic thumb.
\subsubsection{Over-grasp Controller}
\indent This controller uses the pressure sensor signal to prevent excessive grasp force by modulating the closing command $u_{c}$ to the motor according to the control law
\begin{equation}
u_{c} = \begin{cases}
u_{c} \cdot e^{-K \cdot p} &, \, p \geq p_{g}, \ \ \text {palmar}\\
u_{c} &, \, \text {otherwise}\\
\end{cases}
\label{Eqn:AntiOverGrasp}
\end{equation}
\noindent where $K$ is $3$ V$^{-1}$, $p$ is the pressure sensor voltage, and $p_{g} = 0.2$\,V is the pressure threshold for detecting object contact.
\subsubsection{Anti-slip Controller}
This controller uses the pressure sensor signal to detect and respond to fast and slow slips.
{\em Fast slips} were detected by rapid decreases in pressure according to the following equation:
\begin{equation}
{\text{Slip}}_{f} = \begin{cases}
1 &, \, \frac{dp}{dt} \le {q}_{fs}\\
0 &, \, \text {otherwise}\\
\end{cases}
\label{Eqn:Fastlip}
\end{equation}
\noindent where $\frac{dp}{dt}$ is the time derivative of the pressure sensor signal and $q_{fs}=-20$ Vs$^{-1}$ is a heuristically determined threshold.
When a fast slip occurs, a closing command is sent to the motor at maximum voltage for 60\,ms to prevent the object from falling out of the hand.
{\em Slow slips} were detected by moderate decreases in pressure according to the following equation:
\begin{equation}
{\text{Slip}}_{s} = \begin{cases}
1 &, \, p(t) - p(t-0.5\,\textrm{s}) < {p}_{ss} \\
0 &, \, \text {otherwise}\\
\end{cases}
\label{Eqn:SlowSlip}
\end{equation}
\noindent where $p(t)$ is the pressure sensor signal at the current time and $p_{ss}= -0.35$\,V is a heuristically determined threshold.
When a slow slip occurs, a closing command is sent to the motor at maximum voltage for 30\,ms. See Fig. \ref{fig:Traces} for excerpts of relevant signals including the haptic feedback and slip control.
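For illustration, the following minimal Python sketch combines the three behaviors into a single per-sample update at the 1\,kHz control rate. The thresholds follow the values above, while the precedence of slip responses over grasp attenuation is an assumption of the sketch, not a statement of the deployed implementation.
\begin{verbatim}
import numpy as np

K, P_G = 3.0, 0.2            # over-grasp gain [1/V] and contact threshold [V]
Q_FS, P_SS = -20.0, -0.35    # fast-slip rate [V/s] and slow-slip drop [V] thresholds

def reflex_step(u_c, p, dp_dt, p_half_second_ago):
    """Return (motor command, forced-close duration in s) for one control step."""
    if dp_dt <= Q_FS:                         # fast slip: rapid pressure loss
        return 1.0, 0.060                     # close at full voltage for 60 ms
    if p - p_half_second_ago < P_SS:          # slow slip: gradual pressure loss
        return 1.0, 0.030                     # close at full voltage for 30 ms
    if p >= P_G:                              # object contact: prevent over-grasp
        return u_c * np.exp(-K * p), 0.0      # attenuate the user's close command
    return u_c, 0.0                           # otherwise pass the command through

print(reflex_step(u_c=0.6, p=0.5, dp_dt=-1.0, p_half_second_ago=0.55))
\end{verbatim}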
\subsection{Experimental Protocol}
Participants were randomized into one of four conditions: Standard (myoelectric prosthesis with no additional features), Reflex (prosthesis featuring reflex controllers), Reflex-Vib (prosthesis featuring reflex controllers and vibrotactile feedback), and Reflex-Pneu (prosthesis featuring reflex controllers and pneumatic pressure feedback).
Each participant completed the task in only one of the four conditions (between-participants design).
Because the eye-tracking glasses are not compatible with prescription glasses, all participants were required to successfully read the largest (topmost) line of an eye exam chart from a distance of 3~m before proceeding with the experiment.
Next, participants completed a demographics survey with questions regarding occupation, age, gender, handedness, and experience with myoelectric and haptic devices.
The experimenter then helped the participant don the prosthesis via the wrist-brace attachment. The participant's skin was cleaned with an alcohol wipe in preparation for the sEMG electrode placement.
sEMG signals were calibrated using maximum voluntary contractions of the wrist flexor and extensor. For more details on calibration steps, refer to \cite{Thomas2021Sensorimotor-inspiredVision}.
Following calibration, the participant practiced controlling the prosthetic hand using their muscle activity.
To account for typical EMG drift \cite{Kyranou2018CausesProstheses} that could occur during the experiment, participants were instructed on how to re-zero their signals. Participants re-zeroed their signals whenever they wished and also when prompted by the experimenter.
If the participant was assigned to a condition receiving haptic feedback (Reflex-Vib or Reflex-Pneu), the proper device (C-2 tactor or the Bellowband) was attached to their upper arm. Finally, the experimenter helped the participant don the eye-tracking glasses. Calibration of the glasses was done through iMotions software.
The experimenter then trained the participant on the ideal reach-to-pick-and-place strategy. After this coaching, participants were asked to complete the task successfully two times while being able to observe the prosthetic hand and object. They were then given 5 minutes to try to complete the task while looking only at the visual target on the wall. This timed practice session ended early if they successfully completed the task twice. Participants then completed twenty trials of the reach-to-pick-and-place task while keeping their gaze on the visual target. A trial began when the cylindrical object was placed inside the start bin, and it ended when the object was placed into the end bin or when 60 seconds had passed.
After all twenty trials, participants completed a survey based on the NASA-TLX. Survey questions are described in Section \ref{Sec:survey metrics}.
\subsection{Metrics}
\subsubsection{Task Success}
To evaluate success in the reach-to-pick-and-place task, the following three milestones with binary outcomes were extracted from each trial: (1) successfully lifting the object from the start bin, (2) successfully reaching the end bin with the object, and (3) successfully setting the object inside the end bin. A lift is defined as holding the object in the air for at least 1 second. Reaching the end bin is defined as coming within an 8\,cm radius of the end bin. Motion-capture and early trial-completion data were used to track milestone achievement. The time required to reach each of the milestones was also measured.
We also counted the number of drops that occurred after the object was successfully lifted. This number was determined by assessing sharp decreases in the object's height (relative to the prosthesis) using motion-capture data.
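A minimal sketch of such a drop-detection heuristic is shown below; the window length and height-drop threshold are hypothetical stand-ins, not the values used in our analysis.
\begin{verbatim}
import numpy as np

def count_drops(rel_height_m, dt=0.01, min_drop_m=0.05, max_drop_time_s=0.3):
    """Count sharp decreases in the object's height relative to the prosthesis."""
    window = int(max_drop_time_s / dt)
    h = np.asarray(rel_height_m)
    drops, i = 0, 0
    while i + window < len(h):
        if h[i] - h[i + window] > min_drop_m:   # sharp decrease within the window
            drops += 1
            i += window                          # skip past this event
        else:
            i += 1
    return drops

heights = np.concatenate([np.full(50, 0.12), np.linspace(0.12, 0.0, 10), np.zeros(40)])
print(count_drops(heights))   # one simulated drop
\end{verbatim}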
\subsubsection{Grasping Location}
To obtain the most reliable ground truth measurements of grasping location, the angle of the prosthesis fingertip relative to the object during attempted grasping was calculated using motion-capture data. These angles were measured for 1.75\,s before the completion of grasping.
\subsubsection{Proportion of Time Spent Looking at the Task (Cheating)}
Gaze direction was analyzed using the eye-tracking data recorded in iMotions software. A horizontal line below the visual target was drawn for each frame of an individual's point-of-view recording. The gaze direction was automatically computed by iMotions. The proportion of time spent looking toward the task was calculated by dividing the time spent fixating below the horizontal line by the trial time.
\subsubsection{Survey}
\label{Sec:survey metrics}
The post-experiment survey asked participants to rate their perceived performance at: (1) finding the object, (2) grasping the object, (3) lifting the object, (4) moving the object to the end bin, and (5) setting the object inside the end bin. It further asked them to rate their perceived mental effort, physical effort, and level of physical comfort during the experiment. Next it asked them to evaluate how much they relied on auditory, visual, and somatosensory cues to complete the task. Each of the rating questions was a sliding scale from 0 to 100. Finally, the survey prompted participants to provide comments and suggestions about their experience.
\subsection{Statistical Analysis}
All statistical tests were performed in RStudio v1.2.1335. For all mixed-model analyses, participant was treated as a random effect. We used $\alpha=0.05$ to determine significance and report the estimates of the fixed effects $\beta$ along with their standard errors $SE$.
\subsubsection{Time Spent Fixating on the Task (Cheating)}
A linear mixed model was used to gauge differences in the proportion of time spent visually cheating among the conditions. The fixed effect was condition.
\subsubsection{Task Milestones}
Three separate logistic mixed-effects models were used to analyze the binary outcomes of lifting the object, reaching the end bin with the object, and setting down the object into the end bin. The fixed effects for these models were condition, the proportion of time spent looking toward the task (cheating), and trial number. These models were run only on trials in which participants looked away from the visual target (i.e., cheated) no more than 37\% of the time. This threshold was chosen as a compromise halfway between the 75th percentile of cheating (24\%) and 50\%, to balance removing too many and too few trials. The resulting dataset contained 701 of the 800 possible trials: 20 of the removed trials came from a participant for whom the eye-tracking system failed to record, and the remaining 79 removed trials were those in which the participant cheated more than 37\% of the time.
Three separate linear mixed models were run to assess the time required to lift the object, move the object to the end bin, and set the object into the end bin. The fixed and random effects were the same as in the logistic mixed models described above.
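Although our models were fit in R, an equivalent linear mixed model can be sketched in Python with statsmodels for readers who prefer that ecosystem; the synthetic data, column names, and effect sizes below are placeholders, not our experimental data.
\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative only: the actual analysis was run in RStudio. This sketches an
# equivalent linear mixed model (time to lift ~ condition + cheating + trial,
# random intercept per participant) on synthetic stand-in data.
rng = np.random.default_rng(1)
n = 800
trials = pd.DataFrame({
    "participant": np.repeat(np.arange(40), 20),
    "condition": np.repeat(["Standard", "Reflex", "ReflexVib", "ReflexPneu"], 200),
    "trial_number": np.tile(np.arange(1, 21), 40),
    "prop_cheating": rng.uniform(0, 0.3, n),
})
trials["time_to_lift"] = 30 - 0.3 * trials["trial_number"] + rng.normal(0, 5, n)

model = smf.mixedlm("time_to_lift ~ condition + prop_cheating + trial_number",
                    data=trials, groups=trials["participant"])
print(model.fit().summary())    # fixed-effect estimates (beta) and SEs
\end{verbatim}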
\subsubsection{Number of Drops}
A linear mixed model was used to assess the effect of condition and trial on the number of drops that occurred per trial.
This model was run only on the 647 trials in which the object was lifted, regardless of cheating.
\subsubsection{Grasp Location}
Finally, all successful grasps were analyzed to determine the range of grasping locations that led to success. A logistic mixed model was used on all grasps to assess the fixed effect of condition on whether the grasp location was within the successful range or not. In addition, the earth mover's distance metric was calculated to compare the histogram of successful grasping locations with the histogram of all grasping locations for each condition.
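The following minimal sketch shows how such an earth mover's distance comparison can be computed with SciPy; the angle samples are random stand-ins for the motion-capture-derived grasp angles.
\begin{verbatim}
import numpy as np
from scipy.stats import wasserstein_distance

# Placeholder samples standing in for the measured grasp angles (degrees).
successful_angles = np.random.uniform(40, 60, size=200)
condition_angles = np.random.uniform(20, 80, size=200)

emd = wasserstein_distance(condition_angles, successful_angles)
print(f"EMD = {emd:.2f} deg")   # smaller = more similar to successful grasps
\end{verbatim}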
\subsubsection{Survey}
Separate linear models were run for each of the rating questions described in Section \ref{Sec:survey metrics}, where condition was the fixed effect.
\section{Results}
\label{Sec:Results}
Eight participants were excluded from data analysis due to feedback and control issues that affected task performance. One participant mentioned that they could not feel the vibrotactile stimulus at all. The pressure sensor was not functioning for another participant, while the contact-location sensor was not functioning for a third participant. Finally, five participants experienced unreliable EMG signal quality throughout the experiment, as evidenced by their high average number of re-zeroing actions (at least two per trial). The following results represent the data from the remaining 40 participants; ten were in each of the four conditions.
\begin{figure}[tb!]
\centering
\vspace{1em}
\includegraphics[width=.75\columnwidth]{Figures/milestones_vertical.pdf}
\caption{The probability of accomplishing the three task milestones versus the proportion of time spent cheating, by condition. Solid lines indicate the average predicted probabilities from the mixed models, while individual markers show the average metric for each participant.}
\label{fig:Milestones}
\end{figure}
\begin{table*}[!t]
\vspace{-1em}
\caption{Summary of model statistics for odds of reaching task milestones}
\vspace{-1em}
\label{Table:TaskMilestones}
\tiny
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{lccc|ccc|ccc|ccc|ccc|ccc}
\hline
& \multicolumn{3}{c}{Intercept (Standard)} & \multicolumn{3}{c}{Reflex} & \multicolumn{3}{c}{Reflex-Vib} & \multicolumn{3}{c}{Reflex-Pneu} & \multicolumn{3}{c}{Cheating} &
\multicolumn{3}{c}{Trial Number}\\
\hline
& $\beta$ & SE & $p$ & $\beta$ & SE & $p$ & $\beta$ & SE & $p$ & $\beta$ & SE & $p$ & $\beta$ & SE & $p$ & $\beta$ & SE & $p$ \\
Lifting object & 0.10 & 0.41 & 0.81 & 0.96 & 0.46 & 0.04 & 1.00 & 0.46 & 0.03 & 0.94 & 0.49 & 0.06 & 4.31 & 1.33 & 0.001 & 0.02 & 0.02 & 0.18\\
Reaching bin & --0.26 & 0.46 & 0.58 & 0.94 & 0.55 & 0.09 & 1.11 & 0.55 & 0.04 & 0.42 & 0.56 & 0.45 & 4.22 & 1.29 & 0.001 & 0.04 & 0.02 & 0.04 \\
Placing object & --0.47 & 0.43 & 0.27 & 0.77 & 0.50 & 0.12 & 0.82 & 0.50 & 0.10 & 0.33 & 0.52 & 0.53 & 2.14 & 1.14 & 0.06 & 0.04 & 0.02 & 0.007\\
\hline
\end{tabular}}
\end{table*}
\begin{table*}[!tb]
\vspace{-1em}
\caption{Summary of model statistics for time to achieve milestones}
\vspace{-1em}
\label{Table:TimeMilestones}
\tiny
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{lccc|ccc|ccc|ccc|ccc|ccc}
\hline
& \multicolumn{3}{c}{Intercept (Standard)} & \multicolumn{3}{c}{Reflex} & \multicolumn{3}{c}{Reflex-Vib} & \multicolumn{3}{c}{Reflex-Pneu} & \multicolumn{3}{c}{Cheating} & \multicolumn{3}{c}{Trial Number}\\
\hline
& $\beta$ & SE & $p$ & $\beta$ & SE & $p$ & $\beta$ & SE & $p$ & $\beta$ & SE & $p$ & $\beta$ & SE & $p$ & $\beta$ & SE & $p$\\
Lifting object & 30.8 & 2.28 & $<$ 0.001 & -3.26 & 2.29 & 0.16 & -1.53 & 2.24 & 0.50 & -2.47 & 2.34 & 0.30 & -3.57 & 6.76 & 0.60 & -0.35 & 0.11 & $<$0.001\\
Reaching bin & 33.0 & 2.59 & $<$ 0.001 & -3.68 & 2.68 & 0.18 & -1.41 & 2.66 & 0.60 & -1.20 & 2.82 & 0.67 & 0.63 & 7.25 & 0.93 & -0.39 & 0.11 & $<$ 0.001\\
Placing object & 36.8 & 2.59 & $<$ 0.001 & -1.82 & 2.54 & 0.48 & 0.71 & 2.53 & 0.78 & -0.52 & 2.70 & 0.85 & 5.59 & 7.36 & 0.45 & -0.50 & 0.11 & $<$ 0.001\\
\hline
\end{tabular}}
\end{table*}
\subsection{Task Milestones}
Fig. \ref{fig:Milestones} and Table \ref{Table:TaskMilestones} show the complete task-milestone results. An increased amount of cheating significantly increased a participant's odds of lifting the object. The trial number, however, had no effect on the odds of lifting the object. When controlling for amount of cheating and trial number, both Reflex and Reflex-Vib significantly increased the odds of being able to lift the object, in comparison with the Standard condition. The same comparison was close to significant ($p$~=~0.06) for Reflex-Pneu. The odds of lifting the object from the start bin in the Standard condition were not significantly different from 50\%.
An increased amount of cheating also significantly improved the odds of reaching the bin. Similarly, higher trial number (experience with the task) significantly increased the odds of reaching the bin. When controlling for amount of cheating and trial number, only Reflex-Vib significantly increased the odds of being able to move the object to the end bin, in comparison with the Standard condition. The odds of reaching the end bin in the Standard condition were not significantly different from 50\%.
An increased amount of cheating (looking toward the task) was close to significant in affecting the odds of setting the object in the end bin ($p=0.06$). However, higher trial number significantly improved the odds of complete success. When controlling for the amount of cheating and trial number, no condition resulted in odds that were significantly better than 50\%.
\begin{figure}[t]
\centering
\vspace{1em}
\includegraphics[width=\columnwidth]{Figures/polar_grasp.pdf}
\caption{Normalized polar histograms for the relative angle (degrees) between the prosthesis fingertips and the object. The Earth Mover's Distance (EMD) was computed between the successful-grasp histogram and the grasp histogram of each condition. A smaller value indicates more similarity with the successful-grasp histogram.}
\label{Fig:grasphistogram}
\vspace{-1em}
\end{figure}
\begin{table*}[!t]
\vspace{-1em}
\caption{Summary of model statistics for survey results}
\vspace{-1em}
\label{Table:Survey}
\tiny
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{lccc|ccc|ccc|ccc}
\hline
& \multicolumn{3}{c}{Intercept (Standard)} & \multicolumn{3}{c}{Reflex} & \multicolumn{3}{c}{Reflex-Vib} & \multicolumn{3}{c}{Reflex-Pneu} \\
\hline
& $\beta$ & SE & $p$ & $\beta$ & SE & $p$ & $\beta$ & SE & $p$ & $\beta$ & SE & $p$\\
Finding object & 70.8 & 6.97 & $<$ 0.001 & 1.1 & 9.86 & 0.91 & 4.7 & 9.86 & 0.64 & --7.1 & 9.86 & 0.48\\
Grasping object & 41.1 & 6.51 & $<$ 0.001 & 7.1 & 9.21 & 0.52 & --6.0 & 9.21 & 0.52 & 4.1 & 9.21 & 0.66\\
Lifting object & 75.4 & 6.52 & $<$ 0.001 & --2.8 & 9.23 & 0.76 & --7.7 & 9.23 & 0.41 & --13.6 & 9.23 & 0.15\\
Moving object & 67.1 & 6.66 & $<$ 0.001 & 2.9 & 9.42 & 0.76 & 7.6 & 9.42 & 0.43 & --5.7 & 9.42 & 0.55\\
Placing object & 75.4 & 5.93 & $<$ 0.001 & --21.3 & 8.39 & 0.02 & --5.1 & 8.39 & 0.55 & --8.4 & 8.39 & 0.32\\
Mental effort & 60.9 & 6.73 & $<$ 0.001 & --1.0 & 9.53 & 0.92 & 3.1 & 9.53 & 0.75 & --1.5 & 9.53 & 0.88\\
Physical effort & 49.6 & 6.49 & $<$ 0.001 & --3.5 & 9.18 & 0.71 & 9.4 & 9.18 & 0.31 & 14.8 & 9.18 & 0.12\\
Physical comfort & 64.5 & 6.67 & $<$ 0.001 & --1.9 & 9.43 & 0.84 & --3.7 & 9.43 & 0.70 & --14.3 & 9.43 & 0.14\\
Frustration & 50.9 & 6.32 & $<$ 0.001 & --4.4 & 8.94 & 0.63 & 9.40 & 8.94 & 0.30 & --1.9 & 8.94 & 0.83\\
Time pressure & 59.6 & 7.27 & $<$ 0.001 & --14.7 & 10.3 & 0.16 & --12.2 & 10.3 & 0.24 & --10.3 & 10.3 & 0.32\\
Auditory cues & 51.6 & 8.52 & $<$ 0.001 & 4.4 & 12.1 & 0.71 & 3.9 & 12.1 & 0.75 & 0.1 & 12.1 & 0.99\\
Visual cues & 34.3 & 7.93 & $<$ 0.001 & --1.0 & 11.1 & 0.92 & 7.5 & 11.1 & 0.50 & 19.1 & 11.1 & 0.09\\
Somatosensory cues & 85.2 & 4.22 & $<$ 0.001 & --2.9 & 5.96 & 0.63 & 2.5 & 5.96 & 0.68 & --11.8 & 5.96 & 0.055\\
\hline
\end{tabular}}
\end{table*}
Table \ref{Table:TimeMilestones} displays detailed statistics for milestone timing. An increased amount of cheating did not significantly affect the time required to complete any of the task milestones. The time required to reach each of the milestones did not significantly differ by condition. However, the trial number did significantly decrease the time required to lift, move, and set the object down, again showing the benefits of task experience.
\subsection{Number of Drops}
Trial number had no effect on the number of drops ($\beta$~=~\mbox{--0.004}, $SE$~=~0.004, $p$~=~0.33). The number of drops in the Standard condition was significantly greater than 0 ($\beta$~=~0.22, $SE$~=~0.09, $p$~$<$~0.02). Both the Reflex and the Reflex-Pneu conditions caused a significant increase in the number of drops (Reflex: $\beta$~=~0.24, $SE$~=~0.11, $p$~=~0.046; Reflex-Pneu: $\beta$~=~0.23, $SE$~=~0.11, $p$~=~0.049). The number of drops in the Reflex-Vib condition did not differ from that in the Standard condition ($\beta$~=~0.12, $SE$~=~0.11, $p$~=~0.30).
\subsection{Grasping Location}
Fig. \ref{Fig:grasphistogram} shows the polar histogram of the relative angles between the prosthetic finger and the object during attempted grasping for each condition compared to all successful grasps. Successful grasps are most often found between 40 and 60 degrees. The odds of being within the optimal grasp angle range were significantly less than 50\% in the Standard condition ($\beta$~=~--0.45, $SE$~=~0.10, $p$~$<$~0.001). Reflex and Reflex-Pneu had significantly lower odds compared to the Standard condition (Reflex: $\beta$~=~--0.25, $SE$~=~0.12, $p$~=~0.038; Reflex-Pneu: $\beta$~=~--0.31, $SE$~=~0.10, $p$~=~0.002). However, Reflex-Vib did not significantly differ from the Standard condition ($\beta$~=~0.07, $SE$~=~0.11, $p$~=~0.55). Post-hoc tests with a Bonferroni correction indicated that participants in Reflex-Vib had significantly better odds of being in the successful grasping range than those in the Reflex ($\beta$~=~0.32, $SE$~=~0.14, $p$~=~0.039) and Reflex-Pneu ($\beta$~=~0.38, $SE$~=~0.13, $p$~=~0.009) conditions. The earth mover's distance (EMD) metrics calculated for each condition support the results of the mixed-model analysis.
\subsection{Cheating Frequency}
The frequency of looking away from the visual target was significantly greater than zero in the Standard condition ($\beta$~=~0.15, $SE$~=~0.03, $p$~$<$~0.001). The Reflex ($\beta$~=~--0.001, $SE$~=~0.04, $p$~=~0.97), Reflex-Vib ($\beta$~=~--0.007, $SE$~=~0.04, $p$~=~0.86), and Reflex-Pneu ($\beta$~=~0.05, $SE$~=~0.04, $p$~=~0.24) conditions did not significantly differ from the Standard condition. Thus, the amount of cheating did not differ significantly across conditions.
\subsection{Survey}
Participants in the Standard condition provided ratings for all survey questions that were significantly different from 0 (see Table \ref{Table:Survey} for complete results). The majority of survey responses did not significantly differ by condition, except for the following few cases. Participants in the Reflex condition rated their ability to set the object down (placing object) as significantly lower than the Standard condition. In a post-hoc test with a Bonferroni correction, participants in the Reflex-Pneu condition rated their use of somatosensory cues as significantly lower than those in the Reflex-Vib condition ($\beta$~=~--14.3, $SE$~=~5.96, $p$~=~0.02).
\section{Discussion}
In this study, we investigated how autonomous reflexes and two different forms of haptic feedback affect performance in a reach-to-pick-and-place task using a myoelectric upper-limb prosthesis without direct visual feedback. Our intent was to replicate tasks where observation is undesirable or impossible. We compared four conditions in a between-subjects study: a standard prosthesis, a prosthesis with reflex controllers to mitigate object slip and excessive grasping (Reflex condition), a prosthesis with reflex controllers and vibrotactile feedback of contact location (Reflex-Vib condition), and a prosthesis with reflex controllers and spatial pressure-based feedback of contact location (Reflex-Pneu condition). We also presented the design and characterization of a novel contact-location sensor that enabled the reflex controllers and haptic feedback.
While the prosthesis with reflex controllers improved the odds of lifting the object compared to the standard prosthesis condition, the prosthesis with reflex controllers and vibrotactile feedback was the only condition to improve performance in both lifting and moving the object to the end bin. The reflex controller that contributed the most to these results was likely the anti-overgrasping controller, as this system was active for every grasp attempt. Contrarily, the slip prevention algorithms were not always triggered for every grasp attempt. The fast slip controller was active approximately three times per trial, while the slow slip prevention was active about two times per trial; this indicates that fast slips were the more common type of slip. Compared to reflexes alone, the vibration feedback improved grasping location accuracy, enabling a more secure grasp that was more robust to disturbances introduced during the transportation phase. This result aligns with previous research, where vibration feedback was shown to be especially relevant for grasp-and-lift tasks, enabling grasp consistency during fragile object manipulation \cite{Engels2019WhenHand}. That no condition outperformed the standard prosthesis in being able to place the object in the end bin is likely due to the fact that the most difficult parts of the task are the lifting and moving stages. Once a participant accomplishes the first two milestones, it is straightforward to place the object in the end bin given enough practice, regardless of the condition. Indeed, only the trial number, which represents task experience, played a significant role in the outcome of placing the object in the end bin.
Contrary to expectations, the modality-matched haptic feedback in the Reflex-Pneu condition resulted in similar performance to the Standard prosthesis condition, indicating that the benefits of the reflex controller were cancelled out by the pneumatic pressure feedback.
Normally, modality-matched haptic feedback is thought to be easier to understand than non-matched feedback \cite{Kim2010OnProsthetics, Stephens-Fripp2018} and has been perceived favorably by amputees \cite{Wijk2020SensoryUse}. However, previous work has also shown no functional benefit of more modality-matched haptic feedback compared to a non-matched modality \cite{Thomas2019}.
Therefore, we postulate that the difference between the vibrotactile and pressure feedback in our study stems from the discriminability of the way feedback was presented: amplitude discrimination of a single tactor versus spatial discrimination of eight bellows. Based on this finding, it seems that not all types of haptic feedback are equal, and some may provide no quantifiable improvement over simpler alternatives.
The Bellowband's pneumatic bellows are 16\,mm in diameter and are spaced 24\,mm apart \cite{Young2019Bellowband:Vibration}. For the age range tested, this is slightly below the spatial discrimination of 30\,mm on the back of the arm \cite{Stevens2009SpatialSpan}. Prior work \cite{Guemann2019EffectAmputees} also reports that the localization accuracy for a circular array of 6 vibrotactile motors equally spaced at 30\,mm around the upper arm was around 42\%. While this result is specifically for 20\,mm diameter vibrotactors, we argue that the pulsed activation of the bellows is similar to vibration feedback. Furthermore, this value likely represents the upper bound for localization accuracy in the Reflex-Pneu condition due to the smaller spacing between bellows and the added difficulty of performing the reach-to-pick-and-place task without direct vision. Moreover, only four of the eight bellows represented the critical region of the inner part of the prosthesis finger. This is in contrast to the vibrotactile feedback, which has eight discriminable amplitude levels based on a difference threshold of 0.3 \cite{Choi2013VibrotactileApplications}. So although the Reflex-Pneu feedback condition was modality-matched with respect to feedback location mapping and feedback type (i.e., pressure), it likely suffered from lower resolution compared to the vibrotactile condition. Survey feedback supports this idea: participants in the Reflex-Pneu condition felt that they used somatosensory cues less than those in the Reflex-Vib condition, indicating a breakdown in the understanding of the localized pressure feedback compared to vibrotactile feedback.
Although one could alter the original design presented in \cite{Young2019Bellowband:Vibration} by changing the spacing between the bellows, this change would also reduce the total number of bellows and consequently the overall resolution of the haptic feedback. Future work to improve the Bellowband includes reducing the size of the bellows themselves so that more bellows can be added while maintaining an appropriate spacing.
Due to the challenges of interpreting and using unfamiliar feedback, participants in both Reflex-Vib and Reflex-Pneu conditions may have seen improvement in performance with extended training on their particular feedback modalities. This additional training may also improve grasping location accuracy over the Standard condition. Several comments in the surveys from both conditions confirmed this uncertainty in using the feedback to precisely determine the correct grasping location. Previous literature has also indicated that more practice with haptic feedback yields substantial benefits \cite{Stepp2012RepeatedPerformance}.
Another possible improvement for vibrotactile feedback in this study would be to customize the mapping function of the sensor signal to the vibration amplitude. The current mapping (Eq. \ref{Eqn:VibMapping}) may not have maximized differences within the critical inner region of the contact-location sensor, as demonstrated in Fig. \ref{fig:characterization}. Furthermore, optimizing the sensor construction for the curved surfaces of the fingers would improve the sensor's activation profile and thus facilitate improved feedback strategies. Finally, future work should include psychometric and psychophysical assessments of the feedback modalities to ensure the feedback works as intended \cite{Marasco2011,Shehata2018,DAnna2019}.
Participants in both the Standard and Reflex-Vib conditions were more likely to correctly orient the prosthetic hand for optimal grasping, while grasping in the Reflex and Reflex-Pneu conditions were the most dissimilar. This could indicate that participants in the Reflex condition employed a suboptimal grasping strategy. The over-grasp reflex controller caused the prosthesis to stop closing once an object made contact with the thumb. This could have encouraged participants to make fast and frequent grasp attempts without much consideration for finger placement. This hypothesis is supported by the observed higher number of drops with the reflex controller compared to the Standard condition, and the significantly lower ratings of participants' ability to set the object in the bin with just the synthetic reflexes. On the other hand, participants in the Standard condition likely realized that fast grasp attempts would cause the object to slip out of the prosthesis' grasp. They were thus incentivized to find an ideal grasping position, which may have made them more aware of incidental mechanical cues transmitted through the prosthesis to their arm. In fact, two participants in the Standard condition remarked that the mechanical sensation of the contact-location sensor touching the object helped them orient the hand relative to the object. Previous research has indicated that incidental feedback is adequate to appropriately tune grasping force levels \cite{Markovic2018}. However, this type of feedback in which mechanical impacts are transmitted through the prosthesis to the user may be dampened if the prosthetic hand is encased in a rubber aesthetic glove. Nevertheless, future research could investigate ways to maximize the discriminability of incidental feedback by customizing fingerpads with ridges, bumps or other mechanical features to assist with localization.
Although there were no statistical differences between the Standard and Reflex-Vib feedback conditions in terms of successful grasping positions, participants in the Reflex-Vib condition still achieved higher performance in lifting and moving the object. Thus, in addition to correctly positioning the prosthesis for grasping, Reflex-Vib participants must have also appropriately modulated their grasping force, likely aided by the synthetic reflexes. Furthermore, participants in the Reflex-Vib condition positioned the prosthesis more accurately than participants in the Reflex and Reflex-Pneu conditions, indicating that reflex controllers alone are not enough to fully optimize performance, and higher resolution haptic feedback is needed. All things considered, without the combination of effective haptic feedback and reflex controllers, the studied task is difficult to perform without direct visual observation. This lack of tactile feedback and control is analogous to how performance deteriorates in reach-to-grasp tasks performed with anesthetized fingers \cite{Gentilucci1997TactileMovements} or by deafferented patients \cite{Parry2021AnticipationNeuropathy,Carteron2016TemporaryTasks,Jeannerod1984THELESION}. Although the experimental task here is closely related to activities of daily living (ADL), future work should also include established tests to evaluate the utility of the system more directly in ADL \cite{Hill2009FunctionalGroup}. Improvements to the grasping controller include allowing the user to override the control to produce larger grip forces that may be required for heavier objects. If the presented results can be validated with amputee users and different types of objects, we believe future myoelectric upper-limb prostheses should include tactile sensing, automatic reflexes, and socket-integrated vibrotactile feedback about contact. In addition, these results can be extended to even multi-grasp myoelectric hands; as in the present study, knowledge of contact location can be used to adjust the prosthesis location for grasping without direct vision. Furthermore, it could even help inform users as to which grasp type is most appropriate given the initial contact point. More broadly, the findings presented here could be used to improve other teleoperated systems such as robotic surgery.
\section*{Acknowledgment}
The authors thank Eric M.\ Young for providing the Bellowband. We also thank the US-German Fulbright Program, the Germanistic Society of America, Mastercard, the National Science Foundation Graduate Fellowship, and the Max Planck Society for funding the first author. Finally, we thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Farimah Fazlollahi.
\vspace{-.1em}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtranNoURL}
|
1,314,259,995,644 | arxiv | \subsection{\large Supplemental Material for ``Higher Order Topology and Nodal Topological Superconductivity in Fe(Se,Te) Heterostructures"}
\section{Full Hamiltonian with Superconductivity and Bicollinear Antiferromagnetism}
The full effective model for the FTS/FT heterostructure is
\bea
H_{\rm FTS} &=& {\cal G}_1 \otimes [\frac{A}{2I}(e^{ik_x}\Gamma_{167}-e^{ik_y}\Gamma_{2})+B(e^{ik_x}+e^{ik_y})\Gamma_5] - {\cal G}_2 \otimes [\frac{A}{2I}(e^{-ik_x}\Gamma_{167}-e^{-ik_y}\Gamma_2)-B(e^{-ik_x}+e^{-ik_y})\Gamma_5] \nonumber \\
&&+ {\cal G}_0 \otimes [(m-4B) \Gamma_5 + \Delta \Gamma_{137} -\mu \Gamma_{67}] + \frac{M}{\sqrt{1+\alpha^2}} {\cal G}_3 \otimes \tau_z \otimes (\sigma_0 + \alpha \sigma_z) \otimes s_x.
\label{Eq: FTS Hamiltonian}
\eea
The ${\cal G}$ matrices describe the sublattice degree of freedom; the hopping matrix elements are described by
\bea
{\cal G}_1 = \begin{pmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
\end{pmatrix},\
{\cal G}_2 = \begin{pmatrix}
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
\end{pmatrix},
\eea
and the onsite matrix elements are described by diagonal matrices ${\cal G}_0 = \text{diag}[1,1,1,1]$ and ${\cal G}_3=\text{diag}[1,1,-1,-1]$. Here $M$ denotes the interlayer exchange coupling between FTS and FT. In particular, ${\cal G}_3$ describes opposite couplings for electrons on sublattices $i=1,2$ and $i=3,4$, which captures the bicollinear AFM texture.
For convenience, we have assumed the alignment of the magnetic moments of FeTe to be along the $\pm {\bf x}$ direction. The model parameter $\alpha \in [-1,1]$ accounts for the possible difference in $g$ factors of $p$-electrons and $d$-electrons. Numerically, we find that changing $\alpha$ has little effect on the physics we are interested in, so without loss of generality we take $\alpha=0$ in our discussion.
\section{Topological Criterion for Higher Order Topology}
The key to realizing corner Majorana physics is understanding the competition between FM and SC on the $\tilde{\bf y}$ edge. It is instructive to start from an effective model of the edge theory
\bea
H_{\tilde{\bf y}} = k_{\tilde{y}} \tau_0 \otimes s_z + \delta_M \tau_z \otimes s_x + \Delta \tau_y\otimes s_y - \mu \tau_z \otimes s_0.
\label{Eq: Edge Theory}
\eea
Here $\delta_M$ is the induced magnetic gap on $\tilde{\bf y}$ edge, which is generally smaller than the exchange coupling effect $M$ in the bulk. For a given $M$, the value of $\delta_M$ can be identified by calculating the energy gap on the $\tilde{\bf y}$ edge at $\mu=\Delta=0$. In fact, we have numerically confirmed the existence of a simple linear relation between $M$ and $\delta_M$,
\bea
\delta_M \approx \beta_M M.
\eea
For our choice of parameters with $A=B=1,m=2$, we find that
\bea
\beta_M = 0.678...
\eea
The eigenvalues of $H_{\tilde{\bf y}}$ can be solved analytically,
\bea
E=\pm\sqrt{k_{\tilde{y}}^2+\delta_M^2+\Delta^2+\mu^2 \pm 2\sqrt{\delta_M^2(\Delta^2+\mu^2)+k_{\tilde{y}}^2\mu^2}}.
\eea
Therefore, the edge topological phase transition happens when the edge energy gap closes. This is equivalent to finding $k_{\tilde{y}}$ such that $E=0$ is satisfied. It is straightforward to show that the equation we are solving is
\bea
k_{\tilde{y}}^4 + 2 k_{\tilde{y}}^2 (\delta_M^2+\Delta^2-\mu^2) + (\delta_M^2-\Delta^2-\mu^2)^2 =0,
\eea
which leads to
\bea
k_{\tilde{y}}^2 &=& - [(\delta_M^2 -\mu^2) + \Delta^2 \pm 2\sqrt{\Delta^2 (\delta_M^2 - \mu^2)}] \nonumber \\
&=& -(\sqrt{\delta_M^2-\mu^2}\pm |\Delta|)^2 \nonumber \\
&\geq& 0.
\eea
Since $k_{\tilde{y}}^2$ cannot be negative, the right-hand side must vanish. Therefore, the topological phase transition can only happen at $k_{\tilde{y}}=0$, which requires
\bea
\delta_M^2 &=& \mu^2 + \Delta^2.
\eea
This leads to the topological criterion shown in the main text
\bea
M^2 > \frac{1}{\beta_M^2} (\mu^2 + \Delta^2).
\eea
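This criterion is straightforward to verify numerically. The following minimal Python sketch, independent of our actual numerics, diagonalizes the $4\times 4$ edge Hamiltonian in Eq. \ref{Eq: Edge Theory} on a grid of $k_{\tilde{y}}$ and confirms that the edge gap closes exactly when $\delta_M^2=\mu^2+\Delta^2$; the parameter values in the demo calls are arbitrary examples.
\begin{verbatim}
import numpy as np

# Pauli matrices, used for both the tau and s spaces.
s0 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sy = np.array([[0., -1j], [1j, 0.]])
sz = np.diag([1., -1.])

def edge_gap(delta_M, Delta, mu, ks=np.linspace(-2, 2, 801)):
    """Minimum |E| of H(k) = k tau0 sz + delta_M tauz sx + Delta tauy sy - mu tauz s0."""
    gap = np.inf
    for k in ks:
        H = (k * np.kron(s0, sz) + delta_M * np.kron(sz, sx)
             + Delta * np.kron(sy, sy) - mu * np.kron(sz, s0))
        gap = min(gap, np.min(np.abs(np.linalg.eigvalsh(H))))
    return gap

print(edge_gap(0.5, 0.4, 0.3))   # delta_M^2 = mu^2 + Delta^2  ->  gap ~ 0
print(edge_gap(0.3, 0.4, 0.3))   # magnetic gap too small      ->  finite gap
\end{verbatim}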
\section{Topological Charge for Topological Nodal Superconductivity}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\textwidth]{Topocharge.pdf}
\caption{(a) Two closed loops that enclose nodal points are shown as white lines. The directions of the loops are shown by white arrows. (b) and (c) show the evolution of the winding phase for loop 1 and loop 2, respectively. Clearly, the nodal point enclosed by loop 1 (loop 2) has a topological charge of $Q=-2$ ($Q=+2$).}
\label{Fig: Topological Charge}
\end{figure}
As discussed in the main text, the effective time-reversal symmetry $\Theta_M$ and particle-hole symmetry $\Pi$ lead to an emergent AFM chiral symmetry ${\cal C}=\Theta_M\Pi$. In the nodal SC phase, a topological charge $Q\in\mathbb{Z}$ can be defined based on ${\cal C}$ to characterize the topological nature of each nodal point. Since $\{{\cal C},H_\text{FTS}\}=0$, the unitary transformation $U_{\cal C}$ that diagonalizes ${\cal C}$ brings $H_\text{FTS}$ into block-off-diagonal form,
\bea
U_{\cal C} H_\text{FTS} U_{\cal C}^{\dagger} = \begin{pmatrix}
0 & N(k) \\
N(k)^{\dagger} & 0 \\
\end{pmatrix}.
\eea
We then perform a singular value decomposition to $N(k)$,
\bea
N(k) = {\cal U} (k) \Sigma (k) {\cal V}^{\dagger} (k)
\eea
and define
\bea
{\cal D}(k) = {\cal U}(k) {\cal V}^{\dagger}(k).
\eea
The topological charge is simply the winding number of $\det N(k)$ along a closed loop ${\cal L}$ that encloses the nodal point, which is mathematically \cite{schnyder2011topological,yu2018singlet}
\bea
Q = \frac{1}{2\pi} \oint_{\cal L} d{\bf k} \cdot \nabla_k \text{Arg}[\det {\cal D}(k)].
\label{Eq: Winding Phase}
\eea
As shown in Fig. \ref{Fig: Topological Charge} (a), we have studied the topological charge of the nodal points for the same nodal phase in Fig. 3 of the main text. We have chosen two counter-clockwise closed loops (white lines) that enclose two inequivalent nodal points to study their winding numbers. To better visualize the winding number, we define a 1d momentum $k_{\cal L}$ along the loop ${\cal L}$ to parametrize it, and further define a winding phase at each $k_{\cal L}$,
\bea
{\cal A}(k_{\cal L}) = \partial_{k_{\cal L}} \text{Arg}[\det {\cal D}(k_{\cal L})]
\eea
to track the evolution of the winding phase integral in Eq. \ref{Eq: Winding Phase}. As shown in Fig. \ref{Fig: Topological Charge} (b) and (c), the evolution of the winding phase along loop 1 and loop 2 clearly shows the topological charge $Q$ of the enclosed nodal points to be $-2$ and $+2$, respectively.
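For completeness, the procedure above can be condensed into a short numerical routine. The sketch below is generic and is demonstrated on a simple two-band stand-in Hamiltonian rather than on $H_\text{FTS}$ itself; applying the same routine to $H_\text{FTS}$ with the AFM chiral operator ${\cal C}$ follows the same steps.
\begin{verbatim}
import numpy as np

def topological_charge(H_of_k, C, loop_k, n_block):
    """Winding of Arg det D(k) along loop_k for a chiral-symmetric H(k)."""
    _, Uc = np.linalg.eigh(C)                   # eigenvectors sorted by chirality
    Uc = Uc[:, ::-1]                            # put the +1 eigenvectors first
    total, prev = 0.0, None
    for k in loop_k:
        Hrot = Uc.conj().T @ H_of_k(k) @ Uc
        N = Hrot[:n_block, n_block:]            # off-diagonal block N(k)
        U, _, Vh = np.linalg.svd(N)
        phase = np.angle(np.linalg.det(U @ Vh)) # Arg det D(k)
        if prev is not None:
            d = phase - prev
            total += (d + np.pi) % (2 * np.pi) - np.pi   # unwrapped increment
        prev = phase
    return total / (2 * np.pi)

# Demo: H(k) = kx sx + ky sy has chiral symmetry C = sz; a loop around the node
# at k = 0 yields a winding of magnitude 1 (sign set by orientation conventions).
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
H = lambda k: k[0] * sx + k[1] * sy
theta = np.linspace(0, 2 * np.pi, 401)
loop = [0.1 * np.array([np.cos(t), np.sin(t)]) for t in theta]
print(round(topological_charge(H, sz, loop, n_block=1)))
\end{verbatim}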
\section{Estimate on the Proximity Induced Exchange Coupling in Fe(Te,Se) Monolayers}
We perform first-principles calculations to obtain an estimate of the proximity-induced exchange coupling in FeTe/Fe(Te,Se) heterostructures. For this purpose, we calculate the energy spectrum of bilayer FeSe with the experimental lattice constant $a=3.905$~\AA. In particular, the FeSe in the bottom layer is assumed to be ferromagnetic, while the top-layer FeSe is initially non-magnetic. Consequently, the spin splitting of the top-layer FeSe bands is expected to provide an approximate measure of the strength of the proximity-induced exchange coupling.
The obtained band structure is displayed in Fig. \ref{band}, where the up and down triangles denote the spin-up and spin-down $d_{xz}/d_{yz}$ states of the top FeSe layer. We find that the magnetic moment of the bottom FeSe layer is about 2.75 $\mu_B$ and the obtained exchange splitting for the top FeSe layer is about 100 meV (as shown by the spacing between the black dashed lines in Fig. \ref{band}). Considering the known overestimation of the Fe magnetic moment in first-principles calculations, we use the magnetic moment measured in neutron scattering experiments (1.65 $\mu_B$ along the $b$ axis, the parallel spin axis)\cite{bao2009tunable} to rescale our first-principles results, which leads to an induced exchange coupling of 60 meV for the top FeSe layer.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{FeSe_bilayer_magnetic_orbital.pdf}
\caption{Band structure for the bilayer FeSe. Initially the bottom layer is ferromagnetic and the top layer is nonmagnetic. The up and down triangles denote the $d_{xz}/d_{yz}$ states of the top FeSe layer. The induced spin splitting is found to be about 100 meV.
\label{band}}
\end{figure}
\section{Stability of Higher Order Topology against Magnetic Disorder}
An important disorder mechanism for our proposed Fe(Se,Te) heterostructure arises from individual flipped spins (relative to the perfectly ordered ground state) in the antiferromagnetic layer. Although previous work has established some robustness of various higher-order topological phases against weak chemical potential disorder, the energy scale associated with a flipped spin in the AFM is necessarily the exchange coupling $M$, which we expect to be relatively large. Therefore, one can reasonably question if the HOTSC phase we predict requires perfect ordering of the bicollinear AFM. To address this issue, we numerically diagonalize $H_{\rm FTS}$ in real space for magnetic textures with a fixed fraction $n_{\rm imp}$ of ``impurity" sites, i.e., randomly selected, uncorrelated lattice points where the coupling $\pm M \rightarrow \mp M$. Following the main text, this is done for a $20 \tilde{a}_y \times 10 \tilde{a}_x$ system, with $(M, \Delta, \mu) = (0.6, 0.2, 0.0)$.
We first characterize the effect of disorder on the edge gap in the spectrum, defined here as the energy of the lowest quasiparticle state above the corner states. In the inset of Fig.~\ref{fig:SI_dis1} we plot the \textit{disorder averaged} edge gap as a function of $n_{\rm imp}$. This average systematically trends downward as one might have expected a priori. More interesting, though, is that in the main panel we histogram the distribution of edge gaps over 500 independent disorder realizations at $n_{\rm imp}=0.05$, and see that it is generally irregular with a few distinct peaks. In other words, most disorder realizations bind low-energy (including zero energy) states well below the clean edge gap $\sim 0.12 B$. Although the \textit{average} gap is necessarily nonzero, the interplay of these individual low-energy states with the Majoranas must be investigated further to see if particular disorder realizations can trivialize them.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{scafmqsh_spectrum_gapdist.pdf}
\caption{Distribution of gaps for 500 disorder realizations at $n_{\rm imp}=0.05$. The distribution is irregular, but demonstrates that additional quasiparticle states at all energies including zero can be generated by disorder. The inset shows the expected downward trend but nonzero value of the disorder averaged gap.
\label{fig:SI_dis1}}
\end{figure}
This motivates us to consider the effect of individual impurities based on their spatial location. In Fig.~\ref{fig:SI_dis2} we plot spectra for a single (a) bulk, (b) AFM edge, and (c) FM edge impurity. We observe that magnetic impurities (in the form of imperfect bicollinear ordering) do not result in bound states in the bulk below the edge gaps as expected. However, if the impurities are located on the edge, then sub-edge-gap bound states appear (shown in insets). We also see in Fig.~\ref{fig:SI_dis2} that these individual bound states have no apparent impact, however, on the Majorana manifold.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{scafmqsh_spectrum_a.pdf}\\
\includegraphics[width=0.4\textwidth]{scafmqsh_spectrum_b.pdf}\\
\includegraphics[width=0.4\textwidth]{scafmqsh_spectrum_c.pdf}
\caption{Spectra of $H_{\rm FTS}$ with a single spin-flip impurity located (a) inside the bulk, (b) along an AFM edge, and (c) along an FM edge. In (a) the clean edge gap persists with no additional low-energy bound states; in contrast for (b) and (c) a single sub-edge-gap bound state accompanies the impurity, and the spatial profile of the bound state is shown in inset.
\label{fig:SI_dis2}}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{scafmqsh_spectrum_gapless.pdf}
\caption{Spectrum of a typical ``gapless'' disorder realization, in the sense that the energy window below the clean edge gap is nearly uniformly filled with states, at $n_{\rm imp} = 0.1$. Nonetheless, the zero modes persist with only slightly decreased localization, as seen from the probability density plotted in the lower-right inset. The upper-left inset shows the disorder averaged IPR for the zero modes, a quantitative measure of localization; up to at least $n_{\rm imp} = 0.1$, increasing impurity density only slightly increases the localization length of the zero modes.
\label{fig:SI_dis3}}
\end{figure}
We investigate this stability of the Majorana states further by quantitatively characterizing the localization of corner modes through their disorder-averaged inverse participation ratio (averaged also over the four Majorana corner states and the eight particle-hole $\otimes$ orbital $\otimes$ spin basis states),
\begin{equation}
{\rm IPR} = \frac{1}{4 \times 8}\sum_{n \in {\rm MZMs}}\left( \sum_{x,\tau \sigma s} \left| \psi_n(x,\tau \sigma s) \right|^4 \right)^{-1}.
\end{equation}
We show in an inset to Fig.~\ref{fig:SI_dis3} that even as the disorder averaged gap decreases rapidly with increasing $n_{\rm imp}$, the averaged IPR (defined to roughly count the number of sites where the wavefunction is nonzero) is essentially flat, corresponding to a negligible effect of any additional low-energy edge bound states on the corner Majoranas. In Fig.~\ref{fig:SI_dis3} we also show results for a single typical ``gapless'' realization ($n_{\rm imp} = 0.1$) with a nearly constant density of states inside the edge gap. Since these subgap states are also localized, they cannot simultaneously hybridize with two corner modes to trivialize them; correspondingly, the combined probability density of the Majorana modes (shown in an inset) still has four disjoint regions of support, but with slightly decreased localization to the corners.
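For concreteness, a minimal sketch of how this quantity can be evaluated from the numerically obtained zero-mode eigenvectors is given below (assuming each normalized eigenvector is stored with its eight internal components per site flattened into a single index).
\begin{verbatim}
import numpy as np

def majorana_ipr(zero_mode_vectors):
    """zero_mode_vectors: array of shape (4, n_sites * 8), one normalized
    eigenvector per Majorana corner mode."""
    total = 0.0
    for psi in zero_mode_vectors:
        total += 1.0 / np.sum(np.abs(psi) ** 4)  # ~ number of occupied components
    return total / (4 * 8)
\end{verbatim}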
In summary, we have demonstrated that the HOTSC phase we uncover does not rely on perfect bicollinear AFM order: even for relatively large $n_{\rm imp}$ the bulk remains essentially inert (since the bulk QSH does not require the AFM texture). The main effect of increasing $n_{\rm imp}$ is then to increase the likelihood that a particular realization will contain edge impurities. These edge impurities bind additional low-energy localized states, which can hybridize with the Majorana corner modes to ``push'' the zero mode away from the corners, but an isolated zero mode cannot be eliminated. Instead, a high density of edge impurities is required to push Majoranas from two separate corners together to trivialize them.
\end{document}
\section{Introduction}\label{sec:Intro}
Entanglement \cite{Guehne.Toth2009,Horodecki.etal2009} is one of the most notable characteristics of quantum theory as compared to classical theory.
Apart from its fundamental importance, quantum entanglement also plays a vital role in various tasks in quantum information processing.
Then, a basic yet crucial question to ask is how to determine whether a given quantum state is entangled or not.
Although considerable results have been obtained, a universal method for checking entanglement is still not available.
A bipartite quantum state $\rho_{AB}$ is called separable if it can be written as a convex combination of product states, i.e.,
\begin{equation}
\rho_{AB}=\sum_{k}p_{k}\ket{a_{k}}\bra{a_{k}}\otimes\ket{b_{k}}\bra{b_{k}}\,,
\end{equation}
where the coefficients $\{p_{k}\}$ form a probability distribution with ${p_{k}\geqslant 0}$ and ${\sum_{k}p_{k}=1}$; otherwise, $\rho_{AB}$ is entangled.
The well-known positive partial transpose (PPT) criterion \cite{Peres1996,Horodecki.eta1996} is both necessary and sufficient for detecting entanglement for the simple ${2\times2}$ and ${2\times3}$ systems, but not for higher dimensions.
For instance, there exist the so-called bound entangled states \cite{Horodecki1998} which are PPT and nondistillable.
The equivalent generalization of PPT, namely the entanglement witness \cite{Guehne.Toth2009,Horodecki.etal2009,Horodecki.eta1996}, is a universal method for detecting an arbitrary entangled state $\rho$ with ${\mathrm{tr}(W\rho) < 0}$, where $W$ is a suitably chosen (but not necessarily unique) Hermitian observable.
While the construction of an entanglement witness can be difficult sometimes, a number of non-universal but easily applicable methods have been proposed.
The most prominent example is the computable cross-norm or realignment (CCNR) criterion \cite{Horodecki.eta2006,Rudolph2005,Albeverio2003}, which is based on the correlations of local orthogonal observables (LOOs).
Then, the local uncertainty relation (LUR) \cite{Hofmann2003,Otfried2006} criterion extends the CCNR by adding extra nonlinear terms which can also be regarded as a natural nonlinear entanglement witness for CCNR.
Another class of linear correlation-based entanglement criteria is constructed using the normalized symmetric informationally complete positive operator-valued measures (SIC POVMs) \cite{Renes2004}, dubbed as the ESIC criterion \cite{Shang.etal2018}; see also Refs.~\cite{chen2015,li2020entanglement}.
Very recently, a general approach for checking separability has been taken by considering the linear correlations of specific operators in Refs.~\cite{SarbickiGniewomir2020,Sarbicki2020}, which incidentally covers both the CCNR and the ESIC criteria.
Apart from SIC POVMs, another extension for entanglement detection is via mutually unbiased bases (MUBs) \cite{William1989,chen2013,tao2015,Spengler2012,Bennett1999,Chen2014}, both of which are special cases of quantum $2$-designs.
For instance, a measurement-device-independent entanglement detection method is discussed in Ref.~\cite{Bae2019} using SIC POVMs and MUBs.
Recently, a method for characterizing multipartite entanglement based on random moments calculated from quantum designs has been proposed \cite{Ketterer2019,Ketterer2020}.
Generally speaking, however, the random moments are not directly measurable in experiments.
Therefore, here we focus on the correlations defined via quantum designs from an operational point of view, namely to ease experimental realizations.
Specifically, we propose several new entanglement criteria using quantum $2$-designs by extending the CCNR, ESIC, and LUR criteria.
This paper is organized as follows.
We first briefly review the concept of quantum designs in Sec.~\ref{sec:Design}, with a special emphasis on the discussion of SIC POVMs.
In Sec.~\ref{sec:EntCriteria}, upon recalling some well-known entanglement criteria including the CCNR, ESIC, and LUR, we construct several new criteria using quantum designs.
Then these criteria are tested using various bipartite entangled states in Sec.~\ref{sec:Appl},
and we conclude in Sec.~\ref{sec:Summary}.
\section{Quantum Designs}\label{sec:Design}
A design is an important mathematical concept that can be used to imitate uniform averages over certain groups, which in turn can be regarded as a pseudorandom process.
Designs are classified as either unitary or spherical, depending on which group one chooses.
For qubit systems, the local measurement settings can be characterized over the Bloch sphere, so it is more convenient to use spherical designs than unitary designs.
A spherical $t$-design is a collection of points on the unit sphere such that averaging any polynomial of degree at most $t$ over these points yields the same value as integrating it over the sphere with the uniform measure.
Formally, a probability distribution over the set of quantum states $(p_i,\ket{\phi_i})$ is a quantum spherical $t$-design if
\begin{eqnarray}
\sum_i p_i (\ket{\phi_i}\bra{\phi_i})^{\otimes t}=\int_{\psi} (\ket{\psi}\bra{\psi})^{\otimes t}\mathrm{d}\psi\,,
\end{eqnarray}
where the integral over $\ket{\psi}$ is taken with respect to the Haar measure on the unit sphere \cite{Ambainis.etal2007}.
We can associate the complete set of points of a spherical $t$-design as a measurement $\{\widetilde{\Pi}_k\}_{k=1}^N$, where $N$ denotes the number of settings.
For concreteness and later use, we define
\begin{equation}
\Pi_k=\sqrt{\frac{N(d+1)}{2d}}\widetilde{\Pi}_k
\end{equation}
as the normalized version of the $t$-design, where $d$ is the dimension.
Then, for a given quantum state $\rho$, the probability of obtaining the outcome $\Pi_k$ is given by the Born rule
\begin{equation}
p_k=\mathrm{tr}(\rho\Pi_k)=\langle\Pi_k\rangle\,.
\end{equation}
For pure states, one has \cite{Slomczynski2020}
\begin{equation}\label{eq:probS}
\sum_{k=1}^N p_k^2 = 1\,.
\end{equation}
In this work, we choose ${t=2}$ in particular to focus on the investigation of bipartite entanglement.
Among the typical examples of quantum $2$-designs are SIC POVMs and MUBs \cite{William1989,chen2013,tao2015}.
Both of them have been demonstrated being useful for entanglement detection \cite{Shang.etal2018,Spengler2012,Bennett1999,Chen2014}.
Here we look at SIC POVMs in specific.
A SIC POVM in dimension $d$ consists of $d^2$ subnormalized projectors $\ket{\psi_k}\bra{\psi_k}/d$ with equal pairwise fidelity, such that
\begin{equation}
|\langle\psi_i|\psi_j\rangle|^2=\frac{d\delta_{ij}+1}{d+1}\,,\quad i,j=1,2,...,d^2\,.
\end{equation}
One can check that the normalized version of the SIC POVM that satisfies the condition in Eq.~\eqref{eq:probS} takes on the form
\begin{equation}\label{eq:SIC}
E_k=\sqrt{\frac{d+1}{2d}}\ket{\psi_k}\bra{\psi_k}\,.
\end{equation}
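As a quick illustration (not needed for the derivations that follow), the Python sketch below constructs the familiar tetrahedral SIC POVM for a qubit, applies the normalization of Eq.~\eqref{eq:SIC}, and verifies Eq.~\eqref{eq:probS} numerically for a random pure state.
\begin{verbatim}
import numpy as np

paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

def projector(n):
    return 0.5 * (np.eye(2) + sum(ni * s for ni, s in zip(n, paulis)))

d = 2
E = [np.sqrt((d + 1) / (2 * d)) * projector(n) for n in bloch]

v = np.random.randn(2) + 1j * np.random.randn(2)
rho = np.outer(v, v.conj()) / np.vdot(v, v)          # random pure state

p = np.array([np.trace(rho @ Ek).real for Ek in E])
print(np.sum(p ** 2))                                # -> 1.0 up to rounding
\end{verbatim}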
Although widely believed and supported by numerical evidence, the existence of SIC POVMs in every finite dimension remains an open problem \cite{ZaunerThesis2011}.
For a recent review, see Refs.~\cite{Fuchs.etal2017, Ling.etal2006, Medendorp.etal2011, Bent.etal2015}.
\section{Correlation-based entanglement criteria}\label{sec:EntCriteria}
As discussed earlier, entanglement criteria constructed from correlations are particularly relevant for ease of experimental realization.
To be specific, in this work we first re-investigate the CCNR and ESIC criteria \cite{Shang.etal2018} which are linear, as well as the LUR criterion \cite{Otfried2006} which is nonlinear.
Then, these criteria are extended straightforwardly by utilizing quantum $2$-designs, and the question whether the new criteria thus obtained are improved or not naturally follows.
\subsection{Linear criteria}
Consider a bipartite quantum state $\rho_{AB}$ with the dimension ${d=d_A\times d_B}$, where $d_A$ and $d_B$ represent the local dimensions of the subsystems ${\rho_A=\mathrm{tr}_B(\rho_{AB})}$ and ${\rho_B=\mathrm{tr}_A(\rho_{AB})}$ respectively.
Let $\{M_k^A\}_{k=1}^{K_A}$ and $\{M_k^B\}_{k=1}^{K_B}$ denote the local operations acting on the two subsystems.
Then, the linear correlation matrix between these two measurements can be written as
\begin{equation}
[\mathcal{C}]_{ij}=\langle M_{i}^A\otimes M_{j}^B\rangle=\mathrm{tr}\bigl(\rho_{AB}M_{i}^A\otimes M_{j}^B\bigr),
\end{equation}
the size of which is $K_A\times K_B$.
Without loss of generality, we assume $K_A=K_B=K$ unless specified otherwise.
Then, we have the following proposition.
\begin{proposition}\label{pro1}
Let $\{M^A\}$ and $\{M^B\}$ be the properly normalized local measurements acting on the bipartite state $\rho_{AB}$.
If $\rho_{AB}$ is separable, then
\begin{equation}
||\mathcal{C}||_{\mathrm{tr}}\leqslant 1
\end{equation}
has to hold; otherwise, it is entangled.
The symbol $||\!\cdot\!||_{\mathrm{tr}}$ denotes the trace norm.
\end{proposition}
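In practice, Proposition~\ref{pro1} is straightforward to evaluate numerically. A minimal sketch, for generic lists of normalized local operators, reads
\begin{verbatim}
import numpy as np

def correlation_matrix(rho_AB, MA, MB):
    """[C]_{ij} = tr( rho_AB  M^A_i (x) M^B_j )."""
    C = np.zeros((len(MA), len(MB)))
    for i, A in enumerate(MA):
        for j, B in enumerate(MB):
            C[i, j] = np.trace(rho_AB @ np.kron(A, B)).real
    return C

def violates_prop1(rho_AB, MA, MB, tol=1e-10):
    """True if ||C||_tr > 1, i.e. rho_AB is flagged as entangled."""
    s = np.linalg.svd(correlation_matrix(rho_AB, MA, MB), compute_uv=False)
    return s.sum() > 1 + tol
\end{verbatim}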
Depending on the local measurements one chooses, different entanglement criteria can be derived from Proposition~\ref{pro1}.
For instance, if $\{M^A\}$ and $\{M^B\}$ are LOOs,
we get the CCNR criterion \cite{Horodecki.eta2006,Rudolph2005,Albeverio2003}.
The LOOs can be found by invoking the Schmidt decomposition (assuming $d_A\leqslant d_B$), such that
\begin{equation}
\rho_{AB}=\sum_{k=1}^{d_A^2} \lambda_k G_{k}^{A}\otimes G_{k}^{B},
\end{equation}
where ${\lambda_k=\langle G_k^A\otimes G_k^B\rangle}$ are the Schmidt coefficients.
It is easy to check that the set of orthonormal bases of the Hermitian observables $\{G_k^A\}$ and $\{G_k^B\}$ fulfill the conditions
\begin{equation}
\mathrm{tr}(G_k^A G_l^A)=\mathrm{tr}(G_k^B G_l^B)=\delta_{kl}\,,
\end{equation}
and
\begin{equation}
\sum_k(G_k^A)^2=d_A\openone\,,\quad\sum_k(G_k^B)^2=d_B\openone\,.
\end{equation}
Hence, an equivalent form of the CCNR criterion is given by
\begin{equation}
\sum_k\lambda_k\leqslant 1\,,
\end{equation}
as the Schmidt coefficients $\lambda_k$ are precisely the singular values of $\mathcal{C}$.
If, instead, one chooses the local measurements to be the normalized SIC POVMs as in Eq.~\eqref{eq:SIC}, the ESIC criterion proposed in Ref.~\cite{Shang.etal2018} is recovered.
As demonstrated by various examples in Ref.~\cite{Shang.etal2018}, the ESIC criterion is more powerful than CCNR.
Here we take a step further and ask whether the criterion defined in Proposition~\ref{pro1} can be improved further if quantum $2$-designs other than SIC POVMs are utilized.
To distinguish it from the ESIC criterion, we dub the one using general quantum $2$-designs E$2$D.
\subsection{Nonlinear criteria}
The LUR criterion proposed in Ref.~\cite{Otfried2006} makes use of LOOs $\{G_k^A\}$ and $\{G_k^B\}$, such that for separable states,
\begin{equation}\label{eq:LUR}
1-\sum_k\langle G_k^{A}\otimes G_k^{B}\rangle-\frac{1}{2}\sum_k\langle G_k^{A}\otimes \openone-\openone\otimes G_k^{B}\rangle^2\geqslant0
\end{equation}
has to hold.
One notices that LUR is strictly stronger than CCNR due to the extra nonlinear quadratic term in Eq.~\eqref{eq:LUR}.
In view of this fact as well as the fact that ESIC is stronger than CCNR, we try to reformulate the LUR criterion by replacing the LOOs with the normalized SIC POVMs.
See the following proposition, which we dub the LSIC criterion.
The detailed derivation is postponed to Appendix~\ref{App:proofLSIC}.
\begin{proposition}[LSIC]\label{LSIC}
Let $\{E_k^A\}$ and $\{E_k^B\}$ be the normalized SIC POVMs acting on two subsystems.
If a bipartite state $\rho_{AB}$ is separable, then
\begin{equation}\label{eq:LSIC}
1+\sum_k\langle E_k^A\otimes E_k^B\rangle-\frac{1}{2}\sum_k\langle E_k^A\otimes\openone+\openone\otimes E_k^B\rangle^2\geqslant0
\end{equation}
has to hold; otherwise, it is entangled.
\end{proposition}
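A direct numerical implementation of the left-hand side of Eq.~\eqref{eq:LSIC} is equally simple; the sketch below assumes that normalized SIC elements (such as those constructed above for a qubit) are supplied for both subsystems.
\begin{verbatim}
import numpy as np

def lsic_value(rho_AB, EA, EB, dA=2, dB=2):
    """Left-hand side of the LSIC inequality; negative => entangled."""
    IA, IB = np.eye(dA), np.eye(dB)
    val = 1.0
    for Ak, Bk in zip(EA, EB):
        val += np.trace(rho_AB @ np.kron(Ak, Bk)).real
        mean = np.trace(rho_AB @ (np.kron(Ak, IB) + np.kron(IA, Bk))).real
        val -= 0.5 * mean ** 2
    return val
\end{verbatim}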
Notice the sign differences in Eq.~\eqref{eq:LSIC} as compared to Eq.~\eqref{eq:LUR}.
Similarly, here, we are interested in the question whether the criterion as defined in Proposition~\ref{LSIC} can be further improved if quantum $2$-designs besides SIC POVMs are utilized.
To distinguish it from the LSIC criterion, we dub the one with general quantum $2$-designs L$2$D.
\section{Applications}\label{sec:Appl}
In this section, we test various entanglement criteria proposed above using simple $2\times2$, $3\times3$, and $2\times3$ entangled quantum states.
\subsection{$2\times2$ entangled states}
For the first application, we consider the noisy $2\times2$ quantum states with the form \cite{Otfried2006}
\begin{equation}\label{eq:rho2qb}
\rho_{\rm{2qb}}(p)=p\ket{\psi}\bra{\psi}+(1-p)\rho_{s}\,,
\end{equation}
where the entangled state $\ket{\psi}$ can be set to be one of the Bell states,
\begin{align}
\ket{\psi^{\pm}}&=\frac1{\sqrt{2}}\bigl(\ket{01}\pm\ket{10}\bigr)\,,\\
\ket{\phi^{\pm}}&=\frac1{\sqrt{2}}\bigl(\ket{00}\pm\ket{11}\bigr)\,,
\end{align}
and the separable noise $\rho_{s}$ is given by
\begin{equation}
\rho_{s}=\frac{2}{3}\ket{00}\bra{00}+\frac1{3}\ket{01}\bra{01}\,.
\end{equation}
Using the PPT criterion, these four families of states can be checked to be entangled for any ${p>0}$.
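The thresholds reported below can be reproduced, for any of the criteria, by a simple scan over $p$. A sketch for the states of Eq.~\eqref{eq:rho2qb}, reusing the qubit SIC elements \texttt{E} and the routine \texttt{violates\_prop1} sketched above and assuming detection is monotone in $p$, is
\begin{verbatim}
import numpy as np

ket = lambda bits: np.eye(4)[int(bits, 2)]
psi_minus = (ket('01') - ket('10')) / np.sqrt(2)
rho_s = (2/3) * np.outer(ket('00'), ket('00')) + (1/3) * np.outer(ket('01'), ket('01'))

def rho_2qb(p, psi=psi_minus):
    return p * np.outer(psi, psi) + (1 - p) * rho_s

def threshold(test, steps=40):
    lo, hi = 0.0, 1.0                 # bisection on the detection boundary
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if test(rho_2qb(mid)) else (mid, hi)
    return hi

# e.g. threshold(lambda r: violates_prop1(r, E, E)) for the ESIC column
\end{verbatim}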
The number of elements of the quantum $2$-designs that we choose for testing are ${N=4,7,9}$ respectively.
Note that when ${N=4}$, it is simply the SIC POVM.
Table~\ref{tab:2qb} shows the threshold values of $p$ reported by various criteria.
The smaller the threshold is, the better the corresponding criterion is.
Several other features can be observed from the table.
First, ESIC is exactly equivalent to E$2$D, as is the pair of LSIC and L$2$D.
This tells us that using quantum $2$-designs with more settings like ${N=7,9}$ is not helpful for improving the detection power as compared to SIC POVM with ${N=4}$.
With this simple example, we provide additional evidence for the potentially unique role played by SIC POVMs in quantum information processing \cite{appleby2017}.
Second, except for the case of $\ket{\psi^{+}}$, LUR performs better than ESIC.
This observation, to some extent, refutes our intuition that nonlinear criteria such as LUR are always better than linear ones like ESIC.
Next, both the ESIC and LUR criteria are better than CCNR.
Finally, the LSIC and L$2$D criteria are completely ineffective for detecting certain entangled states.
\begin{table}[t]
\caption{\label{tab:2qb}Threshold values of $p$ for detecting the entangled states as in Eq.~\eqref{eq:rho2qb} using various criteria.\footnote{Note that the LOOs used in the LUR criterion are different for each case by invoking the Schmidt decomposition.} Since the states are entangled for any ${p>0}$ with PPT, the smaller the threshold is, the better the corresponding criterion is.}
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}cccccccccc
\hline\hline
&& PPT & CCNR & ESIC & E$2$D & LUR & LSIC & L$2$D \\[2pt]
\hline
&$\ket{\psi^{-}}$ & $0$ & $0.2918$ & $0.2678$ & $0.2678$ & $0.2501$ & $0.2501$ & $0.2501$ &\\[2pt]
&$\ket{\psi^{+}}$& $0$ & $0.2918$ & $0.2678$ & $0.2678$ & $0.2779$ & $1$ & $1$ &\\[2pt]
&$\ket{\phi^{\pm}}$ & $0$ & $0.2164$ & $0.2053$ & $0.2053$ & $0.2028$ & $1$ & $1$ &\\[2pt]
\hline\hline
\end{tabular*}%
\end{table}
\begin{table}[h]
\caption{\label{tab:2qbrandom}For the $50\,000$ randomly generated $2\times2$ entangled states, the values in the table show the proportions that can be detected by various criteria.}
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}ccccccccc
\hline\hline
&PPT & CCNR & ESIC & E$2$D & LUR & LSIC & L$2$D &\\[2pt]
\hline
&$100\%$ & $86.39\%$ & $88.52\%$ & $88.52\%$ & $87.48\%$ & $3.86\%$ & $3.86\%$ &\\[2pt]
\hline\hline
\end{tabular*}
\end{table}
To go a step further, we randomly generate $50\,000$ entangled two-qubit states according to the Haar measure, then check the proportions that can be detected by various criteria; see the results in Table~\ref{tab:2qbrandom}.
Apart from the same features that we can draw as those in Table~\ref{tab:2qb}, we find that ESIC and E$2$D are able to detect more states as compared to LUR and CCNR.
The LSIC and L$2$D criteria, however, are the weakest among all.
Moreover, all the states that can be detected by LSIC and L$2$D can also be detected by all the other criteria.
Here, we emphasize that SIC POVMs may play a fundamental role for detecting entanglement considering the criteria that we investigate in this work.
Experimentally, this special feature automatically provides the minimal number of settings that one should choose for the task of entanglement detection.
\subsection{$3\times3$ entangled states}
We move on to consider ${3\times3}$ entangled quantum states.
In dimension ${d=3}$, there exist three different families of SIC POVMs, from which we arbitrarily choose one for testing.
For each SIC POVM, the number of elements is given by ${N=9}$.
For other quantum $2$-designs that we employ for the E$2$D and L$2$D criteria, we superimpose one set of SIC POVM over another (with proper rotations) to get a measurement with ${N=18}$ elements.
Note that the arbitrariness in choosing quantum $2$-designs is ensured by its rotational symmetry \cite{Ambainis.etal2007}.
\subsubsection{Bound entangled states}
We first consider the ${3\times3}$ bound entangled states \cite{Bennett1999} mixed with white noise
\begin{equation}\label{eq:BE}
\rho(p)=p\rho_{\rm{BE}}+(1-p)\frac{\openone}{9}\,,
\end{equation}
where
\begin{equation}
\rho_{\rm{BE}}=\frac1{4}\!\left(\openone-\sum_{i=0}^4\ket{\psi_i}\bra{\psi_i}\right)\!,
\end{equation}
with
\begin{align}
&\ket{\psi_0}=\frac1{2}\ket{0}(\ket{0}-\ket{1})\,,\quad \ket{\psi_1}=\frac1{2}(\ket{0}-\ket{1})\ket{2}\,,\nonumber\\
&\ket{\psi_2}=\frac1{2}\ket{2}(\ket{1}-\ket{2})\,,\quad \ket{\psi_3}=\frac1{2}(\ket{1}-\ket{2})\ket{0}\,,\nonumber\\
&\ket{\psi_4}=\frac1{3}(\ket{0}+\ket{1}+\ket{2})(\ket{0}+\ket{1}+\ket{2})\,.
\end{align}
Table~\ref{tab:BE} shows the threshold values of $p$ reported by various criteria.
Similar to Table~\ref{tab:2qb}, the smaller the threshold is, the better the corresponding criterion is.
One finds that ESIC and E$2$D are equivalent and better than LUR, and all of them are better than CCNR.
For this particular entangled state, the LSIC and L$2$D criteria are completely ineffective.
\begin{figure}[t]
\includegraphics[width=.96\columnwidth]{Fig1}
\caption{\label{fig:chessboard}Entanglement detection of the ${3\times3}$ chessboard states.
For the $50\,000$ randomly generated states, the plot shows the fractions that are detected by various criteria.
Within numerical fluctuations, the PPT criterion fails completely.
The ESIC and E$2$D criteria can detect roughly $1\%$ more states than that of LUR, and roughly $2\%$ more than that of CCNR.
}
\end{figure}
\begin{table}[h]
\caption{\label{tab:BE} Threshold values of $p$ for detecting the $3\times3$ bound entangled states mixed with white noise as in Eq.~\eqref{eq:BE} using various criteria. The smaller the threshold is, the better the corresponding criterion is.}
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}ccccccccc
\hline\hline
&PPT & CCNR & ESIC & E$2$D & LUR & LSIC & L$2$D &\\[2pt]
\hline
&$1$ & $0.8897$ & $0.8844$ & $0.8844$ & $0.8885$ & $1$ & $1$ &\\[2pt]
\hline\hline
\end{tabular*}
\end{table}
\subsubsection{Chessboard states}
Next we consider the ${3\times3}$ chessboard states defined as \cite{PhysRevA.61.030301}
\begin{equation}
\rho_{\rm{chess}}=\mathcal{N}\sum_{j=1}^4|V_j\rangle\langle V_j|\,,
\end{equation}
where $\mathcal{N}$ is the normalization coefficient and the unnormalized vectors $\ket{V_j}$ are
\begin{align}
\ket{V_1} &=\ket{v_5,0,v_1v_3/v_6;0,v_6,0;0,0,0}\,,\nonumber\\
\ket{V_2} &=\ket{0,v_1,0;v_2,0,v_3;0,0,0}\,,\nonumber\\
\ket{V_3} &=\ket{v_6,0,0;0,-v_5,0;v_1v_4/v_5,0,0}\,,\nonumber\\
\ket{V_4} &=\ket{0,v_2,0;-v_1,0,0;0,v_4,0}\,.
\end{align}
We randomly generate $50\,000$ chessboard states with the six parameters $v_k$s taking values independently from a Gaussian distribution with standard deviation two and mean zero.
Figure~\ref{fig:chessboard} illustrates the fractions of states that are detected by various criteria.
Within numerical fluctuations, we find that the PPT criterion fails completely.
Again, the ESIC and E$2$D criteria are exactly equivalent.
They can detect roughly $1\%$ more states than that of LUR, and roughly $2\%$ more than that of CCNR.
Moreover, the LSIC and L$2$D criteria can hardly detect any chessboard states, thus are not shown in the figure.
\subsubsection{Horodecki states}
The ${3\times3}$ bound entangled states introduced by Horodecki are given by \cite{HORODECKI1997333}
\begin{equation}
\rho_{\rm{PH}}^x=
\frac{1}{8x+1}\!\left(\!\begin{array}{ccccccccc}
x & 0 & 0 & 0 & x & 0 & 0 & 0 & x \\
0 & x & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & x & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & x & 0 & 0 & 0 & 0 & 0 \\
x & 0 & 0 & 0 & x & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & x & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{1+x}{2} & 0 & \frac{\sqrt{1-x^2}}{2} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & x & 0 \\
x & 0 & 0 & 0 & x & 0 & \frac{\sqrt{1-x^2}}{2} & 0 & \frac{1+x}{2} \\
\end{array}\!\right)\!,
\end{equation}
with the parameter ${0< x < 1}$.
Although these states cannot be detected by the PPT criterion and are not distillable, they are nevertheless all entangled.
Consider the mixture of $\rho_{\rm{PH}}^x$ with white noise
\begin{equation}\label{eq:PH}
\rho(x,p)=p\rho_{\rm{PH}}^x+(1-p)\frac{\openone}{9}\,,\quad 0\leqslant p\leqslant 1\,.
\end{equation}
In Fig.~\ref{fig:Horodecki}, we show the parameter ranges that are detected by various criteria.
All the states above the curves can be detected by the corresponding criterion.
One finds that the ESIC and E$2$D criteria are exactly equivalent, and both of them are better than LUR.
The CCNR criterion is the worst among all.
Moreover, the LSIC and L$2$D criteria can hardly detect any states, thus are not shown in the figure.
\begin{figure}[t]
\includegraphics[width=.96\columnwidth]{Fig2}
\caption{\label{fig:Horodecki}Entanglement detection of the ${3\times3}$ bound entangled Horodecki states mixed with white noise as in Eq.~\eqref{eq:PH}.
States above the curves can be detected by the corresponding criterion.
The ESIC and E$2$D criteria are exactly equivalent, and they are better than LUR.
The CCNR criterion is the worst among all.
}
\end{figure}
\subsection{$2\times3$ entangled states}
For the last application, we consider the $2\times3$ entangled states.
In this case, the LUR, LSIC, and L$2$D criteria simply do not apply, since all of them require the two subsystems to have equal dimensions.
We randomly generate $50\,000$ entangled states according to the Haar measure, and the values shown in Table~\ref{tab:2by3} represent the proportions that can be detected by various criteria.
Once more, we confirm that the ESIC criterion is exactly equivalent to the E$2$D criterion, and both are better than CCNR.
\begin{table}[h]
\setlength{\tabcolsep}{3mm}
\caption{\label{tab:2by3}For the $50\,000$ randomly generated $2\times3$ entangled states, the values in the table show the proportions that can be detected by various criteria.}
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}cccccc
\hline\hline
&PPT & CCNR & ESIC & E$2$D &\\[2pt]
\hline
&$100\%$ & $38.13\%$ & $41.62\%$ & $41.62\%$ &\\[2pt]
\hline\hline
\end{tabular*}
\end{table}
\section{Summary}\label{sec:Summary}
From an operational point of view, we constructed several new entanglement criteria in this work.
We first generalized the ESIC criterion \cite{Shang.etal2018} to the E$2$D criterion using quantum $2$-designs.
Then, based on the LUR criterion \cite{Hofmann2003,Otfried2006}, we proposed the LSIC criterion using SIC POVMs, and more generally the L$2$D criterion using quantum $2$-designs.
Counter-intuitively, the E$2$D and L$2$D criteria with more settings are exactly equivalent to the corresponding ESIC and LSIC criteria respectively.
Fundamentally, this observation highlights the potentially unique role played by SIC POVMs in quantum information processing.
Experimentally, this provides the minimal number of settings that one should choose for entanglement detection.
Moreover, we find that nonlinear criteria such as LUR are not always better than linear ones like ESIC.
As an outlook, it is interesting to re-investigate the corresponding criteria using quantum $t$-designs with ${t > 2}$ for multipartite entanglement detection,
for which we would also expect no improvement over that of the simplest setting of SIC POVMs.
\acknowledgments
We are grateful to Ali Asadian and Huangjun Zhu for helpful discussions.
This work was supported by the National Natural Science Foundation of China through Grant No.~11805010.
\section{Introduction}
A black hole spacetime is characterized in Fig. 1, which shows the Schwarzschild
spacetime in Eddington-Finkelstein coordinates. Contracting ellipses,
depicting a gravitationally collapsing spherical star, are shown to form a
marginally outer trapped null surface called the Event Horizon (EH). Local null
cones are shown to align with the EH, exhibiting the trapping behaviour
characteristic of such spacetimes. Inside the EH the cones tilt further and
shrink, showing how the causal structure of spacetime is about to disappear at
the singularity, shown as the dotted vertical line in the middle of the diagram,
which is the inevitable fate of the collapsing star and anything else entering
the EH.
\begin{figure}
\begin{center}
\psfig{file=bhbw1.eps,width=5cm}
\end{center}
\caption{Schwarzschild spacetime in Eddington coordinates}
\label{aba:fig1}
\end{figure}
An alternative view of the same spacetime is shown in Fig. 2 in conformal
coordinates which allows us to discuss asymptopia as a surface in a well-defined
part of spacetime. ${\cal I}^{\pm}$ are asymptotic future and past null
infinities for an asymptotically flat spacetime. A spherical collapse is
shown, together with the EH and also the singularity which clearly shows the
incompleteness of this spacetime vis-a-vis null and timelike geodesics
entering the EH. As shown in the figure, the black hole spacetime is defined
as the set of events of the universe ${\cal M}$ which do not lie in the
chronological past of ${\cal I}^+$, i.e., from which information (as light
signals say) never make it to future null infinity. This is the region ${\cal
B}$ shown in the figure. The EH is just the boundary of this spacetime region.
\begin{figure}
\begin{center}
\psfig{file=grcol1.eps,width=5cm}
\end{center}
\caption{Schwarzschild spacetime: conformal diagram}
\label{aba:fig2}
\end{figure}
As Subrahmanian Chandrasekhar puts it so eloquently in his treatise
Mathematical Theory of Black Holes,
\noindent {\it Black holes ... are the most perfect macroscopic
objects there are in the universe. The only elements in their
construction are our notions of space and time ... and because they
appear as ... family of exact solutions of Einstein's equation, they
are the simplest objects as well.}
Yet black hole spacetimes have
\begin{itemize}
\item Singularities, where all known laws of physics break down.
\item Event horizon : boundary of spacetime accessible to asymptotic observers.
\end{itemize}
\noindent {\it It is unlikely that black holes can be
understood on the basis of {\bf classical GR} even though their
horizons may have macroscopic cross-sectional areas !}
Black holes have a further conundrum associated with the EH: the theorems on
Black Hole Mechanics \cite{bch72} derived from general relativity state that
\begin{eqnarray*}
\delta {\cal A}_{EH} ~& \geq &~ 0 \cr
\kappa_{EH} ~&=&~ const \cr
\delta M ~&=&~ \kappa_{EH}~ \delta {\cal A}_{EH} + \cdots
\end{eqnarray*}
While these theorems exhibit an intriguing analogy with the laws of
thermodynamics, in reality there is no room for microstates in classical
general relativity for a family of exact solutions of Einstein's equation.
In the early 1970s Bekenstein \cite{bek73} declared that {\bf Black holes
(must) have entropy}. The main argument is based on the Generalized Second Law
of Thermodynamics: $\delta(S_{out}~+~ S_{bh}) ~ \geq~ 0,$ where $S_{out}$ is
the entropy of all matter and radiation outside the EH. Clearly, the existence of $S_{bh}$ is
essential for this law. In order for this entropy to respect the second law
of black hole mechanics, independent of black hole parameters, it must be
proportional to the horizon area
\begin{eqnarray}
S_{bh} ~=~ { {\cal A}_{hor} \over 4 l_P^2 }~ (k_B=1)~ \label{bhal}
\end{eqnarray}
\noindent where, $l_P \equiv (G \hbar /c^3)^{1/2} \sim 10^{-33} cm
\Rightarrow$ quantum gravity is necessary to provide the microstates whose
counting may eventually lead to black hole entropy. In any case, this implies that
one needs to go beyond classical general relativity in order to make sense of
the entropy of black hole spacetimes. Thus, black hole physics is a far
more compelling reason for quantizing spacetime geometry than
aesthetic reasons based on the unification of fundamental interactions, for which
there is hardly any observational evidence.
Two issues that will have to be addressed imperatively before this idea can be
implemented:
\begin{itemize}
\item {\bf What degrees of freedom contribute to $S_{bh}$ ?}
\item {\bf How is it that $S_{bh}~=~S_{bh}(A_{hor})$ while
$S_{thermo}=S_{thermo}(vol)$ ?}
\end{itemize}
\section{Holography: a different approach}
A possible answer to the second question is provided by the so-called
{\bf Holographic Hypothesis} \cite{thf93}, \cite{bou02}, stated as follows \cite{thf93},
\noindent {\it ... Given any closed surface, we can
represent all that happens (gravitationally) inside it by degrees of
freedom on this
surface itself. This ... suggests that quantum gravity should be
described by a {\bf topological} quantum field theory in which all
(gravitational) degrees of freedom are projected onto the boundary.}
However, rather than use this as a working hypothesis, we adopt an alternative
viewpoint. We
\begin{itemize}
\item Propose : {\bf Holography is an outcome of the diffeomorphism invariance
of general relativity}. A version can be {\it derived} (heuristically).
\item We show: how gravitational degrees of freedom are projected to the
boundary for a particular model of the boundary known as an Isolated Horizon.
We also argue how these boundary degrees of freedom
are described by a three dimensional topological gauge theory on the
boundary, thus providing an explicit demonstration of a gravitation theory and
gauge theory connection. Once again, this is not a conjecture.
\item Finally, we discuss important implications of this connection for $S_{bh}$.
\end{itemize}
\subsection{The proposal}
Diffeomorphism invariance $\Rightarrow$ {\bf there is no covariantly conserved
energy-momentum tensor for vacuum spacetimes in the bulk in full nonlinear general
relativity}. Indeed, on the phase space of general relativity, diffeomorphism
generators appear as first class constraints. The Hamiltonian for bulk
spacetime is expressed as a linear combination of first class constraints,
\begin{eqnarray}
H_v &=& \int_{\cal S} \left[ N {\cal H} + {\bf N} \cdot {\bf P} \right] \\
&\approx& 0 ~\mbox{when}~ {\cal H} \approx 0,~ {\bf P} \approx 0 ~\label{ham}
\end{eqnarray}
where ${\cal H},{\bf P}$ are diffeomorphism generators and $N (\rm{lapse}), {\bf N}
(\rm{shift})$ are Lagrange multipliers. In other words, there is no analogue
of ${\bf E}^2 + {\bf B}^2$ in vacuum general relativity in the bulk. Thus,
\begin{eqnarray}
H_{GR} ~=~ \underbrace{H_v}_{bulk} ~+~ \underbrace{H_b}_{boundary} ~. \label{b+bd}
\end{eqnarray}
On the constraint surface, $H_{GR} \approx H_b$, which implies that primary
excitations of quantum general relativity are not particle-like,
but extended, as in non-perturbative quantum chromodynamics.
But, what about gravitons ? They are of course particle excitations of
perturbative quantum gravity, around weak gravitational backgrounds :
\begin{eqnarray}
g_{ab} ~=~ \underbrace{{\hat g}_{ab}}_{fixed~ bkgd} ~+~
\underbrace{h_{ab}}_{graviton}
\end{eqnarray}
Thus, the description of gravitons requires
\begin{itemize}
\item a {\it fixed nondynamical} background
\item an expansion around a fiducial background, which is sensible only perturbatively
\item as such, it is quite inadequate for black hole thermodynamics.
\end{itemize}
In other words, black hole thermodynamics is {\it not} the thermodynamics of a
gas of gravitons in a non-dynamical gravitational background. It is rather the
thermodynamics of a black hole spacetime itself, i.e., of the geometry; this
is only possible if one can ascribe quantum states to spacetime geometry which
can be counted as microstates.
\subsection{`Thermal' holography}
We now consider a canonical ensemble of radiant black hole spacetimes in
contact with a radiation bath at an inverse temperature $\beta$. The canonical
partition function is given by
\begin{eqnarray}
Z(\beta) ~ = ~ Tr \exp -\beta {\hat H}~, ~\label{parti}
\end{eqnarray}
where,
\begin{eqnarray}
{\hat H} ~ = ~ \underbrace{{\hat H}_v}_{blk}~+~\underbrace{
{\hat H}_{b}}_{bdy} ~. \label{hamil}
\end{eqnarray}
The $Tr$ is over states defined as
\begin{eqnarray}
|\Psi \rangle ~ = ~ \sum_{v,b} c_{vb} \underbrace{
|\psi_v\rangle}_{blk} \underbrace{|\chi_b \rangle}_{bdy} \label{vxb}
\end{eqnarray}
i.e., the full Hilbert space ${\cal H}= {\cal H}_v \otimes {\cal H}_b$. The
{\bf Hamiltonian constraint} in the bulk implies that the quantum Hamiltonian
operator annihilates the bulk quantum states
\begin{eqnarray}
{\hat H}_v ~|\psi_v\rangle ~=~ 0~. \label{hamilc}
\end{eqnarray}
It follows that
\begin{eqnarray}
Z(\beta) &=& \sum_{{}_{b}} \left( \sum_{{}_v} |c_{{}_{vb}}|^2 ||
~|\psi_{{}_v}\rangle~||^2
\right) \langle \chi_{{}_b}|\exp - \beta {\hat H}_{{}_{bdy}} |\chi_{{}_b}
\rangle \nonumber \\
&=& Tr_{{}_{bdy}} \exp -\beta {\hat H}_{{}_{bdy}} \nonumber \\
& \equiv & Z_{{}_{bdy}} ~. \label{bdyz}
\end{eqnarray}
In other words, {\bf the bulk states decouple! } Boundary states determine
black hole thermodynamics completely: a thermal version of {\bf holography!} This is
different from the holographic hypothesis quoted above wherein {\it all} bulk
states are stipulated to be projected onto the boundary.
\section{Isolated Horizons}
So far no specification of the kind of spacetime boundary we have in mind has been
made. Clearly, our interest is not in the asymptotic boundary. Instead we
focus on an {\it inner} boundary of spacetime. Recall that the event horizon
itself is a boundary of the chronological past of future asymptopia. But the
event horizon is too global for our purpose. It has the following lacunae:
\begin{itemize}
\item EH is {\it teleological} in nature, i.e., it is determined only after
{\it entire} spacetime is known.
\item Stationarity $\Rightarrow$ black hole metric has a {\it global} timelike isometry.
\item Cosmological horizons (like the de Sitter horizon) cannot be
characterized as event horizons.
\item The (ADM) mass of the black hole is not defined on the event horizon but
as an integral over spatial infinity ($i^0$ in Fig. 2).
\end{itemize}
In view of these shortcomings, we seek a {\it local} generalization of event
horizons.
Such an alternative has already been found \cite{ash03} and is called an
Isolated Horizon (IH).
\begin{figure}
\begin{center}
\psfig{file=bh2bw.eps,width=10cm}
\end{center}
\caption{Isolated Horizons}
\label{aba:fig3}
\end{figure}
We summarize the main properties of such a horizon, referring the reader to
\cite{ash03} for more details.
\begin{itemize}
\item An IH has no global timelike isometry $\Rightarrow$ it is a {\bf
nonstationary} generalization of stationary event horizons, cosmological
horizons, etc., allowing radiation to exist infinitesimally close to it.
\item It is a null inner boundary of spacetime with topology ${\bf R} \otimes
{\bf S}^2$.
\item The cross-sectional area ${\cal A}_{IH}$ of an IH remains constant: this is precisely
the {\it isolation}. Thus, nothing ever crosses an IH.
\item {\it Zeroth law of IH Mechanics} The surface gravity $\kappa_{IH} = const$
\item On IH, can define mass $M_{IH} = M_{ADM} - {\cal E}_{rad}^{\infty} $ such
that $ \delta M_{IH} = \kappa_l \delta A_{hor} + \dots$ {\it (1st law of IHM)}
\item Such horizons correspond thermodynamically to a
{\it microcanonical ensemble with fixed $ {\cal A}_{IH} $ }.
\end{itemize}
\subsection{Canonical entropy and thermal stability}
Consider now a canonical ensemble of IHs in contact with radiation; we proceed
to compute $S_{can}$ for this ensemble, assuming the equilibrium configuration
to be an IH with fixed $A_{IH}$ and $M_{IH} = M(A_{IH})$. Retaining Gaussian
fluctuations around a saddle point chosen to be this equilibrium configuration, we
get \cite{dmb01}, \cite{cm03}
\begin{eqnarray}
S_{can} ~=~ S_{IH} ~+~ \frac12~\log S_{IH} ~\label{scan}
\end{eqnarray}
where $S_{IH}$ is the {\it microcanonical} entropy of the equilibrium IH.
Two issues arise immediately:
\begin{itemize}
\item What is $S_{IH}$ ?
\item $S_{can} > 0 \Rightarrow$ black hole is thermally stable,
i.e., heat capacity $C > 0$. Under what conditions does this happen?
\end{itemize}
We answer the second question first: the condition for thermal stability has
been determined \cite{cm05} \cite{pm07}
\begin{eqnarray}
{M_{IH} \over M_P} ~>~ {S_{IH} \over k_B}
\end{eqnarray}
This turns out to be the {\bf necessary and sufficient cond. for
$S_{can} > 0~ {\rm and} ~C > 0$}. Saturation of the inequality is seen to lead
to $C \nearrow \infty$! This is reminiscent of a first-order phase
transition, even though here the transition is between {\it a stable and an
unstable phase}. This is similar to the Hawking-Page transition \cite{hawp83}
for an AdS-Schwarzschild black hole. The important distinction here is that
it is completely general and also to an extent quantum in nature, in
contradistinction to the Hawking-Page treatment which is restricted to a
semiclassical analysis in anti-de Sitter black hole spacetime. It is seemingly
generalizable to more general black holes with charge and angular
momentum, within the grand canonical ensemble.
\subsection{IH as a null boundary: gravity-gauge link}
\noindent Because the IH is an {\it inner} boundary, we must add a boundary term to the
action in order that the variational principle can be used to derive Einstein's
equation. Thus,
\begin{eqnarray}
{\cal S} = {\cal S}_{EHL} + {\cal S}_{IH}
\end{eqnarray}
such that
\begin{eqnarray}
\delta {\cal S}_{EHL}|_{IH} + \delta {\cal S}_{IH} ~ =~ 0
\end{eqnarray}
Since the IH is null, the induced metric on it is degenerate,
$\sqrt{^3g_{IH}}~=~0$. This has the consequence that the quantum theory
describing IH degrees of freedom must be a three-dimensional topological field
theory for which the action is independent of $^3g_{IH}$. Which three-dimensional
topological field theory? It must be a theory such that the degrees of freedom are
related in some manner to the bulk spacetime degrees of freedom (metric,
tetrad, connection).
It turns out that with GR formulated in bulk as a gauge theory of the Poincare group (and
diffeomorphisms), the theory induced by the boundary conditions IH is an
$SU(2)$ Chern Simons gauge theory (in {\it time} gauge
where local Lorentz boosts are gauge fixed) with coupling constant $k \equiv
A_{IH} / 4\pi l^2_P \gg 1 $ \cite{ash97}. This provides one of the clearest
examples of a gravity-gauge theory link in the literature. This connection is
based on far firmer footing than others based on conjectured relationships.
Using ${\cal S}={\cal S}_{{}_{EHL}} + {\cal S}_{{}_{IH}}$, the variational principle
works, provided the following {\it consistency} condition holds
\begin{eqnarray}
\left(\frac{k}{2\pi} F_{CS} + E \times E \right)_{{}_{S^2}} ~=~0~ . \label{cseom}
\end{eqnarray}
This is nothing but the Chern Simons theory equation of motion with the second
term functioning as source currents. This implies that the bulk spatial geometry characterized by
$ E $ plays the role of source for the Chern Simons degrees of freedom (given by $F_{CS}$)
characterizing boundary (IH) geometry. It is also a precise demonstration of
the {\it projection of bulk gravitational degrees of freedom to the boundary}
hypothesized in the Holographic Hypothesis.
\section{Microcanonical entropy}
\subsection{Loop Quantum Gravity : spin network basis}
We now address the question of the microcanonical entropy of the IH
($S_{IH}$). The calculation follows the approach and methodology laid out in
\cite{ash97} - \cite{dkm01}. It is based on Loop
Quantum Gravity as well as the connection between Chern Simons gauge theories
and Wess-Zumino-Witten models \cite{wit89}. Loop Quantum
Gravity is perhaps the only known quantum theory of spacetime geometry, which
is background-independent and non-perturbative. It is a canonical version of
quantum general relativity, describing quantum three dimensional space (on
every spatial slice) in terms of {\it Spin Network} states. The Spin Network
basis was first proposed by Penrose and adapted to loop quantum
gravity by Rovelli and Smolin \cite{rov06}. Three dimensional space is
supposed to consist of fluctuating network graphs whose links each carry an $SU(2)$
irreducible representation index (`spin', $j=0,1/2,1,3/2,\dots$). Links meet
at vertices containing invariant $SU(2)$ `transporter' tensors constructed out
of the Levi-Civita tensor, depending on the valence of each vertex. An
arbitrary quantum state is a superposition of
spin network states which form an overcomplete basis.
\begin{figure}
\begin{center}
\psfig{file=spinnet1.eps,width=6cm}
\end{center}
\caption{Spin network graph}
\label{aba:fig4}
\end{figure}
The great advantage of the spin network basis is that geometric observables
(represented as self-adjoint operators), like length, area, volume, are
diagonal in this basis and turn out to have {\it discrete} spectra. In
particular, consider a spacelike two surface inserted into an arbitrary spin
network graph. The actual area of this surface will fluctuate around the
classical area $A_{cl}$ by terms $O(l_P^2)$ when the graph fluctuates, with
different spins puncturing the two-surface and transferring their spins to the punctures.
The area operator, defined as
\begin{eqnarray}
{\hat A}_S \equiv \sum_{I=1}^N \int_{S_I} {\det}^{1/2}[ ^2g({\hat E})] ~\label{arop}
\end{eqnarray}
can be shown \cite{rov06} to possess the bounded, discrete spectrum
\begin{eqnarray}
a(j_1, \dots, j_N) &=& \frac14 \gamma l_P^2 \sum_{p=1}^N \sqrt{j_p(j_p+1)} \\
\lim_{N \rightarrow \infty} a(j_1,\dots,j_N) & \leq & A_{cl} + O(l_P^2) ~. \label{arsp}
\end{eqnarray}
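As a simple numerical illustration of the discreteness of this spectrum, one may tabulate the eigenvalues of the area formula above for a few small puncture configurations (a sketch, in units of $\gamma l_P^2$):
\begin{verbatim}
from math import sqrt

def area(js, gamma_lP2=1.0):
    """Area eigenvalue for puncture spins js, with the prefactor of the text."""
    return 0.25 * gamma_lP2 * sum(sqrt(j * (j + 1)) for j in js)

print(area([0.5]))                       # single spin-1/2 puncture
print(area([0.5] * 4), area([1, 0.5, 1.5]))
\end{verbatim}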
\subsection{ `Quantum' Isolated Horizon}
\begin{figure}
\begin{center}
\psfig{file=bh4bw.eps,width=6cm}
\end{center}
\caption{Quantum isolated horizon}
\label{aba:fig5}
\end{figure}
Loop quantum gravity has not yet reached a stage of development where one can unambiguously
exhibit an IH formation from an appropriate solution of the quantum Einstein (Wheeler-de
Witt) equation, in some semiclassical approximation. Instead, we adopt an {\it
effective theory} viewpoint whereby we insert a foliation of the IH into the
spin network characterizing quantum spatial geometry, and use the formalism of
Chern Simons theory to obtain the states on this spherical section of the IH,
with point sources carrying spin $j$ (arbitrary) on the punctures. Our
interest is to count $\dim {\cal H}_{CS+(j_1,\dots,j_n)}$ and get
$S_{IH}$ from
\begin{eqnarray}
S_{IH}~\equiv ~ \log~\dim ~{\cal H}_{CS+(j_1,...,j_n)} ~, \label{sih}
\end{eqnarray}
for some fixed $A_{IH} \gg l_P^2$, {\it restricting to only states with
vanishing total spin}. This latter restriction is enforced by the $SU(2)$
Gauss law constraint which implies that only rotationally invariant states are
physical.
This computation is simplified by the
relation \cite{wit89} between the dimensionality of the CS theory Hilbert space and {\it
the conformal blocks of an $SU(2)_k$ WZW model living on the punctured
2-sphere}. Using this relation, and also the Verlinde formula, the
dimensionality of the Chern Simons Hilbert space is given by \cite{km98}
\begin{eqnarray}
\dim ~{\cal H}_{CS+(j_1,...,j_n)} &~=~& \prod_{p=1}^n \sum_{m_p=-j_p}^{j_p}
[\delta_{m_1 + \cdots + m_n,0} - \frac12 \delta_{m_1 + \cdots + m_n,-1} \\
&~-~& \frac12 \delta_{m_1 + \cdots + m_n,1}] ~. \label{kmf}
\end{eqnarray}
A moment's reflection on eq. (\ref{kmf}) is adequate to persuade us that
the states with vanishing composite spin are indeed obtained by counting the
$m=0$ states and then discounting those states which have nonzero integral
composite spin; the latter
have not only an $m=0$ sector, but also $m=\pm1$ sectors. These nonvanishing
composite spin states do not satisfy the Gauss law constraint and have
to be eliminated if we are to consider only spinless states as
physical. Without this elimination, we have a larger degeneracy which will
ensue if the residual gauge invariance is $U(1)$ \cite{ash97} rather than
$SU(2)$. The reason we think it is natural to take $SU(2)$ rather than
$U(1)$ \cite{ash97} as the remnant of the local Lorentz invariance is that the former
is the invariance group relevant to the Gauss law constraint on the entire
spacetime, once Lorentz boosts are frozen out by choosing the `time' gauge. A
further gauge fixing to $U(1)$ on the IH \cite{ash97} appears to us to be
overly restrictive formally. Of course, one may desire to obtain the
degeneracy of the Chern Simons states for the entire Lorentz group as the
gauge group on the IH, but that task is made difficult by the fact that unitary
irreps of the Lorentz group are infinite dimensional.
If, for simplicity, we choose $j_p ~=~ \frac12~ \forall~ p=1, \dots, n$, we get
\begin{eqnarray}
S_{mc} ~=~ S_{IH} ~&=&~ \underbrace{{ A_{IH} \over 4 l_P^2}
}_{\rm Ashtekar ~et.al.~ 97} ~\\
&~-~& \underbrace{\frac32 \log \left( { A_{IH} \over 4 l_P^2} \right) ~+~
const.~+~O(A_{IH}^{-1})}_{\rm Kaul~ \&~ PM~ 2000} ~. \label{kmf2}
\end{eqnarray}
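For the simplest configuration, in which all punctures carry spin $\frac12$, the counting in eq. (\ref{kmf}) reduces to a difference of binomial coefficients, and the coefficient $-\frac32$ of the logarithm in eq. (\ref{kmf2}) can be checked directly; a short sketch:
\begin{verbatim}
from math import comb, log

def dim_CS_spin_half(n):
    """Number of SU(2)-singlet states for n spin-1/2 punctures (n even)."""
    return comb(n, n // 2) - comb(n, n // 2 - 1) if n % 2 == 0 else 0

for n in (10, 100, 1000):
    S = log(dim_CS_spin_half(n))
    print(n, S, n * log(2) - 1.5 * log(n))   # difference tends to a constant
\end{verbatim}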
The remarkable aspect of (\ref{kmf2}) is that, perhaps for the first time
since Bekenstein's pioneering work, one has an ab initio computation of
$S_{IH}$, yielding an infinite series, asymptotic in $A_{IH}$, of quantum spacetime fluctuation
corrections to the Bekenstein-Hawking area law; each term of this series is
finite and unambiguously calculable. The leading correction to the area law is
logarithmic and has what appears to be a robust coefficient. With due modesty,
one may say that these corrections are the only known {\it physical}
signatures of loop quantum gravity as applied to the computation of {\it
microcanonical} black hole entropy.
\section{Pending Issues}
\begin{itemize}
\item One needs to go beyond effective description in
terms of an embedded IH : we need to solve quantum dynamics and show the
formation of the horizon.
\item We need to determine if Hawking radiation from IH is at all
possible, given its isolation.
\item We need to determine if the thermal nature of Hawking
radiation spectrum is an artifact of the semiclassical approximation inherent
in the pioneering work. In other words, if a version of the horizon is shown
to radiate, then within a full quantum description, is the radiation of
quanta still in a thermal distribution ?
\item We need to understand if the lowest area quantum
$\sim l_{{}_P}^2$ has implications for the {\it information loss problem}.
\end{itemize}
\section{Introduction and Statement of Results}
The modular invariant $j$ is defined by
\[j(z):=\frac{E_4(z)^3}{\Delta(z)}= q^{-1}+744+ 196884q+ \ldots,\]
where $q:=e^{2\pi iz},$ $E_4(z)$ is the Eisenstein series of weight $4$, and $\Delta(z)$ is the modular discriminant function. The values of the $j$-function at CM points are known to be algebraic integers. Let $\mathcal{Q}_D$ be the set of positive definite binary quadratic forms of discriminant $-D$, and let $\mathcal{Q}_D/\Gamma_0(1)$ denote equivalence classes under the action of the modular group $\Gamma_0(1) = SL_2(\mathbb{Z})$. Given a binary quadratic form $Q$, we let $\alpha_Q$ denote the unique root of $Q$ in the upper half-plane. For $-D$ a fundamental discriminant, the \textit{Hilbert class polynomial}
\begin{equation}
\mathcal{H}_D(x):=\prod_{Q \in \mathcal{Q}_D/\Gamma_0(1)} (x-j(\alpha_Q))
\end{equation}
is a monic, irreducible polynomial whose splitting field is the Hilbert class field of $\mathbb{Q}(\sqrt{-D})$.
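For small discriminants, $\mathcal{H}_D(x)$ can of course be computed directly from the definition by evaluating $j$ at the points $\alpha_Q$ to sufficient precision. The following Python sketch (using \texttt{mpmath}, and not the method developed in this paper) illustrates this by expanding $j = E_4^3/\Delta$ as a $q$-series and rounding the resulting coefficients to integers.
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 60

def reduced_forms(D):
    """Reduced positive definite forms ax^2 + bxy + cy^2 with b^2 - 4ac = -D."""
    forms, a = [], 1
    while 3 * a * a <= D:
        for b in range(-a + 1, a + 1):
            if (b * b + D) % (4 * a) == 0:
                c = (b * b + D) // (4 * a)
                if c >= a and not (a == c and b < 0):
                    forms.append((a, b, c))
        a += 1
    return forms

def j_value(tau):
    q = mp.exp(2j * mp.pi * tau)
    E4 = 1 + 240 * sum(n**3 * q**n / (1 - q**n) for n in range(1, 200))
    return E4**3 / (q * mp.qp(q)**24)        # Delta = q * prod_n (1 - q^n)^24

def hilbert_class_poly(D):
    roots = [j_value((-b + mp.sqrt(-D)) / (2 * a)) for a, b, _ in reduced_forms(D)]
    coeffs = [mp.mpc(1)]                     # lowest degree first
    for r in roots:
        coeffs = ([-r * coeffs[0]]
                  + [coeffs[k - 1] - r * coeffs[k] for k in range(1, len(coeffs))]
                  + [coeffs[-1]])
    return [int(mp.nint(c.real)) for c in coeffs]

print(hilbert_class_poly(23))   # degree equals the class number h(-23) = 3
\end{verbatim}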
The classically difficult problem of computing $\mathcal{H}_D(x)$ can be answered using
the recent work of Zagier in \cite{traces}. Let $J(z) := j(z) - 744,$ and let $T_{\nu}$ denote the Hecke operator of index $\nu$. In revisiting Borcherds's infinite product formulas, Zagier shows that there exist half-integral weight modular forms whose coefficients describe modified traces of the forms $\nu J(z)\vert T_{\nu}$. Since $\nu J(z)\vert T_{\nu}$ is expressible as a degree $\nu$ polynomial in $j(z)$, this makes computing the Hilbert class polynomial an exercise in diagonalizing to find power sums and applying the Newton-Girard formulae to recover symmetric polynomials.
A natural problem is to compute the minimal polynomial of a Hauptmodul of level $N$ evaluated at Heegner points of level $N$. When the congruence subgroup $\Gamma_0(N)$ has genus zero, a \textit{Hauptmodul of level $N$} is a generator for the field of modular functions, chosen to have a simple pole at the cusp at infinity, and is unique up to a constant. The $j$-function is a Hauptmodul for level $1$. Recall that the Dedekind-eta function is defined by
\[\eta(z):= \Delta(z)^{1/24} = q^{1/24}\prod_{n=1}^{\infty}(1 - q^n).\]
The following table lists a Hauptmodul $j^{(N)}(z)$ in terms of an eta-quotient for the levels $N > 1$ where they are defined.
\begin{center}
\begin{table}[h] \caption{Hauptmoduln as eta-quotients} \label{table}
\begin{tabular}{| c | c | c | c | c | c | c | c| }
\hline
$N$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
\hline
$j^{(N)}(z)$ & ${{\eta(z)}^{24}\over \eta(2z)^{24}}$ & ${\eta(z)^{12}\over \eta(3z)^{12}}$ & ${\eta(z)^8\over \eta(4z)^8}$ & ${\eta(z)^6\over \eta(5z)^6}$ & ${\eta(2z)^3\eta(3z)^9\over \eta(z)^3\eta(6z)^9}$ & ${\eta(z)^4\over \eta(7z)^4}$ & ${\eta(z)^4\eta(4z)^2\over \eta(2z)^2\eta(8z)^4}$\\
\hline
\end{tabular}
\vspace{.2in}
\begin{tabular}{ | c | c | c | c | c | c | c |}
\hline
$N$ & 9 & 10 & 12 & 13 & 16 & 25\\
\hline
$j^{(N)}(z)$ & ${\eta(z)^3\over \eta(9z)^3}$ &${\eta(2z)\eta(5z)^5\over \eta(z)\eta(10z)^5}$ & ${\eta(4z)^4\eta(6z)^2\over \eta(2z)^2\eta(12z)^4}$ & ${\eta(z)^2\over \eta(13z)^2}$ & ${\eta(z)^2\eta(8z)\over \eta(2z)\eta(16z)^2}$ & ${\eta(z)\over \eta(25z)}$\\
\hline
\end{tabular}
\end{table}
\end{center}
We will write $J^{(N)}(z)$ for the normalized Hauptmodul of level $N$ with constant term equal to $0$.
Let $\mathcal{Q}_D^N$ be the set of binary quadratic forms of discriminant $-D$ corresponding to Heegner points of level $N$ (those forms in which the coefficient of $x^2$ is divisible by $N$). We define the class polynomials
\begin{equation}
\mathcal{H}_D^{(N)}(x):=\prod_{Q \in \mathcal{Q}_D^N/\Gamma_0(N)} (x-j^{(N)}(\alpha_Q)).
\end{equation}
In \cite{MP}, Miller and Pixton generalize Zagier's traces, showing that there exist modular forms of half-integral weight whose coefficients describe the ``traces'' of certain integral weight Poincar\'e series. Generically, such Poincar\'e series will have transcendental coefficients, so the work of Zagier is a very special result.
We apply their results to a special family of polynomials in $j^{(N)}(z)$ to give explicit formulas for algebraic traces, thus determining the $\mathcal{H}_D^{(N)}(x)$. These polynomials are constructed using a generalization of the generating function given in Corollary 4 of~\cite{kaneko}. Let $P_{\nu}^{(N)}(x)$ be the polynomial defined by
\begin{equation} \label{defP}
\frac{j^{(N)'}(z)}{x - j^{(N)}(z)} = \sum_{\nu = 0}^{\infty} P_{\nu}^{(N)}(x)q^{\nu}.
\end{equation}
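For illustration, the first few $P_{\nu}^{(N)}(x)$ can be extracted directly from (\ref{defP}) by elementary power-series manipulations. The Python sketch below does this for the level-$5$ Hauptmodul $j^{(5)}(z) = \eta(z)^6/\eta(5z)^6$; for instance it returns $P_1^{(5)}(x) = x + 6$, consistent with $P_1^{(5)}(j^{(5)}(z)) = J^{(5)}(z)$.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
PREC = 6                                   # compute P_0, ..., P_{PREC-1}

def mul(a, b, prec):
    out = [sp.Integer(0)] * prec
    for i, ai in enumerate(a):
        if ai == 0:
            continue
        for j, bj in enumerate(b):
            if i + j < prec:
                out[i + j] += ai * bj
    return out

# q * j^(5)(z) = prod_n (1 - q^n)^6 (1 - q^{5n})^{-6}, truncated at q^{PREC+1}
h = [sp.Integer(0)] * (PREC + 2)
h[0] = sp.Integer(1)
for n in range(1, PREC + 2):
    for _ in range(6):                     # multiply by (1 - q^n)^6
        h = [h[k] - (h[k - n] if k >= n else 0) for k in range(PREC + 2)]
for n in range(1, (PREC + 1) // 5 + 1):
    geo = [sp.Integer(1) if k % (5 * n) == 0 else sp.Integer(0)
           for k in range(PREC + 2)]       # 1/(1 - q^{5n}) as a geometric series
    for _ in range(6):
        h = mul(h, geo, PREC + 2)

c = {n: h[n + 1] for n in range(-1, PREC + 1)}   # j^(5) = sum_n c[n] q^n

# 1/(x - j^(5)) = -q * (1 + sum_k d_k q^k)^(-1)
d = {1: c[0] - x}
d.update({k: c[k - 1] for k in range(2, PREC + 1)})
e = [sp.Integer(1)]
for n in range(1, PREC + 1):
    e.append(sp.expand(-sum(d[k] * e[n - k] for k in range(1, n + 1))))

for nu in range(PREC):
    P = -sum(n * c[n] * e[nu - 1 - n] for n in range(-1, nu))
    print(nu, sp.expand(P))                # P_0 = 1, P_1 = x + 6, ...
\end{verbatim}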
When $N = 1$, Asai, Kaneko, and Ninomiya ~\cite{kaneko} show that $P_{\nu}^{(1)}(j(z)) = \nu J(z)\vert T_{\nu}$. Our first task is to understand part of a larger framework that connects these polynomials to the Hecke algebra. For the levels $N$ on which these polynomials are defined, we show that $P_{\nu}^{(N)}(j^{(N)}(z))$ can be expressed as a linear combination of Hauptmoduln of levels dividing $N$ and $\nu$ hit with combinations of Hecke operators.
\begin{Theorem} \label{poly}
For each $N \in \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 18, 25\}$, let $j^{(N)}(z)$ be the Hauptmodul given in Table \ref{table} and let $J^{(N)}(z)$ be its normalization such that the constant term is zero. For each positive integer $\nu$,
\[P_{\nu}^{(N)}(j^{(N)}(z))=\sum_{d|(\nu,N)} \frac{\nu}{d} J^{(N)}(z) \mid T_{\frac{\nu}{d}}V_d,\]
where $T$ and $V$ are the standard Hecke operators defined in Section \ref{heckesec}.
\end{Theorem}
This theorem plays a central role in computing class polynomials of $j^{(N)}(z)$. To this end, it will be important to introduce sequences of half-integral weight modular forms whose coefficients describe the traces of these polynomials.
For a positive integer $D \equiv 0, 3 \pmod 4$, we define
\begin{equation}
\text{Tr}_{\nu}^{(N)}(D) := \sum\limits_{Q\in\mathcal{Q}_D^N/\Gamma_0(N)} \frac{P_{\nu}^{(N)}(j^{(N)}(\alpha_Q))}{w_{Q, N}},
\end{equation}
where $w_{Q, N}$ is the order of the stabilizer of a form $Q$ under the action of $\Gamma_0(N)$.
The following theorem is a more explicit version of Theorem 1.1 of \cite{MP}.
\begin{Theorem} \label{traces}
Let $N \in \{1, 3, 5, 7, 13\}$, and set $\varpi(N) = \#(\Gamma_0(1)/\Gamma_0(N))$. Let $\widetilde{b}_N(-m;n)$ denote the coefficient of $q^n$ in the weakly holomorphic modular forms $\widetilde{F}_N(-m;z)$ of weight $\frac{3}{2}$ defined in Section \ref{forms}. Then, for each positive integer $D$ with $D \equiv 0, 3 \pmod 4$ and any positive integer $\nu$, we have
\[\mathrm{Tr}_{\nu}^{(N)}(D) =-\nu \sum\limits_{d|\nu} \frac{1}{d}\left(\widetilde{b}_{\frac{N}{(N,d)}}(\tfrac{-\nu^2}{d^2};D) - \frac{24}{\varpi(N)}H_1(D)\right) - H_N(D)\mathfrak{c}_{N, \nu},\]
where $H_{N}(D)$ and $\mathfrak{c}_{N, \nu}$ are constants defined in Section \ref{forms}.
\end{Theorem}
These sequences of half-integral weight modular forms $\widetilde{F}_N(-m;z)$ are well-defined and can be recovered recursively from the two seed functions $\widetilde{F}_N(0;z)$ and $\widetilde{F}_N(-1; z)$ (except when $N = 1$, where $\widetilde{F}_1(0;z) = 0$, in which case $\widetilde{F}_1(-4;z)$ is required). The reader should consult the Appendix for a description of the seed functions in the levels we consider.
Using Theorems \ref{poly} and \ref{traces}, we obtain an algorithm for computing the $\mathcal{H}_D^{(N)}(x)$ for fundamental discriminants $-D$.
\begin{Theorem}
For $N \in \{1, 3, 5, 7, 13\}$ and $-D$ a fundamental discriminant, the algorithm given in Section \ref{algsec} computes $\mathcal{H}_D^{(N)}(x)$.
\end{Theorem}
\begin{remark}
In \cite{Suth}, Eagle and Sutherland give an algorithm for computing these class polynomials using elliptic curves.
\end{remark}
\begin{remark}
Gross provides an interesting approach to traces of singular moduli in \cite{Gross} and it is possible that these methods could also be used in determining singular values of Hauptmoduln.
\end{remark}
This paper is organized as follows. In Section \ref{heckesec}, we define relevant operators and prove a number of lemmas describing their effects on Fourier expansions. We apply these results to prove Theorem \ref{poly} in Section \ref{proofpoly}. In Section \ref{forms}, we first recall the results of Miller and Pixton, and then introduce a family of weakly holomorphic modular forms of weight $\frac{3}{2}$ and prove Theorem \ref{traces}. In Section \ref{algsec}, we detail an algorithm for computing the class polynomials. The last section demonstrates how to apply the algorithm in an explicit numerical example.
\vspace{.1in}
\noindent
\textbf{Acknowledgements:} The authors would like to thank Professor Ken Ono for suggesting the topic and for advice and guidance throughout the process, and an anonymous referee for useful comments on a draft of this paper. We also would like to thank Michael Griffin and Sarah Trebat-Leader for useful conversations. Both authors are also grateful to NSF for its support.
\section{Hauptmoduln and Proof of Theorem \ref{poly}}
\subsection{Hecke Operators and Atkin-Lehner Involutions} \label{heckesec} Let $M_{k}(\Gamma_0(N))$ be the space of holomorphic modular forms of weight $k$ and level $N$.
We denote by $M_k^{!}(\Gamma_0(N))$ the space of meromorphic modular forms of weight $k$ and level $N$ whose poles, if any, are supported at the cusps. Such forms are known as \textit{weakly holomorphic} modular forms. We write $M_k^{\#}(\Gamma_0(N))$ for the subspace of $M_k^{!}(\Gamma_0(N))$ of modular forms whose only poles are supported at the cusp at infinity.
We first recall the definitions of basic operators on $\Gamma_0(N)$. For $f$ a meromorphic modular form of weight $k$, and any $\gamma = \left(\begin{array}{cc} a & b \\ c & d\end{array}\right) \in GL_2^+(\mathbb{R})$, the ``slash" operator $\vert_{k}$ is defined by
\[(f\vert_{k}\gamma)(z):=(\text{det}\gamma)^{k/2}(cz + d)^{-k}f\left(\frac{az + b}{cz + d}\right).\]
Since the weight will be clear from context, we drop the subscript $k$ and just write $f\vert \gamma$.
For a positive integer $d$, Atkin's $U$-operator is defined by
\begin{equation}
\left(\sum_{n \in \mathbb{Z}} a_nq^n\right)\vert U_d=\sum_{n \in \mathbb{Z}} a_{dn}q^n,
\end{equation}
and can be written in terms of the slash operator as
\begin{equation}
f\vert U_d = d^{\frac{k}{2} - 1}\sum_{j=0}^{d-1}f \vert \left(\begin{array}{cc} 1 & j \\ 0 & d\end{array}\right).
\end{equation}
The $V$-operator is defined by
\begin{equation}
\left(\sum_{n \in \mathbb{Z}} a_nq^n\right) \vert V_d=\sum_{n \in \mathbb{Z}}a_nq^{dn},
\end{equation}
and can be written in terms of the slash operator as
\begin{equation}
f \vert V_d = d^{-\frac{k}{2}}f \vert \left( \begin{array}{cc} d & 0 \\ 0 & 1\end{array}\right).
\end{equation}
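For example, if $f = q^{-1} + a_1q + a_2q^2 + a_3q^3 + \cdots$, then
\[f\vert U_2 = a_2q + a_4q^2 + a_6q^3 + \cdots \qquad \text{and} \qquad f\vert V_2 = q^{-2} + a_1q^2 + a_2q^4 + \cdots,\]
so $U_d$ extracts every $d$th Fourier coefficient, while $V_d$ dilates the expansion.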
For $p$ prime, the $p$th Hecke operator on $\Gamma_0(N)$ is defined by
\[T_p:= U_p + p^{k-1}\epsilon(p)V_p,\]
where $\epsilon(p) = 1$ for $p \nmid N$ and $0$ for $p \mid N$. The Hecke operators satisfy $T_{mn} = T_{m}T_n$ for $m$ and $n$ coprime and for $r \geq 2$, we have $T_{p^r} = T_{p^{r-1}}T_p - \epsilon(p)p^{k-1}T_{p^{r-2}}$. If $p_1, \ldots, p_n$ are the distinct primes dividing $N$, and $m = p_1^{r_1} \cdots p_n^{r_n}s$ where $(s, N) = 1$, we will often write $T_m = U_{p_1}^{r_1} \cdots U_{p_n}^{r_n}T_s$. The Hecke operators act on Fourier expansions by
\begin{equation} \label{heckeformula}
\left(\sum\limits_{n \in \mathbb{Z}}a_nq^n\right)|T_m =
\sum_{n \in \mathbb{Z}}\left(\sum_{d \mid (m,n)}\chi(d)d^{k-1}a_{mn/d^2}\right)q^n.
\end{equation}
where $\chi(d) = 1$ if $(d, N) = 1$ and $0$ otherwise.
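For example, if $p$ is a prime with $p \nmid N$, equation \eqref{heckeformula} reduces to
\[\left(\sum\limits_{n \in \mathbb{Z}}a_nq^n\right)\Big|T_p = \sum_{n \in \mathbb{Z}}\left(a_{pn} + p^{k-1}a_{n/p}\right)q^n,\]
with the convention that $a_{n/p} = 0$ unless $p \mid n$.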
A direct consequence of the above equation is the following.
\begin{Lemma} \label{Tqexp}
Suppose $f \in M_k^{!}(\Gamma_0(N))$ has the Fourier expansion $f(z)=\sum a_nq^n=q^{-\nu}+O(q)$ for some positive integer $\nu$. If $m$ is any positive integer with $(m, \nu)=1$ and $(m,N)=1$, then $m^{1-k} \cdot f(z)|T_{m}$ has Fourier expansion beginning $q^{-m\nu}+O(q)$.
\end{Lemma}
For a prime divisor $p$ of $N$ for which $(p^{\alpha},\frac{N}{p^{\alpha}})=1$, the \textit{Atkin-Lehner involution at $p$} is defined to be any matrix of the form
\begin{equation}
W_{p^{\alpha}}= \left(\begin{array}{cc} p^{\alpha}x & y \\ Nz & p^{\alpha}w \end{array}\right)
\end{equation}
for integers $x, y, z, w$, such that the determinant is $p^{\alpha}$. These operators define involutions on $M_k^{!}(\Gamma_0(N))$. Products of Atkin-Lehner involutions correspond to the cusps of $\Gamma_0(N)$, and slashing by these matrices gives the expansions at those cusps.
\begin{remark}
All choices of $x, y, z, w$ that satisfy the conditions on the determinant of $W_{p^{\alpha}}$ are equivalent under the action of $\Gamma_0(N)$ so this is well-defined.
\end{remark}
If $N=p_1^{\alpha_1}\cdots p_n^{\alpha_n}$, the product of the $W_{p_i^{\alpha_i}}$ is equivalent to the \textit{Fricke involution}
\begin{equation}
W_{N}= \left(\begin{array}{cc} 0 & -1 \\ N & 0 \end{array}\right).
\end{equation}
\begin{remark}
A number of lemmas we will reference, involving the action of operators on Fourier expansions, were originally only stated for cusp forms. However, the proofs are equally valid for any weakly holomorphic modular form.
\end{remark}
The following result (Lemma 2 of ~\cite{Li}) shows that, under certain conditions, the Atkin-Lehner involutions commute with the Hecke operators $V$ and $T$.
\begin{Lemma} \label{winnie}
Let $N$ be a positive integer, and let $p$ and $p'$ be primes with $p^{\alpha}||N$. Then the following are true:
\begin{enumerate}
\item If $(p',p)=1$, then $f|V_{p'}W_{p^{\alpha}}=f|W_{p^{\alpha}}V_{p'}$ for any $f \in M_k^!(\Gamma_0(\frac{N}{p'}))$.
\item If $(p',N)=1$, then $f|W_{p^{\alpha}}T_{p'}=f|T_{p'}W_{p^{\alpha}}$ for any $f \in M_k^!(\Gamma_0(N))$. \label{WT=TW}
\end{enumerate}
\end{Lemma}
It turns out that, in some cases, the operators $W_{p^{\alpha}}$ also commute with the $U$-operator.
\begin{Lemma} \label{WandUcommute}
Let $f$ be in $M_k^!(\Gamma_0(N))$, and suppose $N=p^\alpha M$ with $(p,M)=1$. If $\ell$ is any prime dividing $N$ with $(p,\ell)=1$, then $f|U_{\ell}W_{p^\alpha}=f|W_{p^\alpha}U_{\ell}$.
\end{Lemma}
\begin{proof} We have
\begin{align*}
f|U_{\ell}W_{p^{\alpha}} &=\frac{1}{\ell}\sum\limits_{j=0}^{{\ell}-1} f\vert\left(\begin{array}{cc} 1 & 0 \\ 0 & \ell \end{array}\right)\left(\begin{array}{cc} 1 & j \\ 0 & 1 \end{array}\right)\left(\begin{array}{cc} p^{\alpha}x & 1 \\ Nz & p^\alpha \end{array}\right) \\
&= \frac{1}{\ell}\sum\limits_{j=0}^{\ell-1} f|\left(\begin{array}{cc} p^\alpha x + Nzj & 1 +jp^\alpha\\ zN \ell & p^\alpha \ell \end{array}\right).
\end{align*}
Since $(p,\ell)=1$ and $\ell \mid N$, for each $j$, there exists $i_j$ such that
\[-i_j(p^\alpha x+Nzj)+(1+jp^\alpha)=n_j\ell\]
for some integer $n_j$. As $j$ runs over residues mod $\ell$, so does $i_j$.
Hence, we can write the sum as
\begin{align*}
\frac{1}{\ell}\sum\limits_{j=0}^{\ell-1} \ &f \ \vert\left(\begin{array}{cc} p^\alpha x + Nzj & 1 +jp^\alpha-i_j(p^\alpha x+Nzj)\\ Nz\ell & p^\alpha \ell -i_jNz\ell \end{array}\right) \left(\begin{array}{cc} 1 & i_j \\ 0 & 1 \end{array}\right) \\
&=\frac{1}{\ell}\sum\limits_{j=0}^{\ell-1} f|\left(\begin{array}{cc} p^\alpha x +Nzj & n_j \\ Nz\ell & p^\alpha -i_jNz \end{array}\right) \left(\begin{array}{cc} 1 & 0 \\ 0 & \ell \end{array}\right) \left(\begin{array}{cc} 1 & i_j \\ 0 & 1 \end{array}\right)
\end{align*}
\begin{align*}
&= \frac{1}{\ell}\sum\limits_{j=0}^{\ell-1} f|W_{p^{\alpha}}\left(\begin{array}{cc} 1 & 0 \\ 0 & \ell \end{array}\right) \left(\begin{array}{cc} 1 & i_j \\ 0 & 1 \end{array}\right)=f|W_{p^{\alpha}}|U_{\ell},
\end{align*}
as desired.
\end{proof}
The next lemma allows us to recursively determine $f|U_p^aW_p$ in the case when the two operators do not commute.
\begin{Lemma} \label{thegame}
Let $f$ be in $M_{k}^!(\Gamma_0(N))$ and let $p$ be a prime with $p||N$. Then
\[pf|U_p^aW_p=pf|U^a_pV_p+f|U^{a-1}_pW_pV_p-f|U_p^{a-1}\]
for all integers $a \geq 1$.
\end{Lemma}
\begin{proof} Lemma $7$ of ~\cite{AK} applied to $f|U_p^{a-1}$ says that $f|U_p^{a-1}$ is on $\Gamma_0(N)$ and that
\[pf|U_p^{a}+f|U_p^{a-1}W_p\]
is on $\Gamma_0(\tfrac{N}{p})$. Observe that $W_p= \left(\begin{array}{cc} px & y \\ Nz & pw \end{array}\right)= \left(\begin{array}{cc} x & y \\ Nz/p & pw \end{array}\right) \left(\begin{array}{cc} p& 0 \\ 0 & 1 \end{array}\right)$. Since the first matrix in this product is in $\Gamma_0(\tfrac{N}{p})$, it fixes the above sum, and we see that
\[(pf|U_p^{a}+f|U_p^{a-1}W_p)|W_p= (pf|U_p^{a}+f|U_p^{a-1}W_p)|V_p.\]
Since $W_p$ is an involution, we find
\[pf|U_p^aW_p+f|U_p^{a-1}=pf|U_p^{a}V_p+f|U_p^{a-1}W_pV_p,\]
demonstrating the desired identity.
\end{proof}
We will apply this lemma frequently in the following context.
\begin{Corollary} \label{poles}
If $f \in M_0^{\#}(\Gamma_0(N))$ has Fourier expansion beginning $q^{-1} + O(q)$ and $p$ is a prime with $p \mid\mid N$, then
\[p^af|U_p^aW_p=-q^{-p^{a-1}}+O(q)\]
for all integers $a \geq 1$.
\end{Corollary}
\begin{proof}
We proceed by induction on $a$. When $a = 1$, applying Lemma \ref{thegame} shows that
\[pf\vert U_pW_p = pf \vert U_pV_p + f \vert W_pV_p - f = -q^{-1} + O(q)\]
because neither $f \vert U_p$ nor $f \vert W_p$ has a pole at $i\infty$ (the operator $W_p$ sends the pole to another cusp). Hence, $f\vert U_pV_p$ and $f \vert W_pV_p$ do not have poles at $i \infty$ and neither contributes to the principal part of the expansion at $i \infty$.
Now suppose $a>1$ and $p^{a-1}f \vert U_p^{a-1}W_p = -q^{-p^{a-2}} + O(q)$. Applying Lemma \ref{thegame}, we find
\[p^af\vert U_p^a W_p = p^af|U^a_pV_p+p^{a-1}f|U^{a-1}_pW_pV_p-p^{a-1}f|U_p^{a-1}.\]
Since $f\vert U_p^{a-1}$ has no pole at $i \infty$, the first and last terms on the right hand side do not contribute to the principal part of the expansion at $i\infty$. Hence,
\[p^af\vert U_p^a W_p= (-q^{-p^{a-2}} + O(q)) \vert V_p = -q^{-p^{a-1}} + O(q).\]
\end{proof}
Next we show that for levels $N$ divisible by a square, the normalized Hauptmodul for level $N$ is annihilated by certain $U$-operators.
\begin{Lemma} \label{zero}
Let $N \in \{4, 8, 9, 12, 18, 25\}$ and suppose $p$ is a prime with $p^2|N$. Then $J^{(N)}(z)|U_p=0.$
\end{Lemma}
\begin{proof} The form $J^{(N)}(z) \Delta(pz)$ is holomorphic of level $Np$, so $(J^{(N)}(z) \Delta(pz))\vert U_p = J^{(N)}(z)|U_p\cdot\Delta(z)$ is a weight $12$, holomorphic modular form of level $Np$. Since this space is finite dimensional, calculating that the first few coefficients of these expansions are zero proves the claim. The dimension formulas in Theorem 1.34 of ~\cite{ken} determine how many coefficients need to be checked.
\end{proof}
The last two lemmas of this section give some useful properties of the $V$-operator in relation to the forms we will consider.
\begin{Lemma} \label{VW}
Let $f$ be in $M_k^!(\Gamma_0(N))$. If $p$ is any prime not dividing $N$, then $f|V_p$ is on $\Gamma_0(Np)$ and $f|V_pW_p=p^kf$.
\end{Lemma}
\begin{proof} We have
\[f|V_pW_p=f| \left(\begin{array}{cc} p & 0 \\ 0 & 1 \end{array}\right)\left(\begin{array}{cc} px & y \\ Npz & pw \end{array}\right)=f| \left(\begin{array}{cc} p^2x & py \\ Npz & pw \end{array}\right)=p^kf| \left(\begin{array}{cc} px & y \\ Nz & w \end{array}\right).\]
The last matrix is in $\Gamma_0(N)$ and hence leaves $f$ fixed. Thus $f|V_pW_p=p^kf$.
\end{proof}
\begin{Lemma} \label{V}
Suppose $f$ is in $M_k^\#(\Gamma_0(N))$. If $p$ is a prime such that $p|N$, then $f|V_p$ is in $M_k^\#(\Gamma_0(Np))$.
\end{Lemma}
\begin{proof} Suppose $f|V_p=f(pz)$ has a pole at a rational cusp $x\over p$. Then $f$ has a pole at $x$, so $x$ is equivalent to the cusp at infinity under $\Gamma_0(N)$. In particular, we can write $x = \frac{a}{c}$ for $(a, c) = 1$ with $N \mid c$. Since $p \mid N$, we have $(a, pc) = 1$ so there exist integers $r$ and $s$ such that $\left(\begin{array}{cc} a & r \\ pc & s \end{array}\right) \in \Gamma_0(pN)$. This shows that $\frac{a}{pc} = \frac{x}{p}$ is equivalent to infinity in $\Gamma_0(Np)$, so $f \vert V_p$ has its only poles at infinity.
\end{proof}
\subsection{Proof of Theorem \ref{poly}} \label{proofpoly}
We begin with the following claim, after which the theorem follows directly using the same argument as in Theorem 3 of ~\cite{kaneko}.
\begin{Lemma} \label{woah}
Let $N$ be in the set $\{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 15, 18, 25\}$. For each positive integer $\nu$, the sum
\[\sum_{d|(\nu,N)} \frac{\nu}{d} J^{(N/d)}(z) \mid T_{\frac{\nu}{d}}V_d\]
is the unique meromorphic modular form in $M_0^{\#}(\Gamma_0(N))$ with Fourier expansion beginning $q^{-\nu} + O(q)$.
\end{Lemma}
\begin{proof}
The uniqueness of these forms is clear because the difference of any such modular forms is holomorphic of weight $0$ with zero constant term, and hence zero.
We first give a proof when the level $N = p$ is prime.
When $(\nu,p)=1$, Lemma \ref{Tqexp} shows that $\nu J^{(p)}(z)|T_{\nu} =q^{-\nu}+O(q)$ has the desired expansion at infinity. Then, Lemma \ref{winnie} shows that $\nu J^{(p)}(z)|T_{\nu}W_p=\nu J^{(p)}(z)|W_pT_{\nu}$. Since $W_p$ interchanges the poles at $0$ and $i \infty$ and $J^{(p)}(z)$ has no pole at zero, this shows that $\nu J^{(p)}(z)\vert T_{\nu}$ has no pole at zero. When $\nu$ is a multiple of $p$, two terms contribute to the sum, and we expand each at infinity and at zero. Write $\nu=p^as$ where $(s,N)=1$. At infinity, we have
\[\nu J^{(p)}(z)|T_{\nu} = \nu J^{(p)}(z)|U_p^aT_s=O(q)\]
because $J^{(p)}(z)$ has a simple pole which is killed by the $U$-operator. By Lemma \ref{Tqexp},
\[\frac{\nu}{p} J^{(1)}(z)|T_{\frac{\nu}{p}}V_p =q^{-\nu}+O(q).\]
To expand at zero, we apply the Atkin-Lehner involution at $p$. This gives
\begin{align*}
\nu J^{(p)}(z)|T_{\nu}W_p &=\nu J^{(p)}|U_p^aW_pT_s=s(-q^{-p^{a-1}}+O(q))|T_s \\
&=-q^{-p^{a-1}s}+O(q)=-q^{-\nu/p}+O(q)
\end{align*}
by applying first Lemma \ref{winnie} \eqref{WT=TW}, and then Corollary \ref{poles} followed by Lemma \ref{Tqexp}. Finally, by Lemmas \ref{VW} and \ref{Tqexp}, we have
\[\frac{\nu}{p} J^{(1)}(z)|T_{\frac{\nu}{p}}V_pW_p =\frac{\nu}{p} J^{(1)}(z)|T_{\frac{\nu}{p}}=q^{-\nu/p}+O(q).\]
Adding together these terms shows we have the desired expansion at infinity and that the poles at zero cancel.
When $N$ is a perfect power of a prime, the lemma follows from the prime case after applying Lemma \ref{zero} and Lemma \ref{V}.
We now treat the case when $N=p_1p_2$ is the product of two distinct primes. There are now four cusps (at $0, \frac{1}{p_1}, \frac{1}{p_2}$, and infinity) and we consider the expansion of each term in the sum at each cusp. Write $\nu=p_1^ap_2^bs$ where $(s,N)=1$.
There will be up to four terms in the sum depending on $a$ and $b$: $d=1, d=p_1, d=p_2,$ and $d=N$.
At infinity, it is easy to see that when $d=(\nu, N)$,
applying $T_{\frac{\nu}{d}}$ and then $V_d$ gives a pole of order $\nu$ as desired, while for $d<(\nu,N)$, the Hecke operator $T_{\frac{\nu}{d}}=U_{(\frac{N}{d},\frac{\nu}{d})}T_s$ and the $U$-operator kills the pole at infinity.
At the cusp $\frac{1}{p_1}$, the four possible terms in the sum contribute in the following way when present:
\begin{align*}
d = 1: \qquad \quad \ \ \nu J^{(N)}(z)|T_{\nu}W_{p_2} &= \nu J^{(N)}(z)|U_{p_1}^aU_{p_2}^bT_sW_{p_2} =\nu J^{(N)}(z)|U_{p_2}^bW_{p_2}U_{p_1}^aT_s \\
&=\begin{cases} -q^{-\nu/p_2}+O(q) & \text{if $b> 0$ and $a=0$} \\ O(q) & \text{otherwise} \end{cases} \\
d=p_1: \quad \frac{\nu}{p_1} J^{(p_2)}(z)\vert T_{\frac{\nu}{p_1}}V_{p_1}W_{p_2} &= \frac{\nu}{p_1} J^{(p_2)}(z)|U_{p_2}^bT_{{p_1}^{a-1}s}V_{p_1}W_{p_2} \\
&=\frac{\nu}{p_1} J^{(p_2)}(z)|U_{p_2}^bW_{p_2}T_{{p_1}^{a-1}s}V_{p_1}\\
&=\begin{cases} -q^{-\nu/p_2} + O(q) & \text{if } b>0\\ O(q) & \text{if } b=0\end{cases}\\
d=p_2:\quad \frac{\nu}{p_2}J^{(p_1)}(z)| T_{\frac{\nu}{p_2}}V_{p_2}W_{p_2} &=\frac{\nu}{p_2}J^{(p_1)}(z)|T_{\frac{\nu}{p_2}} = \frac{\nu}{p_2}J^{(p_1)}(z) \vert U_{p_1}^aT_{p_2^{b-1}s} \\
&=\begin{cases} q^{-\nu/p_2} + O(q) & \text{if $a = 0$}\\ O(q) & \text{if $a > 0$}\end{cases}\\
d = N: \quad \ \ \frac{\nu}{N} J^{(1)}(z)|T_{\frac{\nu}{N}}V_{N}W_{p_2}
&=\frac{\nu}{N}J^{(1)}(z)|T_{\frac{\nu}{N}}V_{p_2}W_{p_2}V_{p_1} =\frac{\nu}{N}J^{(1)}(z)|T_{\frac{\nu}{N}}V_{p_1} \\&=q^{-\nu/p_2} + O(q)
\end{align*}
For any choice of $a$ and $b$, the negative terms in the $q$-expansions of the different terms present cancel each other, showing that the sum has no pole at $\frac{1}{p_1}$. By symmetry, the same is true at the cusp $\frac{1}{p_2}$. It remains to check the expansion at zero.
Applying operators as before, we find
\begin{align*}
d=1: \qquad \quad \ \ \nu J^{(N)}(z)|T_{\nu}W_{N} &= \nu J^{(N)}(z)|U_{p_1}^aU_{p_2}^bT_sW_{p_1}W_{p_2} \\
&= \nu J^{(N)}(z)|U_{p_1}^aW_{p_1}U_{p_2}^bW_{p_2}T_s \\
&=\begin{cases} O(q) & \text{if $a=0$ or $b=0$} \\ q^{-\nu/N} + O(q) & \text{if $a>0$ and $b>0$} \end{cases} \\
d=p_1: \quad \frac{\nu}{p_1}J^{(p_2)}(z)|T_{\frac{\nu}{p_1}}V_{p_1}W_{N} &=\frac{\nu}{p_1}J^{(p_2)}(z)|U_{p_2}^bT_{p_1^{a-1}s}V_{p_1}W_{p_1}W_{p_2} \\
&=\frac{\nu}{p_1}J^{(p_2)}(z)|U_{p_2}^bW_{p_2}T_{p_1^{a-1}s}\\
&=\begin{cases} O(q) & \text{if $b=0$} \\ -q^{-\nu/N} + O(q) & \text{if $b>0$} \end{cases}\\
d=p_2: \quad \frac{\nu}{p_2}J^{(p_1)}(z)|T_{\frac{\nu}{p_2}}V_{p_2}W_{N} &= \frac{\nu}{p_2}J^{(p_1)}(z)|U_{p_1}^aT_{p_2^{b-1}s}V_{p_2}W_{p_2}W_{p_1} \\
&=\frac{\nu}{p_2}J^{(p_1)}(z)|U_{p_1}^aW_{p_1}T_{p_2^{b-1}s}\\
&=\begin{cases} O(q) & \text{if $a=0$} \\ -q^{-\nu/N} +O(q)& \text{if $a>0$} \end{cases} \\
d=N: \quad \ \ \frac{\nu}{N} J^{(1)}(z)|T_{\frac{\nu}{N}}V_NW_N &= q^{-\nu/N} + O(q).
\end{align*}
Again, for any choice of $a$ and $b$, the negative terms in the $q$-expansions of the different terms present cancel each other, showing that the sum has no pole at zero.
This proves the lemma for all $N$ except for $N=12$ and $N=18$. Write $N = p_1^2p_2$. When $p_1|\nu$, applying Lemma \ref{zero} we can write
\[\sum_{d|(\nu,N)} \frac{\nu}{d}J^{({N}/{d})}(z)|T_{\frac{\nu}{d}}V_d=p_1\left(\sum_{d|(\nu',N')}\frac{\nu'}{d}J^{(N'/d)}(z)|T_{\frac{\nu'}{d}}V_d\right)\vert V_{p_1}\]
where $\nu'=\frac{\nu}{p_1}$ and $N'=p_1p_2$ and so the lemma follows from the case when $N$ is the product of two distinct primes together with Lemma \ref{V}.
If $p_1\nmid \nu$, the only terms contributing to the sum over divisors of $(\nu,N)$ are $d=1$ and $d=p_2$. The expansion at infinity is as claimed by the same argument as before. Expanding at each of the other cusps as before, we find there are no poles in each term except at the cusp at $\frac{1}{p_1^2}$. Here we find
\begin{align*}
d=1: \quad \nu J^{(N)}(z)|T_{\nu}W_{p_2}=\nu J^{(N)}(z)|U_{p_2}^bW_{p_2}T_s&=\begin{cases} O(q) & \text{if $b = 0$} \\ -q^{-\nu/p_2}+O(q) & \text{if $b > 0$}\end{cases} \\
d=p_2: \quad \ \ \frac{\nu}{p_2}J^{(p_1^2)}(z)|T_{\frac{\nu}{p_2}}V_{p_2}W_{p_2}=\frac{\nu}{p_2}J^{(p_1^2)}|T_{\frac{\nu}{p_2}}&=q^{-\nu/p_2}+O(q),
\end{align*}
showing that the sum has no poles except at infinity.
\end{proof}
The following lemma is stated in ~\cite{kaneko} for level $1$ and weights $k$ such that $M_k(\Gamma_0(1))$ has dimension $1$. Given Lemma \ref{woah}, the proof for weight $0$ and levels $N$ where a Hauptmodul exists follows identically.
\begin{Lemma} \label{kan}
Let $N$ be in the set $\{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 15, 18, 25\}$. For each positive integer $\nu$, let $f_{\nu}^{(N)}(z)$ be the unique meromorphic modular form in $M_{0}^{\#}(\Gamma_0(N))$ with Fourier expansion beginning $q^{-\nu} + O(q)$. Then
\[\frac{j^{(N)'}(q)}{j^{(N)}(p) - j^{(N)}(q)} = \sum_{\nu=0}^{\infty}f_{\nu}^{(N)}(p)q^{\nu},\]
where $p$ and $q$ are independent formal variables and $f_0^{(N)}(z) = 1$.
\end{Lemma}
\begin{proof}
First we set
\begin{equation} \label{js}
j^{(N)}(q) = q^{-1} + \sum_{n=0}^{\infty}a_n^{(N)}q^n \qquad \text{and} \qquad J^{(N)}(q) = q^{-1} + \sum_{n=1}^{\infty}a_n^{(N)}q^n.
\end{equation}
It is clear from the description of $f_{\nu}^{(N)}(z)$ in Lemma \ref{woah}, as
\[f_{\nu}^{(N)}(z) = \sum_{d|(\nu,N)} \frac{\nu}{d}J^{(N/d)}(z) \mid T_{\frac{\nu}{d}}V_d,\]
that the coefficient of $q$ comes from the term where $d = 1$, and using \eqref{heckeformula} we see
\begin{equation} \label{fnu}
f_{\nu}^{(N)}(q) = q^{-\nu} + \nu a_{\nu}^{(N)}q + \ldots.
\end{equation}
Since the form $j^{(N)}(p)f_{\nu}^{(N)}(p)$ is uniquely determined by the non-positive terms in its Fourier expansion, using \eqref{js} and \eqref{fnu} we obtain the recurrence relation
\begin{equation*}
j^{(N)}(p)f_{\nu}^{(N)}(p) = f_{\nu+1}^{(N)}(p) + \sum_{\ell = 0}^{\nu}a_{\nu - \ell}^{(N)}f_{\ell}^{(N)}(p) + \nu a_{\nu}^{(N)}.
\end{equation*}
Multiplying both sides of this equation by $q^{\nu}$ and summing over all $\nu$ gives us
\begin{align*}
j^{(N)}(p)F(p, q) &= \frac{1}{q}(F(p, q) - 1) + \left(j^{(N)}(q) - \frac{1}{q}\right)F(p, q) + \sum_{\nu = 0}^{\infty}\nu a_{\nu}^{(N)}q^{\nu} \\
&= j^{(N)}(q)F(p, q) + j^{(N)'}(q)
\end{align*}
where $F(p, q) = \sum_{\nu=0}^{\infty}f_{\nu}^{(N)}(p)q^{\nu}$. Rearranging gives the desired expression.
\end{proof}
Theorem \ref{poly} now follows immediately from these two results.
\begin{proof}[Proof of Theorem 1.1]
It follows from Lemma \ref{kan} that $P_{\nu}^{(N)}(j^{(N)}(z))$ is the unique modular form in $M_{0}^{\#}(\Gamma_0(N))$ with Fourier expansion beginning $q^{-\nu} + O(q)$. Then, applying Lemma \ref{woah} proves the identity.
\end{proof}
\section{Half integral weight modular forms} \label{forms}
In this section, we consider only levels $4N$ where $N$ is odd and square-free. We use the results of Miller and Pixton ~\cite{MP} to show that the traces $\text{Tr}_{\nu}^{(N)}(D)$ are expressible in terms of coefficients of certain weakly holomorphic modular forms of weight $\frac{3}{2}$.
For positive integers $\lambda, \nu, N$ the integral weight Niebur Poincar\'{e} series are defined by \cite{N}
$$\mathfrak{F}_{\lambda, N,\nu}(z):= \pi \nu^{\lambda - 1}\sum_{A\in \Gamma_{\infty}\backslash \Gamma_0(N)} \text{Im}(\nu Az)^{\frac{1}{2}}I_{\lambda - \frac{1}{2}}(2\pi \text{Im}(\nu Az))e(-\text{Re}(\nu Az)),$$ where $I_s(x)$ is the usual modified Bessel function of the first kind, and $\Gamma_{\infty} : = \left\{\left(\begin{array}{cc} 1 & n \\ 0 & 1\end{array}\right) : n \in \mathbb{Z} \right\}$ denotes the stabilizer of infinity. They then construct half-integral weight Poincar\'{e} series whose coefficients describe the traces of the $\mathfrak{F}_{\lambda, N, \nu}(z)$.
We first relate $\mathfrak{F}_{1, N, \nu}(z)$ to degree $\nu$ polynomials in the Hauptmodul for level $N$. Let $\mu(n)$ be the M\"{o}bius function, defined by
\[\mu(n) = \begin{cases} (-1)^{t} & \text{if $n$ is square-free with $t$ prime factors} \\ 0 & \text{if $n$ is divisible by a square}.\end{cases}\]
It is not difficult to show that $\mu(n)$ is equal to the sum of the primitive $n$th roots of unity:
\[\mu(n) = \sum_{v \! \! \! \! \pmod{n}^*}e^{2\pi i v/n},\]
where the * indicates that the sum is taken over primitive residue classes mod $n$.
Let $\varphi(n)$ be Euler's totient function, and let $\zeta(s) = \sum_{c > 0} \frac{1}{c^s}$ be the Riemann-zeta function. The following identities will be useful for explicitly computing the constant term $\mathfrak{c}_{N, \nu}$ that appears in the subsequent Lemma \ref{relate}.
\begin{Lemma} \label{cool}
For any positive integer $n > 1$ and any real number $s > 1$, the following are true:
\begin{align*}
\sum_{c > 0} \frac{\mu(c)}{c^s} &= \frac{1}{\zeta(s)}
\intertext{and}
\sum_{c > 0} \frac{\mu(nc)}{(nc)^s} &= \frac{\mu(n)}{n^s - 1}\sum_{\substack{d \mid n \\ d \neq n}} \mu(d) \left(\sum_{c>0} \frac{\mu(cd)}{(cd)^s}\right).
\end{align*}
\end{Lemma}
\begin{proof}
The first identity is well-known and is proved by observing that the product of the two Dirichlet series satisfies $\zeta(s)\sum_{c > 0} \frac{\mu(c)}{c^s} = 1$ (because the convolution of $1$ and $\mu$ is $1$ for $c=1$ and $0$ for all other $c$).
To prove the second identity, we use the inclusion/exclusion principle to write
\begin{align*}
\sum_{c > 0} \frac{\mu(nc)}{(nc)^s} &= \sum_{(c, n) = 1}\frac{\mu(nc)}{(nc)^s} = \frac{\mu(n)}{n^s}\sum_{(c, n)=1}\frac{\mu(c)}{c^s} \\
&= \frac{\mu(n)}{n^s}\sum_{\substack{d \mid n \\ d \neq n}} \mu(d)\left(\sum_{c>0}\frac{\mu(dc)}{(dc)^s}\right) + \frac{1}{n^s}\sum_{c>0}\frac{\mu(nc)}{(nc)^s}.
\end{align*}
Rearranging gives the recursive identity.
\end{proof}
We now explicitly describe the relationship between Miller and Pixton's integral weight Poincar\'e series and special polynomials in Hauptmoduln.
\begin{Lemma} \label{relate}
For $N \in \{1, 3, 5, 7, 13\}$, and any positive integer $\nu$, we have
\[2\mathfrak{F}_{1, N, \nu}(z) = P_{\nu}^{(N)}(j^{(N)}(z)) + \mathfrak{c}_{N, \nu}, \]
where $P_{\nu}^{(N)}(x)$ is the polynomial defined in equation \eqref{defP} and
\[\mathfrak{c}_{N, \nu} =
\begin{dcases} 4\nu \pi^2 \sum_{d | \nu}\left(\sum_{\ell \mid d}
\frac{\varphi(d)\ell}{\varphi(\ell)d^2}
\left( \sum_{x \mid \frac{\nu}{d}, \ y \mid \frac{d}{\ell}} \mu(x)\mu(y)
\left(\sum_{c > 0} \frac{\mu(\frac{Nxy\ell}{(x, y\ell)}c)}{(\frac{Nxy\ell}{(x, y\ell)}c)^2} \right)\right)\right)
&\text{if $N \nmid \nu$} \\
4\nu \pi^2 \sum_{\substack{d | \nu \\ N \mid d}}
\left(\sum_{\substack{\ell \mid d \\ N \mid \ell}} \frac{\varphi(d)\ell}{\varphi(\ell)d^2}
\left(\sum_{x \mid \frac{\nu N}{d}, \ y \mid \frac{Nd}{\ell}} \mu(x)\mu(y)
\left(\sum_{c > 0} \frac{\mu(\frac{xy\ell}{(x, y\ell)}c)}{(\frac{xy\ell}{(x, y\ell)}c)^2}\right)\right)\right) & \text{if $N \mid \nu$.}
\end{dcases}\]
In particular, $\mathfrak{c}_{N, \nu}$ is a rational number.
\end{Lemma}
\begin{proof}
By Theorem 1.2 of ~\cite{MP}, $2\mathfrak{F}_{1, N, \nu}(z)$ differs from $P_{\nu}^{(N)}(j^{(N)}(z))$ by a constant. Since the constant term in the Fourier expansion of $P_{\nu}^{(N)}(j^{(N)}(z))$ is zero, the constant $\mathfrak{c}_{N, \nu}$ is equal to twice the constant term of $\mathfrak{F}_{1, N, \nu}(z)$. Using equation (17) of ~\cite{MP}, we determine that
\[\mathfrak{c}_{N, \nu} = 4\nu\pi^2 \sum_{c > 0} \frac{\mathcal{S}(0, -\nu; Nc)}{(Nc)^2},\]
where $\mathcal{S}(0, -\nu;c)$ is the Kloosterman sum
\[\mathcal{S}(0, -\nu; c) := \sum_{v \! \! \! \pmod{c}^*} e^{2\pi i \nu v/c}.\]
We re-write the Kloosterman sum in terms of the M\"{o}bius function as
\[ \mathcal{S}(0, -\nu; c) = \frac{\varphi(c)}{\varphi(\frac{c}{(c, \nu)})}\mu\left(\frac{c}{(c, \nu)}\right),\]
so that the constant term becomes
\begin{align*}
\mathfrak{c}_{N, \nu} &= 4\nu\pi^2\sum_{d \mid \nu} \left(\sum_{\substack{c > 0 \\ (Nc, \nu) = d}}\frac{\varphi(Nc)}{\varphi(\frac{Nc}{d})d^2} \cdot \frac{\mu(\frac{Nc}{d})}{(\frac{Nc}{d})^2}\right) \\
&= 4\nu\pi^2\sum_{d \mid \nu}
\left(\sum_{\ell \mid d} \frac{\varphi(d)\ell}{\varphi(\ell)d^2}
\left(\sum_{\substack{c > 0\\(Nc, \nu) = d \\ (d, \frac{Nc}{d}) = \ell}}
\frac{\mu(\frac{Nc}{d})}{(\frac{Nc}{d})^2} \right)\right).
\end{align*}
Recall $N = 1$ or a prime. The set $\{c > 0 : (Nc, \nu) = d\}$ can be written as
\begin{align}
\{c &> 0: (Nc, \nu) = d\} \notag \\
&=
\begin{dcases} \{c > 0 : (c, \nu) = d\} = \sum_{x \mid \frac{\nu}{d}} \mu(x)\{xdc : c > 0\} &\text{if $N \nmid \nu$} \\
\{c > 0: (c, \nu) = \tfrac{d}{N}\} = \sum_{x \mid \frac{\nu N}{d}}\mu(x)\{\tfrac{xdc}{N} : c > 0\} & \text{if $N \mid \nu$ and $N \mid d$} \\
\varnothing & \text{otherwise.} \label{decomp}
\end{dcases}
\end{align}
The set $\{c > 0: (\tfrac{Nc}{d}, d) = \ell\}$ can be described in a similar way. Taking the intersection of these sets yields
\begin{align*}
\{c >0 : (Nc, \nu) = d, (\tfrac{Nc}{d}, d) = \ell\} = \begin{dcases}
\sum_{x \mid \frac{\nu}{d}, \ y \mid \frac{d}{\ell}} \mu(x)\mu(y)\{\tfrac{dxy\ell}{(x, y\ell)}c : c > 0\} & \text{if $N \nmid \nu$} \\
\sum_{x \mid \frac{\nu N}{d}, \ y \mid \frac{Nd}{\ell}} \mu(x)\mu(y)\{\tfrac{dxy\ell}{N(x, y\ell)}c : c > 0 \} & \text{if $N \mid \nu, d, \ell$} \\
\varnothing & \text{otherwise.}
\end{dcases}
\end{align*}
Using this decomposition to re-write the inner sum gives the desired expression.
Since $\frac{1}{\zeta(2)} = \frac{6}{\pi^2}$, applying Lemma \ref{cool} shows that the constant term $\mathfrak{c}_{N, \nu}$ is always rational.
\end{proof}
We now introduce Miller and Pixton's sequence of half-integral weight Poincar\'{e} series, whose coefficients will describe traces of polynomials in Hauptmoduln. We restrict our attention to the Poincar\'{e} series of weight $\frac{3}{2}$, as the coefficients of these forms are sufficient to determine the traces of polynomials in Hauptmoduln. However, we note that, due to the duality properties relating the forms of weight $\frac{3}{2}$ and forms of weight $\frac{1}{2}$ (see Corollary 1.4 of ~\cite{MP}), this could also be accomplished by working with forms of weight $\frac{1}{2}$.
Following Miller and Pixton, for $s \in \mathbb{C}$ and $y \in \mathbb{R} - \{0\}$, we define
\[\mathcal{M}_{s}(y):= |y|^{-\frac{3}{4}}M_{\frac{3}{4}\text{sgn}(y), s - \frac{1}{2}}(|y|),\]
where $M_{\nu, \mu}(z)$ denotes the usual $M$-Whittaker function. For $m \geq 1$ with $m \equiv 0, 1 \pmod{4}$, let
\[\varphi_{-m, s}(z) := \mathcal{M}_s(-4\pi m \mathrm{Im}(z))e(-m\mathrm{Re}(z)).\]
Now, define the Poincar\'e series of weight $\frac{3}{2}$
$$\mathcal{F}_N(-m, s;z):=\sum\limits_{A\in \Gamma_{\infty}\backslash\Gamma_0(4N)} (\varphi_{-m, s}|_{\frac{3}{2}}A)(z),$$
which converges for $\mathrm{Re}(s) > 1$.
\begin{remark}
The slash operator for half-integral weight modular forms requires a slight modification. For $A = \left(\begin{array}{cc} \alpha & \beta \\ \gamma & \delta \end{array}\right) \in \Gamma_0(4)$, we define
\[(f \vert_{k} A) := \left(\left( \frac{\gamma}{\delta}\right)\epsilon_{\delta}^{-1}(\gamma z + \delta)^{\frac{1}{2}} \right)^{-2k}f(Az),\]
where
\[\epsilon_{\delta} := \begin{cases}1 & \text{if $ \delta \equiv 1 \pmod{4}$} \\ i & \text{if $\delta \equiv 3 \pmod 4.$} \end{cases}\]
\end{remark}
For the special value $s = \frac{3}{4}$, we define
$$F_{N}(-m;z):=\frac{3}{2}\mathcal{F}_{N}(-m, \tfrac{3}{4}; z)|\text{pr}_1$$
to be the projection of $\mathcal{F}_{N}(-m, \frac{3}{4};z)$ into the plus space using Kohnen's projection operator $\text{pr}_{\lambda}$ (defined in \cite{kohnen}). Note that this definition requires analytic continuation.
The $F_{N}(-m;z)$ are weak Maass forms. Let $\varpi(N)$ be the index of $\Gamma_0(N)$ in the full modular group, and let $\beta(s):=\int\limits_1^{\infty} t^{-3/2}e^{-st}dt$. Set
\[\delta_{\square}(m) := \begin{cases} 1 & \text{if $m$ is square,} \\ 0 & \text{otherwise.}\end{cases}\]
Theorem 2.1 of ~\cite{MP} shows that $F_N(-m;z)$ has a Fourier expansion of the form
\begin{equation} \label{Fexpansion}
F_{N}(-m;z)=q^{-m} + \! \! \! \! \sum\limits_{\substack{ n\geq 0 \\ n \equiv 0, 3 \! \! \! \! \pmod4}} \! \! \! \! b_{N}(-m;n)q^n
-\frac{3\delta_{\square}(m)}{2\pi\varpi(N)\sqrt{y}} \sum_{n = -\infty}^{\infty} \beta(4\pi n^2 y)q^{-n^2}
\end{equation}
and gives formulas for the coefficients $b_N(-m;n)$ as infinite sums of Kloosterman sums weighted by Bessel functions. The next lemma relates the forms $F_{N}(-m;z)$ to a family of weakly holomorphic modular forms.
Let $H_N(D)$ be the \textit{generalized class numbers} defined by
\begin{equation}
H_N(D) := \sum_{Q \in \mathcal{Q}_D^N/\Gamma_0(N)} \frac{1}{w_{Q, N}} \qquad \text{and} \qquad H_N(0) := \frac{-\varpi(N)}{12}.
\end{equation}
The numbers $H_{1}(D)$ are well-known to be coefficients of Zagier's non-holomorphic Eisenstein series of weight $\frac{3}{2}$, defined by
\begin{equation} \label{defG}
G(z):=\sum\limits_{n=0}^{\infty} H_1(n)q^n+ \frac{1}{16\pi\sqrt{y}}\sum\limits_{n=-\infty}^{\infty}\beta(4\pi n^2y)q^{-n^2}.\end{equation}
\begin{Lemma} \label{defFtilde}
Let $N \in \{1, 3, 5, 7, 13\}$ and let $m$ be a positive integer satisfying $m \equiv 0, 1 \pmod{4}$. Then
\begin{equation*}
\widetilde{F}_N(-m;z) := F_N(-m;z) + \frac{24\delta_{\square}(m)}{\varpi(N)}\sum_{n=0}^{\infty}H_1(n)q^n
\end{equation*}
is a weakly holomorphic modular form of weight $\frac{3}{2}$ on $\Gamma_0(4N)$. If $N > 1$, then
\[\widetilde{F}_N(0;z) := \sum_{n=0}^{\infty}(2H_1(n) - H_N(n))q^n\]
is a holomorphic modular form of weight $\frac{3}{2}$ on $\Gamma_0(4N)$.
\end{Lemma}
\begin{proof}
Looking at equations \eqref{Fexpansion} and \eqref{defG}, we see that the non-holomorphic parts of $F_N(-m;z)$ and $\frac{24\delta_{\square}(m)}{\varpi(N)}G(z)$ cancel each other, so $F_N(-m;z) + \frac{24\delta_{\square}(m)}{\varpi(N)}G(z)$ is a weakly holomorphic modular form equal to $\widetilde{F}_N(-m;z)$.
Arguing as in Chapter 2 of ~\cite{HZ}, one can confirm modularity properties of the $H_N(D)$ for general $N$ by finding further non-holomorphic Eisenstein series with coefficients equal to $H_N(D)$ whose non-holomorphic parts are also period integrals of the $\theta$-function. Subtracting these forms from $2G(z)$ cancels the non-holomorphic parts, leaving a holomorphic form equal to $\widetilde{F}_N(0;z)$.
\end{proof}
Let $\widetilde{b}_N(-m;n)$ be the coefficient of $q^n$ in $\widetilde{F}_N(-m;z)$ so that
\[\widetilde{F}_N(-m;z) = q^{-m} + \sum_{n \equiv 0, 3 \! \! \! \! \pmod{4}}\widetilde{b}_N(-m;n)q^n.\]
The following theorem describes traces of polynomials in Hauptmoduln in terms of the coefficients $\widetilde{b}_N(-m;n)$. It is a more-explicit version of Theorem 1.1 of \cite{MP}, and generalizes Theorem 1.2 of \cite{BO}.
This makes it possible to calculate these traces from the weakly holomorphic modular forms $\widetilde{F}_N(-m; z)$ given in the Appendix.
\begin{Theorem}
Let $N \in \{1, 3, 5, 7, 13\}$. For each positive integer $D$ with $D \equiv 0, 3 \pmod 4$, we have
\[\mathrm{Tr}_{\nu}^{(N)}(D) =-\nu \sum\limits_{d|\nu} \frac{1}{d}\left(\widetilde{b}_{\frac{N}{(N,d)}}(\tfrac{-\nu^2}{d^2};D) - \frac{24}{\varpi(N)}H_{1}(D)\right) - H_N(D)\mathfrak{c}_{N, \nu}.\]
Furthermore, $\widetilde{F}_N(0;z)$ and $\widetilde{F}_N(-1;z)$ satisfy the identities listed in the Appendix.
\end{Theorem}
\begin{proof}
Using Theorem 1.1 of ~\cite{MP} together with Lemma \ref{relate}, we find
\begin{align*}
\mathrm{Tr}_{\nu}^{(N)}(D) &=
\sum_{Q \in \mathcal{Q}_D^N/\Gamma_0(N)} \! \! \frac{P_{\nu}^{(N)}(j^{(N)}(\alpha_Q)) }{w_{Q, N}} \\
&= 2 \! \! \sum_{Q \in \mathcal{Q}_D^N/\Gamma_0(N)} \frac{\mathfrak{F}_{1, N, \nu}(\alpha_Q)}{w_{Q, N}} - \! \! \sum_{Q \in \mathcal{Q}_D^N/\Gamma_0(N)} \! \! \frac{\mathfrak{c}_{N, \nu}}{w_{Q, N}} \\
&= -\nu \sum\limits_{d|\nu} \frac{1}{d}b_{\frac{N}{(N,d)}}(\tfrac{-\nu^2}{d^2};D) - H_N(D)\mathfrak{c}_{N, \nu}.
\end{align*}
Applying the definition of $\widetilde{F}_N(-m;z)$ in Lemma \ref{defFtilde} gives the above formula.
Hauptmoduln take on algebraic-integer values at Heegner points, so when $\nu = 1$ the left-hand side of the formula for the trace is $\frac{1}{2}$ or $\frac{1}{3}$ times an integer. Since the constants appearing on the right-hand side are rational with bounded denominator, so are the coefficients $\widetilde{b}_N(-1;n)$. Since the Kloosterman sums in Miller and Pixton's Theorem 2.1 converge, the coefficients of $\widetilde{F}_N(-1;z)$ can be determined from these formulas by taking sufficiently large partial sums. A standard argument then shows that the identities in the Appendix hold after comparing finitely many coefficients. Since $\widetilde{F}_N(-1;z)$ is a weakly holomorphic modular form, multiplying by an appropriate cusp form, for example $\eta(4z)^{6}$, lands the product in a space of holomorphic modular forms. These spaces are finite-dimensional, so one need only check that finitely many coefficients in their $q$-expansions agree.
The number of coefficients one has to check is determined by the dimension formulas given in Theorem 1.34 of ~\cite{ken}.
\end{proof}
\begin{remark}
It should be pointed out that the theory of half-integral weight modular forms is particularly difficult for levels $4N$ when $N$ is even or not square-free. However, for $N=2$ we observe that the definitions in Lemma \ref{defFtilde} still give weakly holomorphic modular forms (equal to those listed in the Appendix), and the formula for the traces in the above theorem still holds.
\end{remark}
\section{Algorithm for Computing $\mathcal{H}_D^{(N)}(x)$} \label{algsec}
Recall that for $\alpha_Q$ a Heegner point of level $N$ and discriminant $-D$, the minimal polynomial of $j^{(N)}(\alpha_Q)$ is given by
$$\mathcal{H}_D^{(N)}(x)=\prod_{Q\in\mathcal{Q}_D^N/\Gamma_0(N)} (x-j^{(N)}(\alpha_Q)).$$
It is clear from this description that the coefficients of $\mathcal{H}_D^{(N)}(x)$ are the elementary symmetric polynomials in the $j^{(N)}(\alpha_Q)$. The following algorithm explains how to compute these class polynomials for $N = 1, 2, 3, 5, 7, 13$ and $-D$ a fundamental discriminant.
\vspace{.1in}
\noindent
\textbf{Algorithm}
\begin{enumerate}
\item Recursively generate the weakly holomorphic modular forms $\widetilde{F}_N(-m; z)$ for $1 \leq m \leq |\mathcal{Q}_D^N/\Gamma_0(N)|^2$ using the forms $\widetilde{F}_{N}(0;z)$ and $\widetilde{F}_N(-1;z)$ given in the Appendix.\label{comp}
\item Use Theorem \ref{traces} to calculate the traces $\text{Tr}_{\nu}^{(N)}(D)$ for $1 \leq \nu \leq |\mathcal{Q}_D^N/\Gamma_0(N)|$ by plugging in the appropriate coefficients $\widetilde{b}_N(-m;D)$ and constants $H_N(D)$, $H_1(D),$ and $\mathfrak{c}_{N, \nu}$ (and multiplying by $2$ or $3$ if $D = 3$ or $4$ respectively). \label{plugin}
\item Use the generating function in equation \eqref{defP} to obtain the coefficients of $P_{\nu}^N(x)$ for each $1 \leq \nu \leq |\mathcal{Q}_D^N/\Gamma_0(N)|$, and diagonalize to determine the values of the power sums $\sum (j^{(N)}(\alpha_Q))^{\nu}$.
\item Apply the Newton-Girard formulae
to recursively determine the elementary symmetric polynomials in $j^{(N)}(\alpha_Q)$.
\end{enumerate}
\begin{proof}[Proof of Algorithm]
Step \ref{comp} can be accomplished by multiplying $\widetilde{F}_N(-m+4,z)$ by $j^{(N)}(4z)$ to obtain a non-trivial linear combination of $\widetilde{F}_N(-m;z)$ and $\widetilde{F}_N(-n;z)$ for $n<m$. Subtracting off appropriate multiples of $\widetilde{F}_N(-n;z)$ for $0 \leq n < m$ leaves a form with the same non-positive Fourier coefficients as $\widetilde{F}_N(-m;z)$. The two forms must then be equal because their difference is in $M_{\frac{3}{2}}(\Gamma_0(4N))$ with a zero constant term and hence zero.
In step \ref{plugin}, the constant $\mathfrak{c}_{N, \nu}$ can be computed from the formula in Lemma \ref{relate}, using the identities in Lemma \ref{cool} to express the infinite sums in terms of reciprocals of the Riemann-zeta function.
Finally, given the $i$th power sums $p_i(x_1, \ldots, x_n) = x_1^i + \ldots + x_n^i$, the Newton-Girard formulae recursively determine the elementary symmetric polynomials $e_k(x_1, \ldots, x_n)$ by
\[ke_k(x_1, \ldots, x_n) = \sum_{i=1}^k(-1)^{i-1}e_{k-i}(x_1, \ldots, x_n)p_i(x_1, \ldots, x_n). \qedhere \]
\end{proof}
\section{Explicit Numerical Examples}
We show how to compute the class polynomial for level $7$ and discriminant $-20$.
The following table lists representatives of binary quadratic forms for this level and discriminant along with an approximation (which required 1500 coefficients) of the Hauptmodul $j^{(7)}(z)$ evaluated at their roots:
\begin{center}
\begin{tabular}{|c|c|}
\hline
$Q\in\mathcal{Q}^7_{20}/\Gamma_0(7)$ & approximation of $j^{(7)}(\alpha_Q)$\\
\hline
$14x^2 + 6xy + y^2$ & $-4.1458\ldots + i1.2360\ldots $\\
\hline
$21x^2 + 8xy + y^2$ & $-4.1458\ldots - i1.2360\ldots $\\
\hline
$7x^2 + 6xy + 2y^2$ & $-10.8541\ldots + i3.2360\ldots $\\
\hline
$63x^2 + 22xy + 2y^2$ & $-10.8541\ldots -i 3.2360\ldots $ \\
\hline
\end{tabular}
\end{center}
Using the seed forms given in the Appendix, we recursively compute the Fourier expansions of $\widetilde{F}_7(-m;z)$ for $m \leq 16$ by the method described in the proof of the algorithm.
In particular, we find the coefficients
\[\widetilde{b}_7(-1;20) = 22 \qquad \widetilde{b}_7(-4;20) = -26 \qquad \widetilde{b}_7(-9;20) = 78 \qquad \widetilde{b}_7(-16;20) = 338.\]
Next, we compute the constants $\mathfrak{c}_{7, \nu}$ for $1 \leq \nu \leq 4$ using the formula in Lemma \ref{relate} along with the identities in Lemma \ref{cool} to evaluate the infinite sums:
\[\mathfrak{c}_{7,1}=-\tfrac{1}{2} \qquad \mathfrak{c}_{7,2}=-\tfrac{3}{2} \qquad \mathfrak{c}_{7,3}=-2 \qquad \mathfrak{c}_{7,4}=-\tfrac{7}{2}.\]
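These constants can be checked independently: the following short Python sketch (an ad hoc illustration, not taken from any existing code) evaluates the first branch of the formula in Lemma \ref{relate}, carrying the sums $\sum_{c>0}\mu(nc)/(nc)^2$ of Lemma \ref{cool} as exact rational multiples of $1/\pi^2$, and reproduces the four values above.
\begin{verbatim}
# Sketch: evaluate c_{N,nu} of Lemma "relate" when N does not divide nu.
# S(n) denotes sum_{c>0} mu(nc)/(nc)^2, stored as the rational multiple
# of 1/pi^2 given by Lemma "cool", with S(1) = 1/zeta(2) = 6/pi^2.
from fractions import Fraction
from functools import lru_cache
from math import gcd

def mu(n):                      # Moebius function by trial division
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def phi(n):                     # Euler's totient function
    res, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            res -= res // p
        p += 1
    if m > 1:
        res -= res // m
    return res

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

@lru_cache(maxsize=None)
def S(n):                       # coefficient of 1/pi^2 in sum mu(nc)/(nc)^2
    if n == 1:
        return Fraction(6)
    return Fraction(mu(n), n * n - 1) * sum(mu(d) * S(d)
                                            for d in divisors(n) if d != n)

def c(N, nu):                   # first branch of Lemma "relate" (N does not divide nu)
    total = Fraction(0)
    for d in divisors(nu):
        for l in divisors(d):
            weight = Fraction(phi(d) * l, phi(l) * d * d)
            inner = sum(mu(x) * mu(y) * S(N * x * y * l // gcd(x, y * l))
                        for x in divisors(nu // d) for y in divisors(d // l))
            total += weight * inner
    return 4 * nu * total       # the factors of pi^2 cancel

print([c(7, nu) for nu in (1, 2, 3, 4)])   # expect -1/2, -3/2, -2, -7/2
\end{verbatim}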
We have $\frac{24}{\varpi(7)}H_{1}(20) = 6$ and $H_{7}(20) = 4$. Applying the formula in Theorem \ref{traces}, we obtain the traces
\begin{align*}
\text{Tr}_{1}^{(7)}(20) =-14 \qquad \text{Tr}_{2}^{(7)}(20) =54 \qquad \text{Tr}_{3}^{(7)}(20) =-224 \qquad \text{Tr}_{4}^{(7)}(20) =-1266.
\end{align*}
The polynomials $P_{\nu}^{(7)}(x)$ can be determined explicitly by expanding equation \eqref{defP} in a formal power series over the polynomial ring in $x$ and taking the coefficient of $q^{\nu}$:
\begin{align*}
P_{1}^{(7)}(x) = x+4 \qquad P_{2}^{(7)}(x)& = x^2+8x+12 \qquad P_{3}^{(7)}(x) = x^3+12x^2+42x+16 \\
P_{4}^{(7)}(x) &= x^4+16x^3+88x^2+160x+28
\end{align*}
The power sums can now be determined by diagonalization, giving
\begin{align*}
\sum_{Q \in \mathcal{Q}_{20}^7/\Gamma_0(7)}j^{(7)}(\alpha_Q) &= -30 &\qquad
\sum_{Q \in \mathcal{Q}_{20}^7/\Gamma_0(7)}(j^{(7)}(\alpha_Q))^2 &= 246 \\
\sum_{Q \in \mathcal{Q}_{20}^7/\Gamma_0(7)}(j^{(7)}(\alpha_Q))^3 &= -1980 &\qquad
\sum_{Q \in \mathcal{Q}_{20}^7/\Gamma_0(7)}(j^{(7)}(\alpha_Q))^4 &= 13454.
\end{align*}
Finally, the elementary symmetric polynomials can be recovered recursively using the Newton-Girard formulae, thus determining the class polynomial:
$$\mathcal{H}_{20}^{(7)}(x)=x^4+30x^3 +327x^2+1470x+2401.$$
We factor this polynomial as
\begin{align*}
&\left(x + \tfrac{15}{2} - \tfrac{3\sqrt{5}}{2} - i\sqrt{2(3 - \sqrt{5})}\right)
\left(x + \tfrac{15}{2} - \tfrac{3\sqrt{5}}{2} + i\sqrt{2(3 - \sqrt{5})}\right) \\
&\qquad \qquad \qquad \times \left(x + \tfrac{15}{2} + \tfrac{3\sqrt{5}}{2} + i\sqrt{2(3+\sqrt{5})}\right)
\left(x + \tfrac{15}{2} + \tfrac{3\sqrt{5}}{2} - i\sqrt{2(3+\sqrt{5})}\right),
\end{align*}
confirming the numerical approximations at the beginning of the section.
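The last two steps of the algorithm are equally easy to automate. A minimal Python sketch (an illustration only) that recovers the elementary symmetric functions, and hence the class polynomial above, from the four power sums is the following.
\begin{verbatim}
from fractions import Fraction

def elementary_from_power_sums(p):
    # Newton-Girard: k*e_k = sum_{i=1..k} (-1)^(i-1) e_{k-i} p_i, with e_0 = 1
    e = [Fraction(1)]
    for k in range(1, len(p) + 1):
        e.append(sum((-1) ** (i - 1) * e[k - i] * p[i - 1]
                     for i in range(1, k + 1)) / k)
    return e[1:]

# power sums of the j^(7)(alpha_Q) computed above
p = [-30, 246, -1980, 13454]
e = elementary_from_power_sums(p)

# class polynomial x^4 - e_1 x^3 + e_2 x^2 - e_3 x + e_4
coefficients = [1] + [(-1) ** k * e[k - 1] for k in range(1, 5)]
print([int(a) for a in coefficients])      # [1, 30, 327, 1470, 2401]
\end{verbatim}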
\section{Appendix}
Here we give explicit closed formulas for the seed functions $\widetilde{F}_{N}(0;z)$ and $\widetilde{F}_{N}(-1;z)$ for $N = 1, 2, 3, 5, 7, 13$. We write $E_4(z)$ for the Eisenstein series of weight $4$ and $E_2(z)$ for the Eisenstein series of weight $2$. Also, let
\[\theta(z) = \sum_{n \in \mathbb{Z}}q^{n^2} \qquad \text{and} \qquad \theta_1(z) = \sum_{n \in \mathbb{Z}}(-1)^nq^{n^2}\]
be the standard theta functions.
When the space of cusp forms of weight $k$ and level $N$ is one-dimensional, we will write $S_{k}^{(N)}(z)$ for the unique normalized form in this space. We also require the following forms:
\begin{align*}
S_2^{(26, +)}(z)& &&\text{the sum of the two newforms of weight $2$ on $\Gamma_0(26)$} \\
S_2^{(26, -)}(z)& &&\text{the difference of the two newforms of weight $2$ on $\Gamma_0(26)$ which} \\
&&&\text{begins $-2q^2 + 4q^3 - 2q^5 + \ldots$} \\
S_4^{(13, 1)}(z) \ & &&\text{the newform of weight $4$ on $\Gamma_0(13)$ with rational coefficients} \\
S_4^{(13, +)}(z)& &&\text{the sum of the two newforms of weight $4$ on $\Gamma_0(13)$ with coefficients} \\
&&&\text{in the field with defining polynomial $x^2 - x - 4$.}
\end{align*}
We can now list the following forms:
\begin{align*}
\widetilde{F}_{2}(0; z) &= \frac{24E_2(8z) - 14E_2(4z) + 3E_2(2z) - E_2(z)}{144 \theta(z)} \\
\widetilde{F}_{3}(0;z) &= \frac{24E_2(12z) - 9E_2(6z) - 8E_2(4z) + 3E_2(3z) + 3E_2(2z) - E_2(z)}{72\theta(z)} \\
\widetilde{F}_{5}(0;z) &=
\frac{40E_2(20z)- 15E_2(10z) + 5E_2(5z)- 8E_2(4z) + 3E_2(2z) -E_2(z) + 24S_2^{(20)}(z)}{72\theta(z)} \\
\widetilde{F}_{7}(0;z) &= \frac{12E_2(28z) -11E_2(14z)+13E_2(7z)+4E_2(4z) -7E_2(2z) +E_2(z)}{24\theta(z)} \\
&\qquad - \frac{\eta(2z)^7\eta(14z)^7}{2\theta(z)\eta(z)^3\eta(4z)^2\eta(7z)^3\eta(28z)^2}
+ \frac{5\eta(z)\eta(4z)^2\eta(14z)^{17}}{2\theta(z)\eta(2z)^3\eta(7z)^7\eta(28z)^6} \\ \\
\widetilde{F}_{13}(0;z) &= \frac{104E_2(52z)-39E_2(26z) + 13E_2(13z) - 8E_2(4z) + 3E_2(2z) - E_2(z)}{72\theta(z)} \\
&\qquad + \frac{2S_2^{(26, +)}(z) + 2S_2^{(26, -)}(2z) + S_2^{(52)}(z)}{3\theta(z)}
\\
\widetilde{F}_{1}(-1;z) &= \theta_1(z) \frac{E_4(4z)}{\eta(4z)^6} \\
\widetilde{F}_{2}(-1;z) &= \theta_1(z) \frac{16E_4(8z) - E_4(4z)}{15\eta(4z)^6} + 16\widetilde{F}_{2}(0; z) \\
\widetilde{F}_{3}(-1;z) &= \theta_1(z) \frac{81E_4(12z) - E_4(4z)}{80\eta(4z)^6} + 9\widetilde{F}_{3}(0;z)
\\
\widetilde{F}_{5}(-1;z) &= \theta_1(z) \frac{\eta(4z)^4}{\eta(20z)^2} + 5\widetilde{F}_{5}(0;z)
\end{align*}
\begin{align*}
\widetilde{F}_{7}(-1;z) &= \theta_1(z) \frac{2401E_4(28z) - E_4(4z) - 11760S_4^{(7)}(4z)}{2400\eta(4z)^6} + \frac{7}{2}\widetilde{F}_{7}(0;z) \\
\widetilde{F}_{13}(-1;z) &=
\frac{\theta_1(z)}{\eta(4z)^6}
\left(-\frac{137E_4(4z)}{14280} -\frac{ 2197E_4(52z)}{3570} + \frac{13(13E_2(52z) - E_2(4z))^2}{1152}\right) \\
&\qquad -\frac{\theta_1(z)}{\eta(4z)^6}\left( \frac{39S_4^{(13, 1)}(4z)}{14} + \frac{143S_4^{(13, +)}(4z)}{34}\right) + \frac{13}{7}\widetilde{F}_{13}(0;z)
\end{align*}
\begin{remark}
There is no holomorphic modular form of weight $\frac{3}{2}$ when $N = 1$, so $\widetilde{F}_{1}(-4;z)$ is required to recursively generate the $\widetilde{F}_1(-m;z)$ for all $m$. Zagier explains how to obtain $\widetilde{F}_{1}(-4;z)$ from $\widetilde{F}_1(-1;z)$ in the discussion preceding Theorem 4 of ~\cite{traces}.
\end{remark}
\bibliographystyle{plain}
\section{Introduction}
\subfile{introduction.tex}
\section{GNA Architecture}
The computation process in GNA is represented by a directed graph in which nodes represent functions and edges
represent the data flow. Nodes are called transformations; a transformation is an abstraction layer over a \CC{} function. Transformations may have
inputs (arguments) and have at least one output (return values). They typically operate on data arrays.
A computational graph describes how transformations interact with each other. Because transformations are encapsulated and
have universal interfaces, a high degree of flexibility is achieved.
Data analysis in GNA consists of two stages:
\begin{enumerate}
\item Configuration stage on which the computational graph is created.
\item Computational stage on which graph is evaluated.
\end{enumerate}
In the first stage the transformation instances are created, and outputs and inputs are bound together. This step is done only
once within Python; it is flexible, but may be inefficient. The actual calculation happens in the second stage.
Calculations are done within compiled \CC{} code and are usually executed repeatedly.
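The split between the two stages can be illustrated by a toy example written in plain Python with NumPy (deliberately not the actual GNA interface): a small chain of callables is assembled once, and only the compact prediction step is executed repeatedly.
\begin{verbatim}
import numpy as np

# --- configuration stage: assemble the chain once (flexible, possibly slow) ---
edges = np.linspace(1.0, 10.0, 301)              # histogram bin edges
centers = 0.5 * (edges[1:] + edges[:-1])

def make_prediction(norm):
    flux = np.exp(-centers / 3.0)                # toy antineutrino spectrum
    xsec = 1e-3 * centers**2                     # toy cross section
    def prediction():                            # the "graph" evaluated later
        return norm * flux * xsec
    return prediction

# --- computational stage: evaluate the assembled chain repeatedly ------------
predict = make_prediction(norm=1.0)
for _ in range(1000):                            # e.g. iterations of a minimizer
    spectrum = predict()
\end{verbatim}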
\begin{figure}[tb]
\centering
\begin{tikzpicture}
\begin{umlsystem}[x=4, fill=red!10]{GNA}
\umlusecase{User Interface}
\umlusecase[y=-2]{PyRoot}
\umlusecase[y=-4]{Common code}
\end{umlsystem}
\umlnote[x=-3]{usecase-1}{Python}
\umlnote[x=-3, y=-4]{usecase-3}{\CC{}}
\umlassoc{usecase-1}{usecase-2}
\umlassoc{usecase-2}{usecase-3}
\umlactor[y=-2]{user}
\umluniassoc{user}{usecase-1}
\umluniassoc{user}{usecase-3}
\end{tikzpicture}
\caption{GNA architecture schematic diagram.}
\label{fig:mytikz}
\end{figure}
The generalized scheme of the framework is shown in Figure~\ref{fig:mytikz}. GNA has a Python user interface (UI) that is used
for building computation chains. The implementation of all transformations and the way they interact are written in
\CC{}. These two parts are linked via PyRoot.
The user may manage the computational process by using transformations already implemented in GNA. Transformations may also be
written by users themselves and added to the framework environment.
\subsection{Transformation}
A transformation is an encapsulated wrapper for a function that converts input data into output.
\begin{figure}
\centering
\begin{subfigure}{0.5\linewidth}
\centering
\input{trans-type1.tex}
\caption{}
\label{fig:test1}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\input{trans-type2.tex}
\caption{}
\label{fig:test2}
\end{subfigure}
\begin{subfigure}{0.5\linewidth}
\centering
\input{trans-type3.tex}
\caption{}
\label{fig:test3}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\input{trans-type4.tex}
\caption{}
\label{fig:test4}
\end{subfigure}
\caption{Example of transformation kinds.
Intermediate transformation (\subref{fig:test1}) with a single input and multiple outputs.
Initial transformation (\subref{fig:test2}) with multiple outputs.
Intermediate transformation (\subref{fig:test3}) with multiple inputs and single output.
Intermediate transformation (\subref{fig:test4}) with single input and single output.
}
\label{fig:trasfs}
\end{figure}
Figure~\ref{fig:trasfs} schematically displays several kinds of transformations. Transformations may or
may not have inputs (marked by arrows on the left side) and must have at least one output (marked by arrows on the right
side). Inputs and outputs generally refer to data arrays. In addition to inputs, a transformation may also depend on
variables. A variable is a small input data type which usually refers to a single number.
Actual data is allocated on the transformation outputs.
Input data cannot be changed inside a transformation: an input is a read-only view of the output it is
connected to. This ensures that data will not be modified by subsequent transformations after it has been computed.
A transformation is computed only once and the
result may be used multiple times afterwards. It is re-computed only if any of the variables or inputs it depends on have been modified.
There is a set of predefined transformations implemented in the GNA framework. Because transformations are independent from each
other the set may be straightforwardly extended by the users. The guidelines on how to do this are provided in the framework
documentation~\cite{gnadoc}.
The typical computational chain that produces a prediction for reactor antineutrino experiments contains hundreds of
nodes and is evaluated within a time frame on the order of 0.1 seconds to seconds. The prediction is a histogram with 300 bins and
depends overall on 250 independent parameters. The prediction is then used in the process of multidimensional
minimization, which takes around 30 minutes for 15 free parameters or around 6 hours for all the model parameters, most
of which are constrained. Statistical analysis requires repeated minimization and may take several days to evaluate
confidence intervals. MC based methods, such as Feldman-Cousins, require millions of minimization procedures and may take
months when executed on a cluster. The framework is also suitable for building more complex graphs with evaluation times
on the order of seconds to hours.
\subsection{Computational graph}
\begin{figure}[h]
\centering
\input{comp-chain.tex}
\caption{Schematic example of the GNA computational graph.}
\label{fig:comp-gr}
\end{figure}
A computational graph is formed by a chain of transformations with inputs connected to outputs. Figure~\ref{fig:comp-gr} displays
a simple example of such a graph. This scheme shows that the same output may be connected to any number of inputs. The graph may be
configured in an arbitrary way, as long as the data types of the outputs are compatible with the requirements of the transformations they are connected to.
The graph is constructed using Python. Users describe the way transformations are chained via Python script or from the
command line interface. The result of any transformation may be read at any moment through the Python interface.
Lazy evaluation means that the output of a transformation is computed on demand, when the output is read by a caller.
When the output of an intermediate transformation is accessed, only the preceding transformations are evaluated,
not the entire graph.
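The caching and lazy evaluation logic described above can be sketched with a short, self-contained Python example (an illustration only, not the actual GNA classes): each node stores its output and is re-evaluated only after it has been tainted by a change in one of its inputs or variables.
\begin{verbatim}
import numpy as np

class Node:
    """Transformation-like object: cached output, recomputed only when tainted."""
    def __init__(self, func, *inputs):
        self.func = func             # computes the output from the input data
        self.inputs = inputs         # upstream nodes
        self.observers = []          # downstream nodes notified on change
        self.tainted = True          # needs (re)computation
        self._data = None
        for node in inputs:
            node.observers.append(self)

    def taint(self):                 # propagate "modified" downstream
        self.tainted = True
        for node in self.observers:
            node.taint()

    def data(self):                  # lazy: compute on demand, only if tainted
        if self.tainted:
            self._data = self.func(*[n.data() for n in self.inputs])
            self.tainted = False
        return self._data

class Variable(Node):
    """Parameter-like node holding a single number."""
    def __init__(self, value):
        super().__init__(lambda: value)
    def set(self, value):            # changing a parameter taints dependants only
        self.func = lambda: value
        self.taint()

energy = Node(lambda: np.linspace(1.0, 10.0, 5))           # computed once
theta  = Variable(0.3)
prob   = Node(lambda e, t: np.sin(t)**2 * np.ones_like(e), energy, theta)

prob.data()        # evaluates energy and prob
theta.set(0.4)     # taints prob (and theta) but not energy
prob.data()        # recomputes prob only; energy is taken from the cache
\end{verbatim}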
\subsection{Parallelism opportunities}
Parallel computing is a well-known method to speed up the computational process. There are methods to achieve performance
increases on different levels. The most efficient and safe method is to divide input data into smaller independent datasets
and execute the analysis on a distributed system~\cite{ballintijn2003proof,gankevich2017subord}. However, in real-world cases
analysis of those datasets often takes a long time. Due to this fact, acceleration at the individual dataset level is also needed,
and may be implemented for multi-core CPUs or GPUs~\cite{iakushkin2017application}. In this paper we consider the
prospects for accelerating computations in GNA at the framework level using GPGPU.
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{figure1.pdf}
\caption{Neutrino oscillation probability calculation scheme. A part of JUNO computational graph.}
\label{fig:juno}
\end{figure}
Figure~\ref{fig:juno} shows a part of a computational graph for the JUNO experiment implementing the neutrino oscillation
probability calculation (see section~\ref{sec:oscprob}). There are multiple \texttt{OscProb}
transformation instances in the graph computing the neutrino oscillation probability for various
distances $L$,
each of them depending on a vector of neutrino energies $\vec{E}_\nu$. In most practical cases $\vec{E}_\nu$ may be
computed only once. \texttt{OscProb} transformation instances are independent from each other and are bound to different parameters
(variables) that may change their output. Parallel technologies are applicable to graphs with such a structure,
since no data writing collisions are possible.
The \texttt{OscProb} transformation, as well as most of the framework modules, provides multi-dimensional array operations
which are particularly suitable for multi-threaded systems such as GPUs or multi-core CPUs if their elements are
computed independently.
\section{CUDA overview}
CUDA (Compute Unified Device Architecture) is an architecture for parallel data processing for NVIDIA GPUs. The average
GPU has hundreds of times more threads compared to modern CPUs.
Threads run in parallel in a SIMT (Single Instruction, Multiple Threads)~\cite{lindholm2008nvidia} manner, as GPUs were
originally created for image processing --- a vivid example of SIMT algorithms.
The CUDA Toolkit~\cite{nvidia2007compute} has a set of specialized libraries optimized for their purposes, such as cuBLAS (linear algebra), cuRAND (random number generators),
cuDNN (deep neural networks), etc. It also provides high-level abstractions to manage computational processes on GPUs, and
low-level methods to tune it.
GPGPU's main performance limitations are memory allocation
and data transfers, as the co-processor is an independent physical device. The copying of data from Host (CPU and
RAM) to Device (GPU) or vice versa is slow. Nevertheless, GPGPU
is a powerful tool for accelerating algorithms in which the same
instruction is applied to each element of an array and the outputs are independent.
\section{GPU acceleration}
\subsection{Neutrino oscillation probability}
\label{sec:oscprob}
In this section we consider the possibility of achieving better performance for an individual transformation that calculates
the neutrino oscillation probability~\cite{giunti2007fundamentals}.
The general formula for oscillation probability in vacuum, the probability that neutrino flavor changes from $\nu_{\alpha}$ to
$\nu_{\beta}$ after travelling distance $L$, reads as follows:
\begin{multline*}
P(\nu_{\alpha} \rightarrow \nu_{\beta}) = \delta_{\alpha \beta} - 4 \sum_{i>j}\operatorname{Re}(V^*_{\alpha i} V_{\beta i} V_{\alpha j} V^*_{\beta j}) \sin^2 \frac{\Delta m^2_{ij} L}{4E_\nu}+ \\
+ 2 \sum_{i>j}\operatorname{Im}(V^*_{\alpha i} V_{\beta i} V_{\alpha j} V^*_{\beta j}) \sin \frac{\Delta m^2_{ij} L}{2E_\nu},
\end{multline*}
where $E_\nu$ denotes the neutrino energy, $L$ is the distance between the neutrino source and the detector, $V_{\alpha i}$ is a complex
unitary matrix called the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, and $\Delta m^2_{ij} = m_i^2 - m_j^2$ is the neutrino
mass splitting.
Within GNA the oscillation probability is implemented as a set of transformations, one for each term of the formula. Each
transformation input is a vector of neutrino energy values $\vec{E}_\nu$.
The computations for different energy values are identical and independent from each other,
therefore they can run in parallel on a GPU. It should be noted that the input array (neutrino energy), in most realistic
cases, is known beforehand and will be copied to the GPU only once while the computation is performed for different
oscillation parameter values.
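As an illustration of the kind of computation involved, the following NumPy sketch (not the GNA implementation; the parameter values are only illustrative) evaluates the $\bar{\nu}_e$ survival probability, the special case of the formula above relevant for reactor experiments, for a whole vector of energies at a fixed baseline.
\begin{verbatim}
import numpy as np

def survival_probability(E_MeV, L_m, dm2_21=7.5e-5, dm2_31=2.5e-3,
                         sin2_th12=0.307, sin2_th13=0.022):
    """Three-flavour vacuum P(nu_e -> nu_e); energies in MeV, baseline in m,
    mass splittings in eV^2 (parameter values here are only illustrative)."""
    d21 = 1.267 * dm2_21 * L_m / E_MeV            # oscillation phases
    d31 = 1.267 * dm2_31 * L_m / E_MeV
    d32 = d31 - d21
    sin2_2th12 = 4.0 * sin2_th12 * (1.0 - sin2_th12)
    sin2_2th13 = 4.0 * sin2_th13 * (1.0 - sin2_th13)
    cos4_th13 = (1.0 - sin2_th13) ** 2
    return (1.0
            - sin2_2th13 * ((1.0 - sin2_th12) * np.sin(d31) ** 2
                            + sin2_th12 * np.sin(d32) ** 2)
            - cos4_th13 * sin2_2th12 * np.sin(d21) ** 2)

E = np.linspace(1.8, 9.0, 10000)            # energy vector, computed once
P = survival_probability(E, L_m=53000.0)    # one OscProb-like node, one baseline
\end{verbatim}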
The following features were used to port the oscillation probability code to GPU:
\begin{itemize}
\item CUDA Streams~\cite{gomez2012performance},
\item datasets are divided into smaller sizes to organize overlapped execution,
\item asynchronous memory copying.
\end{itemize}
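A minimal sketch of how these three ingredients fit together is shown below. The chunk size, the number of streams and the placeholder kernel are assumptions made for the illustration; how the actual GNA code organizes the overlap around the oscillation probability kernels is not reproduced here.
\begin{verbatim}
// streams_overlap.cu -- schematic chunked execution with CUDA streams.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(const double* in, double* out, int n, double a)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a * in[i];  // stands in for the real per-element computation
}

int main()
{
    const int n = 1 << 20, nchunks = 4, chunk = n / nchunks;
    const size_t cbytes = chunk * sizeof(double);

    double *hIn, *hOut, *dIn, *dOut;
    cudaMallocHost((void**)&hIn,  n * sizeof(double));  // pinned memory is required
    cudaMallocHost((void**)&hOut, n * sizeof(double));  // for asynchronous copies
    cudaMalloc((void**)&dIn,  n * sizeof(double));
    cudaMalloc((void**)&dOut, n * sizeof(double));
    for (int i = 0; i < n; ++i) hIn[i] = i;

    cudaStream_t streams[nchunks];
    for (int c = 0; c < nchunks; ++c) cudaStreamCreate(&streams[c]);

    const int block = 256, grid = (chunk + block - 1) / block;
    for (int c = 0; c < nchunks; ++c) {
        const int off = c * chunk;
        // Copy-in, kernel and copy-out of different chunks overlap across streams.
        cudaMemcpyAsync(dIn + off, hIn + off, cbytes,
                        cudaMemcpyHostToDevice, streams[c]);
        scale<<<grid, block, 0, streams[c]>>>(dIn + off, dOut + off, chunk, 2.0);
        cudaMemcpyAsync(hOut + off, dOut + off, cbytes,
                        cudaMemcpyDeviceToHost, streams[c]);
    }
    cudaDeviceSynchronize();
    std::printf("out[0] = %f\n", hOut[0]);

    for (int c = 0; c < nchunks; ++c) cudaStreamDestroy(streams[c]);
    cudaFree(dIn); cudaFree(dOut); cudaFreeHost(hIn); cudaFreeHost(hOut);
    return 0;
}
\end{verbatim}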
After porting the oscillation probability the result was verified: the difference between the GPU
and CPU outputs is within the round-off accuracy of double precision floating point arithmetic.
\begin{table}[b]
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{p{7cm}r@{\hspace{1cm}}r}
\toprule
Input data size, elements & $10^4$ & $10^6$ \\
\midrule
CPU time / (GPU computing + transfer time) & 0.017 & 1.39\phantom{0} \\
CPU time / GPU computing-only time & 20.90\phantom{0} & 26.46\phantom{0} \\
\bottomrule
\end{tabular}
\vspace{10pt}
\caption{Benchmarks for the oscillation probability calculation on CPU and GPU with input vector sizes of $10^4$ and
$10^6$ elements.}
\label{tab:tab}
\end{table}
The results of the test with input energy vectors of $10^4$ and $10^6$ elements are presented in Table~\ref{tab:tab}. The
calculation is performed with double precision on an Intel Core i7-6700HQ CPU and an NVIDIA GeForce GTX 970M GPU.
It should be noted that a size of $10^4$ elements corresponds to the case of the JUNO experiment.
The first row contains the ratio of the full computation times of the CPU-only and GPU-oriented (including data transfer costs)
versions of the algorithm. The second row contains the ratio of the computation times only (without data transfer costs in the
GPU-based case).
When data transfer is taken into account, the acceleration for the $10^6$ sample size is not significant, while for the smaller
sample the acceleration is not enough to cover the data transfer overhead.
When data transfer is not taken into account, the achieved acceleration is at least $\times$20 compared to the CPU case.
Since the neutrino energy is computed only once and then stored, the latter is the more realistic estimate for this task.
The speed-up is expected to be more significant for larger datasets. At the same time, the data transfer overhead should
be considered and handled appropriately in any case.
It should also be noted that single precision floating point operations are typically much faster (by dozens of times)
on most GPUs than double precision ones, whereas on CPUs single precision is only about twice as fast.
Therefore a significant additional speed-up is expected in cases where single precision is sufficient.
\subsection{Computational chains with GPU-oriented transformations}
The original CPU computational scheme was modified in such a way
that switching between CPU- and GPU-oriented transformation modes is transparent to the end user. The transformation is
still a single object with two function definitions: one for the CPU and another for the GPU. On the UI side the GPU
computation is enabled by setting a single flag that changes the target device of the transformation and switches the active
function.
Thus users can work with the GPU mode of GNA without any special knowledge of GPGPU programming.
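The following \CC{} sketch illustrates the idea of such a dual-implementation transformation. The class and member names are invented for the example and do not correspond to the actual GNA interfaces; in particular, the ``GPU'' function below would launch a CUDA kernel in a real setting.
\begin{verbatim}
// Illustrative sketch of a transformation with CPU and GPU implementations.
#include <cstdio>
#include <functional>
#include <utility>
#include <vector>

enum class Device { CPU, GPU };

class Transformation {
public:
    using Func = std::function<void(const std::vector<double>&,
                                    std::vector<double>&)>;

    Transformation(Func cpu, Func gpu)
        : cpu_(std::move(cpu)), gpu_(std::move(gpu)) {}

    // A single flag, exposed to the user, selects the active implementation.
    void setDevice(Device d) { device_ = d; }

    void run(const std::vector<double>& in, std::vector<double>& out) const {
        (device_ == Device::GPU ? gpu_ : cpu_)(in, out);
    }

private:
    Func cpu_, gpu_;
    Device device_ = Device::CPU;   // the CPU implementation remains the default
};

int main()
{
    auto square = [](const std::vector<double>& in, std::vector<double>& out) {
        out.resize(in.size());
        for (size_t i = 0; i < in.size(); ++i) out[i] = in[i] * in[i];
    };
    // The same body is used twice here; a real GPU variant would launch a kernel.
    Transformation t(square, square);

    std::vector<double> x{1.0, 2.0, 3.0}, y;
    t.setDevice(Device::GPU);   // transparent switch requested by the user
    t.run(x, y);
    std::printf("y[2] = %f\n", y[2]);
    return 0;
}
\end{verbatim}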
In order to handle data transfer we implemented a \CC{} wrapper for the GPU array and defined several frequently used
mathematical operations. The portion of the framework that contains CUDA code is built as a separate shared library, and the
main code is built with this library as a dependency. In this way GPU functions may be called from the common \CC{} code.
GPU-related code may be switched off completely with a dedicated flag at compilation time.
\begin{figure}[tb]
\centering
\input{gpu-cpu-chain-copying.tex}
\caption{Schema of mixed (CPU and GPU) computational chain.}
\label{fig:cpu-gpu-chain}
\end{figure}
Since memory allocation is one of the GPGPU limitations, within GNA all required memory for both the GPU and the CPU is allocated
during the configuration stage to avoid extra time costs at runtime.
As described earlier, inputs are simply views of the data of the corresponding outputs of preceding transformations.
The same feature is implemented for the GPU arrays:
there is no additional allocation on the GPU for an input, as
it refers to the output it is bound to. The only exception to this rule is the first GPU-oriented transformation in the computational
subchain: extra GPU memory is
allocated for its inputs because the data has to be transferred from Host memory to
the Device.
We have extended the GNA internal data storage objects in order to maintain a synchronized copy of the Host data on the
Device. The synchronization is done in a lazy manner, i.e. a transfer happens only when data that is out of date on one side is read
there: Host data is updated from the Device when read on the Host, and vice versa.
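A minimal sketch of such lazy synchronization is given below. It is a simplified illustration with invented names, not the actual GNA storage implementation; error handling and the integration with the transformation graph are omitted.
\begin{verbatim}
// Simplified illustration of lazy Host/Device synchronization.
#include <cstddef>
#include <cuda_runtime.h>

class SyncedArray {
public:
    explicit SyncedArray(size_t n) : n_(n) {
        host_ = new double[n_]();
        cudaMalloc((void**)&dev_, n_ * sizeof(double)); // allocated once, at configuration time
    }
    ~SyncedArray() { delete[] host_; cudaFree(dev_); }

    // Writing on one side marks the copy on the other side as stale.
    double* hostWrite() { devStale_ = true;  return host_; }
    double* devWrite()  { hostStale_ = true; return dev_;  }

    // Reading triggers a transfer only if the requested side is stale.
    const double* hostRead() {
        if (hostStale_) {
            cudaMemcpy(host_, dev_, n_ * sizeof(double), cudaMemcpyDeviceToHost);
            hostStale_ = false;
        }
        return host_;
    }
    const double* devRead() {
        if (devStale_) {
            cudaMemcpy(dev_, host_, n_ * sizeof(double), cudaMemcpyHostToDevice);
            devStale_ = false;
        }
        return dev_;
    }

private:
    size_t  n_;
    double* host_ = nullptr;
    double* dev_  = nullptr;
    bool    hostStale_ = false, devStale_ = false;
};

int main()
{
    SyncedArray a(4);
    double* h = a.hostWrite();
    for (int i = 0; i < 4; ++i) h[i] = i;  // filled on the Host; the Device copy is now stale
    const double* d = a.devRead();         // the first Device read triggers the transfer
    (void)d;
    return 0;
}
\end{verbatim}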
Figure~\ref{fig:cpu-gpu-chain} shows the computation scheme in which the chain contains a subset of GPU-based
transformations. Only two data transfers between the Host and the Device take place in this case: one at the beginning of
the GPU subchain and one at its end. We
minimize communication between the Host and the Device in order to cut the time costs of data copying, since it is an expensive
operation. The status of the GPU function, which indicates whether or not it was executed successfully, is available on the Host side after the
transformation computation is finished. Device-to-Device data transfers may occur inside the transformation
implementations, but they are not costly.
\begin{figure}[tb]
\centering
\input{gpu-chain-reading.tex}
\caption{Reading an intermediate result from the GPU chain.}
\label{fig:gpu-read}
\end{figure}
Extra data transfers from the Device to the Host may be triggered by the user by reading the data at
any point of the computational chain, as shown in Figure~\ref{fig:gpu-read}. In this case a single extra
Device-to-Host transfer occurs; no backward transfer is needed. Since user-triggered reading typically occurs during debugging
or for plotting the data, the associated transfer overhead is not significant compared to the
actual data analysis.
\section{Future work}
The major shortcoming of the current GPU support implementation is the lack of fault tolerance. In the case of GPU
failure the computation will be aborted.
\begin{figure}[h!]
\centering
\input{fault-gpu.tex}
\caption{Computational process recovery on CPU after GPU fault.}
\label{fig:crash}
\end{figure}
We are planning to add a feature that switches the computation between the CPU and GPU modes automatically at runtime, as
shown in Figure~\ref{fig:crash}. The assumption is that slowing down the execution of the algorithm is preferable to aborting
it.
Another planned feature is the addition of checkpoints on the GPU side of the framework. This will
decrease the latency of recovering a computation that crashed on the GPU side and implies that the data will regularly be synchronized between
the Host and the Device. Since this may introduce additional overhead, the use and frequency of checkpoints will be configurable.
In order to use a GPU for the computational chain in a real analysis, a subset of the existing transformations has to be ported to the GPU.
Not every algorithm will be ported, however; the choice will be made based on an analysis of the computational chains of the
Daya Bay and JUNO experiments. Once a sufficient set of transformations is ported, we will benchmark the GPU-enabled
version of GNA on several realistic computational schemes with various configurations and floating point precision
settings.
Since the data transfer costs may negate the performance improvement of a GPU-enabled computational chain, the actual choice of
the configuration should be made and tested by the end user, based on the particular computational chain. Specialized
benchmarking tools will be implemented in GNA to simplify this task.
\section{Conclusion}
In this paper we describe the GPU support within the GNA framework, implemented via the CUDA architecture
and transparent to the end user. For the particular case of the neutrino oscillation probability it has been demonstrated that the
achieved acceleration may be of the order of $\times$20 for double precision floating point numbers.
While the realistic acceleration for large computational chains may be lower and may depend on the particular chain, the
prospects look very promising. A significant improvement is expected when single precision is sufficient for the task,
since the acceleration obtained with single precision is usually much higher on GPUs than on CPUs.
The corresponding studies and benchmarks will be performed in further work.
The solutions to the major problems and limitations, such as memory allocation and data transfer, are also discussed.
\section*{Acknowledgements}
We are grateful to Chris Kullenberg for reading the manuscript and for valuable suggestions.
This research is supported by the Russian Foundation for Basic Research (projects
no.~18-32-00935 and 16-07-00886) and by the Association of Young Scientists and Specialists of Joint Institute for Nuclear
Research (grant no.~18-202-08).
The manuscript has been submitted to ICCSA 2018 (Lecture Notes in Computer Science, publisher: Springer Verlag).
\bibliographystyle{h-physrev}
|
1,314,259,995,649 | arxiv | \section{Introduction}
Let $X \to \De$ be a smooth projective family of complex manifolds over a disc. A celebrated result of Siu (see \cite{Siu98, Siu02}) asserts that the plurigenera $h^0(X_t, \Oo_{X_t}(mK_{X_t}))$ are constant functions of $t \in \De$ for each positive integer $m$. Results of this type have been investigated for a long time (see \cite{Kaw99b} for a survey) and have been generalized in many directions after Siu's work (see \cite{Pau07, Tak07, BB12, RT20}, etc.). Invariance of plurigenera is an important ingredient in the proof of the boundedness of moduli of general type varieties (see \cite[Theorem 1.8]{HMX13}, \cite[\S 2.3]{HMX18}). Moreover, Siu's method has a profound impact on problems such as deformations of canonical singularities (see \cite{Kaw99}) and the existence of good minimal models (see \cite{DHP13}).
Let $Y$ be a smooth projective variety and $\mu: Y \dto X$ be the canonical model of $Y$ (see \cite[Definition 3.6.5]{BCHM10}, \cite[Definition 2.1]{Li20b}). It is natural to ask whether $X$ (birationally) belongs to a bounded moduli space given $Y$ satisfying certain restrictions (see \cite{FS20}, \cite[Conjecture 1.2]{Li20a}). When $Y$ is of general type, then the answer is affirmative if $Y$ has a fixed dimension and a bounded volume (see \cite{HMX13, HMX18}). To study the general situation, one can adopt the same strategy as \cite{HMX13, HMX18}. In this case, the base variety $X$ admits a generalized polarized pair structure instead of a log pair structure (see Definition \ref{def: gpair}). Roughly speaking, there is a triple $(X, B+M)$ with $B, M$ divisors on $X$. Here $B \geq 0$ accounts for the singularities of the fibers, and $M$ is the push-forward of a nef and abundant divisor which accounts for the moduli of the fibers. It is this additional $M$ that ruins the straightforward applications of the results in \cite{HMX13, HMX18}. For one thing, $M$ is only well-defined up to linear equivalence.
More generally, if $Y \to X$ is a $K_Y$-trivial fibration, then $X$ also admits a generalized polarized pair structure. In fact, the concept of generalized polarized pairs originates from such observations (see \cite{Bir20}). Note that in the definition of generalized polarized pairs, the nef part is just assumed to be the push-forward of a nef divisor instead of a nef and abundant divisor. However, the latter case is more meaningful in geometry, and our main motivation is to study the invariance of plurigenera in such setting:
\begin{theorem}\label{thm: sing extension}
Let $\pi: X \to \De$ be a projective contraction from a complex space $X$ to the disc $\De$. Assume that $X$ has canonical singularities. Let $L$ be a Cartier divisor on $X$ and $h$ be a metric for $\Oo_X(L)$ with non-negative curvature current. Let $X_0\subset X$ be the fiber over $0\in\De$. Suppose that $X_0$ has canonical singularities and $h|_{X_0}$ is well-defined. Then for each $m\in \Nn$, any section of
\[
H^0(X_0, \Oo_{X_0}(mK_{X_0}+L|_{X_0}) \otimes \Gg_m(h|_{X_0}))
\] extends over $X$ (i.e. any section has a preimage in $H^0(X, \Oo_X(mK_X+L))$ under the map in Lemma \ref{le: meaning of extension}).
\end{theorem}
The newly introduced ideal sheaf $\Gg_m(h)$ is of a birational (or bimeromorphic) nature (see Section \ref{subsec: Multiplier ideal sheaves and sections}). Compared with previous studies, this birational point of view gives new ingredients even in the smooth case (see Example \ref{eg: not the same as multiplier ideal sheaf}, Remark \ref{rmk: Paun's original thm is not enough}). For the relevant notions of metrics on complex spaces which are adapted to this perspective, see Section \ref{sec: 3}.
Theorem \ref{thm: sing extension} involves metrics and $L^2$-conditions. In order to use it in algebraic geometry, we need to derive such analytic requirements from natural algebro-geometric conditions. The following statement is metric-free.
\begin{theorem}\label{thm: AG sing trivial boundary extension}
Let $\pi: X \to \De$ be a projective contraction from a complex space $X$ to the disc $\De$. Let $X' \xrightarrow{f} X \xrightarrow{\pi} \De$ be a generalized polarized pair with the boundary part $B$ and the abundant nef part $M$. Let $X_0 \subset X$ be the fiber over $0\in\De$ and $(X_0, B_0+M_0)$ be the generalized polarized pair obtained by restricting to $X_0$. Assume that $(X_0, B_0+M_0)$ has g-canonical singularities (in particular, $B_0=0$). Then for each $m\in\Nn$ such that $mM$ is Cartier, any section of
\[
H^0(X_0, \Oo_{X_0}(m(K_{X_0}+ M|_{X_0})))
\] extends over $X$.
\end{theorem}
A direct consequence of Theorem \ref{thm: AG sing trivial boundary extension} is
\begin{corollary}\label{cor: sing invariant of plurigenera for g-pair}
Let $\pi: X \to \De$ be a projective contraction from a complex space $X$ to the disc $\De$. Let $X' \xrightarrow{f} X \xrightarrow{\pi} \De$ be a generalized polarized pair with the boundary part $B$ and the abundant nef part $M$. Let $X_t\subset X$ be the fiber over $t\in\De$ and $(X_t, B_t+M_t)$ be the generalized polarized pair obtained by restricting to $X_t$. Assume that for any $t\in \De$, $(X_t, B_t+M_t)$ has g-canonical singularities (in particular, $B_t=0$). Then for each $m\in\Nn$ such that $mM$ is Cartier,
\[
h^0(X_t, \Oo_{X_t}(m(K_{X_t}+M|_{X_t})))
\] is independent of $t \in \De$.
\end{corollary}
Both the abundance assumption and the singularity assumption are indispensable. In fact, Corollary \ref{cor: sing invariant of plurigenera for g-pair} is false if $M$ is just assumed to be nef (see Example \ref{eg: M nef}). An example of Kawamata shows that the extension of local sections fails if the singularities are klt (see \cite[Example 4.3]{Kaw99b}).
It turns out that even in the smooth case, P\u aun's twisted version of invariance of plurigenera (see \cite[Theorem 1]{Pau07}) is not enough to obtain Theorem \ref{thm: AG sing trivial boundary extension} (see Remark \ref{rmk: Paun's original thm is not enough}). Instead, we need Theorem \ref{thm: sing extension} which allows potentially more sections to be extended.
Besides, \cite[Theorem 4.1]{FS20} establishes a version of invariance of plurigenera assuming that $K_X+B+M$ or $B+M$ is big over $\De$ and $M$ is nef over $\De$. The argument of \cite[Theorem 4.1]{FS20} follows from \cite[Theorem 4.2]{HMX13} which relies on the minimal model program for varieties of general type.
We discuss the structure of the paper. Section \ref{sec: 2} gives the background material on generalized polarized pairs for complex spaces. Section \ref{sec: 3} discusses metrics of $\Qq$-Cartier divisors on complex spaces. Section \ref{sec: 4} gives the construction of metrics for the abundant nef parts of generalized polarized pairs. Section \ref{sec: 5} introduces a new type of multiplier ideal sheaf and proves the aforementioned theorems and corollary.
\medskip
\noindent\textbf{Acknowledgements}.
We thank Hanlong Fang, Sheng Rao and Lei Zhang for many discussions and for answering questions. Z. L. is partially supported by a grant from SUSTech. Z. W. is partially supported by National Key R\&D Program of China (No.2021YFA1002600) and NSFC grant (No.12071035).
\section{Generalized polarized pairs}\label{sec: 2}
\subsection{Notation and conventions}
Let $\Zz$ be the set of integers and $\Nn= \Zz_{>0}$ be the set of natural numbers.
A holomorphic map $f: X \to Y$ between complex spaces is called a morphism, and a morphism is called a contraction if it is surjective with connected fibers. The morphism $f$ is called a proper modification if (1) $f$ is proper and surjective, and (2) there exists a nowhere dense analytic subset $Z\subset Y$ such that $f|_{X- f^{-1}(Z)}: X- f^{-1}(Z) \to Y- Z$ is an isomorphism. A bimeromorphic map $f: X\dto Y$ between complex spaces is a meromorphic map such that the graph $\Gamma_f$ is an irreducible analytic subset in $X \times Y$, and the natural maps $\Gamma_f \to X, \Gamma_f \to Y$ are proper modifications (see \cite[Definitions 2.1, 2.2, 2.7]{Uen75}). A proper modification $\mu: W \to X$ is called a resolution if $W$ is smooth. A proper morphism $f: X \to Y$ between complex spaces is called projective if for any relatively compact open set $U \subset Y$, there is an embedding $j: f^{-1}(U) \hookrightarrow \Pp_U^n \coloneqq \Pp_{\Cc}^n \times U$ such that $f|_{f^{-1}(U)}=p_2\circ j$, where $p_2: \Pp_U^n \to U$ is the natural projection map (see \cite[Chapter V]{Pet94}).
A divisor means a Weil divisor. A finite $\Qq$-linear combination of divisors gives $\Qq$-divisors. A $\Qq$-divisor is called $\Qq$-Cartier if it is a finite $\Qq$-linear combination of Cartier divisors.
Let $D$ be a $\Qq$-divisor on $X$. Then $(X, D)$ is called a log pair. A resolution $\mu: W \to X$ is called a log resolution of $(X, D)$ if $W$ is smooth and $\Supp \mu_*^{-1} D \cup \Supp \Exc(\mu)$ is a simple normal crossing divisor, where $\mu_*^{-1} D$ is the strict transform of $D$ and $\Exc(\mu)$ is the exceptional locus of $\mu$. When $X$ is a reduced complex space, $(X, D)$ always admits a projective log resolution (see \cite[Theorem 2.0.2]{Wlo09}).
Let $X$ be a normal complex space. Let $X \to S$ be a morphism to a complex space $S$. We write the relative property over $S$ by $/S$. Let $k = \Zz, \Qq$. For two $k$-divisors $B$ and $D$, we use $B \sim_{k } D/S$ to denote that $B$ and $D$ are $k$-linearly equivalent over $S$.
Let $D$ be a Weil divisor on a normal complex space $X$, then $\Oo_X(D)$ denotes the sheaf associated with $D$. To be precise, let $\mathscr{M}_X$ be the sheaf of germs of meromorphic functions on $X$ (see \cite[Chapter II, \S 6.2]{Dem12}), then $\Oo_X(D)\subset \mathscr{M}_X$ is the sheaf such that for any open set $U \subset X$,
\[
\Oo_X(D)(U) = \{f\in \mathscr{M}_X(U) \mid \di(f)+D|_{U} \geq 0\}.
\] The sheaf $\Oo_X(D)$ is coherent and the subscript of $\Oo_X(D)$ will be omitted if it is clear from the context.
Recall that for an $n$-dimensional normal complex space $X$, the canonical divisor $K_X$ is any Weil divisor such that $j_*\Omega^n_{X_{\rm reg}} \simeq \Oo_X(K_X)$ for the open embedding $j: X_{\rm reg} \hookrightarrow X$, where $X_{\rm reg}$ is the smooth locus of $X$ and $\Omega^n_{X_{\rm reg}}$ is the sheaf of holomorphic $n$-forms on $X_{\rm reg}$.
Let $f: X \to Y$ be a morphism and $B$ be a $\Qq$-Cartier divisor on $Y$. We write $f^*B$ for the $\Qq$-Cartier divisor which is the pull-back of $B$. If $D$ is a Cartier divisor on $X$, then we write $\Oo_X(D)$ for the corresponding line bundle, and $f_*\Oo_X(D)$ for the push-forward of the sheaf $\Oo_X(D)$.
Let $f: X \to Y$ be a proper modification between normal complex spaces and $D$ be a prime divisor on $X$. We define the push-forward divisor
\[
f_*D \coloneqq\begin{cases}
f(D), & \text{ if~ } D \text{~is not~} f\text{-exceptional},\\
0, & \text{ if~ } D \text{~is~} f\text{-exceptional}.
\end{cases}
\] This construction can be extended to $\Qq$-divisors by linearity. Moreover, for $k = \Zz, \Qq$, if $B \sim_{k } D/S$, then $f_*B \sim_{k } f_*D/S$ because the push-forward of a principal divisor is still a principal divisor.
Finally, for a projective morphism $X \to S$, a $\Qq$-Cartier divisor $D$ on $X$ is nef$/S$ if there is an ample$/S$ divisor $H$ on $X$ such that $D+\ep H$ is ample$/S$ for any $\ep \in \Qq_{>0}$, and $D$ is big$/S$ if $D|_{X_t}$ is big for a general $t\in S$.
\subsection{Generalized polarized pairs with abundant nef part}
Generalized polarized pairs originate from the canonical bundle formula (see \cite{Kaw98}). It was observed by \cite{BZ16} that the generalized polarized pair structure deserves to be studied separately and that it behaves like the usual log pair structure in many ways. We state the definition of generalized polarized pairs for complex spaces (see \cite[Definition 1.4]{BZ16} for the original definition in the algebraic category).
\begin{definition}\label{def: gpair}
Let $X', X$ and $S$ be normal complex spaces. A generalized polarized pair (g-pair) consists of projective morphisms $X' \xrightarrow{f} X \to S$ where $f$ is a modification, a $\Qq$-divisor $B \geq 0$ on $X$, and a $\Qq$-Cartier divisor $M'$ on $X'$ which is nef$/S$ such that $K_{X} + {B} + {M}$ is $\Qq$-Cartier, where $M \coloneqq f_*M'$. We call $B$ the boundary part and $M$ the nef part.
\end{definition}
\begin{remark}\label{rmk: replacing by higher model}
In the definition of g-pairs, we can replace $X'$ by any projective modification of $X'$ and $M'$ by its pull-back. Therefore, we can assume that $f$ is a projective log resolution of $(X, B)$.
\end{remark}
\begin{definition}\label{def: abundant gpair}
Under the notation of Definition \ref{def: gpair}, if there exists a proper contraction $g: X' \to Z/S$ such that $M' \sim_\Qq g^*H/S$ for some nef and big$/S$ $\Qq$-Cartier divisor $H$ on $Z$, then we say that such a g-pair has the abundant nef part.
\end{definition}
Replacing $X'$ by a projective modification and $M'$ by its pull-back (see Remark \ref{rmk: replacing by higher model}), the new g-pair still has the abundant nef part.
\begin{remark}
A g-pair with the abundant nef part naturally appears in the canonical bundle formula, where the concept of g-pairs originates (see \cite{Amb05}). The word ``abundant" comes from the fact that divisors satisfying the property in Definition \ref{def: abundant gpair} are related to the abundance conjecture (see \cite[Proposition 2.1]{Kaw85}).
\end{remark}
\subsection{Singularities and adjunctions for g-pairs}
\begin{definition}\label{def: Q-Gorenstein}
A normal complex space is called $\Qq$-Gorenstein if $K_X$ is a $\Qq$-Cartier divisor.
\end{definition}
\begin{definition}\label{def: canonical sing}
A normal complex space $X$ has canonical singularities if (1) $X$ is $\Qq$-Gorenstein, and (2) for any resolution $f: W \to X$, if $K_W=f^*K_X+E$ with $E$ an $f$-exceptional divisor, then $E \geq 0$.
\end{definition}
\begin{definition}\label{def: discrepancies}
Under the notation of Definition \ref{def: gpair}, for a prime divisor $P$ over $X$, its discrepancy with respect to $(X, B+M)$ is defined to be
\[
{\rm discp}(P;X,B+M) \coloneqq \mult_P\left(K_Y+g^*M' - (f\circ g)^*(K_X+B+M)\right),
\] where $g: Y \to X'$ is a proper modification such that $P \subset Y$.
\end{definition}
\begin{definition}\label{def: singularities}
A g-pair $(X, B+M)$ has g-canonical (resp. g-terminal, g-klt, g-lc) singularities if ${\rm discp}(P;X,B+M) \geq 0$ (resp. $>0$, $>-1$, $\geq -1$) for each prime divisor $P$ over $X$.
\end{definition}
The adjunction formula for g-pairs is given by \cite[Definition 4.7]{BZ16} in the algebraic setting. It can be naturally extended to the analytic setting.
\begin{definition}[Adjunction formula for g-pairs]\label{def: g-adjunction}
Let $(X,B+M)$ be a g-pair with data $ X' \xrightarrow{f} X \to S$ and $M'$. Let $P$ be a normal irreducible component of $\lfloor B \rfloor$ and $P'$ be its strict transform on $X'$. We may assume that $f$ is a projective log resolution of $(X,B+M)$. Write
\[
K_{X'} +B'+M'=f^*(K_{X} +B +M),
\] then
\[
K_{P'} +B_{P'} +M_{P'} \coloneqq (K_{X'} +B'+M')|_{P'},
\]
where $B_{P'} = (B'-P')|_{P'}$ and $M_{P'} =M'|_{P'}$. Let $g$ be the induced morphism $P'\to P$. Set $B_{P} = g_*B_{P'}$ and $M_{P} =g_*M_{P'}$. Then we get the equality
\[
K_{P}+B_{P}+M_{P} = (K_{X}+B+M)|_{P},
\] which is referred to as the (generalized) adjunction formula.
\end{definition}
In general, $M|_{P} \neq M_P$, and even if $(X,B+M)$ has the abundant nef part, $(P, B_P+M_P)$ may not have the abundant nef part. On the other hand, in the category of algebraic varieties, if $(X, B+M)$ is g-lc, then $(P, B_P+M_P)$ is still g-lc with data $P' \xrightarrow{g} P \to S$ and $M_{P'}$ by \cite[Remark 4.8]{BZ16}.
Let $X \to \De$ be a projective contraction to the disc $\De$. Let $X_0$ be the fiber over $0\in\De$. Suppose that $X_0$ is normal, then we define
\begin{equation}\label{eq: adjunction on X_0}
(X_0, B_0+M_0)
\end{equation} to be the g-pair by the adjunction of $(X, X_0+B+M)$ on $X_0$. In the sequel, $(X_0, B_0+M_0)$ will be assumed to have g-canonical singularities. Under this assumption, $B_0 =0$ and $M_0 = M|_{X_0}$.
\section{Metrics on complex spaces}\label{sec: 3}
\subsection{Metrics and $\Qq$-metrics}
Let $X$ be a normal complex space, $\Ll$ be a holomorphic line bundle on $X$. We define the notion of singular metrics in this setting.
Let $x\in X$ be a point and $\Ll_x$ be the stalk at $x$. Assume that there is a function
\[
\|-\|_h: \Ll_x \to \Rr_{\geq 0} \cup \{+\infty\},
\] such that if $\theta: \Ll|_U \simeq \Oo_U$ is a trivialization on an open set $U \subset X$, then for any $s \in \Ll_x$, we have
\[
\|s\|_h = |\theta(s)|e^{-\vphi}
\] for a function $\vphi: U \to \Rr \cup \{\pm \infty\}$. This $\vphi$ is called a local weight and it depends on $\theta$. Let $\mu: W \to X$ be a resolution. Then $\mu^*\theta: \mu^*\Ll|_{\mu^{-1}(U)} \simeq \Oo_{\mu^{-1}(U)}$ is a trivialization for $\mu^*\Ll$. For any $\ti s \in (\mu^*\Ll)_w$ with $w \in \mu^{-1}(U)$, set
\begin{equation}\label{eq: pullback of metric}
\|\ti s\|_{\mu^*h} \coloneqq |\mu^*\theta(\ti s)|e^{-\mu^*\vphi},
\end{equation} where $\mu^*\vphi(w) = \vphi(\mu(w))$. Note that $\|-\|_{\mu^*h}$ is independent of the choice of trivializations. In fact, for local trivializations $\theta_1, \theta_2$, there is a nowhere vanishing holomorphic function $u$ such that $\theta_1=u\theta_2$. Thus $|u|e^{-\vphi_1} = e^{-\vphi_2}$ and
\begin{equation}\label{eq: pullback relation}
|\mu^*u|e^{-\mu^*\vphi_1} = e^{-\mu^*\vphi_2}.
\end{equation}
Let $L^1_{\rm loc}(V)$ be the set of locally integrable functions on a smooth open set $V$.
\begin{definition}[Singular metrics on normal complex spaces]\label{def: singular metric on varieties}
Under the above notation, $h$ is a (singular) metric of $\Ll$ if for any resolution $\mu: W \to X$, we have $\mu^*\vphi \in L_{\rm loc}^1(\mu^{-1}(U))$ for any open set $U \subset X$.
\end{definition}
The property of $\vphi$ in Definition \ref{def: singular metric on varieties} will be referred as the $L^1_{\rm loc}$-property. By abuse of terminology, if $D$ is a Cartier divisor, then we also call a metric of the corresponding line bundle $\Oo(D)$ a metric of $D$. This is particularly convenient to treat $\Qq$-Cartier divisors.
\begin{definition}[{\cite[Definition 1.4]{Pet94}}]\label{def: psh on complex space}
Let $X$ be a complex space. A plurisubharmonic (psh for short) function on $X$ is a function $\varphi:X\rightarrow [-\infty,\infty)$ having the following property. For every $x\in X$ there is an open neighborhood $U$ with a biholomorphic map $h:U\rightarrow V$ onto a closed complex subspace $V$ of some domain $G\subset \mathbb C^m$ and a plurisubharmonic function $\tilde \varphi:G\rightarrow [-\infty,\infty)$ such that $\varphi|_U=\tilde\varphi\circ h$.
\end{definition}
If $f:X\rightarrow Y$ is a holomorphic map between two complex spaces, and $\varphi:Y\rightarrow [-\infty,\infty)$ is a psh function on $Y$, then $\varphi\circ f$ is a psh function on $X$ (see \cite[Page 356]{Nar61}).
\begin{definition}\label{def: non-negative curvature on variety}
Under the notation of Definition \ref{def: singular metric on varieties}, $(\Ll, h)$ is said to have non-negative curvature current if any local weight $\varphi$ on an open subset of $X$ is a psh function.
\end{definition}
\begin{remark}\label{rmk: change of variable}
In Definition \ref{def: singular metric on varieties}, the local $L_{\rm loc}^1$-property is required to hold for all resolutions. If the $L_{\rm loc}^1$-property holds for one resolution $\mu: W \to X$, it need not hold for another resolution $\nu: V \to X$, even if $\nu$ factors through $\mu$, say $\nu = \mu\circ g$ with $g: V \to W$. In fact, writing $\ti\vphi \coloneqq \mu^*\vphi$, the change-of-variables formula gives
\[
\int_W |\ti\vphi| ~dV_W = \int_V |g^*\ti\vphi| \cdot |s|^2~dV_V,
\] where $s$ is the local equation for the effective exceptional divisor $K_V-g^*K_W$ (hence $s$ is holomorphic). Hence $\ti\vphi \in L_{\rm loc}^1$ does not imply $g^*\ti\vphi \in L_{\rm loc}^1$.
However, if the curvature current is non-negative (see Definition \ref{def: non-negative curvature on variety}), then it is enough to just consider one resolution (see Proposition \ref{prop: independent}).
\end{remark}
The notion of psh functions is of bimeromorphic nature:
\begin{proposition}\label{prop: independent}
Let $\mu: W \to X$ be a resolution of a normal complex space $X$. Then a function $\vphi: X \to [-\infty, \infty)$ is a psh function iff $\vphi\circ\mu: W \to [-\infty, \infty)$ is a psh function.
\end{proposition}
\begin{proof}
One only needs to show the sufficient part. As $\mu$ is a proper modification, $Z \coloneqq \mu(\Exc(\mu))$ is an analytic set of codimension $\geq 2$. Then $\vphi$ is a psh function on $X-Z$. By \cite[Theorem 1.5]{Pet94}, the definition of psh functions in \cite{GR56} is the same as Definition \ref{def: psh on complex space}. By \cite[Page 181, Satz 4]{GR56}, $\vphi$ extends to a psh function $\hat\vphi$ on $X$. As $\hat\vphi\circ\mu$ is a psh function on $W$ which coincides with $\vphi\circ\mu$ on $W-\Exc(\mu)$, we have $\hat\vphi\circ\mu=\vphi\circ\mu$ on $W$. Hence $\vphi=\hat\vphi$ is a psh function.
\end{proof}
For simplicity, we can formally extend the notion of metrics for $\Qq$-Cartier divisors (or $\Qq$-line bundles).
\begin{definition}\label{def: metric for Q-div}
Let $D$ be a $\Qq$-Cartier divisor on a complex space $X$. Then a $\Qq$-metric $h$ for $D$ is a triple $(m, D, h_{mD})$ such that $m \in \Nn$ with $mD$ a Cartier divisor and $h_{mD}$ is a metric for $mD$. We say that two $\Qq$-metrics $(m, D, h_{mD})$ and $(n, D, h_{nD})$ are equal if $h_{mD}^n= h_{nD}^m$ as metrics for the Cartier divisor $mnD$. For simplicity, $h$ is also called a metric.
\end{definition}
\begin{remark}\label{rmk: uniqueness}
Given a metric $(m, D, h_{mD})$, for any $n \in \Nn$ such that $nD$ is Cartier, there always exists a metric $(n, D, h_{nD})$ which is equal to $(m, D, h_{mD})$. In fact, we can set $h_{nD} = h_{mD}^{\frac n m}$.
\end{remark}
If $D =\sum D_i$ is a finite sum of $\Qq$-Cartier divisors such that each $D_i$ admits a metric $h_i$, then we define a metric $h\coloneqq \prod_i h_{i}$ for $D$ as follows. Take $m\in\Nn$ such that each $mD_i$ is Cartier, then
\[
h=(m, D, h_{mD})
\] with $h_{mD} = \prod_i h_{mD_i}$ where $h_i=(m, D_i, h_{mD_i})$ (see Remark \ref{rmk: uniqueness}). It is straightforward to check that this definition is independent of the choice of $m$. Let $D$ be a $\Qq$-Cartier divisor with a metric $h$, then for $r\in \Qq$, we can similarly define a metric $h^r$ for $rD$.
Let $f: W \to X$ be a proper modification between normal complex spaces. Let $D$ be a $\Qq$-Cartier divisor on $X$ with a metric $h$. We can define the pull-back metric $f^*h$ for the $\Qq$-Cartier divisor $f^*D$. In fact, it is enough to assume that $D$ is a Cartier divisor by the definition of metrics for $\Qq$-Cartier divisors. If $\theta: \Oo(D)|_U \simeq \Oo_U$ is a trivialization, then $f^*\theta: \Oo(f^*D)|_{f^{-1}(U)} \simeq \Oo_{f^{-1}(U)}$. If $\|s\|_{h}=|\theta(s)|e^{-\vphi}$ for $x\in U$ and $s\in \Oo(D)_x$, then
\[
\|\ti s\|_{f^*h} \coloneqq |f^*\theta(\ti s)| e^{-f^*\vphi}
\] for $w\in f^{-1}(U)$ and $\ti s\in \Oo(f^*D)_w$. This definition is independent of the choice of $\theta$.
\begin{remark}\label{rmk: in terms of weight}
It is more convenient to think of a metric in terms of a local weight: if $\vphi$ is the local weight for $h_{mD}$ (under a given trivialization), then $\frac \vphi m$ is defined to be the local weight of $(m, D, h_{mD})$. The summation (resp. rational multiple, pull-back) of local weights corresponds to the product (resp. rational exponent, pull-back) of metrics.
\end{remark}
\begin{remark}\label{rmk: metric for linear equiv}
Suppose that $B, D$ are $\Qq$-Cartier divisors such that $B \sim_\Qq D$, then a metric of $B$ is also a metric of $D$. In fact, local weights are identified under an isomorphism $\Oo(mB) \simeq \Oo(mD)$ for some $m\in \Nn$.
\end{remark}
On a complex manifold, a function $\vphi$ is called quasi-plurisubharmonic (quasi-psh) if it can be written locally as the sum of a psh function and a smooth function.
\begin{definition}[{\cite[Definition 1.4]{DPS00}}]\label{def: compare sing}
For two quasi-psh functions $\vphi_1, \vphi_2$, we say that $\vphi_1$ is less singular than $\vphi_2$ (and write $\vphi_1 \preceq \vphi_2$) if $\vphi_2 \leq \vphi_1+c$ for a constant $c$. We write $\vphi_1 \approx \vphi_2$ if $\vphi_1 \preceq \vphi_2$ and $\vphi_2 \preceq \vphi_1$.
For two functions $\vphi_i: X \to \Rr \cup \{-\infty\}, i=1,2$ on a complex space $X$, we use the same notation as above if the corresponding relations hold after pulling back $\vphi_i$ to all the resolutions.
\end{definition}
\subsection{Metrics defined by global sections}\label{subsection: metrics defined by global sections}
Let $X$ be a normal complex space and $L$ be a Cartier divisor on $X$. Suppose that $\sigma_i \in H^0(X, \Oo(L)), 1 \leq i \leq k$ are global sections (not necessarily distinct). They can be used to define a metric $h$ for $L$ just as in the smooth case.
Let $U \subset X$ be an open set such that $\theta: \Oo(L)|_U \simeq \Oo_U$ is a trivialization. Then for any $x\in U$ and $s\in \Oo(L)_x$, set
\begin{equation}\label{eq: definition of metric}
\|s\|^2_h = \frac{|\theta(s)|^2}{\sum_{1 \leq i \leq k} |\theta(\sigma_i)|^2}=|\theta(s)|^2 e^{-2\vphi}.
\end{equation} The local weight of $h$ (under the trivialization $\theta$) is
\[
\vphi = \frac 1 2 \log (\sum_{1 \leq i \leq k} |\theta(\sigma_i)|^2).
\] It satisfies the $L_{\rm loc}^1$-property for any resolution $\mu: W \to X$ because
\[
\mu^*\vphi =\frac 1 2 \log (\sum_{1 \leq i \leq k} |\mu^*\theta(\mu^*\sigma_i)|^2)
\] is a psh function and thus Proposition \ref{prop: independent} applies to this situation.
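As a standard example (included only for illustration), take $X=\Pp^n$, $L$ a hyperplane divisor with $\Oo(L)=\Oo_{\Pp^n}(1)$, and let $\sigma_0, \ldots, \sigma_n$ be the homogeneous coordinates $Z_0, \ldots, Z_n$. On the chart $U_0=\{Z_0 \neq 0\}$ with affine coordinates $z_i = Z_i/Z_0$ and the trivialization $\theta(s)=s/Z_0$, the local weight is
\[
\vphi = \frac 1 2 \log \Big(1+\sum_{1 \leq i \leq n} |z_i|^2\Big),
\] which is the weight of the Fubini-Study metric on $\Oo_{\Pp^n}(1)$; a relative version of this metric is used in the proof of Lemma \ref{le: a metric on X_0} below.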
More generally, for a $\Qq$-Cartier divisor $D$ on $X$, suppose that $mD$ is Cartier and $\sigma_i \in H^0(X, \Oo_{X}(mD)), 1\leq i\leq k$ are global sections. If $\theta: \Oo(mD)|_U \simeq \Oo_U$ is a trivialization, then set
\[
\vphi = \frac{1}{2m}\log (\sum_{1\leq i\leq k} |\theta(\sigma_i)|^{2})
\] as the local weight of $h$ (see Remark \ref{rmk: in terms of weight}).
We call the metrics defined above the metrics defined by global sections.
For an effective Cartier divisor $D$, choose $\sigma=1 \in H^0(X, \Oo_{X}(D))$, then the metric as \eqref{eq: definition of metric} is denoted by $\hbar_D$.
In general, let $D$ be a $\Qq$-Cartier divisor. If $D = \sum_{1 \leq i \leq n} r_i D_i-\sum_{1 \leq j\leq m}s_j B_j$ is a decomposition such that $r_i,s_j \in \Qq_{>0}$ and $D_i, B_j$ are effective Cartier divisors, then set
\begin{equation}\label{eq: for non-effective div}
\hbar_D \coloneqq \left( \prod_{1 \leq i \leq n} \hbar_{D_i}^{r_i}\right)\cdot \left( \prod_{1 \leq j \leq m} (\hbar^{-1}_{B_j})^{s_j}\right),
\end{equation}
where $\hbar^{-1}_{B_j}$ is the dual metric for $\Oo(-B_j)$. Note that $\hbar_D$ is a metric as the corresponding weight still satisfies the $L^1_{\rm loc}$-property.
The above $\hbar_D$ is independent of the choice of decompositions of $D$. Indeed, suppose that $mD$ is Cartier for some $m \in \Nn$ and $\theta: \Oo(mD)|_U \simeq \Oo_U$ is a trivialization. Then for any $x\in U$ and $s\in \Oo(mD)_x$, we always have $\|s\|^2_{\hbar_{D}^m}=|\theta(s)|^2/|\theta(1)|^2$.
Let $Z \subset X$ be a normal complex subspace such that $Z \not\subset \Supp D$, then $D|_Z$ is still a $\Qq$-Cartier divisor. Let $\hbar_D|_Z$ be the pull-back of $\hbar_D$ to $Z$, then
\begin{equation}\label{eq:restriction}
\hbar_D|_Z = \hbar_{D|_Z}.
\end{equation}
\begin{proposition}\label{prop: product of metric defined by global sections}
Suppose that $h_B, h_D$ are metrics defined by global sections for $\Qq$-Cartier divisors $B, D$ respectively. Then $h_Bh_D$ is a metric for $B+D$ which is also defined by global sections.
\end{proposition}
\begin{proof}
Suppose that $h_B$ (resp. $h_D$) is defined by global sections of $\Oo(rB)$ (resp. $\Oo(sD)$), and $\theta_{rB}: \Oo(rB)|_U \simeq \Oo_U$ (resp. $\theta_{sD}: \Oo(sD)|_U \simeq \Oo_U$) is a trivialization. Then the local weights are
\[
\vphi_B = \frac{1}{2r}(\log \sum_{i=1}^{m_B}|\theta_{rB}(\sigma_i)|^2), \quad \vphi_D = \frac{1}{2s}(\log \sum_{j=1}^{m_D}|\theta_{sD}(\tau_j)|^2),
\] where $\sigma_i \in H^0(U, \Oo(rB)), \tau_j \in H^0(U, \Oo(sD))$. The local weight for $B+D$ is
\[
\begin{split}
\vphi_B+\vphi_D &= \frac{1}{2rs}\log\left((\sum_{i=1}^{m_B}|\theta_{rB}(\sigma_i)|^2)^s \cdot (\sum_{j=1}^{m_D}|\theta_{sD}(\tau_j)|^2)^r\right)\\
&=\frac{1}{2rs}\log\left( \sum_{\substack{i_1, \cdots, i_s\\ j_1 \cdots j_r}} |\theta_{rB}(\sigma_{i_1}) \cdots \theta_{rB}(\sigma_{i_s})\cdot \theta_{sD}(\tau_{j_1}) \cdots \theta_{sD}(\tau_{j_r})|^2\right).
\end{split}
\] By $\sigma_{i_1} \cdots \sigma_{i_s}\cdot \tau_{j_1} \cdots \tau_{j_r} \in H^0(U, \Oo(rs(B+D)))$, the claim follows.
\end{proof}
\subsection{Push-forward metrics defined by global sections}\label{subsection: pushforward metrics}
Let $f: X' \to X$ be a proper modification between normal complex spaces and $D$ be a $\Qq$-Cartier divisor on $X'$. Suppose that $B = f_*D$ is still a $\Qq$-Cartier divisor on $X$. Suppose that $D$ admits a metric $h'$. In general, there is no push-forward metric $f_*h'$ of $h'$. However, if $h'$ is given by global sections, then we can naturally define $f_*h'$ as follows.
Recall that for a $\sigma \in \mathscr{M}_{X'}(X')$, $f_*\sigma$ is defined as the meromorphic extension to $X$ of the function induced by $\sigma$ on $X-f(\Exc(f))$ via the isomorphism away from the exceptional locus (see \cite[Chapter II (10.2)]{Dem12}). This is possible as $\codim_Xf(\Exc(f)) \geq 2$. Hence $f_*\sigma \in \mathscr M_X(X)$.
Suppose that $mD$ and $mB$ are Cartier divisors. If $\sigma \in H^0(X', \Oo_{X'}(mD))$, that is, $\sigma\in \mathscr{M}_{X'}(X')$ such that $\di(\sigma) + mD \geq 0$, then $f_*\sigma \in H^0(X, \Oo_X(mB))$ because
\[
\di(f_*\sigma)+mB = f_*\di(\sigma)+f_*(mD) \geq 0.
\] Therefore, if $h'$ is the metric of $D$ defined by global sections as \eqref{eq: definition of metric}, then the push-forwards of these sections also define a metric $h\coloneqq f_*h'$ for $B$. We call this metric the push-forward of $h'$.
\begin{lemma}\label{le: comparing pullback metrics}
Under the above notation and assumptions, let $D+F= f^*B$ with $F$ an $f$-exceptional divisor ($F$ may not be effective). Then $f^*h$ is a metric for $D+F$ such that
\begin{equation}\label{eq: comparing metrics}
h'\cdot \hbar_F = f^*h,
\end{equation}
where $\hbar_F$ is the metric defined as \eqref{eq: for non-effective div}.
\end{lemma}
\begin{proof}
Suppose that $h'$ is defined by sections $\sigma_i \in H^0(X', \Oo_{X'}(mD)), i=1 ,\cdots, k$, then $\sigma_i =f^*(f_*\sigma_i) \in H^0(X', \Oo_{X'}(mD+mF)), i=1 ,\cdots, k$.
Let $\theta: \Oo_{X'}(mD)|_{U} \simeq \Oo_U, \theta_F: \Oo_{X'}(mF)|_U \simeq \Oo_U$ be trivializations on $U\subset X'$, and $\theta_B: \Oo_X(mB)|_V \simeq \Oo_V$ be a trivialization on $V\subset X$ such that $U\subset f^{-1}(V)$ and $f^*\theta_B = \theta \theta_F$. Then $h'\cdot \hbar_F$ has the local weight
\[
\frac{1}{2m} (\log \sum_i |\theta(\sigma_i)|^{2} + \log |\theta_F(1)|^{2}) = \frac{1}{2m} \log (\sum_i |\theta\theta_F(\sigma_i)|^{2}),
\] where $\sigma_i$ in $\theta(\sigma_i)$ is a section with respect to the divisor $mD$ while $\sigma_i$ in $\theta\theta_F(\sigma_i)$ is a section with respect to the divisor $mD+mF$.
The local weight for $h$ is
\[
\frac{1}{2m} \log (\sum_i|\theta_B(f_*\sigma_i)|^{2}).
\] Thus $f^*h$ has the local weight
\[
f^*\left(\frac{1}{2m}\log (\sum_i|\theta_B(f_*\sigma_i)|^{2})\right)
\] under the trivialization $f^*\theta_B$ (see \eqref{eq: pullback of metric}). The claim follows from
\[
f^*(\theta_B(f_*\sigma_i)) = (f^*\theta_B)(f^*(f_*\sigma_i))=(\theta \theta_F)(\sigma_i).
\]
\end{proof}
\section{Metrics for the abundant nef parts}\label{sec: 4}
\subsection{Approximations for the abundant nef part}\label{subsec: Algebraic approximation for the abundant nef part}
Let $\pi: X \to \De$ be a projective contraction to the disc $\De$. Suppose that $f: X' \to X/\De$ is a projective modification and $M'$ is a $\Qq$-divisor on $X'$. Moreover, suppose that $g: X' \to Z/\De$ is a proper contraction such that $M' \sim_\Qq g^*H'/\De$, where $H'$ is a nef and big $\Qq$-divisor over $\De$.
\[
\xymatrix{
X \ar[d]_\pi & X' \ar[d]^g \ar[l]_f\\
\De & Z\ar[l]_\tau }\\
\]
Take projective resolutions $\ti f: X'' \to X', \ti\tau: Z'' \to Z$ such that $\ti g \coloneqq {\ti\tau}^{-1}\circ g\circ{\ti{f}}: X'' \to Z''$ is a morphism. By Hironaka's Chow lemma (see \cite[Chapter VII, Theorem 2.8]{Pet94b}), we can further assume that $Z'' \to \De$ and $\ti g$ are projective. Let $M''=\ti f^*M'$, then $M''$ is the pull-back of the nef and big$/\De$ divisor $\ti\tau^*H'$. Hence, replacing $X', Z, M'$ by $X'', Z'', M''$, we can assume that $Z$ is smooth and $g, \tau$ are projective.
As the Picard group $\Pic(\De)$ is trivial, for $k=\Zz, \Qq$ and $k$-divisors $B, D$ on $X$, $B \sim_{k} D/\De$ is the same as $B \sim_{k} D$.
The following lemma is a direct consequence of the negativity lemma (see \cite[Lemma 3.39]{KM98}). Note that such result also holds for normal complex spaces (see \cite[Proof of Lemma 3.40]{KM98}).
\begin{lemma}\label{le: negativity}
Let $X' \xrightarrow{f} X \to S$ be a g-pair with a nef$/S$ divisor $M'$ on $X'$ and the nef part $M =f_*M'$. Suppose that $M$ is $\Qq$-Cartier, then
\begin{equation}\label{eq: pullback nef part}
f^*M = M'+\Upxi,
\end{equation} where $\Upxi \geq 0$ is an $f$-exceptional divisor.
\end{lemma}
\begin{lemma}\label{le: a decomposition}
Keep the above notation and assumptions, and let $X' \xrightarrow{f} X \xrightarrow{\pi} \De$ be a g-pair with the abundant nef part. Let $X_0 \subset X$ be the fiber over $0\in\De$ and let $\ti X_0$ be the strict transform of $X_0$ on $X'$. Then there exist divisors $E, F$ on $X'$ that satisfy the following:
\begin{enumerate}
\item $E, F$ are effective $\Qq$-divisors without common components,
\item $E$ is $f$-exceptional,
\item for each $k \in \Nn$, there exists an ample $\Qq$-Cartier divisor $H_k$ on $Z$ such that $$M' + \frac 1 k E \sim_{\Qq} g^*H_k + \frac 1 k F,$$
\item $\ti X_0$ is not a component of $E \cup F$.
\end{enumerate}
\end{lemma}
\begin{proof}
There is a nef and big$/\De$ divisor $H'$ on a smooth complex space $Z$ such that $M' \sim_{\Qq} g^*H'$. Then for any $k \in \Nn$, $H'\sim_{\Qq} H_k+\frac 1 k B$ where $H_k$ is ample and $B \geq 0$ (see \cite[Proposition 2.6.1 (3)]{KM98}). Let $c=\mult_{X_0} (f_*g^*B)$, then
\begin{equation}\label{eq: subtract from g^*B}
g^*B - c(\pi \circ f)^*(0) =g^*B - c (\ti X_0 + E_f)
\end{equation} does not contain $\ti X_0$ in its support, where $E_f$ is some $f$-exceptional divisor (here $0\in \De$ is viewed as a divisor and $(\pi\circ f)^*(0)$ is the pull-back of $0$). However, $g^*B - c(\pi \circ f)^*(0)$ may not be effective.
We have
\[
g^*B - c(\pi \circ f)^*(0) = g^*B - c(\tau \circ g)^*(0) = g^*(B - c\tau^*(0)).
\] Write
\begin{equation}\label{eq: subtract from B}
B - c\tau^*(0) = \Theta - E^-,
\end{equation} where $\Theta, E^-$ are $\Qq$-Cartier effective divisors without common components.
But $g^*\Theta, g^*E^-$ may have common components. Let
\[
\Lambda \coloneqq g^*\Theta \wedge g^*E^-
\] be the effective $\Qq$-Cartier divisor such that $$\mult_P \Lambda=\min\{\mult_P g^*\Theta, \mult_P g^*E^-\}$$ for each prime divisor $P$. Then $g^*\Theta- \Lambda, g^*E^-- \Lambda$ are effective divisors without common components.
By \eqref{eq: subtract from B},
\[
g^*B - c(\pi \circ f)^*(0) = g^*\Theta- g^*E^- = (g^*\Theta- \Lambda)-(g^*E^-- \Lambda).
\] Moreover, the negative component (i.e. the component with negative coefficients) of $g^*B - c(\pi \circ f)^*(0)$ is $f$-exceptional by \eqref{eq: subtract from g^*B}. Therefore, $g^*E^-- \Lambda$ is $f$-exceptional. By the choice of $c$ (see \eqref{eq: subtract from g^*B}),
\[
\ti X_0 \not\subset \Supp ((g^*\Theta- \Lambda)\cup(g^*E^-- \Lambda)).
\]
Let $E = g^*E^- - \Lambda$ and $F = g^*\Theta-\Lambda$. Since $0\in\De$ is a principal divisor on $\De$, we have $(\pi\circ f)^*(0) \sim 0$, and therefore
\[
M' +\frac 1 k E \sim_{\Qq} g^*H_k+\frac 1 k F.
\]
\end{proof}
\subsection{Construction of metrics for the abundant nef part}
Under the above notation and assumptions, for the morphism $f: X' \to X$, let $\ti X_0$ be the strict transform of $X_0$. Replacing $f$ by a projective resolution, we can assume that $\ti X_0$ is smooth. Let $f_0: \ti X_0 \to X_0$ be the corresponding morphism. Set $E_0\coloneqq E|_{\ti X_0}, F_0\coloneqq F|_{\ti X_0}$ which are well-defined $\Qq$-Cartier divisors by Lemma \ref{le: a decomposition} (4), and set $\Upxi_0\coloneqq \Upxi|_{\ti X_0}$ (see Lemma \ref{le: negativity} for the definition of $\Upxi$).
\begin{lemma}\label{le: a metric on X_0}
After shrinking $\De$ around $0$, there is a metric $h_k'$ (non-canonically constructed) for the $\Qq$-Cartier divisor $M'+\frac 1 k E$ such that $h'_k$ has non-negative curvature current and is defined by global sections as in \eqref{eq: definition of metric}. Moreover,
\begin{enumerate}
\item $h'_k|_{\ti X_0} \not\equiv \infty$,
\item $\vphi_k' \approx \frac 1 k \vphi_F$, where $\vphi_k', \vphi_F$ are the local weights of $h_k', \hbar_F$ respectively, and
\item $\vphi_{k,0}' \approx \frac 1 k \vphi_{F_0}$, where $\vphi_{k,0}', \vphi_{F_0}$ are the local weights of $h'_k|_{\ti X_0}, \hbar_{F_0}$ respectively.
\end{enumerate}
\end{lemma}
\begin{proof}
We have $M' + \frac 1 k E \sim_\Qq g^*H_k + \frac 1 k F$. Because $H_k$ is ample, after shrinking $\De$ around $0$, $|mH_k|$ induces an embedding $Z \hookrightarrow \Pp_\De^r$ for some $m\in \Nn$. Pulling back a Fubini-Study metric of $\Oo_{\Pp_\De^r}(1)$, we have a positive and continuous metric $h_{mH_k}$ for $mH_k$ which is defined by global sections. Let
\[
h_{H_k} \coloneqq (h_{mH_k})^{\frac 1 m} \quad \text{and} \quad \chi_k \coloneqq g^*h_{H_k}.
\] Then $\chi_k$ is defined by the global sections of $g^*H_k$, and it is a smooth metric with non-negative curvature current for the semi-ample divisor $g^*H_k$. Multiplying by the metric $\hbar_F$ for $F$ (see \eqref{eq: for non-effective div}), which has non-negative curvature current since $F \geq 0$, we get the desired metric
\[
h_k' \coloneqq \chi_k \cdot \hbar_F^{\frac 1 k}
\] for $g^*H_k + \frac 1 k F$, and thus for $M' + \frac 1 k E$ (see Remark \ref{rmk: metric for linear equiv}). As $\ti X_0 \not\subset \Supp F$, $h_k'|_{\ti X_0} \not\equiv \infty$. This is (1).
(2) follows as the local weight $\vphi_{\chi_k}$ for $\chi_k$ is a continuous function. We can choose trivializations such that $\vphi'_{k} = \vphi_{\chi_k} + \frac 1 k \vphi_F$.
We have $h_k'|_{\ti X_0} = \chi_k|_{\ti X_0} \cdot (\hbar_F|_{\ti X_0})^{\frac 1 k}$ and $\hbar_F|_{\ti X_0} = \hbar_{F_0}$. As the local weight for $ \chi_k|_{\ti X_0}$ is still a continuous function, (3) holds for the same reason as (2).
\end{proof}
By Lemma \ref{le: a decomposition}, $E$ is $f$-exceptional, and thus
\[
M =f_*M'=f_*(M'+\frac 1 k E)\sim_{\Qq} f_*(g^*H_k + \frac 1 k F).
\] Because $g^*H_k$ and $\frac 1 k F$ admit metrics $\chi_k$ and $\hbar_{F}^{\frac 1 k}$ which are both defined by global sections, $h_k' = \chi_k \cdot \hbar_{F}^{\frac 1 k}$ is also defined by global sections by Proposition \ref{prop: product of metric defined by global sections}. Moreover, $h_k'$ is a metric for $M'+\frac 1 k E$ (see Remark \ref{rmk: metric for linear equiv}). Thus $h_k \coloneqq f_*h_k'$ is a metric for $M$ which is still defined by global sections (see Section \ref{subsection: pushforward metrics}).
\begin{lemma}\label{le: comparing h with h_k'}
Under the above notation and assumptions, let $h_k= f_*h_k'$ be the metric for $M=f_*(M'+\frac 1 k E)$. Then
\begin{enumerate}
\item $h_k$ has non-negative curvature current,
\item $h_k|_{X_0} \not\equiv \infty$,
\item $f^*\vphi_k+\frac 1 k \vphi_E \approx \frac 1 k \vphi_F+\vphi_{\Upxi}$, where $\vphi_k, \vphi_E, \vphi_F, \vphi_{\Upxi}$ are local weights for $h_k, \hbar_E, \hbar_F,\hbar_\Upxi$ respectively, and
\item $f_0^*\vphi_{k,0}+\frac 1 k \vphi_{E_0} \approx \frac 1 k \vphi_{F_0}+\vphi_{\Upxi_0}$, where $\vphi_{k,0}, \vphi_{E_0}, \vphi_{F_0}, \vphi_{\Upxi_0}$ are local weights for $h_{k}|_{X_0}, \hbar_{E_0}, \hbar_{F_0}, \hbar_{\Upxi_0}$ respectively.
\end{enumerate}
\end{lemma}
\begin{proof}
As the metric defined by global sections always has non-negative curvature current, we have (1).
For (2), by Lemma \ref{le: a metric on X_0}, $h'_k|_{\ti X_0} \not\equiv \infty$. By
\[
f^*M = f^*(f_*(M'+\frac 1 k E))=(M'+\frac 1 k E)+\Upxi - \frac 1 k E,
\] Lemma \ref{le: comparing pullback metrics} implies $f^*h_k = h_k' \cdot \hbar_{\Upxi-\frac 1 k E}$, which is the same as
\begin{equation}\label{eq: relation of metrics}
f^*h_k \cdot \hbar_E^{\frac 1 k} = h_k' \cdot \hbar_\Upxi.
\end{equation} As $E, \Upxi$ are $f$-exceptional divisors, $\hbar_E|_{\ti X_0} \not\equiv \infty$ and $\hbar_\Upxi|_{\ti X_0} \not\equiv \infty$. Hence $f^*h_k|_{\ti X_0} \not\equiv \infty$. This implies $h_k|_{X_0} \not\equiv \infty$ as $f_0$ is a proper modification.
(3) follows from \eqref{eq: relation of metrics} and Lemma \ref{le: a metric on X_0} (2).
For (4), restricting \eqref{eq: relation of metrics} to $\ti X_0$, we have
\[
f^*h_k|_{\ti X_0} \cdot \hbar_E^{\frac 1 k}|_{\ti X_0} = h_k'|_{\ti X_0} \cdot \hbar_\Upxi|_{\ti X_0}.
\] Moreover, $\hbar_E|_{\ti X_0} = \hbar_{E_0}, \hbar_\Upxi|_{\ti X_0} = \hbar_{\Upxi_0}$. By $f^*h_k|_{\ti X_0} = f_0^*(h_k|_{X_0})$, we have
\[
f_0^*(h_k|_{X_0}) \cdot \hbar_{E_0}^{\frac 1 k} = h_k'|_{\ti X_0} \cdot \hbar_{\Upxi_0}.
\] The result follows from Lemma \ref{le: a metric on X_0} (3).
\end{proof}
\begin{remark}
In applications, it is possible to work with $h_{\min}$, the metric with minimal singularities for $M$ (which also makes sense on complex spaces under suitable modifications). Compared with $h_k$, we have $\vphi_{\min} \preceq \vphi_k$, where $\vphi_{\min}$ is the local weight for $h_{\min}$. In what follows, we adopt the more direct approach of working with $h_k$.
\end{remark}
\section{Extension theorems and invariance of plurigenera}\label{sec: 5}
\subsection{Multiplier ideal sheaves}
Recall that, for a complex manifold $X$ and a Cartier divisor $L$ with a metric $h$ whose local weight is $\vphi$, the multiplier ideal sheaf $\Ii(h)$ is the sheaf of germs of holomorphic functions $\alpha$ such that $|\alpha|^2e^{-2\vphi}$ is locally integrable. Therefore,
\[
\begin{split}
H^0(U, \Ii(h) \otimes \Oo_X(L)) =\{&s \in H^0(U, \Oo_X(L)) \mid |\theta(s)|^2e^{-2\vphi} \in L^1_{\rm loc}(U), \\
&\text{where~}\theta \text{~is a trivialization of~}\Oo_X(L) \}.
\end{split}
\]
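As a model example (not needed in the sequel), let $X = \Cc^n$, let $L=0$ and let $h$ have the local weight $\vphi = c\log|z_1|$ for some $c>0$. Then $|\alpha|^2e^{-2\vphi} = |\alpha|^2|z_1|^{-2c}$ is locally integrable near a point of $\{z_1=0\}$ exactly when $\alpha$ vanishes along $\{z_1=0\}$ to order at least $\lfloor c \rfloor$, so that
\[
\Ii(h) = (z_1^{\lfloor c \rfloor}) \cdot \Oo_{\Cc^n}.
\]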
We have the following observation.
\begin{proposition}\label{claim: same integration}
Let $X$ be a complex manifold. Let $L$ be a Cartier divisor with a metric $h$ and $E$ be a Cartier divisor with a metric $\hbar_E$. If $s \in \mathscr M_X(X)$ is a global section of both $L$ and $L+E$ (i.e. $\di(s)+L \geq 0, \di(s)+L+E \geq 0$), then $s \in H^0(X, \Oo_X(L) \otimes \Ii(h))$ iff $s \in H^0(X, \Oo_X(L+E) \otimes \Ii(h\hbar_E))$.
\end{proposition}
\begin{proof}
Suppose that $\theta_L: \Oo(L)|_U \simeq \Oo_U$ is a trivialization of $\Oo(L)$ on $U \subset X$. Shrinking $U$, let $\sigma_E \in \mathscr M_X(U)$ be a local equation of $E$. There is a trivialization $\theta_E: \Oo(E)|_U \simeq \Oo_U$ by multiplying $\sigma_E$. Therefore, locally at a point $x \in U$,
\[
\int |\theta_L(s)|^2 e^{-2\vphi_{h}}~dV_X
=\int |\theta_L\theta_E(s)|^2 e^{-2\vphi_{h}}e^{-2\log|\sigma_E|}~dV_X,
\]where $\vphi_h$ is the local weight of $h$ corresponding to $\theta_L$.
\end{proof}
\subsection{Extension theorem for twisted pairs}
We need a modification of \cite[Theorem 3.1]{Tak07} (see Lemma \ref{le: extension}). In fact, we only work in the setting of \cite[Theorem 3.1]{Tak07} instead of generalizing this result (see Remark \ref{rmk: generalize Tak07}).
Let $\mu: W \to X$ be a log resolution and $\rho: W \to X \to \De$ be the corresponding morphism. Let $N$ be a Cartier divisor on $W$ and $\ti h_N$ be a metric for $N$. Suppose that $X_0$ is a normal complex subspace and $Y_0 \subset \rho^{-1}(0)$ is the strict transform of $X_0$. Taking a higher log resolution, we can assume that $Y_0$ is a smooth divisor. Viewing $X_0$ as an effective Cartier divisor, we have $\mu^*X_0 = Y_0 + \Theta$ with $\Theta \geq 0$ a $\mu$-exceptional divisor. In particular, $\Theta_0 \coloneqq\Theta|_{Y_0} \geq 0$. Restricting
\[
K_W+Y_0 + \Theta \sim K_W
\] to $Y_0$ and by the adjunction formula, we have
\begin{equation}\label{eq: K_W|Y bigger}
K_{Y_0}+\Theta_0 \sim K_W|_{Y_0}.
\end{equation} Therefore, a section of $\Oo(mK_W|_{Y_0})$ corresponds to a section of $\Oo(mK_{Y_0} + m\Theta_0)$, and its integrability with respect to the metric $\hbar_{m\Theta_0}$ makes sense (see Remark \ref{rmk: metric for linear equiv}).
We explain the natural isomorphism
\[
\ti\nu: \Oo(K_{Y_0} + \Theta_0) \simeq \Oo(K_W|_{Y_0})
\] following \cite[Page 5-6]{Tak07}. The pull-back of the coordinate function $t$ on $\De$ is regarded as a holomorphic function on $W$ and it is still denoted by $t$. Suppose that $w_1, \ldots, w_{n-1}, y_0$ are local coordinates with $Y_0=\{y_0=0\}$. Then $t= \xi y_0$ with $\Theta= \{\xi=0\}$. For a sufficiently small open set $U \subset Y_0$, let
\[
\tau \omega_{Y_0} \coloneqq \tau dw_1 \wedge \cdots \wedge dw_{n-1} \in H^0(U, \Oo(K_{Y_0}+\Theta_0)),
\] where $\tau \in H^0(U, \Oo(\Theta_0))$ is a section. Let $\ti\tau$ be a local meromorphic extension of $\tau$ near $U$ such that $\di(\ti\tau)+\Theta \geq 0$. Then $\ti\tau dw_1 \wedge \cdots \wedge dw_{n-1}$ is a local extension of $\tau\omega_{Y_0}$. By abuse of notation, $\omega_{Y_0}$ is still used to denote a local extension of $\omega_{Y_0}$ to $W$. In fact, in what follows, we always restrict the above forms to $Y_0$, hence the results are independent of the choice of extensions.
Then
\begin{equation}\label{eq:star}
\ti \nu: \tau\omega_{Y_0} \mapsto (\ti\tau \omega_{Y_0} \wedge dt)|_{Y_0}.
\end{equation} Note that this is well-defined because $dt$ is independent of the choices of the open set $U$. In order to write $( \omega_{Y_0} \wedge dt)|_{Y_0}$ in terms of $(dw_1 \wedge \cdots \wedge dw_{n-1}\wedge dy_0)|_{Y_0}$, note that
\[
dt= \xi dy_0 + y_0 d\xi
\] and the term involving $y_0\, d\xi$ vanishes after restriction to $Y_0=\{y_0=0\}$. Thus
\begin{equation}\label{eq: ti nu}
\ti \nu: \tau \omega_{Y_0} \mapsto (\ti\tau\xi \omega_{Y_0} \wedge dy_0)|_{Y_0}.
\end{equation} Because $\Theta=\{\xi=0\}$ and $\di(\ti\tau)+\Theta \geq 0$, $\ti\tau\xi$ is holomorphic. Hence $(\ti\tau\xi \omega_{Y_0} \wedge dy_0)|_{Y_0} \in H^0(U, \Oo(K_W|_{Y_0}))$. From such local expressions, $\ti\nu$ is seen to be an isomorphism. Moreover, the above discussion also gives a natural isomorphism
\begin{equation}\label{eq:double dagger}
\Oo(mK_{Y_0} +m \Theta_0) \xrightarrow{\sim} \Oo(mK_W|_{Y_0}).
\end{equation}
\begin{lemma}\label{le: extension}
Keep the above notation and assumptions, and suppose that $\ti h_N|_{Y_0}$ is well-defined. Then for each $m\in \Nn$ such that $\ti h_N \hbar_{m\Theta}$ has non-negative curvature current, any section of
\[
H^0(Y_0, \Oo(mK_W|_{Y_0} + N|_{Y_0}) \otimes \Ii(\ti h_N|_{Y_0} \hbar_{m\Theta_0}))
\] extends over $W$.
\end{lemma}
\begin{proof}
The argument is identical to that for \cite[Theorem 1]{Pau07} (a simplified and twisted version of \cite{Siu02}). Therefore, we just sketch the argument.
We show that the naturally defined morphisms $p,\nu,q$ (defined below) give the following commutative diagram
\begin{equation}\label{eq: diag 1}
\xymatrix{
&\Oo_W(mK_W+N) \ar[rd]^q \ar[ld]_p&\\
\Oo_{Y_0}(mK_{Y_0}+m\Theta_0+N|_{Y_0}) \ar[rr]^{\nu}& & \Oo_{Y_0}(mK_W|_{Y_0}+N|_{Y_0}).
}
\end{equation}
It suffices to define the morphism when $N=0$. First, $p$ is defined through
\[
\Oo_W(mK_W) \xrightarrow{\sim} \Oo_W(mK_W+mY_0+m\Theta) \to \Oo_{Y_0}(mK_{Y_0}+m\Theta_0).
\] Suppose that locally $\Oo_{Y_0}(K_{Y_0})|_U=\omega_{Y_0}\Oo_{U}$, then the above is given in local expressions by
\[
\alpha (\omega_{Y_0}\wedge dy_0)^{\otimes m} \mapsto \frac{\alpha}{t^m} (\omega_{Y_0}\wedge dy_0)^{\otimes m} \mapsto \frac{\alpha_0 y_0^m}{t^m} \omega_{Y_0}^{\otimes m},
\] where $\alpha$ is a local section of $\Oo_W$ and $\alpha_0 = \alpha|_{Y_0}$.
The map $\nu$ is given by \eqref{eq:double dagger} and $q$ is the natural restriction map. To show $q=\nu\circ p$, set $\xi_0 \coloneqq \xi|_{Y_0}$. As $t=\xi y_0$, we have
\[
\frac{\alpha_0 y_0^m}{t^m} \omega_{Y_0}^{\otimes m} = \frac{\alpha_0}{\xi_0^m} \omega_{Y_0}^{\otimes m}.
\] Thus
\[
p: \alpha(\omega_{Y_0}\wedge dy_0)^{\otimes m} \mapsto \frac{\alpha_0}{\xi_0^m}\omega_{Y_0}^{\otimes m}.
\] By $dt = \xi dy_0 + y_0 d\xi$, we have
\begin{equation}\label{eq: relation between forms}
\omega_{Y_0} \wedge dy_0 = \frac{1}{\xi} \omega_{Y_0} \wedge dt.
\end{equation} Thus
\[
q: \alpha (\omega_{Y_0}\wedge dy_0)^{\otimes m} \mapsto \alpha_0 (\frac{1}{\xi_0} (\omega_{Y_0} \wedge dt)|_{Y_0})^{\otimes m}.
\] By \eqref{eq:star}, $\nu: \Oo(mK_{Y_0} + m\Theta_0) \xrightarrow{\sim} \Oo(mK_W|_{Y_0})$ is given by
\[
\nu: \tau_0 \omega_{Y_0}^{\otimes m} \mapsto \ti\tau_0(\omega_{Y_0} \wedge dt)^{\otimes m}|_{Y_0},
\] where $\tau_0 \in H^0(U, \Oo(m\Theta_0))$ and $\ti\tau_0$ is a meromorphic extension of $\tau_0$. Taking $\tau_0 = \frac{\alpha_0}{\xi_0^m}$, we obtain $q=\nu \circ p$.
We claim that any section of
\[
H^0(Y_0, \Oo(mK_{Y_0} + (N|_{Y_0}+m\Theta_0)) \otimes \Ii(\ti h_N|_{Y_0} \hbar_{m\Theta_0}))
\] extends over $W$ (i.e., such a section has a preimage in $H^0(W, \Oo(mK_W+N))$). To show this, the argument of \cite[Theorem 1]{Pau07} goes through with the version of the Ohsawa-Takegoshi extension theorem in \cite[Theorem 2.1]{Pau07} replaced by \cite[Lemma 3.6]{Tak07}.
Now, as $\nu$ induces the isomorphism
\[
\begin{split}
& H^0(Y_0, \Oo(mK_{Y_0} + (N|_{Y_0}+m\Theta_0)) \otimes \Ii(\ti h_N|_{Y_0} \hbar_{m\Theta_0}))\\
\xrightarrow{\sim} &H^0(Y_0, \Oo(mK_W|_{Y_0} + (N|_{Y_0})) \otimes \Ii(\ti h_N|_{Y_0} \hbar_{m\Theta_0})),
\end{split}
\] the commutativity of the diagram \eqref{eq: diag 1} gives the desired result.
\end{proof}
\begin{remark}\label{rmk: generalize Tak07}
When $N=0$, Lemma \ref{le: extension} is weaker than \cite[Theorem 3.1]{Tak07} because
\[
H^0(Y_0, \Oo(mK_W|_{Y_0} ) \otimes \Ii(\hbar_{m\Theta_0})) \subset H^0(Y_0, \Oo(mK_W|_{Y_0} )).
\]
It is likely that under the assumption of non-negative curvature current of $\ti h_N$, any section of
\[
H^0(Y_0, \Oo(mK_{Y_0} + m\Theta_0+N|_{Y_0}) \otimes \Ii(\ti h_N|_{Y_0}))
\] extends over $W$.
\end{remark}
\subsection{The construction of $\Gg_m(h)$}\label{subsec: Multiplier ideal sheaves and sections}
Let $X$ be a complex space with canonical singularities. Note that the canonical divisor $K_X$ is a Weil divisor which may not be Cartier. There exists $\ell \in \Nn$ such that $\ell K_X$ is Cartier. Let $\mu: W \to X$ be a resolution. Let $w_i, i=1, \ldots, n$ be local coordinates on $W$ and $z_i, i=1, \ldots, n$ be local coordinates on $X_{\rm reg}$. There is a multiple-valued meromorphic function $J(\mu)$ such that
\[
(d\mu^*z_1 \wedge \cdots \wedge d\mu^*z_n)^{\otimes \ell} = J(\mu)^\ell (dw_1 \wedge \cdots \wedge dw_n)^{\otimes \ell},
\] where $J(\mu)^\ell$ is a (single-valued) meromorphic function. In what follows, over the possibly singular locus, we are only concerned with $|J(\mu)|$, hence the multi-valuedness of $J(\mu)$ will not cause any problem.
If we write $E = \mu^*K_X-K_W$ with $E$ the $\mu$-exceptional $\Qq$-Cartier divisor, then $J(\mu)$ is the local equation of $-E$ (up to multiplication by a nowhere-vanishing function). As $X$ has canonical singularities, $E \leq 0$ and $J(\mu)$ is a multiple-valued holomorphic function.
Let $L$ be a Cartier divisor. As $X$ has canonical singularities, for any $m\in\Nn$ ($mK_X$ may not be Cartier), there is a natural pull-back
\[
\mu^*\Oo_X(mK_X) \to \Oo_W(mK_W)
\] which induces the natural map
\[
\mu^*\Oo_X(mK_X+L) \to \Oo_W(mK_W+\mu^*L).
\] To be precise, suppose that on an open set $U \subset X_{\reg}$, we have
\[
s\in H^0(U, \Oo_X(mK_X+L))
\] such that
\[
s = \alpha (dz_1 \wedge \cdots \wedge dz_n)^{\otimes m}
\] with $\alpha \in H^0(U, \Oo(L))$. Let
\[
d\mu^*z_1 \wedge \cdots \wedge d\mu^*z_n = J(\mu) dw_1 \wedge \cdots \wedge dw_n.
\] As $U$ is smooth, $J(\mu)$ is a single-valued holomorphic function on $U$. Then
\begin{equation}\label{eq: explicit expression}
\mu^*s=\mu^*\alpha \cdot J(\mu)^m (dw_1 \wedge \cdots \wedge dw_n)^{\otimes m},
\end{equation} where $\mu^*\alpha \cdot J(\mu)^m \in H^0(\mu^{-1}(U), \Oo_W(\mu^*L))$. As $E \leq 0$,
\[
\mu^*\Oo_X(mK_X+L) \hookrightarrow \Oo_W(mK_W+\mu^*L),
\] and thus $\mu^*\alpha \cdot J(\mu)^m (dw_1 \wedge \cdots \wedge dw_n)^{\otimes m}$ extends over $W$.
Let $X$ be a normal complex space and $L$ be a Cartier divisor with a metric $h$. Assume that $\vphi$ is a local weight of $h$ under some trivialization. The following type of ideal sheaves generalizes multiplier ideal sheaves.
\begin{definition}[Definition of $\Gg_m(h)$]\label{def: G_m}
Under the above notation and assumptions, let $\Gg_m(h)$ be the sheaf of germs of holomorphic functions such that for any $x\in X$,
\[
\begin{split}
\Gg_m(h)_x = \{&\alpha\in \Oo_{X,x} \mid |\mu^*\alpha|^2|J(\mu)|^{2m}e^{-2\mu^*\vphi} \in L_{\rm loc}^1(\mu^{-1}(U)), \\
& \text{where~}\mu: W \to X \text{~is a resolution and~} U \text{~is a neighborhood of~} x \}.
\end{split}
\]
\end{definition}
\begin{remark}\label{rem: global resolution}
(1) Because of \eqref{eq: pullback relation}, the above definition is independent of the choice of trivializations. However, $\alpha$ may depend on $\mu$. (2) For technical reasons (see Lemma \ref{lem: one resolution}), we require $\mu$ to be a (global) resolution of $X$ instead of a resolution of a neighborhood of $x$.
\end{remark}
When $X$ is smooth, we certainly have $\Gg_m(h) \supset \Ii(h)$. Furthermore, if $m=1$, then $\Gg_1(h) = \Ii(h)$ by the change-of-variables formula (see \eqref{eq: integrability of pullback}). The following example shows that the inclusion $\Gg_m(h)\supset\Ii(h)$ may be strict.
\begin{example}\label{eg: not the same as multiplier ideal sheaf}
Let $x, y$ be the coordinates of $\Cc^2$. Let $h=e^{-\vphi}$ with $\vphi \coloneqq \frac 1 2 \log(|x|^4+|y|^4)$ be the metric for the trivial divisor. Then $1 \not\in \Ii(h)$. We claim that $1\in \Gg_2(h)$. In fact, let $\mu: W \to \Cc^2$ be the blow-up of the origin. Then $W=\{(x,y)\times[u:v] \in \Cc^2\times \Pp^1 \mid xv=yu\}$. It is covered by two pieces $U_1=\{(x,y,z) \in \Cc^2\times\Cc \mid xz=y\}$ and $U_2=\{(x,y,w) \in \Cc^2\times\Cc \mid x=yw\}$. By the symmetry of $\vphi$, it suffices to consider the $L_{\rm loc}^1$-property on one piece, say $U_1$. Choose $x, z$ as local coordinates on $U_1$. By $dy=xdz+zdx$, we have
\[
xdx\wedge dz =d\mu^*x \wedge d\mu^*y.
\] Thus $J(\mu)=x$. Then locally on $U_1$,
\[
\begin{split}
&\int 1 \cdot |J(\mu)|^4e^{-2\mu^*\vphi}~dV_{W}
\\
=&\int \frac{|x|^4}{|x|^4+|y|^4}~dV_{W}=\int \frac{1}{1+|z|^4}~dV_{W}<\infty.
\end{split}
\]
\end{example}
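For the reader's convenience, the failure of the usual multiplier ideal condition in the example above can also be checked directly: writing $r^2=|x|^2+|y|^2$, one has $|x|^4+|y|^4 \leq r^4$, so near the origin
\[
\int_{\{r<1\}} e^{-2\vphi}~dV = \int_{\{r<1\}} \frac{dV}{|x|^4+|y|^4} \geq \int_{\{r<1\}} \frac{dV}{r^4} = \mathrm{vol}(S^3)\int_0^1 \frac{dr}{r} = \infty,
\]
which is why $1 \not\in \Ii(h)$, whereas the extra factor $|J(\mu)|^4$ in the computation above restores integrability.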
It is natural to ask the following question:
\begin{question}
Is $\Gg_m(h)$ a coherent sheaf?
\end{question}
To study the above question, it seems that the $L^2$-extension theorem also needs to be generalized in the bimeromorphic setting.
\subsection{Proof of Theorem \ref{thm: sing extension}, Theorem \ref{thm: AG sing trivial boundary extension} and Corollary \ref{cor: sing invariant of plurigenera for g-pair}}
Let $\pi: X \to \De$ be a projective contraction from a $\Qq$-Gorenstein complex space $X$ to the disc $\De$. Let $L$ be a Cartier divisor on $X$. Assume that $X_0\subset X$ is the fiber over $0\in\De$, and that it is a normal complex subspace. As $X_0$ is a Cartier divisor, by the adjunction formula,
\[
(K_X+X_0)|_{X_0} \sim_\Qq K_{X_0}.
\] Thus $X_0$ is also $\Qq$-Gorenstein, and $(mK_X+L)|_{X_0} \sim_\Qq mK_{X_0}+L|_{X_0}$ as $X_0$ is linearly equivalent to $0$ on $X$. Note that we do not assume that $mK_{X_0}$ is Cartier. We explain the meaning of extending sections from $X_0$ to $X$.
\begin{lemma}\label{le: meaning of extension}
Under the above notation and assumptions, there exists a natural map
\[
H^0(X, \Oo(mK_X+L)) \to H^0(X_0, \Oo(mK_{X_0} + L|_{X_0})).
\]
\end{lemma}
\begin{proof}
Let $V_0 \subset X_0$ be the smooth locus of $X_0$. As $X_0$ is a Cartier divisor, there exists a neighborhood $V\supset V_0$ such that $V\subset X$ is a smooth open variety. Let $\ti V_0 = X_0 \cap {V}$, then $\ti V_0 \supset V_0$.
Let $j: \ti V_0 \to V$ be the closed embedding. Because $V$ is smooth, we have
\[
\Oo_V(mK_X+L) \to j_*\Oo_{\ti V_0}(mK_X+L),
\] and $\Oo_{\ti V_0}(mK_X+L) \simeq \Oo(mK_{\ti V_0}+L|_{\ti V_0})$ as $\ti V_0$ is linearly equivalent to $0$ on $V$. As $\codim_{X_0}(X_0\backslash V_0) \geq 2$ and $\ti V_0 \supset V_0$,
\[
H^0(\ti V_0, \Oo(mK_{\ti V_0}+L|_{\ti V_0})) \simeq H^0(X_0, \Oo(mK_{X_0}+L|_{X_0})).
\] Therefore, there exist natural maps
\[
\begin{split}
&H^0(X, \Oo(mK_X+L)) \to H^0(V, \Oo_V(mK_X+L))\\
\to &H^0(\ti V_0, \Oo(mK_{\ti V_0}+L|_{\ti V_0}))\to H^0(X_0, \Oo(mK_{X_0} + L|_{X_0})).
\end{split}
\]
\end{proof}
Assume that $X$ has canonical singularities. Let $\mu: W \to X$ be a log resolution of $(X, X_0)$. Let $Y_0$ be the strict transform of $X_0$. Suppose that $X_0$ has canonical singularities. Let $\mu_0 \coloneqq \mu|_{Y_0}: Y_0 \to X_0$ and $E_0 = \mu_0^*K_{X_0} - K_{Y_0}$. The following is the key extension lemma.
\begin{lemma}\label{le: extension for a fixed resolution}
Under the above notation and assumptions, let $L$ be a Cartier divisor on $X$ with a non-negative metric $h$, and suppose that $h|_{X_0}$ is well-defined. Then for each $m\in \Nn$, any section $s \in H^0(X_0, \Oo_{X_0}(mK_{X_0}+L|_{X_0}))$ such that
\[
\mu_0^*s \in H^0\left(Y_0, \Oo(mK_{Y_0}+ \mu_0^*(L|_{X_0})) \otimes \Ii(\mu_0^*(h|_{X_0}))\right)
\] extends over $X$ (i.e. $s$ has a preimage in $H^0(X, \Oo(mK_X+L))$ under the natural map in Lemma \ref{le: meaning of extension}).
\end{lemma}
\begin{proof}
Let $L_0 = L|_{X_0}, h_0 = h|_{X_0}$. By assumption (see Definition \ref{def: non-negative curvature on variety}), $\mu^*h$ has non-negative curvature current. Let $\mu^*X_0=Y_0+\Theta$ and $\Theta_0 = \Theta|_{Y_0}$. Then $\mu^*h \hbar_{m\Theta}$ has non-negative curvature current by $\Theta \geq 0$.
By Proposition \ref{claim: same integration} and \eqref{eq: K_W|Y bigger},
\[
\begin{split}
& H^0(Y_0, \Oo(mK_{Y_0} + \mu_0^*L_{0}) \otimes \Ii(\mu_0^*h_0 ))\\
\subset & H^0(Y_0, \Oo(mK_{Y_0} +m\Theta_0 + \mu_0^*L_{0}) \otimes \Ii(\mu_0^*h_0 \hbar_{m\Theta_0}))\\
\simeq & H^0(Y_0, \Oo(mK_W|_{Y_0} + \mu_0^*L_{0}) \otimes \Ii(\mu_0^*h_0 \hbar_{m\Theta_0})).
\end{split}
\] By Lemma \ref{le: extension}, $\mu_0^*s$ extends over $W$. That is, there exists $\ti \omega \in H^0(W, \Oo(mK_W+\mu^*L))$ such that $\ti \omega|_{Y_0} = \mu_0^*s$.
We have the following diagram
\begin{equation}\label{eq: diag2}
\xymatrix{
H^0(W, \Oo(mK_W+\mu^*L)) \ar[r]^a & H^0(Y_0, \Oo(mK_W|_{Y_0}+\mu_0^*L_0)) \\
H^0(X, \Oo(mK_X+L)) \ar[u]_\simeq^b \ar[r]^d & H^0(X_0, \Oo(mK_{X_0}+L_0)), \ar@{^{(}->}[u]_c }
\end{equation} where $a$ is the restriction map, $d$ comes from Lemma \ref{le: meaning of extension}, $b$ comes from the fact that $X$ has canonical singularities, and $c$ is an inclusion which comes from \eqref{eq: K_W|Y bigger} and the fact that $X_0$ has canonical singularities. The desired extension follows from a diagram chase once we know that the diagram is commutative.
To check the commutativity of the diagram, we follow the notation in the discussion before Lemma \ref{le: extension}.
Besides, it is enough to work locally in the smooth loci of $X_0, Y_0$ and $W$. Then
\[
b: \Oo_X(mK_X+L) \to \Oo_W(m\mu^*K_X+\mu^*L)\to \Oo(mK_W+\mu^*L)
\] is given by
\[
\beta (\omega_{X_0} \wedge dt)^{\otimes m} \mapsto \mu^*\beta (\mu^*\omega_{X_0} \wedge dt)^{\otimes m} \mapsto \mu^*\beta (J(\mu_0)\omega_{Y_0} \wedge dt)^{\otimes m},
\] where $\beta$ is a local section of $L$, $\omega_{X_0}$ and $\omega_{Y_0}$ are local generators of $\Oo(K_{X_0})$ and $\Oo(K_{Y_0})$ respectively, and $\mu^*\omega_{X_0} = J(\mu_0) \omega_{Y_0}$. Note that the $t$ in each term is the corresponding pull-back of the coordinate $t$ on $\De$.
The map
\[
\begin{split}
c: \Oo(mK_{X_0}+L_0) \to \Oo(m\mu_0^*K_{X_0} +\mu_0^*L_0) &\hookrightarrow \Oo(mK_{Y_0} +\mu_0^*L_0) \\
&\to \Oo(mK_W|_{Y_0}+\mu_0^*L_0)
\end{split}
\] is given by
\[
\beta_0\omega_{X_0}^{\otimes m} \mapsto \mu_0^*\beta_0 (\mu_0^*\omega_{X_0})^{\otimes m} \mapsto \mu_0^*\beta_0 (J(\mu_0)\omega_{Y_0})^{\otimes m} \mapsto \mu_0^*\beta_0 J(\mu_0)^m (\omega_{Y_0}\wedge dt|_{Y_0})^{\otimes m},
\] where $\beta_0$ is a local section of $L_0$ and $\mu_0^*\omega_{X_0}=J(\mu_0) \omega_{Y_0}$. The last map is \eqref{eq:double dagger}.
For the same local section $\beta$ of $L$, the map $a$ is given by
\[
a: \mu^*\beta (\omega_{Y_0} \wedge dt)^{\otimes m}\mapsto
(\mu^*\beta|_{Y_0})(\omega_{Y_0}\wedge dt|_{Y_0})^{\otimes m}.
\] (Strictly speaking, locally around $Y_0$, a section of $\Oo(mK_W+\mu^*L)$ should be represented by $\mu^*\beta (\omega_{Y_0} \wedge dy_0)^{\otimes m}$. Note $\omega_{Y_0} \wedge dy_0 = \frac{1}{\xi} \omega_{Y_0} \wedge dt$ by \eqref{eq: relation between forms}.)
Finally, the map $d$ is given by
\[
d: \beta (\omega_{X_0} \wedge dt)^{\otimes m} \mapsto \beta|_{X_0} \omega_{X_0}^{\otimes m}.
\]
Thus the diagram \eqref{eq: diag2} commutes.
\end{proof}
\begin{remark}\label{rmk: above lemma is enough}
Lemma \ref{le: extension for a fixed resolution} is enough to show the desired extension result Theorem \ref{thm: AG sing trivial boundary extension}. The reason is that for any section of $H^0(X_0, \Oo_{X_0}(m(K_{X_0}+ M|_{X_0})))$, the integrability requirement is always satisfied when taking the resolution $W=X'$ (see \eqref{eq: integrable}, the key is that after taking a resolution, we have an extra weight $\vphi_{-mB_0}$).
\end{remark}
\begin{lemma}\label{le: comparing sections}
Let $X$ be a $\Qq$-Gorenstein complex space. Let $g: V \to X$ and $u: W \to V$ be resolutions. Set $f=g\circ u: W \to X$. Let $L$ be a Cartier divisor on $X$ with a metric $h$. For any $m\in \Nn$ and $s \in H^0(X, \Oo(mK_X+L))$, if $g^*s \in H^0(V, \Oo(mK_V+g^*L) \otimes \Ii(g^*h))$, then $f^*s \in H^0(W, \Oo(mK_W+f^*L) \otimes \Ii(f^*h ))$.
\end{lemma}
\begin{proof}
Since $V$ is smooth and $g^*s \in H^0(V, \Oo(mK_V+g^*L))$, we have $f^*s \in H^0(W, \Oo(mK_W+f^*L))$. Hence we only need to check the integrability.
Let $v_i, i=1, \ldots, n$ and $w_i, i=1, \ldots, n$ be local coordinates of $V$ and $W$ respectively. Assume that $g^*s = \sigma (dv_1 \wedge \cdots \wedge dv_n)^{\otimes m}$ with $\sigma$ a local section of $g^*L$, then locally we have
\[
\int \|\sigma\|^2_{g^*h} ~dV_{V} < \infty.
\] By the change-of-variables formula, this is the same as
\begin{equation}\label{eq: integrability of pullback}
\int \|u^*\sigma\|^2_{u^*g^*h} |J(u)|^2 ~dV_{W} < \infty,
\end{equation} where $J(u)$ is the local equation of the $u$-exceptional divisor $K_{W}-u^*K_{V}$.
On the other hand,
\[
f^*s = u^*(\sigma (dv_1 \wedge \cdots \wedge dv_n)^{\otimes m})=(u^*\sigma) J(u)^{m} (dw_1 \wedge \cdots \wedge dw_n)^{\otimes m}.
\] Thus $f^*s \in H^0(W, \Oo(mK_W+f^*L) \otimes \Ii(f^*h ))$
means that locally
\[
\int \|u^*\sigma\|^2_{u^*g^*h} |J(u)|^{2m} ~dV_{W} < \infty.
\] As $m \geq 1$ and $J(u)$ is holomorphic, the claim follows.
\end{proof}
\begin{lemma}\label{lem: one resolution}
Let $X$ be a compact complex space with canonical singularities. Let $L$ be a Cartier divisor on $X$ with a metric $h$. For some $m\in\Nn$, $s\in H^0(X, \Oo_X(mK_X+L)\otimes \Gg_m(h))$ if and only if there exists a resolution $\mu: W \to X$ such that
\[
\mu^*s \in H^0(W, \Oo_W(mK_W+\mu^*L)\otimes\Ii(\mu^*h)).
\]
\end{lemma}
\begin{proof}
The sufficient part follows from the definition. In the following, we show the necessary part.
Let $\vphi$ be the local weight for $h$. Since $X$ is compact, there are open sets $U_j, 1 \leq j \leq k$ and projective resolutions $\mu_j: W_j \to X, 1 \leq j \leq k$ such that $s\in H^0(U_j, \Oo_X(mK_X+L))$ and
\begin{equation}\label{eq: loc L1}
|\mu^*_j\alpha|^2|J(\mu_j)|^{2m}e^{-2\mu_j^*\vphi} \in L^1_{\rm loc}(\mu_j^{-1}(U_j)),
\end{equation}
where $s=\alpha\cdot(\omega_X)^{\otimes m}$ with $\omega_X$ a local generator for $\Oo(K_X)$ and $\alpha$ a local section of $L$ (see Remark \ref{rem: global resolution} (2)). More precisely, we have
\[
\mu_j^*s=(\mu_j^*\alpha)\cdot J(\mu_j)^m(\omega_{W_j})^{\otimes m},
\] where $(\mu_j^*\alpha)\cdot J(\mu_j)^m$ is first defined on the smooth locus, but it extends as a local section of $\mu_j^*L$ on $W_j$ as $X$ has canonical singularities.
Hence, \eqref{eq: loc L1} is the same as
\begin{equation}\label{eq: higher multiplier ideal space}
\mu_j^*s \in H^0(\mu^{-1}_j(U_j), \Oo_{W_j}(mK_{W_j}+\mu_j^*L)\otimes \Ii(\mu_j^*h)).
\end{equation}
Let $\mu: W \to X$ be a resolution such that $\mu$ factors through each $\mu_j, 1 \leq j \leq k$. We claim that $\mu$ satisfies the desired property. By construction, the natural morphism $\nu_j: \mu^{-1}(U_j) \to \mu^{-1}_j(U_j)$ is a resolution. By \eqref{eq: higher multiplier ideal space} and Lemma \ref{le: comparing sections},
\[
\nu_j^*(\mu_j^*s) \in H^0(\mu^{-1}(U_j), \Oo_{W}(mK_W+\mu^*L)\otimes\Ii(\mu^*h)).
\] As $\nu_j^*(\mu_j^*s) = (\mu^*s)|_{\mu^{-1}(U_j) }$, the claim follows.
\end{proof}
\begin{lemma}\label{le: extend birational morphism}
Let $X$ be a complex space, and $X_0 \subset X$ be a reduced irreducible compact complex subspace of codimension $1$. Suppose that $\nu_0: \ti X_0 \to X_0$ is a proper modification. Then there exist smooth complex spaces $Y_0\subset Y$ and resolutions $\mu: Y \to X, \tau_0: Y_0 \to \ti X_0$ such that $\mu|_{Y_0}=\nu_0\circ\tau_0$.
\end{lemma}
\begin{proof}
By Hironaka's Chow lemma (see \cite[Chapter VII, Theorem 2.8, Corollary 2.9]{Pet94b}), there exist a blow-up $\sigma_0: Z_0 \to X_0$ of an ideal sheaf $\mathfrak I_0$ on $X_0$ and a proper modification $\mu_0: Z_0 \to \ti X_0$ such that $\sigma_0 = \nu_0\circ\mu_0$. Replacing $\nu_0$ by $\sigma_0$, we can assume that $\nu_0$ is the blow-up of the ideal sheaf $\mathfrak I_0$ on $X_0$. Let $\mathfrak I$ be the kernel of $\Oo_X \to \Oo_{X_0}/\mathfrak I_0$. Let $X'= \proj_X \oplus_{i=0}^\infty \mathfrak I^i \to X$ be the blow-up of $\mathfrak I$. Then there exist embeddings $\ti X_0\subset X_0 \times_{X} X' \subset X'$. Let $Y \to X'$ be a log resolution of $(X', \ti X_0)$ with $Y_0$ the strict transform of $\ti X_0$. Then the corresponding morphisms satisfy the claim.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm: sing extension}]
By Lemma \ref{lem: one resolution}, for a fixed $s\in H^0(X_0, \Oo_{X_0}(mK_{X_0}+L|_{X_0}) \otimes \Gg_m(h|_{X_0}))$, there is a resolution $\nu_0: \ti X_0 \to X_0$ such that $\nu_0^*s \in H^0(\ti X_0, \Oo_{\ti X_0}(mK_{\ti X_0}+\nu_0^*(L|_{X_0})) \otimes \Ii(\nu_0^*(h|_{X_0})))$. By Lemma \ref{le: extend birational morphism}, there are smooth complex spaces $Y_0 \subset Y$ and resolutions $\mu: Y \to X, \tau_0: Y_0 \to \ti X_0$ such that $\mu_0\coloneqq\mu|_{Y_0}=\nu_0\circ\tau_0$. By Lemma \ref{le: comparing sections},
\[
\mu_0^*s \in H^0(Y_0, \Oo_{Y_0}(mK_{Y_0}+\mu_0^*(L|_{X_0})) \otimes \Ii(\mu_0^*(h|_{X_0}))).
\] Then the claim follows from Lemma \ref{le: extension for a fixed resolution}.
\end{proof}
\begin{remark}
\cite[Theorem 1]{Tak07} does not assume that $X_0$ has canonical singularities. This is because the $m$-genus in \cite{Tak07} is defined by using its smooth model which does not coincide with our definition (take $L=0$) when the singularities are worse than the canonical singularities.
\end{remark}
Using this result, we show the extension theorem for g-pairs with abundant nef part.
\begin{remark}\label{rmk: K_X not Cartier}
In Theorem \ref{thm: AG sing trivial boundary extension}, $mK_{X_0}$ is not assumed to be Cartier. Instead, it is just a Weil divisor on $X_0$. A priori, $K_{X_0}$ could even fail to be $\Qq$-Cartier. But if there exists $m$ such that $mM|_{X_0}$ is Cartier, then $K_{X_0}$ is $\Qq$-Cartier as $K_{X_0}+M_0$ is $\Qq$-Cartier (this is included in the definition of g-canonical singularities).
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm: AG sing trivial boundary extension}]
As $\pi_*\Oo_X(mK_X+mM)$ is a coherent sheaf on $\De$, by Cartan's Theorem A, it suffices to show the extension after shrinking $\De$. Hence, by \cite[Main Theorem]{Kaw99}, we can assume that $X$ has canonical singularities.
Let $L=mM$ and $M'$ be the nef$/\De$ $\Qq$-divisor on $X'$ such that $f_*M'=M$ and satisfying Definition \ref{def: abundant gpair}. To apply Theorem \ref{thm: sing extension}, it suffices to construct a metric $h$ for $L$ satisfying conditions of Theorem \ref{thm: sing extension} and show
\[
H^0(X_0, \Oo_{X_0}(m(K_{X_0}+ M|_{X_0})))=H^0(X_0, \Oo_{X_0}(mK_{X_0}+L|_{X_0}) \otimes \Gg_m(h|_{X_0})).
\]
By the adjunction formula (see \eqref{eq: adjunction on X_0}), we have a g-lc pair $(X_0, B_0+M_0)$ such that $K_{X_0}+B_0+M_0 = K_{X_0}+M|_{X_0}$. As $(X_0, B_0+M_0)$ has g-canonical singularities, $B_0=0$ and thus $M_0 = M|_{X_0}$. Then
\[
K_{\ti X_0}+D_0+M'_0 =f_0^*(K_{X_0}+M_0),
\] where $M_0'\coloneqq M'|_{\ti X_0}$ and $D_0 \leq 0$ is an $f_0$-exceptional divisor. Besides,
\[
K_{\ti X_0}+B_0 =f_0^*K_{X_0},
\] where $B_0$ is an $f_0$-exceptional divisor. Because $(X_0, M_0)$ has g-canonical singularities and $K_{X_0}$ is $\Qq$-Cartier, $X_0$ also has canonical singularities, and thus $B_0 \leq 0$. By Lemma \ref{le: negativity}, $M'+\Upxi=f^*M$ with $\Upxi \geq 0$. We have
\[
M_0'+\Upxi_{0}= f_0^*(M_0),
\] where $\Upxi_{0} \coloneqq \Upxi|_{X_0}$. Combining the above equations, we have
\begin{equation}\label{eq: sing D =B+Upxi}
D_0=B_0+\Upxi_0.
\end{equation}
Shrinking $\De$ further, let $h_k$ be a metric for $M$ as in Lemma \ref{le: comparing h with h_k'} and set $h_{k,0}=h_k|_{X_0}$. For a fixed $m \in \Nn$, we claim that there exists $k \gg 1$ such that
\[
H^0(X_0, \Oo_{X_0}(m(K_{X_0}+ M|_{X_0}))) = H^0(X_0, \Oo_{X_0}(m(K_{X_0}+ M|_{X_0}))\otimes \Gg_m(h_{k,0}^{m})).
\]
Then the theorem follows from Theorem \ref{thm: sing extension}.
For a section $s \in H^0(X_0, \Oo_{X_0}(m(K_{X_0}+ M|_{X_0})))$, assume that $\theta: \Oo(L)|_U \simeq \Oo_U$ is a trivialization and on $U_{\rm reg}$, $s$ can be written as $\alpha(dx_1 \wedge \cdots \wedge d x_{n-1})^{\otimes m}$. By Lemma \ref{lem: one resolution}, it suffices to show
\begin{equation}\label{eq: sing bounded}
\int \|f_0^*\alpha\|^2_{f_0^*(h^m_{k,0})} |J(f_0)|^{2m}~dV_{{\ti X_0}} < \infty.
\end{equation}
\eqref{eq: sing bounded} holds for $k\gg 1$ by Lemma \ref{le: comparing h with h_k'} (4) and \eqref{eq: sing D =B+Upxi}. In fact, first note that $J(f_0)$ is the local equation of $-B_0$, hence $|J(f_0)|=e^{-\vphi_{B_0}}$, where $\vphi_{B_0}$ is the local weight for $\hbar_{B_0}$. Thus
\[
\|f_0^*\alpha\|^2_{f_0^*(h_{k,0}^{m})}|J(f_0)|^{2m}=\|f_0^*(\theta(\alpha))\|^2 \cdot e^{-2mf_0^*\vphi_{k,0}} \cdot e^{-2m\vphi_{B_0}},
\] where $\vphi_{k,0}$ is the local weight for $h_{k,0}$. By Lemma \ref{le: comparing h with h_k'} (4),
\[
f_0^*\vphi_{k,0}+\frac 1 k \vphi_{E_0} \approx \frac 1 k \vphi_{F_0}+\vphi_{\Upxi_0}.
\] By \eqref{eq: sing D =B+Upxi}, $mB_0 = mD_0 -m\Upxi_0$. Thus
\begin{equation}\label{eq: integrable}
\begin{split}
-mf_0^*\vphi_{k,0} - m\vphi_{B_0} &\approx m(-\frac 1 k \vphi_{F_0}-\vphi_{\Upxi_0}+\frac 1 k \vphi_{E_0})- m\vphi_{D_0} +m\vphi_{\Upxi_0} \\
& \approx m(-\frac 1 k \vphi_{F_0}+ \frac 1 k \vphi_{E_0} - \vphi_{D_0}).
\end{split}
\end{equation} Note $-D_0 \geq 0$, and thus for a fixed $m$, we can take $k \gg 1$ such that the integrability of \eqref{eq: sing bounded} holds. In fact, it is enough to choose $k$ such that
\[
\frac{2m}{k} \nu(\Theta_{\hbar_{F_0}}(F_0), y) <1 \text{~for all~} y \in \ti X_0,
where $\nu(\Theta_{\hbar_{F_0}}(F_0), y)$ is the Lelong number of the curvature current at $y$.
\end{proof}
The following remark explains the crucial point of Theorem \ref{thm: AG sing trivial boundary extension}.
\begin{remark}\label{rmk: Paun's original thm is not enough}
Even in the smooth case, for the metric $h=h_k$ constructed in the proof of Theorem \ref{thm: AG sing trivial boundary extension}, we may have
\[
H^0(X_0, \Oo_{X_0}(m(K_{X_0}+ M|_{X_0})))\supsetneqq H^0(X_0, \Oo_{X_0}(m(K_{X_0}+M|_{X_0})) \otimes \Ii(h|_{X_0})).
\] Hence \cite[Theorem 1]{Pau07} does not apply. In fact, under the notation of the proof of Theorem \ref{thm: AG sing trivial boundary extension}, assuming that $X_0$ is smooth, by the change-of-variables formula, we have
\[
\int \|\alpha\|_{h_0^m}^2 ~dV_{{X_0}}=
\int \|f_0^*\alpha\|_{f_0^*h_0^m}^2 |J(f_0)|^2~dV_{{\ti X_0}},
\] where $J(f_0)$ is the local equation of $K_{\ti X_0}-f_0^*K_{X_0}=-B_0\geq 0$ (c.f. \eqref{eq: sing bounded}). If $\ti \theta: \Oo(f_0^*(L|_{X_0}))|_{f_0^{-1}(U)} \simeq \Oo_{f_0^{-1}(U)}$ is a trivialization, then it becomes
\[
\int |\ti\theta(f_0^*\alpha)|^2e^{-2mf_0^*\vphi_{k,0}}e^{-2\vphi_{B_0}}~dV_{{\ti X_0}}.
\] However (c.f. \eqref{eq: integrable}),
\[
\begin{split}
-mf_0^*\vphi_{k,0}-\vphi_{B_0} &\approx m(-\frac 1 k \vphi_{F_0}-\vphi_{\Upxi_0}+\frac 1 k \vphi_{E_0})- \vphi_{D_0} +\vphi_{\Upxi_0}\\
& \approx m(-\frac 1 k \vphi_{F_0}+ \frac 1 k \vphi_{E_0} )- \vphi_{D_0}-(m-1)\vphi_{\Upxi_0}.
\end{split}
\] Since $\Upxi_0 \geq 0$, for $m\gg 1$ we do not have the integrability.
The new extension theorem works in this setting because the integrability requirement is for
\[
\int \|f_0^*\alpha\|^2_{f_0^*(h_{k,0}^{m})}|J(f_0)|^{2m} ~dV_{{\ti X_0}}.
\] The extra $|J(f_0)|^{2m}$ makes the integral finite.
\end{remark}
\begin{proof}[Proof of Corollary \ref{cor: sing invariant of plurigenera for g-pair}]
If $\Ff$ is a sheaf of $\Oo_\De$-modules and $t\in \De$ is a point, then $\Ff \otimes \Cc(t)=\Ff_t/m_t\Ff_t$, where $m_t\subset\Oo_{\De,t}$ is the maximal ideal corresponding to $t$ and $\Cc(t)\coloneqq \Oo_{\De,t}/m_t$. The following argument is similar to \cite[Proof of Theorem 1.1]{Tak07}.
Let $L=mM$ and $L_t=L|_{X_t}$. By the same argument as Lemma \ref{le: meaning of extension}, there is a natural map
\[
\pi_*\Oo_X(mK_{X}+L) \to H^0(X_t, \Oo_{X_t}(mK_{X_t}+L_t))
\] which is surjective by Theorem \ref{thm: AG sing trivial boundary extension}. For $\alpha \in m_t$ and $\sigma \in (\pi_*\Oo_X(mK_{X}+L))_t$, $(\alpha\otimes \sigma)|_{X_t}=0$, thus the above map induces the surjective map
\begin{equation}\label{eq: iso}
\pi_*\Oo_X(mK_{X}+L) \otimes \Cc(t) \to H^0(X_t, \Oo_{X_t}(mK_{X_t}+L_t)).
\end{equation} We show that this map is also injective. Let $U= X - {\rm Sing} X_t$. We claim that the following natural maps give a short exact sequence
\begin{equation}\label{eq: U SEC}
\begin{split}
0& \to \Oo_U(mK_X+(m-1)X_t+L) \to \Oo_U(mK_X+mX_t+L) \\
&\to\iota_* \Oo_{U \cap X_t}(mK_{X_t}+L_t) \to 0,
\end{split}
\end{equation} where $\iota: U \cap X_t \to U$. It is enough to check the exactness on stalks. Let $z \in U$. If $z \not\in X_t$, then $\iota_* \Oo_{U \cap X_t}(mK_{X_t}+L_t)_z=0$ and $\Oo_U(mK_X+(m-1)X_t+L)_z \simeq \Oo_U(mK_X+mX_t+L)_z$ as we can locally invert the defining equation of $X_t$. If $z \in U \cap X_t$, then by the choice of $U$, $z$ is a smooth point of $X_t$. As $X_t$ is Cartier, there is a smooth open set of $U$ which contains $z$. Then the exactness follows. Let $j: U \hookrightarrow X$. Pushing forward \eqref{eq: U SEC}, we have
\[
\begin{split}
0& \to \Oo_X(mK_X+(m-1)X_t+L) \to \Oo_X(mK_X+mX_t+L) \\
&\to j_*\iota_* \Oo_{U \cap X_t}(mK_{X_t}+L_t).
\end{split}
\] As $\codim_{X_t}(X_t - U\cap X_t) \geq 2$, the natural map
\[
\eta_*\Oo_{X_t}(mK_{X_t}+L_t) \to j_*\iota_* \Oo_{U \cap X_t}(mK_{X_t}+L_t)
\] is an isomorphism, where $\eta: X_t \to X$. In conclusion, there is an exact sequence
\[
0 \to \Oo_X(mK_X+(m-1)X_t+L) \rightarrow \Oo_X(mK_X+mX_t+L) \to \eta_*\Oo_{X_t}(mK_{X_t}+L_t).
\] Pushing forward by $\pi$ and taking the stalk at $t$, we have
\[
\begin{split}
0 &\to \pi_*\Oo_X(mK_X+(m-1)X_t+L)_t \rightarrow \pi_*\Oo_X(mK_X+mX_t+L)_t \\
&\to H^0(X_t, \Oo_{X_t}(mK_{X_t}+L_t)).
\end{split}
\] Let $\Ff \coloneqq \pi_*\Oo_X(mK_X+(m-1)X_t+L)$ and $\Gg\coloneqq\pi_*\Oo_X(mK_X+mX_t+L)$, then
\[
\begin{split}
&\Ker(\pi_*\Oo_X(mK_X+mX_t+L) \otimes \Cc(t) \to H^0(X_t, \Oo_{X_t}(mK_{X_t}+L_t)))\\
\simeq &(\Ff_t+m_t\Gg_t)/m_t\Gg_t.
\end{split}
\] We claim that $(\Ff_t+m_t\Gg_t)/m_t\Gg_t=0$. In fact, as $m_t=(z-t)\Oo_{\De,t}$ and $\pi^*(z-t)$ is the defining equation of $X_t$, we have $\Ff_t\subset m_t\Gg_t$. Because $X_t \sim 0$, we have $\Oo_X(mK_X+(m-1)X_t + L) \simeq \Oo_X(mK_X + L)$. Thus \eqref{eq: iso} is an isomorphism.
Note that $\pi_*\Oo_X(mK_X+L)$ is a coherent sheaf. By the upper semi-continuity of
\[
\dim_\Cc(\pi_*\Oo_X(mK_X+L)\otimes \Cc(t))
\] and the isomorphism \eqref{eq: iso}, $h^0(X_t, \Oo_{X_t}(mK_{X_t}+L_t))$ is upper semi-continuous. Fix $t_0 \in \De$; any section of $H^0(X_{t_0}, \Oo_{X_{t_0}}(mK_{X_{t_0}}+L_{t_0}))$ extends over $X$ by Theorem \ref{thm: AG sing trivial boundary extension}. Hence
\[
h^0(X_{t_0}, \Oo_{X_{t_0}}(mK_{X_{t_0}}+L_{t_0})) \leq h^0(X_t, \Oo_{X_t}(mK_{X_t}+L_t))
\] for a general $t \in \De$. Thus, $h^0(X_t, \Oo_{X_t}(mK_{X_t}+L_t))$ must be a constant for all $t\in \De$.
\end{proof}
\subsection{Further discussions}
The following example shows that nefness of $M$ alone does not guarantee the invariance of plurigenera.
\begin{example}\label{eg: M nef}
Let $A/\Cc$ be an abelian variety and $A^\vee$ be its dual abelian variety. If $\Pic^0(A)$ is the identity component of the Picard variety of $A$, then $\Pic^0(A) = A^\vee$. Let $\mathcal{P}$ be the Poincar\'e bundle on $A \times A^\vee$. Suppose that $0\in A^\vee$ corresponds to $\mathcal P_0 \simeq \Oo_A$. Let $\De \subset A^\vee$ be a disc containing $0$. Let $X = A \times \De$ and $\mathcal P_\De \coloneqq \mathcal P|_{X}$ be a line bundle on $X$. $\mathcal P_\De$ is nef over $\De$ as $\mathcal P_t$ is numerically trivial for each $t\in \De$. Moreover, $\Oo(K_{X}) = \Oo_{X}$. Note that for an abelian variety, a line bundle $\Ll \in \Pic^0(A)$ has a nonzero global section if and only if $\Ll\simeq \Oo_A$ (see \cite[Page 76, (vii)]{Mum70}). Therefore, for each $m\in \Nn$,
\[
h^0(X_{t}, \Oo_{X_t}(mK_{X_t})\otimes \mathcal P_\De^{\otimes m}|_{X_t})=\begin{cases}
1, & \text{ if~ } t=0,\\
0, & \text{ if~ } t \in \De-\{0\}.
\end{cases}
\]
\end{example}
Next, recall that for a non-smooth family of varieties, we have
\begin{theorem}[{\cite[Theorem 1.1]{Tak07}}]
Let $\pi: X \to C$ be a proper surjective algebraic morphism with connected fibers from a complex variety $X$ to a smooth curve $C$. Assume that every fiber $X_t=\pi^{-1}(t)$ has only canonical singularities. Then $h^0(X_t, \Oo_{X_t}(mK_{X_t}))$ is independent of $t\in C$ for any positive integer $m$.
\end{theorem}
Such a result was established in \cite[Theorem 6]{Kaw99} under the additional assumption that each fiber is of general type. On the other hand, for klt singularities, local sections of fibers may not lift to global sections (see \cite[Example 4.3]{Kaw99b}). For g-pairs, the additional nef part introduces singularities even if each $X_t$ is smooth.
The above discussions show that both the assumption on the abundant nef part and the assumption of g-canonical singularities are indispensable for Theorem \ref{thm: AG sing trivial boundary extension}
and Corollary \ref{cor: sing invariant of plurigenera for g-pair}.
\bibliographystyle{alpha}
\label{sec:introduction}
\IEEEPARstart{T}{he} field of quantum computing has recently witnessed a surge of interest from many parts of the scientific community. With the realization of quantum devices with increasing qubit counts and fidelity by manufacturers such as IBM~\cite{ibmq2022} and Google~\cite{Arute2019}, research in quantum computing has started shifting from theoretical towards applied research~\cite{Bova2021}. Quantum devices employed for information processing and computational purposes are currently being explored to overcome many of the limitations posed by classical hardware in industry segments such as finance, cybersecurity, and the chemical industry~\cite{Bova2021}. Research groups from diverse fields of science and technology are studying numerous heuristic and non-heuristic quantum computing methods to solve a given problem and achieve the so-called quantum supremacy~\cite{Moussa2020}.
However, the current limitation in the number of qubits and the low gate fidelity make non-heuristic approaches impractical. Hence heuristic approaches, especially in the domain of \gls{ML}, are deemed to be among the prime candidates for practical quantum computing in the NISQ era.
While classical \gls{ML} is well matured and has benefited from decades of domain-specific enhancements, its quantum counterpart is still a lively research field with many loose ends and uncertainties~\cite{Nimish2021QMLreview}. When factors inherent to NISQ devices, such as low gate fidelity and limited qubit connectivity, are added to this mixture of uncertainties, learning-based approaches like \gls{QML} and \gls{QRL} become even more complex.
\begin{figure}[t!]
\centering
\tiny
\def\textwidth{\columnwidth}
\import{img/}{figure1.pdf_tex}
\caption{Proposed method: The image is downscaled to $10\times 10$, then fed row by row to the 10 bit quantum circuit, each encoding layer (green) is followed by a variational layer (gray), then the 10 qubits are measured (blue).}
\label{fig:overview}
\end{figure}
Of all the open questions yet to be answered in the field of gate-based quantum computing and quantum machine learning, the question of selecting the optimal \gls{VQC} for a given problem is of utmost importance~\cite{Watanabe2021}. The gates in a \gls{QC} used for \gls{QML} are grouped into three categories, namely encoding gates, decoding gates and variational gates. These gates are further grouped into encoding, decoding and variational layers in many \gls{QML} works, though these layers are merely a visual grouping and do not reflect the theory behind layers in classical ML. The encoding gates are selected based on the chosen encoding method and the number of input features. The optimal selection of the encoding method is pivotal to successful learning of a \gls{QML} model, as it determines how the classical information is fed into the circuit~\cite{yano2020efficient, Caro2021encodingdependent, LaRose2020, Banchi2021}. While the effect of different encoding methods on the data representation and expressivity of a \gls{VQC} has been studied before~\cite{Abbas_2021, Schuld_2021, Caro2021, Banchi2021, franz2022uncovering}, the encoding pattern we propose (see \cref{sec:method}) has, to the best of our knowledge, not been discussed in the literature before.
The positioning of the encoding gate set becomes a salient factor for \gls{VQC}-based \gls{QML} models that learn from high dimensional data, due to the limited number of qubits of current quantum devices and the limited depth of the QC that can be executed both in simulation and on a real device. This limitation brings in a trade-off between the number of encoding gates and the number of parameterized gates for a fixed circuit depth and gate count. The larger the dimension of the input, the more encoding gates are required, and the more encoding gates are used, the fewer parameterized gates can be used. Reducing the parameterized gate count reduces the expressivity of the model, while reducing the dimension of the input results in information loss. Naively encoding the data at the start of the circuit, or encoding patterns like data re-uploading, is not very promising for high dimensional data, as it either increases the QC depth and gate count or makes the information in the data less accessible to the model.
This paper investigates the impact of encoding gate set positioning on the trainability of a \gls{QML} model and proposes an encoding pattern for high dimensional inputs, see~\cref{fig:overview}. The key concept behind our method is incremental uploading.
We evaluate our approach on image classification tasks (i.e., MNIST~\cite{MNIST} and Fashion MNIST~\cite{FashionMNIST}), as image inputs are among the most common high dimensional inputs of practical significance. Classifying MNIST with \gls{QML} is not new, e.g., early work uses heavily downscaled $4 \times 4$ images for binary classification~\cite{farhi2018classification} or quantum techniques for dimensionality reduction and classification~\cite{Kerenidis2020SlowFeat}; however, truncating the dataset or extreme dimensionality reduction only works for simple classification tasks which tolerate these information losses. One other approach which handles high dimensional image data effectively is the quantum convolutional neural network~\cite{Matic2022}. However, this is not a pure quantum approach, and the input image is broken into smaller pieces and fed into the circuit sequentially, resulting in a longer runtime, though with the prospect of parallelization. To show how to handle more complex classification tasks, we use a high dimensional representation of the full MNIST dataset.
\section{Method}
\label{sec:method}
Quantum gates in a \gls{VQC} can be grouped into three categories: encoding, decoding and variational gates. Due to the limitation in circuit depth and to avoid possible barren plateau effects~\cite{mcclean2018}, the \gls{QC} has to be designed in such a way that it allows for maximum classification performance and expressivity for a given gate count and circuit depth. To this end, we studied the effect of the encoding gate positioning on the performance of a \gls{QML} model by decomposing the encoding gates into smaller encoding layers and progressively increasing the number of variational parameters placed between them in the \gls{VQC}. We also introduce the nomenclature used for grouping the encoding gates throughout this paper. From here on, the collection of all encoding gates is called an encoding block, and a group of encoding gates obtained by splitting the encoding block is called an encoding layer.
An obvious decomposition of the encoding block for image data splits and groups the gates used to encode the raw features of each row of the input image. These row-wise grouped encoding gates (hereafter referred to as \textit{encoding layers}) can, per design choice, be freely moved across the \gls{VQC}, though each move results in a different architecture with an impact on the performance of the model. Hence, we designed five different encoding block split patterns with an increasing number of parameterized gates interleaved between the encoding layers. These circuits are as follows: in IDU\_1, there is no variational layer between the encoding layers; all encoding gates are placed at the start, followed by all variational layers. IDU\_2, IDU\_4, IDU\_8 and IDU\_10 represent the circuits where the encoding block is split into 2, 4, 8, and 10 parts, respectively. Between consecutive splits there is one variational layer, and the remaining variational layers are appended at the end. The number of variational layers between any two encoding layers is restricted to one, as we wanted to analyze the performance boost attained by introducing a minimal and constant number of variational layers in between them. This restriction is only a design choice and other design choices are of course possible. The overall working of this proposed encoding pattern is shown in~\cref{fig:splits}. We call this encoding pattern \gls{IDU}. The performance of this pattern is compared against the \gls{DRU} encoding pattern~\cite{Salinas2020}, which is deemed to be the state of the art following the evaluation metrics of Skolik et al.~\cite{Skolik2021} and theoretical support from Schuld et al.~\cite{Schuld_2021}. However, to have a fair comparison, the number of parameters of the \gls{QML} model is kept constant. Hence, in the \gls{DRU} architecture, the entire image information is encoded into the circuit followed by a variational layer, and this is repeated until the number of variational parameters matches the number of parameters used in the incremental data-uploading experiment.
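For concreteness, the split pattern can be sketched in Cirq roughly as follows. The helper names, the fixed entangling angle and the grouping of rows for uneven splits are illustrative assumptions on our side rather than the exact implementation used for the experiments; in the actual model the variational angles are trainable parameters (e.g., \texttt{sympy} symbols in TensorFlow Quantum) rather than plain floats.
\begin{verbatim}
import cirq
import numpy as np

N_QUBITS = 10       # one qubit per pixel of a row
N_VAR_LAYERS = 10   # total number of variational layers (fixed across splits)

def encoding_layer(qubits, row):
    # One image row (10 pixel values already scaled to [0, pi]) as R_x angles.
    return [cirq.rx(float(a)).on(q) for q, a in zip(qubits, row)]

def variational_layer(qubits, thetas):
    # thetas: (10, 2) array -> one R_y and one R_z angle per qubit,
    # followed by nearest-neighbour controlled-R_z entanglers.
    ops = []
    for q, (ty, tz) in zip(qubits, thetas):
        ops += [cirq.ry(float(ty)).on(q), cirq.rz(float(tz)).on(q)]
    for a, b in zip(qubits, qubits[1:]):
        ops.append(cirq.rz(np.pi).on(b).controlled_by(a))  # entangling angle: our choice
    return ops

def idu_circuit(image, thetas, n_splits):
    # image: (10, 10) array in [0, pi]; thetas: (10, 10, 2) variational angles.
    qubits = cirq.LineQubit.range(N_QUBITS)
    circuit = cirq.Circuit()
    groups = np.array_split(np.arange(10), n_splits)  # row groups, one per split
    used = 0
    for i, rows in enumerate(groups):
        for r in rows:
            circuit.append(encoding_layer(qubits, image[r]))
        if i < len(groups) - 1:          # one variational layer between consecutive splits
            circuit.append(variational_layer(qubits, thetas[used]))
            used += 1
    for v in range(used, N_VAR_LAYERS):  # remaining variational layers at the end
        circuit.append(variational_layer(qubits, thetas[v]))
    return circuit
\end{verbatim}
With \texttt{n\_splits} set to 1, 2, 4, 8 or 10, this corresponds to the IDU\_1 to IDU\_10 layouts sketched in \cref{fig:splits}.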
\section{Evaluation}
\begin{figure*}
{\tiny \centering
\def\textwidth{\textwidth}
\import{img/}{splits.pdf_tex}}
\caption{Different splits of interleaved layers. For the 1-split, there are 10 encoding layers followed by 10 variational layers, for the 2-split, there are 5 encoding layers, followed by 1 variational layer, then 5 encoding layers, followed by the remaining 9 variational layers. The proposed 10-split is interleaving encoding and variational layers.}
\label{fig:splits}
\end{figure*}
\subsection{Data Statistics}
\label{sec:dataset}
Our experiments have been conducted using the MNIST~\cite{MNIST} and Fashion-MNIST~\cite{FashionMNIST} datasets. MNIST is a handwritten digit dataset consisting of 70,000 images representing the digits 0-9, each of size $28\times 28$ pixels. Each digit class contains roughly between 5,400 and 6,750 images. The dataset of 70,000 grayscale images has been randomly split into 48,000 images for training, 12,000 images for validation, and 10,000 images for evaluation. As our quantum experiments use TensorFlow Quantum and the Cirq simulator on classical hardware, deeper and larger circuits become computationally intractable. To reduce computation time, we reduce the size of the images from $28\times 28$ to $10\times 10$ using a bilinear filter, so that the encoding gate count required to encode the data is small. Similarly, Fashion MNIST consists of 70,000 grayscale images of size $28\times 28$ representing 10 categories of clothes. As for MNIST, we reduce the image size and hence the overall computation time.
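A minimal preprocessing sketch, assuming the data is loaded and resized with TensorFlow (the exact pipeline of our experiments may differ in details such as the train/validation split):
\begin{verbatim}
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Bilinear downscaling from 28x28 to 10x10, then map pixel values to [0, pi].
x_small = tf.image.resize(x_train[..., None].astype(np.float32), (10, 10),
                          method="bilinear")[..., 0].numpy()
angles = x_small / 255.0 * np.pi   # rotation angles for the encoding gates
\end{verbatim}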
\subsection{Quantum Encoding, Variational and Decoding Layer}
All our datasets consist of 10 classes. Hence, we designed a ten-qubit quantum circuit, combined with the softmax function, to learn a mapping function $f(\cdot): X \rightarrow R^{o}$, where $X$ is the dataset with $n$ data points and $R^{o}$ holds the probabilities of a data point belonging to each of the classes. The quantum circuit consists of multiple encoding and variational layers, as explained below.
The process of embedding a classical data point $x \in X$ into a quantum circuit is commonly known as data encoding, sometimes also referred to as data uploading~\cite{Salinas2020}. In practice, one of the most common ways to encode a data point into a quantum circuit is via a state preparation circuit acting on the state $|0\rangle^{\otimes n}$ in the computational basis~\cite{LaRose2020}. A state preparation circuit often consists of single-qubit rotational gates matching the dimension of $x$, with or without entangling gates, so that each raw feature of $x$ can be scaled to $\left[0, \pi\right]$ or $\left[0, 2\pi\right]$ and used as the rotation angle of one gate in the state preparation circuit~\cite{LaRose2020}. We choose a total of 100 single-qubit rotational gates $R_x$, matching the feature dimension of each data point $x$, as the encoding layers for all our experiments. The $R_x$ gates are split into groups of ten, where each group acts on one qubit. Each pixel value in $x$, ranging in $\left[0, 255\right]$, is scaled to $\left[0, \pi\right]$ and fed as the rotation angle of the corresponding encoding gate.
The variational layers hold the learnable parameters that are optimized using gradient descent to approximate the mapping function $f$. In quantum circuits, the variational layers are again realized using single-qubit rotational and multi-qubit entangling gates, where the rotation angles of the rotational gates act as the learnable parameters. Our variational layers consist of single-qubit $R_y$ and $R_z$ rotational gates with nearest-neighbour controlled-$R_z$ entanglements. The complete quantum circuit with encoding and variational layers is shown in~\cref{fig:varialtional_layer}. The circuit is measured in the computational basis, and the expectation values of the individual qubits, together with the softmax function, are used for class prediction.
\begin{figure}[hbtp!]
\centering
\includegraphics[width=.95\linewidth]{img/vqc_single.pdf}
\caption{A single encoding layer followed by a variational layer. These layers are repeated 10 times with different intervals in between to form the different architectures.}
\label{fig:varialtional_layer}
\end{figure}
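The readout described above can be sketched with a plain Cirq simulation (the experiments themselves use TensorFlow Quantum; the qubit-ordering convention below is Cirq's default big-endian one):
\begin{verbatim}
import cirq
import numpy as np

def class_probabilities(circuit, n_qubits=10):
    # Simulate the circuit and compute <Z_i> for every qubit from the state vector.
    state = cirq.Simulator().simulate(circuit).final_state_vector
    probs = np.abs(state) ** 2                    # probabilities of the basis states
    idx = np.arange(len(probs))
    z = np.array([np.sum(probs * (1 - 2 * ((idx >> (n_qubits - 1 - i)) & 1)))
                  for i in range(n_qubits)])      # <Z_i>; qubit i is bit (n-1-i)
    return np.exp(z) / np.exp(z).sum()            # softmax over the 10 expectations
\end{verbatim}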
\subsection{Incremental Data Uploading}
We implement our incremental data-uploading method with each of the architectures representing a different split of the encoding block. We train each architecture on all datasets with a learning rate of 0.001 using the ADAM optimizer~\cite{ADAM} for 25 epochs.
From \cref{fig:val_enc}, we can infer that there is a direct correlation between the split of the encoding block and the performance of the model. The more variational layers are placed between the encoding layers, the better the approximation of the mapping function and the more accurate the classification in both the training and testing phases. The architecture with the encoding block split into ten encoding layers yields the highest accuracy of around 60\%.\footnote{We acknowledge that the classification accuracy is not competitive for a simple dataset such as MNIST. However, the goal of our work is not the accurate classification of the MNIST dataset but to study the impact of the encoding pattern on the trainability of the model by comparing the relative change in classification accuracy without increasing the number of parameters. Hence, we did not optimize the model for an increased classification accuracy.} We have observed the same pattern on Fashion MNIST.
To validate the argument that the incremental data-uploading method is not data-dependent and is expected to work on arbitrary classification tasks, we shuffled every pixel value within each image in the MNIST dataset with a fixed permutation chosen randomly. The models trained on this shuffled MNIST dataset also displayed the same pattern as on the other datasets. The test accuracy of these models is shown in~\cref{tab:test_acc_enc}.
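The pixel shuffling can be reproduced with one fixed, randomly chosen permutation applied identically to every image (the seed below is arbitrary; the seed used in our experiments is not fixed here):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
perm = rng.permutation(100)        # one fixed permutation of the 100 pixel positions

def shuffle_pixels(images):        # images: (N, 10, 10) array
    flat = images.reshape(len(images), 100)
    return flat[:, perm].reshape(len(images), 10, 10)
\end{verbatim}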
\begin{figure}
\begin{tikzpicture}
\begin{axis}[name=plot1,title={MNIST digits},
width=\linewidth, height=.3\textheight,
ylabel={validation accuracy [\%]},
xlabel={epoch},
xmin=1,xmax=20.5,
ymin=25,
ymax=68.5,
grid=major,
minor y tick num=4,minor x tick num=4,
tick label style={font=\footnotesize},
axis x line=bottom, axis y line=left, tick align = outside,
legend columns=-1,
legend style={/tikz/every even column/.append style={column sep=0.1cm},at={(0.5,1)},anchor=south,yshift=-5mm}, %
]
\addplot[draw=red, mark=o] table[draw=red, x=epoch,y=vamean] {results/compiled/arch_mnist_DRU.dat};
\addlegendentry{DRU}
\addplot[draw=blue, mark=+] table[x=epoch,y=vamean] {results/compiled/arch_mnist_1.dat};
\addlegendentry{1}
\addplot[draw=green,mark=triangle] table[ x=epoch,y=vamean] {results/compiled/arch_mnist_2.dat};
\addlegendentry{2}
\addplot[draw=yellow,mark=star] table[ x=epoch,y=vamean] {results/compiled/arch_mnist_4.dat};
\addlegendentry{4}
\addplot[draw=pink,mark=pentagon] table[ x=epoch,y=vamean] {results/compiled/arch_mnist_8.dat};
\addlegendentry{8}
\addplot[draw=black,mark=*] table[ x=epoch,y=vamean] {results/compiled/arch_mnist_10.dat};
\addlegendentry{10}
\addplot [name path=upper1,draw=none] table[x=epoch,y expr=\thisrow{vamean}+\thisrow{vastd}] {results/compiled/arch_mnist_1.dat};
\addplot [name path=lower1,draw=none] table[x=epoch,y expr=\thisrow{vamean}-\thisrow{vastd}] {results/compiled/arch_mnist_1.dat};
\addplot [draw=blue, fill=blue!10] fill between[of=upper1 and lower1];
\addplot [name path=upper2,draw=none] table[x=epoch,y expr=\thisrow{vamean}+\thisrow{vastd}] {results/compiled/arch_mnist_2.dat};
\addplot [name path=lower2,draw=none] table[x=epoch,y expr=\thisrow{vamean}-\thisrow{vastd}] {results/compiled/arch_mnist_2.dat};
\addplot [draw=green, fill=green!10] fill between[of=upper2 and lower2];
\addplot [name path=upper4,draw=none] table[x=epoch,y expr=\thisrow{vamean}+\thisrow{vastd}] {results/compiled/arch_mnist_4.dat};
\addplot [name path=lower4,draw=none] table[x=epoch,y expr=\thisrow{vamean}-\thisrow{vastd}] {results/compiled/arch_mnist_4.dat};
\addplot [draw=yellow, fill=yellow!10] fill between[of=upper4 and lower4];
\addplot [name path=upper8,draw=none] table[x=epoch,y expr=\thisrow{vamean}+\thisrow{vastd}] {results/compiled/arch_mnist_8.dat};
\addplot [name path=lower8,draw=none] table[x=epoch,y expr=\thisrow{vamean}-\thisrow{vastd}] {results/compiled/arch_mnist_8.dat};
\addplot [draw=pink, fill=pink!10] fill between[of=upper8 and lower8];
\addplot [name path=upper10,draw=none] table[x=epoch,y expr=\thisrow{vamean}+\thisrow{vastd}] {results/compiled/arch_mnist_10.dat};
\addplot [name path=lower10,draw=none] table[x=epoch,y expr=\thisrow{vamean}-\thisrow{vastd}] {results/compiled/arch_mnist_10.dat};
\addplot [draw=black, fill=black!10] fill between[of=upper10 and lower10];
\end{axis}
\end{tikzpicture}
\caption{Average validation accuracy and its standard deviation over 5 training runs for quantum circuits with different splits of the encoding block. DRU stands for the data re-uploading method, and the numbers 1, 2, 4, 8, 10 represent the quantum circuits with one whole encoding block and with the encoding block split into 2, 4, 8, 10 encoding layers, respectively.}
\label{fig:val_enc}
\end{figure}
\begin{table}
\caption{Incremental data-uploading performance on test sets.}
\label{tab:test_acc_enc}
\centering
\renewcommand{\tabcolsep}{1pt}
\begin{tabular}{p{3.5em}|@{\hspace{1pt}}r|rrrrr}
Dataset & \multicolumn{1}{c|}{DRU} & \multicolumn{5}{c}{IDU} \\
&& \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{10} \\
\hline
MNIST &33.2$\pm$0.01&34.0$\pm$0.01&47.3$\pm$0.03&50.9$\pm$0.01&51.5$\pm$0.00&\textbf{56.7$\pm$0.02}\\
\textit{shuffled} &32.2$\pm$0.00&47.1$\pm$0.01&52.2$\pm$0.01&53.8$\pm$0.01&56.1$\pm$0.01&\textbf{58.6$\pm$0.01}\\
Fashion&43.5$\pm$0.17&43.8$\pm$0.01&48.3$\pm$0.01&52.5$\pm$0.01&53.6$\pm$0.03&\textbf{56.9$\pm$0.03}
\\
\end{tabular}
\end{table}
\subsection{IDU in a ``Deeper'' circuit}
\label{sec:IDU_deeper}
From~\cref{fig:val_enc} and~\cref{tab:test_acc_enc} it becomes clear that the quantum architecture with data re-uploading type encoding exhibits similar or lower performance than the least performing IDU architecture. Intuitively, this is an expected result, as the data used for the data re-uploading architecture is a reduced dataset where each image is summed over its columns. This summation results in information loss, hence the loss in performance of the model. However, the architecture with a single unsplit encoding block (IDU\_1) performs the same summation over the image columns (as only $R_x$ gates are used for encoding) and still performs slightly better than the DRU architecture. The poor performance of the model with DRU encoding can be attributed to the low expressive power of its variational layers. An explanation of this hypothesis is given in~\cref{sec:schulz}.
To further validate this hypothesis, we increased the number of parameters in the variational layer from 20 to 60 to increase the trainability of the model, see~\cref{fig:varialtional_layer_deeper}. This in turn increased the performance of the DRU architecture. The accuracy of DRU with a higher number of parameters is better than that of the architecture with a single unsplit encoding block for the same number of parameters. However, DRU still demonstrates significantly lower performance compared to all IDU architectures with more than two splits. The results of the different architectures with a higher number of parameters are shown in~\cref{tab:test_acc_enc_deep}.
\begin{table}
\centering
\caption{Performance on a ``deeper'' architecture.}
\label{tab:test_acc_enc_deep}
\centering
\renewcommand{\tabcolsep}{1pt}
\begin{tabular}{p{3.5em}|@{\hspace{1pt}}r|rrrrr}
Dataset & \multicolumn{1}{c|}{DRU} & \multicolumn{5}{c}{IDU} \\
&& \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{10} \\
\hline
MNIST &42.7$\pm$0.00&41.5$\pm$0.01&54.0$\pm$0.01&57.9$\pm$0.02&62.6$\pm$0.01&\textbf{63.9$\pm$0.01}\\
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/vqc_triple.pdf}
\caption{Variational layer with 60 parameters used in deeper models.}
\label{fig:varialtional_layer_deeper}
\end{figure}
\subsection{IDU for Advanced Encoding schemes}
Using single-qubit $R_x$ type encoding gates sequentially results in a partial or complete summation of the input image data along the columns, resulting in some information loss. To further validate the effect of the IDU-type encoding pattern without the influence of this summation effect, we tested two other encoding methods: 1) an $R_x$-$R_y$ encoding, where we used a sequence of alternating $R_x$ and $R_y$ rotational gates for each row of the image instead of just $R_x$ gates, see~\cref{fig:advanced_encoding}, and 2) an $R_x$-$CR_z$-$R_y$ encoding, which is similar to the $R_x$-$R_y$ encoding but uses a $CR_z$ gate in between the $R_x$ and $R_y$ gates, see~\cref{fig:advanced_encoding}.
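A schematic Cirq sketch of the two row-encodings is given below; the alternation of $R_x$ and $R_y$ across rows and the fixed entangling angle are one plausible reading of the description, the exact arrangement being the one shown in \cref{fig:advanced_encoding}:
\begin{verbatim}
import cirq
import numpy as np

def rx_ry_encoding(qubits, image):
    # Rows alternate between R_x and R_y rotations, so that consecutive
    # encoding gates on a qubit no longer commute (and no longer simply add up).
    ops = []
    for r, row in enumerate(image):
        gate = cirq.rx if r % 2 == 0 else cirq.ry
        ops += [gate(float(a)).on(q) for q, a in zip(qubits, row)]
    return ops

def rx_crz_ry_encoding(qubits, image):
    # Same alternation, with nearest-neighbour CR_z gates inserted between
    # the R_x rows and the R_y rows (entangling angle fixed to pi here).
    ops = []
    for r, row in enumerate(image):
        gate = cirq.rx if r % 2 == 0 else cirq.ry
        ops += [gate(float(a)).on(q) for q, a in zip(qubits, row)]
        if r % 2 == 0:
            ops += [cirq.rz(np.pi).on(b).controlled_by(a)
                    for a, b in zip(qubits, qubits[1:])]
    return ops
\end{verbatim}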
\begin{figure}[b!]
\centering
\includegraphics[width=.95\linewidth]{img/enc_combined.pdf}
\caption{Single encoding block for $R_x$-$R_y$ type encoding (left) and $R_x$-$CR_z$-$R_y$ type encoding (right).}
\label{fig:advanced_encoding}
\end{figure}
\begin{table}[b]
\caption{Incremental data-uploading performance on advanced encoding methods.}
\label{tab:test_acc_enc_adv}
\centering
\begin{tabular}{p{6.5em}|@{\hspace{1pt}}r|rrrrr}
Dataset & \multicolumn{1}{c|}{DRU} & \multicolumn{5}{c}{IDU} \\
&& \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{10} \\
\hline
$R_x$-$R_y$ & 0.23 & 0.29 & 0.34 & 0.37 & 0.43 & \textbf{0.45}\\
$R_x$-$CR_z$-$R_y$ & 0.19 & 0.34 & 0.34 & 0.36 & 0.37 & \textbf{0.41}\\
\end{tabular}
\end{table}
The classification results on the MNIST dataset using these two encoding methods are given in~\cref{tab:test_acc_enc_adv}. We see that the effect of the incremental data-uploading encoding pattern is more general and not restricted to the single-qubit $R_x$ type encoding. Please note that these encoding methods are simple design choices in which the encoding gates do not commute. They are not optimized toward the MNIST dataset, as the intent behind the experiment was to study the effect of incremental data-uploading on different encoding methods and not the effect of the encoding method on the MNIST dataset itself.
\section{Theoretical considerations}
\subsection{Quantification of Trainability and Expressibility}\noindent
Two important properties of an \gls{ML} model are its expressibility and trainability. Abbas et al.~\cite{Abbas_2021} generalize tools for their quantitative analysis to the quantum realm. Both concepts are based on the \gls{FIM} \cite{Thomas_2006} associated with the statistical model~\cite{Rissanen_1996} $p_{\theta}(x,y)$ implemented by the \glspl{VQC}. In practice, we use the empirical \gls{FIM} defined as
\begin{equation}
\Tilde{F}_k(\theta) = \frac{1}{k} \sum_{j=1}^{k} \frac{\partial}{\partial \theta} \ln p_{\theta}(x^{(j)},y^{(j)}) \frac{\partial}{\partial \theta} \ln p_{\theta}(x^{(j)},y^{(j)})^t.
\end{equation}
Here, $(x^{(j)},y^{(j)})_{j=1}^{k}$ are drawn i.i.d. from the joint distribution $p_{\theta}(x,y) = p_{\theta}(y \mid x) p(x)$. For the MNIST dataset one has inputs $x \in \mathbb{R}^{10 \times 10}$ and labels $y \in \left\{ 0, \cdots, 9 \right\}$. However, the following considerations generalize to data of any finite dimensionality.
The \gls{FIM} captures the geometry of the parameter space, which has a crucial influence on the trainability of a model. To assess this, the spectrum of the positive semidefinite matrix, i.e., the distribution of its eigenvalues, is considered. A degenerate spectrum indicates a distorted parameter space, which is disadvantageous for any gradient-based optimization technique. Furthermore, an increasing accumulation of eigenvalues around zero for growing model size (i.e., qubit number) indicates the presence of barren plateaus~\cite{Abbas_2021}.
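Given per-sample gradients of the log-likelihood at a fixed $\theta$, the empirical FIM and its spectrum reduce to a few lines of NumPy (how the gradients are obtained, e.g. by parameter-shift rules or automatic differentiation, is left open here):
\begin{verbatim}
import numpy as np

def empirical_fim(grads):
    # grads: (k, d) array; row j is the gradient of ln p_theta(x_j, y_j)
    # with respect to the d parameters, evaluated at one fixed theta.
    k = grads.shape[0]
    return grads.T @ grads / k            # (d, d) positive semidefinite matrix

def fim_spectrum(grads):
    return np.linalg.eigvalsh(empirical_fim(grads))   # eigenvalues, ascending
\end{verbatim}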
The effective dimension~\cite{Berezniuk_2020} is a tool to capture the expressibility or capacity of an \gls{ML} model. It is based upon the (empirical) \gls{FIM}, and therefore can be estimated relatively straightforwardly by sampling. The effective dimension of a statistical model $\mathcal{M}_{\Theta}$ is defined as
\begin{equation}
ed_n(\mathcal{M}_{\Theta}) := 2 \frac{\ln \left( \frac{1}{V_{\Theta}} \int_{\Theta} \sqrt{\det \left( I_d + c_n \hat{F}(\theta) \right)} d \theta \right)}{\ln \left( c_n \right)},
\end{equation}
where $d = \left| \theta \right|$ is the number of parameters, $V_{\Theta} := \int_{\Theta} d \theta$ is the volume of the parameter space, and $\hat{F}(\theta) \in \mathbb{R}^{d \times d}$ is a normalized version of the (empirical) \gls{FIM}. The parameter $n$ captures the effective resolution of the parameter space (i.e., it is related to the data availability). It enters the definition via the normalization factor $c_n = \frac{n}{2 \pi \ln n}$. Under certain conditions the effective dimension provides an upper bound on the generalization error~\cite{Abbas_2021}. In plainer words, the measure quantifies the range of different functions that a given model can approximate. In order to compare different models, a normalized version of the effective dimension is preferable. A division by $d$ restricts the measure to the range $[0, 1]$, where higher values indicate a more expressible model.
The empirical Fisher information matrix was estimated using 200 random samples $x^{(k)}$ from the MNIST dataset and 100 random parameter sets $\theta$ of 200 parameters each, drawn from a uniform distribution with range $\left[0,\pi \right]$. The eigenvalue spectra of the \glspl{FIM} over 4000 samples for different \gls{IDU} architectures are shown in~\cref{fig:FIM}. For presentation purposes, the histograms omit values larger than one, which in any case do not change the overall picture. The normalized effective dimension for different IDU architectures for sample sizes ranging from $10^3$ to $10^6$ is shown in~\cref{fig:effDim}.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[name=plot1,title={Fisher Information Spectrum},
width=\linewidth, height=.25\textheight,
grid=major,
tick label style={font=\footnotesize},
minor y tick num=1,
xmin=0,xmax=1.1,
ymin=0,ymax=75,
ylabel={count},
xlabel={bin},
ybar = .05cm,
bar width = 4pt,
axis x line=bottom, axis y line=left, tick align = outside,
legend columns=-1,
legend style={/tikz/every even column/.append style={column sep=0.1cm},at={(0.7,1)},anchor=south,yshift=-5mm},
]
\addplot[fill=blue!66,ybar,no marks] coordinates {
(0.16390732506531325,73)
(0.3278146494044079,30)
(0.49172197374350257,16)
(0.6556292980825973,11)
(0.819536622421692,6)
(0.9834439467607866,8)
};
\addlegendentry{1}
\addplot[fill=green!66,no marks,ybar] coordinates
{(0.16390732506531325,72)
(0.3278146494044079,26)
(0.49172197374350257,16)
(0.6556292980825973,11)
(0.819536622421692,11)
(0.9834439467607866,6)
};
\addlegendentry{2}
\addplot[fill=yellow!66,no marks,ybar] coordinates
{(0.16390732506531325,72)
(0.3278146494044079,24)
(0.49172197374350257,17)
(0.6556292980825973,13)
(0.819536622421692,7)
(0.9834439467607866,10)
};
\addlegendentry{4}
\addplot[fill=pink!66,no marks,ybar] coordinates
{(0.16390732506531325,68)
(0.3278146494044079,24)
(0.49172197374350257,17)
(0.6556292980825973,15)
(0.819536622421692,12)
(0.9834439467607866,7)
};
\addlegendentry{8}
\addplot[fill=black!66,no marks,ybar] coordinates
{(0.16390732506531325,69)
(0.3278146494044079,24)
(0.49172197374350257,18)
(0.6556292980825973,13)
(0.819536622421692,10)
(0.9834439467607866,8)
};
\addlegendentry{10}
\end{axis}
\begin{axis}[name=plot2,
at=(plot1.below south west), anchor=above north west,
title={Cumulative sums},
width=\linewidth, height=.25\textheight,
ylabel={Eigenvalues count},
xlabel={bin},
xmin=0,xmax=1.1,
ymin=50,
ymax=165,
grid=major,minor y tick num=4,
tick label style={font=\footnotesize},
axis x line=bottom, axis y line=left, tick align = outside,
legend columns=-1,
legend style={/tikz/every even column/.append style={column sep=0.1cm},at={(0.5,1)},anchor=south,yshift=-5mm}, %
]
\addplot[draw=blue, mark=+] coordinates
{(0.16390732506531325,73)
(0.3278146494044079,103)
(0.49172197374350257,119)
(0.6556292980825973,130)
(0.819536622421692,136)
(0.9834439467607866,144)
};
\addlegendentry{1}
\addplot[draw=green,mark=triangle] coordinates
{(0.16390732506531325,72)
(0.3278146494044079,98)
(0.49172197374350257,114)
(0.6556292980825973,125)
(0.819536622421692,136)
(0.9834439467607866,142)
};
\addlegendentry{2}
\addplot[draw=yellow,mark=star] coordinates
{(0.16390732506531325,72)
(0.3278146494044079,96)
(0.49172197374350257,113)
(0.6556292980825973,126)
(0.819536622421692,133)
(0.9834439467607866,143)
};
\addlegendentry{4}
\addplot[draw=pink,mark=pentagon] coordinates
{(0.16390732506531325,68)
(0.3278146494044079,92)
(0.49172197374350257,109)
(0.6556292980825973,124)
(0.819536622421692,136)
(0.9834439467607866,143)
};
\addlegendentry{8}
\addplot[draw=black,mark=*] coordinates
{(0.16390732506531325,69)
(0.3278146494044079,93)
(0.49172197374350257,111)
(0.6556292980825973,124)
(0.819536622421692,134)
(0.9834439467607866,142)
};
\addlegendentry{10}
\end{axis}
\end{tikzpicture}
\caption{Fisher information spectrum (top) and cumulative eigenvalue counts (bottom) for different numbers of IDU splits.}
\label{fig:FIM}
\end{figure}
\begin{figure}
\begin{tikzpicture}
\begin{axis}[name=plot1,title={Effective Dimension},
width=.95\linewidth, height=.25\textheight,
ylabel style={align=center},
ylabel={Normalized effective\\ dimension [\%]},
xlabel={Number of samples $n$},
xmin=-10000,xmax=1000000,
ymin=50,
ymax=104,
grid=major,
minor y tick num=1,minor x tick num=1,
tick label style={font=\footnotesize},
axis x line=bottom, axis y line=left, tick align = outside,
legend columns=-1,
legend style={/tikz/every even column/.append style={column sep=0.1cm},at={(0.5,1)},anchor=south,yshift=-5mm}, %
]
\addplot[draw=blue, mark=+] table[x=n,y=1] {results./compiled/eff_mnist.dat};
\addlegendentry{1}
\addplot[draw=green,mark=triangle] table[ x=n,y=2] {results./compiled/eff_mnist.dat};
\addlegendentry{2}
\addplot[draw=yellow,mark=star] table[ x=n,y=4] {results./compiled/eff_mnist.dat};
\addlegendentry{4}
\addplot[draw=pink,mark=pentagon] table[ x=n,y=8] {results./compiled/eff_mnist.dat};
\addlegendentry{8}
\addplot[draw=black,mark=*] table[ x=n,y=10] {results./compiled/eff_mnist.dat};
\addlegendentry{10}
\end{axis}
\end{tikzpicture}
\caption{Effective dimension for different IDU architectures.}
\label{fig:effDim}
\end{figure}
The results presented in~\cref{fig:FIM} show that the eigenvalue spectrum becomes more uniform for IDU architectures with a higher number of splits. This normalizing effect becomes more obvious when considering the cumulative sum plot shown in~\cref{fig:FIM}, bottom.
Although the normalization effect is quite small in the considered instances, it indicates an improvement in trainability when employing the proposed approach. As the spectrum is more uniform, with fewer eigenvalues close to zero, the parameter space is less distorted, which is beneficial for optimization methods. The difference in terms of the effective dimension is more distinct, i.e., it clearly increases when using more IDU layers. This indicates an increase in model expressibility, while the number of parameters stays the same. In all instances the normalized effective dimension grows with a larger resolution of the parameter space, which is reasonable behaviour for machine learning models.
\subsection{Frequency spectrum}\noindent
\label{sec:schulz}
To gain more insight into the performance differences observed in the previous sections, in the following we investigate the function class represented by the different architectures. Slightly generalizing the setting, we consider $x \in \mathbb{R}^{N \times M}$ and denote the $j$th row of the matrix by the column vector $\bm{x}_j$, that is, $(\bm{x}_j)_k=x_{jk}$ for $k=0,...,M-1$ and $j=0,...,N-1$.
The vectors $\bm{x}_j$ therefore correspond to the data fed in the $j$th encoding layer in~\cref{fig:overview}.
It was shown by Schuld et al.\ in Ref.~\cite{Schuld_2021} that the functions $f_\theta$ represented by VQCs are Fourier sums when each encoding layer is given by single-qubit rotations about a given axis for each qubit. In particular, the variational layers determine the amplitudes and the frequency spectrum is fixed by the data-encoding layers. Following Ref.~\cite{Schuld_2021}, we find
\begin{equation}
\label{eq:Fouriersum}
f_\theta (x) =\sum_{\bm{\omega}_0,...,\bm{\omega}_{N-1} \in \Omega} c_{\omega}(\theta) \mathrm{exp}\left\{i \sum_{j=0}^{N-1} \bm{\omega}_j \bm{x}_j \right\}\,,
\end{equation}
where $\Omega=\{-1,0,1\}^M$ is the frequency spectrum and $\omega$ the matrix containing $\bm{\omega}_j$ as its $j$th row. Since $f_\theta$ is real valued, we find $c_{-\omega}=c^*_{\omega}$. More intuitively, the functions represent $NM$ dimensional Fourier sums with frequencies $\pm 1$ and $0$. Note that the coefficients $c_\omega (\theta)$ are independent and can be chosen freely only if the variational layers are universal, i.e.~can represent any unitary matrix. In practice, the expressivity of the circuit might be severely limited by the number of variational parameters; indeed, a general $n$-qubit unitary requires exponentially many parameters in the number of qubits. Nevertheless, \cref{eq:Fouriersum} qualitatively explains the behaviour shown in~\cref{tab:test_acc_enc}, where an increase in performance with an increasing number of interleaved variational layers is observed. Since the data encoding is based on $R_x$ rotations only, the setup in the left subfigure of~\cref{fig:splits} is equivalent to summing the vectors and feeding the result into the circuit by only one encoding layer. As a result, the input dimension in \cref{eq:Fouriersum} decreases from $NM$ to $M$, so that the model loses access to much of the information present in the data $x$, explaining the poor performance in the left column of~\cref{tab:test_acc_enc}. As the number of variational layers increases, $f_\theta$ gains access to more information, as only some of the rows in the data are summed, finally reaching optimal performance for the fully interleaved setup shown in the right subfigure of \cref{fig:splits}. It is worth noting that decreasing the number of interleaved layers while keeping the number of variational layers constant increases the expressibility of the final variational layers, but it
seems conceivable that this increase cannot compensate for the information loss caused by partially or fully summing the rows in the data $x$. The same argument applies to the interpretation of the
DRU column in \cref{tab:test_acc_enc} and \cref{tab:test_acc_enc_deep}. Here, in the data re-uploading setting, the rows of the image are first summed and then repeatedly encoded into the circuit with variational layers in between. While the frequency spectrum of the Fourier sum now contains all integers between $-N$ and $N$ \cite{Schuld_2021}, as can be seen from \cref{eq:Fouriersum} by replacing $\bm{x}_j$ by the sum over the rows for all $j$, again the model seems unable to compensate for the information which is lost in summing the rows of the image.
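The collapse described above can be checked directly: rotations about a common axis commute and their angles add, so consecutive $R_x$ encodings with no variational layer in between are equivalent to a single $R_x$ encoding of the summed inputs. The following small NumPy check is an illustration only and is not part of our experimental code.
\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def rx(angle):
    # single-qubit rotation R_x(angle) = cos(angle/2) I - i sin(angle/2) X
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * X

a, b = 0.7, -1.3
assert np.allclose(rx(b) @ rx(a), rx(a + b))  # commuting encodings only see a + b
# With a non-commuting gate (e.g. an R_y rotation) inserted in between,
# this identity breaks and both inputs remain individually accessible.
\end{verbatim}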
In the case of more general encoding schemes, such as alternating $R_x$-$R_y$ gates for subsequent encoding layers, this effect is intuitively less dramatic due to the non-commutativity of the encoding gates. However, \cref{tab:test_acc_enc_adv} indicates that also in the case of non-commuting encoding gates, the information in the data is much more accessible to the model when more variational layers are interleaved with encoding layers. A more rigorous discussion of this situation will be the focus of future work.
\section{Conclusion}
This paper proposes an encoding pattern called incremental data-uploading for high dimensional data. It acts as a guideline for positioning the encoding layers in a variational quantum circuit. Here, the encoding and variational layers alternate one after the other so that the data is fed incrementally into the circuit and becomes more accessible to the model.
IDU with the maximum number of variational layers in between showed a performance boost of 15 to 25 percentage points in image classification tasks. The effective dimension and Fisher information results also support our claim that the IDU pattern increases the trainability and expressibility of the QML model without increasing the number of its parameters. In addition, by expressing the quantum model as a partial Fourier sum, we were able to connect its performance to the range of accessible frequencies.
Our experiments also showed that an encoding pattern like data re-uploading exhibits low accuracy when dealing with high-dimensional data and few parameters in a QML model, even though it increases the overall QC depth and the number of quantum gates. Hence, we conclude that the presented data encoding pattern improves the performance of a QML model with high-dimensional data, shallow circuit depth and a given encoding method. However, finding an optimal encoding method in the IDU framework for a given dataset is left for future work.
\section{Introduction}
Deep convolutional neural networks have achieved remarkable success in object
recognition in recent years \cite{Bengio_etal_2013RL,Krizhevsky_etal_2012ICDCN,Simonyan_Zisserman_2015DCNIR}.
However, most successful deep neural networks are trained under supervised
learning frameworks, which always require a large amount of annotated
data for each class \cite{Deng2009ImageNetAL}. Inspired by human's
ability to recognize objects without having seen visual samples, recently,
zero-shot learning (ZSL) gains a surge of interest and has been used in broad applications
\cite{Palatucci_etal_2009ZsLSO,Socher_etal_2013ZsCMT,Lampert_etal_2014AbCZsVBC,Zhang_etal_2015ZsVSSS,Xian_etal_2016LEZsC,Wu_etal_2016HOSSVU,Chao_etal_2016ZsOR,Changpinyo_etal_2016SCZs,Zhang_etal_2017LDEMZs,Zhang_etal_2018TEDEZs,Xian_etal_2018ZsCETGTBTU,Wang2019ASO}.
ZSL offers an elegant way to extend classifiers from source categories,
of which labeled images are available during training, to target categories,
of which labeled images are not accessible.
The goal of ZSL is to recognize objects of target classes by transferring knowledge
from source classes through the relation in the semantic space, while
generalized zero-shot learning (GZSL), a more general and challenging
scenario of ZSL, tries to recognize objects from the joint set of both
source and target classes.
Generally, methods for ZSL/GZSL can be
categorized into two major families: deterministic and generative. Deterministic methods focus on carefully designed models and on preserving semantic relations to transfer knowledge from source classes to target classes, using only the seen data from source classes; generative methods leverage novel generative models to transfer the knowledge of the paired relation between the semantic representations and visual features of source classes, in order to generate data for the target classes. With these generated data, although less reliable, generative methods usually obtain superior performance over deterministic methods. Broad studies show that closing the performance gap between them is a challenge. Moreover, a common problem in both families is how much to trust the less reliable data, such as using seen data of source classes to train the embedding model of target
classes or using the generated data of target classes to train the discriminative models, so that some uncertainty-aware strategies are required in such
scenarios. These two problems are the primary concerns of this study.
Two technical problems arise in deterministic ZSL/GZSL \cite{Changpinyo_etal_2016SCZs,Liu_etal_2018GZsDCN}:
(i) how to bridge source classes to target classes for knowledge transfer
and (ii) how to make prediction on target classes without labeled
training data. Toward the first problem, deterministic ZSL/GZSL methods
typically embed the image features and the semantic representations
into a predefined common embedding space (with properly defined distances) using a regression model.
The choice of the embedding space and the design of the regression model/neural network are essential to inherit the semantic relation while maintaining the discriminative ability. As for the second problem, we need to effectively bridge target classes to source classes by knowledge transfer, such as retaining the structure of the semantic space, and prevent overfitting to the seen data of source classes, as the models are blind to the semantic representations of target classes. The seminal work, deep calibration network (DCN) \cite{Liu_etal_2018GZsDCN}, introduces an entropy loss on target classes which brings the semantic representations close to certain seen data of source classes. However, the entropy loss with a calibration parameter is not adequate to accurately control how much the target classes should learn from the seen data, which prevents the DCN from obtaining superior performance.
\begin{figure*}
\centering
\includegraphics[scale=0.67]{figs/gzsl_ok}
\caption{Illustration of probability vector representation for zero-shot learning.
Circles and diamonds with text names denote the semantic representations
of all classes, small dots
denote seen data (or say visual features) of source classes, and
unseen data of target classes are unavailable. The probability vector
(PV) represents the probability that the data is assigned to different
clustering centroids/prototypes, where the assignment is shown by the arrow.
The training goal is intuitively illustrated by
the change of bars in the PV representation.}
\label{figure1}
\end{figure*}
Before the introduction of our major contributions, we address a uniform representation of the concerned variables by a soft assignment probability vector (PV), illustrated in \cref{figure1}. By regarding the semantic representation of either source
or target classes as the clustering centroids, or say some reference points in the common space, the position of a visual feature in this space can be formulated as a soft assignment PV under the prototypical model \cite{Snell2017PrototypicalNF}: we characterize the position of a visual feature indirectly by measuring its relation (assignment probability) to the reference points. We discuss the choice of the projection model, the common space and the distance functions for the definition of the prototypical-model based PV representation. Given such a uniform PV representation, we can evaluate information measurements, such as mutual information, entropy, and cross entropy, with closed-form expressions.
The major contributions: 1) we propose a mutual information loss
to link the semantic representations of target classes to seen data of source classes. The mutual information consists of two parts: the conditional entropy encourages the
seen data to attach to a certain prototype/centroid of the target classes, while the marginal entropy term prevents the semantic representations from collapsing to trivial solutions when projecting
them to the visual space; 2) we propose an uncertainty-aware loss function which
prevents overfitting when using seen data of source classes to train the embedding model of target classes. We define a regularized entropy, which allows us to compare/control the uncertainty of the seen data belonging to source and target classes; 3) we propose
a semantic preserving loss function, which minimizes the cross entropy between the PV in original semantic space and in the visual space to preserve the semantic relation when learning the network that maps the semantic representations to the visual space.
We evaluate the performance of our proposed methods on broadly studied benchmark datasets.
Simulations show that, as a deterministic GZSL model, our proposed
method obtains state-of-the-art results and significantly outperforms recent deterministic
models on all benchmark datasets. Our proposed model is compatible with generative
models as well. We present additional loss functions to learn with generated data by considering their higher uncertainty than seen data. The experiments show that, by incorporating with generated data from f-CLSWGAN \cite{Xian_etal_2018FGNZs}, we gain obvious improvement over the vanilla f-CLSWGAN model and for the first time
demonstrate a deterministic model can perform as well as generative ones.
\section{Related works}
\textbf{Deterministic models for GZSL.} Deterministic models try to
sufficiently utilize the knowledge of the semantic embedding of both
source and target classes to conduct the inference on visual data. To this end, previous works typically embed visual samples and the semantic
embedding to a common embedding space \cite{Frome_etal_2013DEVISE,Fu_etal_2015TMvZsL,Zhang_etal_2017LDEMZs,Cacheux2019ModelingIA},
such as the visual space, the semantic embedding space or an intermediate
space between semantic and visual domains. The choice
of embedding space is critical for model performance. Previous works
\cite{Shigeto_etal_2015RSHZsL,Zhang_etal_2017LDEMZs} show that
using visual space instead of semantic space or any other intermediate
space as the common embedding space alleviates the negative effect of the hubness problem
\cite{Radovanovi2010HubsIS,Tomasev2014,Lazaridou2015HubnessAP}. The choice of distance function in the common embedding space also plays an important role. In previous studies \cite{Vinyals2016MatchingNF,Snell2017PrototypicalNF,Ravi2017OptimizationAA,Liu_etal_2018GZsDCN}, Euclidean distance, dot product similarity and cosine similarity are broadly applied.
The majority of the ZSL/GZSL methods tend to compensate for the lack
of visual representation of the unseen classes with the learning of
a semantic preserving mapping. For instance, a fairly successful approach
is based on a bi-linear compatibility function that associates visual
representation and semantic features, such as ALE \cite{Akata_etal_2013LeAbC},
DEVISE \cite{Frome_etal_2013DEVISE}, SJE \cite{Akata_etal_2015EOEFGIC}
and ESZSL \cite{Paredes_Toor_2015AESATZs}. A straightforward extension
of the methods above is the exploration of a non-linear compatibility
function between visual and semantic spaces, such as a ridge regression
\cite{Shigeto_etal_2015RSHZsL}. Furthermore, in \cite{Annadani_Biswas_2018PSRZs},
they introduce explicit regularization for semantic preserving but require an extra threshold for the similarity. In another seminal work \cite{Liu_etal_2018GZsDCN}, they introduce an entropy loss to allow the embedding network of target classes trained by seen data, and a calibration parameter is required to balance the training of source classes and target classes. In our work, we introduce a series of information-theoretic loss functions which enable the use of non-linear compatibility functions. Meanwhile, these functions allow us to translate
several intuitive assumptions on the semantic relation into easy-to-compute formulas. Moreover, we find that the conditional entropy in our mutual information loss is consistent with the entropy loss in \cite{Liu_etal_2018GZsDCN}, while the new marginal entropy term in our loss makes an additional contribution by encouraging cluster
balancing.
\textbf{Data-Generating Models for GZSL.} Generative models possess the advantage of utilizing generated image features to remove the blindness as a result of inaccessible data of target classes during training. Variational Autoencoders (VAE) \cite{Kingma_Welling_2014AEVB}
and conditional VAE \cite{Sohn2015LearningSO} based generative models
are proposed with an aim to align the visual embedding with the semantic
embedding \cite{Tsai_etal_2017LRVsE,Schonfeld_etal_2019GZsAVA,Mishra2018AGM,Keshari2020GeneralizedZL}. A VAE based algorithm can train stably, but it fails
to capture the complex distribution \cite{Bao2017CVAEGANFI}, leading
to unsatisfactory results. Generative adversarial network (GAN) \cite{Goodfellow_etal_2014GAN}
has an advantage in generating more diverse data. The f-CLSWGAN \cite{Xian_etal_2018FGNZs}
is a model that combines a variant of the improved WGAN \cite{Arjovsky2017WassersteinG}
and a softmax classifier. f-CLSWGAN synthesizes visual features conditioned
on semantic representations, offering a shortcut directly
from a semantic descriptor to a class-conditional feature
distribution. Despite strong performance, GAN always suffers from mode collapse issues and has an unstable training phase
\cite{Arjovsky2017TowardsPM}. Fortunately, an improved deterministic
model incorporated with a generative model leads to a more advanced performance \cite{Tong2019HierarchicalDO}. In this work, we notice that the synthetic data from the generative model are generally less reliable than the seen data, so the uncertainty-aware entropy constraint loss is also applicable here. Thus, instead of constructing a complex generative model, we train the proposed model additionally with the generated data from an f-CLSWGAN, and obtain competitive results compared to recent advanced generative models.
\section{Generalized Zero-shot learning}
Following the notation in \cite{Liu_etal_2018GZsDCN}, we first present
the definition of generalized zero-shot learning as follows: suppose
we have the seen data $\mathcal{D}=\left\{ (x^{(n)},y^{(n)})\right\} _{n=1}^{N}$,
where $x^{(n)}\in\mathbb{R}^{P}$ is the feature of the $n$-th image
in the visual space $\mathbb{R}^{P}$ and $y^{(n)}\in\mathcal{S}$
is the label from the source classes $\mathcal{S}=\{1,...,S\}$. In
this study, we assume that the image feature $x$ (also named visual
embedding) has already been extracted by a pretrained deep convolutional
network, such as ResNet \cite{He_etal_2016DRLIR}. Let $\mathcal{T}=\left\{ S+1,...,S+T\right\} $
denote the target classes, where no seen data is available in the
training phase. For each class $c\in\mathcal{S}\cup\mathcal{T}$,
let $v_{c}\in\mathbb{R}^{Q}$ denote the semantic representation in
the semantic space $\mathbb{R}^{Q}$, such as word embedding generated
by Word2Vec \cite{Mikolov_etal_2013EERVS} or visual attributes annotated
by humans to describe the visual patterns \cite{Lampert_etal_2014AbCZsVBC},
and $\mathcal{V}=\left\{ v_{c}\right\} _{c=1}^{S+T}$ denote the set
of semantic representations. In the test phase, we predict unseen
data $\mathcal{D}'=\left\{ x^{(m)}\right\} _{m=N+1}^{N+M}$ of $M$
points from either source or target classes. The task of Zero-Shot
Learning (ZSL) is that, given $\mathcal{D}$ and $\left\{ v_{c}\right\} _{c=1}^{S}$,
learn a model $\phi:x\rightarrow y$ to classify $\mathcal{D}'$ over
target classes $\mathcal{T}$. The task of Generalized Zero-Shot Learning
(GZSL) is that, given $\mathcal{D}$ and $\left\{ v_{c}\right\} _{c=1}^{S+T}$
of both source and target classes, learn a model $f:x\rightarrow y$
to classify $\mathcal{D}'$ over both source and target classes $\mathcal{S}\cup\mathcal{T}$.
\section{Proposed methods}
\subsection{Prototype model}\label{sec:protomodel}
In GZSL, to link the visual embedding in the seen data to the class
semantic representations, an intuitive way is to view the semantic representations
(or their projection in another space) as the centroids of their corresponding
classes, and learn to push the visual embedding to surround the centroid
of its belonging class. In this work, we utilize the prototypical model/networks
\cite{Snell2017PrototypicalNF}
to realize this goal. Prototypical networks learn a metric space in
which classification can be performed by computing distances between
samples and the prototype representation (or centroid) of each
class. Under the GZSL settings, we assume that the semantic representation
$v_{c}$ or its projection by a network or a linear model $\psi(v_{c})$
in a common embedding space, $\mathbb{R}^{K}$, is the prototype of
each class. For the image feature $x$, we assume a network $\phi(x)$
to transform the image feature to the same space of the prototype
$\psi(v_{c})$. Given a distance function $d:\mathbb{R}^{K}\times\mathbb{R}^{K}\rightarrow[0,+\infty)$
for measuring the distances between samples and the prototypes,
the prototype model produces a soft assignment PV, $\boldsymbol{p}=[p_{1}(y=1|x),...,p_{C}(y=C|x)]^{T}$,
over the prototypes of each class for the data sample $x$,
\begin{equation}\label{eq:proto}
p_{c}(y=c|x)=\frac{\exp[-d(\phi(x),\psi(v_{c}))]}{\sum_{c'}\exp[-d(\phi(x),\psi(v_{c'}))]}.
\end{equation}
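For illustration, a minimal PyTorch-style sketch of this soft assignment is given below; it is a sketch only, with \verb|dist| a placeholder for the distance function discussed next.
\begin{verbatim}
import torch

def soft_assignment(phi_x, prototypes, dist):
    """phi_x: embedded sample (K,); prototypes: psi(v_c) stacked, (C, K)."""
    d = torch.stack([dist(phi_x, p) for p in prototypes])  # distances to prototypes
    return torch.softmax(-d, dim=0)                        # soft assignment PV
\end{verbatim}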
Here, two issues lie in the definition of the soft assignment PV $\boldsymbol{p}$,
which are the choice of predefined embedding metric space and the definition of distance function in
such space.
\subsubsection*{Choice of common embedding space}
The choice of the common embedding space is a key factor in utilizing
the prototypical model. Motivated by previous works \cite{Shigeto_etal_2015RSHZsL}\cite{Zhang_etal_2017LDEMZs},
we map the semantic representations $\mathcal{V}$ to the visual space
such that the semantic relation between the mapped semantic representations
and the visual features reflects the relation between their corresponding
classes. We propose a \emph{multilayer perceptron (MLP)} \cite{Rumelhart_etal_1986PDP} as the compatibility function
to map the semantic representations to the visual space $\psi:v_{c}\rightarrow z$,
where $z\in\mathbb{R}^{P}$. Therefore, the soft assignment PV $\boldsymbol{p}$
expression becomes
\begin{equation}
p_{c}(y=c|x)=\frac{\exp[-d(x,\psi(v_{c}))]}{\sum_{c'}\exp[-d(x,\psi(v_{c'}))]}.
\end{equation}
In previous works \cite{Akata_etal_2013LeAbC,Frome_etal_2013DEVISE,Tong2019HierarchicalDO},
they use a linear model to project the semantic representations onto
another space (the visual space or a common embedding space), as a linear model
easily preserves the semantic relations. However, an MLP is more flexible
and can learn the nonlinear relation between the original semantic
representations and the mapping in the visual space. In
\cref{subsec:Infor_Loss}, we introduce information-theoretic loss functions that prevent an unreasonable nonlinear transform
for $\psi(\cdot)$. Simulation studies in \cref{subsec:Component-analysis}
verify that the choice of visual space as the common embedding space
significantly improves the performance of the proposed prototypical
model.
\subsubsection*{Choice of distance function}
The distance function $d(\cdot,\cdot)$ plays another important role
in the prototypical model, while Euclidean distance, cosine similarity
and dot product similarity based distances have been utilized in
previous works \cite{Vinyals2016MatchingNF,Snell2017PrototypicalNF,Ravi2017OptimizationAA,Liu_etal_2018GZsDCN}. It is easy to understand that the semantic
prototypes generally contain less information than the visual embedding. So when we map the semantic representations
to the visual space, it is not appropriate to hope that the mapped
semantic representation can be well aligned to the visual feature embedding
under Euclidean distance. In comparison, cosine similarity only emphasizes
the angle between the prototypes and visual embeddings, but their
norms could be significantly different. Dot product similarity based
distance is more flexible than cosine similarity, as it has two degrees
of freedom, such that when it is difficult to push the prototype close
to the visual embeddings of its class in the sense of maximizing the cosine
similarity, it can allow the prototype to change its norm to get larger
dot product similarity (or smaller distance). Moreover, as it will
be discussed in \cref{subsec:Infor_Loss}, we use the prototypical model to learn
the embedding of semantic representation from target classes by linking
them to seen data. In this scenario, the uncertainty of learned model
should be higher than that of learning the embedding of source classes by
seen data. To reflect this viewpoint, we propose an asymmetric dot
product based distance $d(x,\psi(v_{c}))=-\max\{m\,x\cdot\psi(v_{c}),0\}$, where $m=\mathrm{m}_{1}$ when $c\in\mathcal{S}$ and $m=\mathrm{m}_{2}$ when $c\in\mathcal{T}$. The setting of $m$ is similar to the calibration parameter $\rho$
in the DCN model \cite{Liu_etal_2018GZsDCN}, which was introduced
to balance the confidence of source classes and the uncertainty of
target classes. We observed through experiments that our model is
not sensitive to $m$, so we choose a predefined value for $m$ by
cross validation. In the simulation study (\cref{subsec:Component-analysis}),
we compare the different choices of distance functions and show the
advantage of our proposed function.
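As an assumption-level transcription of the formula above (an illustration, not our official implementation), the asymmetric dot product based distance can be written as follows.
\begin{verbatim}
import torch

def asym_dot_distance(x, proto, m):
    # d(x, psi(v_c)) = -max{ m * <x, psi(v_c)>, 0 };
    # m = m1 for source-class prototypes, m = m2 for target-class prototypes
    return -torch.clamp(m * torch.dot(x, proto), min=0.0)
\end{verbatim}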
\subsubsection*{Cross entropy loss for seen data}
Given the PV expression, $\boldsymbol{p}=[p_{1}(y=1|x),...,p_{S}(y=S|x)]^{T}$
with $p_{c}(y=c|x)=\frac{\exp[-d(x,\psi(v_{c}))]}{\sum_{c'=1}^{S}\exp[-d(x,\psi(v_{c'}))]}$
for each $c\in\mathcal{S}$, and the label $y$ of the seen data from
source classes, we can define the loss function, such as cross entropy
loss, to train the prototypical model \cite{Snell2017PrototypicalNF}.
For instance, given the seen data $x^{(n)}\in\mathcal{D}$ from source
classes $\mathcal{S}$, we can learn the proposed network $\phi(\cdot)$
and $\psi(\cdot)$ by minimizing the cross entropy loss,
\begin{equation}\label{eq:ce_loss}
L_{CE}=-\frac{1}{N}\sum_{n=1}^{N}\sum_{c=1}^{S}y_{c}^{(n)}\log p_{c}(x^{(n)}).
\end{equation}
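A minimal PyTorch sketch of this loss, for illustration only (with \verb|dist| again a placeholder distance function and labels given as integer indices), could read:
\begin{verbatim}
import torch
import torch.nn.functional as F

def ce_loss(x_batch, source_protos, y, dist):
    """x_batch: (N, P) features; source_protos: (S, P); y: (N,) labels in {0,...,S-1}."""
    D = torch.stack([torch.stack([dist(x, p) for p in source_protos])
                     for x in x_batch])          # pairwise distances (N, S)
    log_p = torch.log_softmax(-D, dim=1)         # log soft assignment PV per sample
    return F.nll_loss(log_p, y)                  # mean of -log p_{y_n}(x_n)
\end{verbatim}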
However, the cross entropy loss alone is not sufficient to train a prototypical
model for GZSL. So, we propose several novel information-theoretic loss functions to boost the performance of a deterministic prototypical model.
\subsection{Information-theoretic loss functions} \label{subsec:Infor_Loss}
In the GZSL, only the semantic embedding of target classes
is available, while the associated data are inaccessible
during training. To remove this blindness, we propose to utilize information-theoretic measurements to translate intuitive ideas of knowledge and semantic preserving into formal quantities; to bridge the source
and target classes through the seen data of source classes, we propose the
mutual information loss; to reflect the fact that the seen data should be closer to prototypes of source classes than to those of target classes, we propose an entropy constraint loss; to preserve the semantic relation of prototypes when projecting them from the original semantic space to the visual space, we propose another cross entropy loss.
\subsubsection*{Mutual information loss to link seen data and target classes}
To link the semantic embedding of target classes to visual images
of source classes, we leverage the intuitive fact that each seen
image can be classified to the target class that is most similar
to the image's label in the source classes, rather than being classified to all target classes with equal uncertainty (or say equal assignment
probability) \cite{Liu_etal_2018GZsDCN}. Here we translate this intuitive fact into a formal information-theoretic measurement, and let the mutual information, $MI(x,c)=H(c)-H(c|x)$,
quantify the relation (or say closeness) between the seen data (visual features) and prototypes of target classes. With the prototypical model discussed in \cref{sec:protomodel}, we can obtain the probability vector that
the seen data $x$ belongs to the prototypes of target classes, $p_{c}(x)=\frac{\exp[-d(x,\psi(v_{c}))]}{\sum_{c'=S+1}^{S+T}\exp[-d(x,\psi(v_{c'}))]}$.
To bridge the seen data and the prototypes of
target classes, we minimize the MI loss as follows,
\begin{eqnarray}\label{eq:L_MI}
L_{\mathrm{MI}}
& = & \sum_{c}P_{c}\log P_{c}-\mathbb{E}_x\left[\sum_{c}p_{c}(x)\log p_{c}(x)\right] \nonumber \\
& \approx &\footnotesize{\sum_{c=S+1}^{S+T}\left(\frac{1}{N}\sum_{n=1}^{N}p_{c}(x^{(n)})\right)\log\left(\frac{1}{N}\sum_{n=1}^{N}p_{c}(x^{(n)})\right)\nonumber} \\
& & -\frac{1}{N}\sum_{n=1}^{N}\sum_{c=S+1}^{S+T}p_{c}(x^{(n)})\log p_{c}(x^{(n)})
\end{eqnarray}
where $P_{c}=\mathbb{E}_x[p_c(x)]\approx\frac{1}{N}\sum_{n=1}^{N}p_{c}(x^{(n)})$. $\mathbb{E}_x[\cdot]$ denotes the expectation with respect to $x$, which is approximated by a Monte Carlo average since samples $\{x^{(n)}\}_{n=1}^N$ are available here. $P_{c}$
can be viewed as a marginal assignment probability that a sample data
belongs to the target classes. Furthermore, increasing the marginal
entropy $H(c)$ encourages cluster balancing, which is helpful for
preventing trivial solutions that map the semantic embedding of all target
classes to a prototype (or a few prototypes) in the visual
space. The second term $H(c|x)$ in \cref{eq:L_MI}, usually
named conditional entropy, measures the uncertainty that an image data
belongs to the target classes. Previous study shows that the second
term can significantly improve prediction over target classes while
having little harm on classifying seen data \cite{Liu_etal_2018GZsDCN}.
Here, we further introduce a margin for this conditional entropy:
\begin{equation}\label{eq:conditional_entropy}\small{
L_{\mathrm{Ent}}=\frac{1}{N}\sum_{n=1}^{N}\left[\frac{1}{\log_{2}(\textrm{\#}\mathcal{T})}\sum_{c=S+1}^{S+T}p_{c}(x^{(n)})\log p_{c}(x^{(n)})-\mathrm{margin}_1\right]^{+} }
\end{equation}
where $\textrm{\#}\mathcal{T}$ represents the number of elements in
set $\mathcal{T}$, and the term $\log_{2}(\textrm{\#}\mathcal{T})$ denotes the information capacity, in bits, of $\textrm{\#}\mathcal{T}$ outcomes. Here, we propose to regularize the entropy by dividing by this information capacity term; as a consequence, the resulting \emph{regularized entropy} varies only in a small fixed interval $(0, C_0]$, where $C_0=\log(n)/\log_2(n)$, $\forall n>1.0$ and $n\in \mathbb{R}$. Therefore, the selection of $\mathrm{margin}_1$ becomes easy and consistent, even though the number of elements in $\mathcal{T}$ varies among different datasets. We also apply the regularization to the marginal entropy $H(c)$, by dividing by the term $\log_{2}(\textrm{\#}\mathcal{T})$. Finally, the improved MI loss becomes,
\begin{equation}
\label{eq:L_MI_final}
L_{\mathrm{MI}}=\frac{1}{\log_{2}(\textrm{\#}\mathcal{T})}\sum_{c=S+1}^{S+T}P_{c}\log P_{c}-\lambda_0 L_{\mathrm{Ent}}
\end{equation}
where we introduce an additional hyperparameter $\lambda_0$ to control the contribution of the conditional entropy. The value of $\lambda_0$ is selected by cross validation; experimentally, it is not sensitive to the choice of dataset.
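For illustration, a minimal PyTorch sketch of the unregularized Monte Carlo form in \cref{eq:L_MI} (omitting the information-capacity normalization and the margin, which are straightforward to add) is given below; minimizing it jointly increases the marginal entropy $H(c)$ and decreases the conditional entropy $H(c|x)$.
\begin{verbatim}
import torch

def mi_loss(p_target):
    """p_target: (N, T) soft assignments of seen data to target-class prototypes."""
    eps = 1e-12
    P_c = p_target.mean(dim=0)                                           # marginal assignment
    marginal = (P_c * torch.log(P_c + eps)).sum()                        # equals -H(c)
    conditional = -(p_target * torch.log(p_target + eps)).sum(1).mean()  # equals H(c|x)
    return marginal + conditional
\end{verbatim}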
\subsubsection*{Entropy constraint loss for uncertainty-aware training}
When training the embedding network of target classes using the seen data from source classes, a notable fact is that the seen data are, to a certain extent, out-of-distribution data for the target classes. So, the uncertainty of classifying the seen data to target classes should be larger than that of classifying them to source classes. Here, we propose an information constraint loss to control the entropy of a seen image with respect to the
prototypes of source classes to be less than that with respect to the prototypes of target classes. We first define the entropy terms as follows,
\begin{equation*}
\mathrm{E_{u}}(x^{(n)})=\frac{1}{\log_{2}(\textrm{\#}\mathcal{T})}\sum_{c=S+1}^{S+T}p_{c}(x^{(n)})\log p_{c}(x^{(n)})
\end{equation*}
where $p_{c}(x)=\frac{\exp[-d(x,\psi(v_{c}))]}{\sum_{c'=S+1}^{S+T}\exp[-d(x,\psi(v_{c'}))]}$
is the PV that assigns the seen data to each prototype of the target classes, and
\begin{equation*}
\mathrm{E_{s}}(x^{(n)})=\frac{1}{\log_{2}(\textrm{\#}\mathcal{S})}\sum_{c=1}^{S}p_{c}(x^{(n)})\log p_{c}(x^{(n)})
\end{equation*}
where $p_{c}(x)=\frac{\exp[-d(x,\psi(v_{c}))]}{\sum_{c'=1}^{S}\exp[-d(x,\psi(v_{c'}))]}$
is the PV that assigns the seen data to each prototype of the source classes. The entropy
constraint loss is then defined as,
\begin{equation}\label{eq:Loss_EC}
L_{\text{EC}}=\frac{1}{N}\sum_{n=1}^{N}\left[\mathrm{E_{u}}(x^{(n)})-(\mathrm{E_{s}}(x^{(n)})+\mathrm{margin}_2)\right]^{+}
\end{equation}
This loss reflects the expectation that the entropy $\mathrm{E_{u}}(x^{(n)})$ should be larger than $\mathrm{E_{s}}(x^{(n)})$ plus a margin $\mathrm{margin}_2$. As discussed in the last section, the regularized entropies $\mathrm{E_{u}}(x^{(n)})$ and $\mathrm{E_{s}}(x^{(n)})$ vary in the interval $(0, C_0]$, so it is not difficult to set a proper value for $\mathrm{margin}_2$.
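A literal transcription of the definitions above (keeping the printed sign conventions; shown only as an illustration) might look as follows.
\begin{verbatim}
import math
import torch

def entropy_constraint_loss(p_src, p_tgt, margin):
    """p_src: (N, S) PVs over source prototypes; p_tgt: (N, T) PVs over target prototypes."""
    eps = 1e-12
    e_s = (p_src * torch.log(p_src + eps)).sum(1) / math.log2(p_src.shape[1])
    e_u = (p_tgt * torch.log(p_tgt + eps)).sum(1) / math.log2(p_tgt.shape[1])
    return torch.clamp(e_u - (e_s + margin), min=0.0).mean()  # [E_u - (E_s + margin_2)]^+
\end{verbatim}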
\subsubsection*{Cross entropy loss for semantic preserving}
We map the class embeddings to the visual space such that the semantic
relation between the mapped class embeddings and the visual embedding
reflects the relation between their corresponding classes. To keep
the relation of the mapped class embedding similar to their relation
in the original semantic embedding, we introduce another novel regularization
to preserve semantic relations. We utilize the concepts of soft
assignment PV to explicitly define semantic relations between classes,
so that the objective function is specified to preserve the soft assignment
similarity between the original semantic embedding and
the mapped prototypes in visual space. By treating the source classes
embedding as the prototypes, we can assign the target class embeddings
to these prototypes, which leads to a soft assignment PV representation. Here, let $v_i^t$ (for $i \in \left\{1,..,T \right\}$) denote the target class embeddings and $v_{j}^s$ (for $j \in \left\{1,..,S \right\}$) denote the source class embeddings. The assignment
PV in the original semantic space is $p_{j}(v_{i}^{t})=\frac{\exp[-d(v_i^{t},v_{j}^{s})]}{\sum_{j'=1}^S\exp[-d(v_{i}^{t},v_{j'}^{s})]}$,
while the assignment PV after mapping to the visual space by the network
$\psi(\cdot)$ is $p_{j}^{\psi}(v_i^{t})=\frac{\exp[-d(\psi(v_i^{t}),\psi(v_{j}^{s}))]}{\sum_{j'=1}^S\exp[-d(\psi(v_i^{t}),\psi(v_{j'}^{s}))]}$. Then,
we can use cross entropy to measure the similarity between $p_{j}(v_i^{t})$
and $p_{j}^{\psi}(v_i^{t})$. The overall semantic preserving loss is
given by
\begin{equation}
L_{\text{SPCE}}=\frac{1}{T}\sum_{i=1}^{T}\left[\frac{1}{\log_{2}(\textrm{\#}\mathcal{S})} \sum_{j=1}^{S} p_{j}(v_i^{t})\log p_{j}^{\psi}(v_i^{t})-\text{margin}_3\right]^{+}\label{eq:sp}
\end{equation}
In the previous semantic preserving method \cite{Annadani_Biswas_2018PSRZs},
the difficulty is that the semantic similarity is not easy to compare
across the original space and the mapped space. Therefore, a careful
design for the threshold on each dataset is required \cite{Annadani_Biswas_2018PSRZs}.
However, the value of regularized entropy in $L_{\text{SPCE}}$ varies only in the interval $(0, C_0]$, so it becomes easy to set a proper margin for semantic preserving.
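For illustration, a sketch of the semantic preserving term, written with the standard cross entropy sign convention and with the margin hinge omitted (\verb|psi| and \verb|dist| are placeholders), is given below.
\begin{verbatim}
import torch

def semantic_preserving_ce(v_t, v_s, psi, dist):
    """v_t: (T, Q) target-class embeddings; v_s: (S, Q) source-class embeddings."""
    logit_orig = torch.stack([-torch.stack([dist(vt, vs) for vs in v_s]) for vt in v_t])
    logit_map = torch.stack([-torch.stack([dist(psi(vt), psi(vs)) for vs in v_s])
                             for vt in v_t])
    p = torch.softmax(logit_orig, dim=1)          # PV in the original semantic space
    log_q = torch.log_softmax(logit_map, dim=1)   # PV after mapping by psi
    return -(p * log_q).sum(dim=1).mean()         # cross entropy H(p, q)
\end{verbatim}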
\subsection{Learning and inference}
We combine all the four
loss functions with different weights $\lambda_{1:3}$. Therefore, we optimize
the parameters of our model by jointly learning the following loss
functions,
\begin{equation}
L_{D}=L_{\mathrm{CE}}+\lambda_{1}L_{\mathrm{MI}}+\lambda_{2}L_{\mathrm{EC}}+\lambda_{3}L_{\mathrm{SPCE}}\label{eq:Loss}
\end{equation}
We observed through experiments that our model is sensitive to $\lambda_{1}$
but less sensitive to $\lambda_{2}$ and $\lambda_{3}$. So we use cross validation to
set $\lambda_{1}$ for each dataset and set consistent
values of $\lambda_{2}$ and $\lambda_{3}$ for all simulation experiments, with a few exceptions. The network parameters
in $\psi(\cdot)$ can be efficiently optimized by SGD or Adam algorithm
with auto-differentiation technique supported in PyTorch \cite{Paszke2017AutomaticDI}.
In the test stage, the predicted class $y(x^{(n)})$ of image feature
$x^{(n)}$ is given by $y(x^{(n)})=\textrm{argmax}_{c}\,p_{c}(x^{(n)})$,
where $p_{c}(x^{(n)})=\frac{\exp[-d(x,\psi(v_{c}))]}{\sum_{c'}\exp[-d(x,\psi(v_{c'}))]}$
and $\psi(\cdot)$ is the trained network that maps semantic embedding
to the visual feature space. So, the prediction is made over both source
and target classes, as $c\in\mathcal{S\cup T}$ in generalized zero-shot
learning. In the conventional zero-shot learning, we only need the prediction over the target classes $c\in\mathcal{T}$.
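For completeness, the test-time rule can be sketched as follows (illustration only; the prototype set contains $\psi(v_c)$ for $c\in\mathcal{S\cup T}$ in GZSL and for $c\in\mathcal{T}$ in conventional ZSL).
\begin{verbatim}
import torch

def predict(x, prototypes, dist):
    d = torch.stack([dist(x, p) for p in prototypes])
    return int(torch.argmax(-d))   # argmax_c p_c(x), since the softmax is monotone
\end{verbatim}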
\subsection{Cooperate with generative model}\label{sec:generative}
Most works on generative methods emphasize the development of a sophisticated model to generate more `realistic' data for target classes. However, effective utilization of generated data is still largely ignored. We noticed that the synthetic data from the generative model are generally less reliable than the seen data, so we propose an uncertainty-aware entropy constraint to select the generated data when they are applied in training the discriminative model. Specifically, we put the generated data $\{\widetilde{x}^{(m)}\}_{m=1}^{M}$ into the prototypical model where the prototypes are from target classes, and obtain the PV, $p_{c}(\widetilde{x}^{(m)})=\frac{\exp[-d(\widetilde{x}^{(m)},\psi(v_{c}))]}{\sum_{c'=S+1}^{S+T}\exp[-d(\widetilde{x}^{(m)},\psi(v_{c'}))]}$. Then, we define the uncertainty of the generated data by the regularized entropy, $\mathrm{\widetilde{E}_{u}}(\widetilde{x}^{(m)})=\frac{1}{\log_{2}(\textrm{\#}\mathcal{T})}\sum_{c=S+1}^{S+T}p_{c}(\widetilde{x}^{(m)})\log p_{c}(\widetilde{x}^{(m)})$.
After that, we select the generated data by the criterion that $\mathrm{\widetilde{E}_{u}}(\widetilde{x}^{(m)})<\mathrm{margin}_4$ with a predefined threshold $\mathrm{margin}_4$. This uncertainty-based selection can prevent improper generated data from having a negative effect on the prediction of target classes. Let $\widetilde{x}_{sel}=\{\widetilde{x}^{(m)}_{sel}\}_{1}^{M_s}$ denote the selected generated data, which we use to train the embedding network of target classes by
\begin{equation}
\widetilde{L}_{\mathrm{CE}}(\widetilde{x})=-\frac{1}{M_s}\sum_{m=1}^{M_s}\sum_{c=S+1}^{S+T}\widetilde{y}_{c}^{(m)}\log p_{c}(\widetilde{x}_{sel}^{(m)}),\label{eq:G_ce_loss}
\end{equation}
where the label $\widetilde{y}_{c}$ is known in the generation of the data. Let $\gamma_1$ denote the weight for this entropy loss.
Moreover, here, we also allow the generated data to train the embedding network of source classes. To this end, we define another mutual information loss as in \cref{eq:L_MI}; the difference is that the prototypes change from target classes to source classes, $p_{c}(\widetilde{x})=\frac{\exp[-d(\widetilde{x},\psi(v_{c}))]}{\sum_{c'=1}^{S}\exp[-d(\widetilde{x},\psi(v_{c'}))]}$. Let $\widetilde{L}_{\mathrm{MI}}(\widetilde{x})$ denote this mutual information loss for generated data, and $\gamma_2$ denote its weight.
Finally, putting all the loss functions together, we obtain the loss function to train the proposed model with both seen data $x$ and generated data $\widetilde{x}$,
\begin{equation}
L_{G}=L_D(x)+\gamma_1 \widetilde{L}_{\mathrm{CE}}(\widetilde{x})+\gamma_{2}\widetilde{L}_{\mathrm{MI}}(\widetilde{x}).
\end{equation}
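For illustration, the uncertainty-aware filtering step can be sketched as follows, using the non-negative entropy convention so that small values indicate confident assignments; \verb|margin4| corresponds to the threshold $\mathrm{margin}_4$.
\begin{verbatim}
import math
import torch

def select_generated(p_tgt_gen, margin4):
    """p_tgt_gen: (M, T) PVs of synthetic features over target-class prototypes."""
    eps = 1e-12
    ent = -(p_tgt_gen * torch.log(p_tgt_gen + eps)).sum(1) / math.log2(p_tgt_gen.shape[1])
    return ent < margin4   # boolean mask keeping only low-uncertainty synthetic samples
\end{verbatim}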
\section{Experiments }
We perform extensive evaluation for both conventional ZSL and
GZSL on standard benchmark datasets: AwA1, AwA2, CUB, SUN, and aPY.
\subsection{Experimental settings}
\textbf{Datasets}. The benchmark datasets are briefly described as
follow: Animals with Attributes (AwA1) \cite{Lampert_etal_2014AbCZsVBC}
is a widely-used dataset for coarse-grained zero-shot learning, which
contains 30,475 images from 50 different animal classes. A standard
split into 40 source classes and 10 target classes is provided in
\cite{Lampert_etal_2014AbCZsVBC}. A variant of this dataset is Animal
with Attributes2 (AwA2) \cite{Xian_etal_2017ZsTGTBTU} which has the
same 50 classes as AwA1, but AwA2 has 37,322 images in all, which do not
overlap with images in AwA1. Caltech-UCSD-Birds-200-2011 (CUB) \cite{Wah_2011_TCuBD}
is a fine-grained dataset with a large number of classes and attributes,
containing 11,788 images from 200 different types of birds annotated
with 312 attributes. The split of CUB with 150 source classes and
50 target classes is provided in \cite{Akata_etal_2016LeIC}. SUN
Attribute (SUN) \cite{Patterson_Hays_2012SAD} is another fine-grained
dataset, containing 14,340 images from 717 types of scenes annotated
with 102 attributes. The split of SUN with 645 source classes and
72 target classes is provided in \cite{Lampert_etal_2014AbCZsVBC}.
Attribute Pascal and Yahoo (aPY) \cite{Farhadi_etal_2009DOTA} is
a small-scale dataset with 64 attributes and 32 classes (20 Pascal
classes as source classes and 12 Yahoo classes as target classes).
\textbf{Image features.} Due to variations in image features used
by different zero-shot learning methods, for a fair comparison, we use
the widely-used features: 2048-dimensional ResNet-101 features provided
by \cite{Xian_etal_2018ZsCETGTBTU}. Classification accuracies of
existing methods are directly reported from their papers.
\textbf{Semantic representations.} We use the per-class continuous attributes provided with
the datasets of aPY, AwA, CUB and SUN. Note that we can also use the
Word2Vec representations as class embeddings \cite{Mikolov_etal_2013EERVS}.
\subsection{Implementation Details}
The compatibility function in the prototypical model is implemented as an MLP. The input dimension
of attribute embedding is dependent on the problem. The MLP has 2
fully connected layers with 2048 hidden units. We use LeakyReLU as
the nonlinear activation function and Dropout for the first
layer, and Tanh for the output layer to squash the predicted values
within $[-1,1]$. The setting of the hyperparameters is given
as follows: by cross validation, we set $m_{1}=0.5$ and $m_{2}=1.0$
for the asymmetric dot product distance.
In the overall loss function \cref{eq:Loss}, we set the value
of $\lambda_{2}$ as 0.5 for all datasets and we
set the value of $\lambda_{3}$ as 0.05 for all datasets; the value of $\lambda_{1}$ depends on the
dataset, and we choose a value between $0.025$ and $1$ by cross validation.
By observation, the value of $\lambda_{1}$ is also relatively
robust: 0.025 or 0.05 is good enough for the AwA1/2, aPY and CUB datasets,
and only SUN requires a larger value of 0.5. The margin values are chosen
as follows: $\text{margin}_{1}=0.15$, $\text{margin}_{2}=0.05$,
$\text{margin}_{3}=0.3$ for the aPY, CUB and SUN datasets, while $\text{margin}_{2}=0.0$
for AwA1 and AwA2. The batch size of visual feature data is set to
$512$. For the optimization, we use Adam optimizer \cite{Kingma2015AdamAM}
with constant learning rate $0.001$ and early stopping on the validation
set.
Following the Proposed Split in the Rigorous Protocol \cite{Xian_etal_2017ZsTGTBTU},
we compare three accuracies: $ACC_{ts}$, accuracy of all unseen images
in target classes; $ACC_{tr}$, accuracy of some seen images from
source classes which are not used for training. Then we compute the
harmonic mean of the two accuracies as $ACC_{H}=2(ACC_{ts}*ACC_{tr})/(ACC_{ts}+ACC_{tr})$,
which is used as the final criterion to favor high accuracies on both
source and target classes.
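For reference, this criterion is a one-line computation.
\begin{verbatim}
def harmonic_mean(acc_ts, acc_tr):
    # ACC_H = 2 * ACC_ts * ACC_tr / (ACC_ts + ACC_tr)
    return 2.0 * acc_ts * acc_tr / (acc_ts + acc_tr)
\end{verbatim}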
\subsection{Component analysis}
\label{subsec:Component-analysis}
\subsubsection*{Effectiveness of the proposed loss functions}
We illustrate how the proposed losses affect the model by three straightforward experiments. We perform the experiments on the AwA1 dataset.
\textbf{Mutual information loss}: \cref{figure1_MI} shows the effectiveness of the mutual information loss function by comparing two cases: case 1 (as shown in left part of \cref{figure1_MI}), the deterministic model is trained without the mutual information loss $L_{\mathrm{MI}}$, obtaining the trajectory of $\mathrm{MI}(x_{seen},c_{target})$, the mutual information between the seen data and the labels of target classes, on both the AwA1 training and testing datasets; case 2 (as shown in right part of \cref{figure1_MI}), the deterministic model is trained with the mutual information loss $L_{\mathrm{MI}}$, getting the trajectory of $\mathrm{MI}(x_{seen},c_{target})$. From the comparison, it is noted that the $L_{\mathrm{MI}}$ loss significantly improves the mutual information between the seen data and the labels of target classes, which also means an effective knowledge/information transfer from the seen data of source classes to the classification of target classes.
\textbf{Entropy constraint loss}: \cref{figure2_EC} illustrates the effectiveness of the entropy constraint loss $L_{\mathrm{EC}}$, by comparing two cases: case 1, the deterministic model is trained without this loss, showing the sample histogram of the term $E_u(x_{seen},c_{target})-E_s(x_{seen},c_{source})$ (also denoted as $E_u-E_s$), that is $\frac{1}{\log_{2}(\textrm{\#}\mathcal{T})}\sum_{c\in \mathcal{T}} p_c(y=c|x^{(n)})\log (p_c(y=c|x^{(n)}))-\frac{1}{\log_{2}(\textrm{\#}\mathcal{S})}\sum_{c \in \mathcal{S}} p_c(y^{(n)}=c|x^{(n)})\log (p_c(y^{(n)}=c|x^{(n)}))$ with $(x^{(n)}, y^{(n)})$ from the AwA1 training or testing dataset, in the left part of \cref{figure2_EC}; case 2, the deterministic model is trained with the entropy constraint loss $L_{\mathrm{EC}}$, with the histogram shown in the right part of \cref{figure2_EC}. In case 1, a certain percentage of the samples of $E_u-E_s$ is negative or close to zero. However, in case 2, the $L_{EC}$ loss enforces the samples of $E_u-E_s$ to be larger than zero, which means that the uncertainty that the seen data from source classes are classified to target classes is larger than that of being classified to source classes. This comparison also shows that the entropy constraint loss mitigates the overfitting (the seen data samples from source classes with negative $E_u-E_s$ tend to be incorrectly classified to target classes) when using seen data from source classes to train the embedding of semantic representations of target classes.
\textbf{Semantic preserving loss}: \cref{figure3_SPCE} illustrates the effectiveness of the semantic preserving cross entropy loss $L_{\mathrm{SPCE}}$, also by comparing two cases: case 1, the deterministic model is trained without $L_{\mathrm{SPCE}}$ on the AwA1 dataset. We evaluate two PV representations: the PV of the target class representation $v_i^{t}$ in the original semantic space with respect to the prototypes of the source class representations $v_{j}^{s}$, $p_{j}(v_i^{t})$, and the PV after mapping them to the visual space by the network $\psi(\cdot)$, $p_{j}^{\psi}(v_i^{t})$. Then,
we utilize the cross entropy to measure the similarity between $p_{j}(v_i^{t})$
and $p_{j}^{\psi}(v_i^{t})$ for each $v_i^{t}$: $CE(v_i^t, v^s_{1:S},\psi)=\frac{1}{\log_{2}(S)}\sum_{j=1}^S p_{j}(v_i^{t})\log p_{j}^{\psi}(v_i^{t})$. In the left part of \cref{figure3_SPCE}, we evaluate and plot $CE(v_i^t, v^s_{1:S},\psi)$ for each $v_i^t$, where $\psi$ is the trained projection network; case 2, the deterministic model is trained with the semantic preserving cross entropy $L_{\mathrm{SPCE}}$, where we also evaluate and show $CE(v_i^t, v^s_{1:S},\psi)$ in the right part of \cref{figure3_SPCE}. From the comparison, it is obvious that with the semantic preserving loss $L_{\mathrm{SPCE}}$, the term $CE(v_i^t, v^s_{1:S},\psi)$ becomes smaller, which means that the semantic relation between the target class representations and the source class representations is more similar (or say more inherited) after mapping these representations from the original semantic space to the visual space. Therefore, the loss $L_{\mathrm{SPCE}}$ helps to preserve the semantic relation.
\begin{figure}
\centering
\includegraphics[scale=0.475]{figs/Plots_of_Mutual_information_without_MI_loss}
\includegraphics[scale=0.475]{figs/Plots_of_Mutual_information_with_MI_loss}
\caption{Trajectory of mutual information $\mathrm{MI}(x_{seen},c_{target})$: Left, the model is trained without the mutual information loss $L_{\mathrm{MI}}$; Right, the model is trained with the mutual information loss $L_{\mathrm{MI}}$.}\label{figure1_MI}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.475]{figs/Plots_of_Entropy_Sample_without_EC2}
\includegraphics[scale=0.475]{figs/Plots_of_Entropy_Sample_with_EC2}
\caption{Sample histogram of the term $E_u-E_s$: Left, the model is trained without the entropy constraint loss $L_{\mathrm{EC}}$; Right, the model is trained with entropy constraint loss $L_{\mathrm{EC}}$.}\label{figure2_EC}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.475]{figs/Plots_of_Cross_Entropy_SP_without}
\includegraphics[scale=0.475]{figs/Plots_of_Cross_Entropy_SP_with}
\caption{Cross entropy between PV representations of ten target classes representations $v_{i}^t$ (for $i \in \{1,...,10\}$) in the AwA1 dataset with respect to its forty source classes representations $v_{1:40}^s$, $CE(v_i^t, v^s_{1:40},\psi)$: Left, the model is trained without the semantic preserving cross entropy loss $L_{\mathrm{SPCE}}$; Right, the model is trained with loss $L_{\mathrm{SPCE}}$.}\label{figure3_SPCE}
\end{figure}
\subsubsection*{Numerical evaluation on performance}
We investigate the contribution of our proposed approach
to the model performance for GZSL. Here we use both the AwA1 and AwA2 datasets. We include
the result of the DCN model \cite{Liu_etal_2018GZsDCN} on AwA2 and the prototypical
model trained with only the cross entropy loss as baseline methods. We
compare the choice of common embedding spaces, attribute space and feature
space, and the choice of distances, cosine and dot product. We represent the combinations as follows: space A uses the attribute space as the embedding space and uses dot product similarity based distance; space B uses the visual space as the embedding space and uses cosine similarity based distance; space C uses the visual space as the embedding space and uses dot product similarity based distance. Notice that the last two rows in \cref{all-tabel1} also choose space C. Furthermore,
we show the contribution of the proposed information-theoretic losses:
$L_{\text{Ent}}$, $L_{\text{MI}}$, $L_{\text{EC}}$ and $L_{\text{SPCE}}$.
All the simulation results
are shown in \cref{all-tabel1}, where we cite the result of DCN directly from
\cite{Liu_etal_2018GZsDCN}. The DCN model introduces an entropy regularization for bridging seen data
and target classes, which is similar to $L_{\text{Ent}}$ (notice that we have an extra term $\mathrm{margin}_1$ in \cref{eq:conditional_entropy}). DCN uses the dot product distance and projects the features and attributes onto a common space. The third to fifth rows show that the entropy loss $L_{\text{Ent}}$ significantly improves the GZSL performance compared to the cross entropy loss $L_{\text{CE}}$ alone. The results in the third to fifth rows also outperform DCN significantly, which may be because it is easier to train the model in the original attribute/feature space than to learn a common space, and because the proposed entropy loss appears more effective than the entropy regularization in DCN. Furthermore, the third to fifth rows demonstrate the importance of the choice of common embedding space and distance function: using the visual feature space rather than the semantic space as the embedding space yields a clear improvement, and the dot-product-similarity-based distance performs better than the cosine-similarity-based distance. Comparing the sixth row with the fifth row, we see that the proposed marginal entropy $H(c)$ in $L_{\text{MI}}$ brings additional improvements. Comparing the seventh row with the sixth row, we see that the entropy constraint $L_{\text{EC}}$ significantly boosts the model performance. In the last row, the semantic preserving loss $L_{\text{SPCE}}$ achieves a favorable improvement on the AwA1 dataset but has a negligible negative effect on the AwA2 dataset. Experiments show that, for the other datasets, the semantic
preserving loss also makes positive contributions, so we retain
this loss in the following simulation studies.
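The quantities reported in \cref{all-tabel1} follow the standard GZSL evaluation protocol \cite{Xian_etal_2017ZsTGTBTU}, in which ts and tr are the average per-class accuracies on unseen and seen test classes and H is their harmonic mean. The short script below is not part of our training or evaluation code; it is only a sketch (with illustrative variable names) showing how the H column can be reproduced from ts and tr, e.g. $H=61.6$ from ts $=52.7$ and tr $=74.1$ on AwA2.
\begin{verbatim}
# Sketch: recompute the GZSL H metric as the harmonic mean of the
# unseen-class accuracy (ts) and seen-class accuracy (tr).
def harmonic_mean(ts, tr):
    return 2.0 * ts * tr / (ts + tr)

# (ts, tr) pairs taken from the AwA2 columns of the ablation table.
rows = {
    "L_CE   (space C)": (13.6, 90.6),
    "+L_Ent (space C)": (46.2, 71.6),
    "+L_MI  (space C)": (49.5, 70.9),
    "+L_MI+L_EC":       (52.7, 74.1),
}
for name, (ts, tr) in rows.items():
    print(f"{name:20s} H = {harmonic_mean(ts, tr):.1f}")
# The last row gives H = 61.6, matching the table; the other rows
# agree to within rounding of the tabulated one-decimal values.
\end{verbatim}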
\begin{table}
\centering
\caption{Comparison of the contribution of different improvement approaches.}
\centering%
\begin{tabular}{c|ccc|ccc}
\hline
\multirow{2}{1cm}{\centering Methods} & \multicolumn{3}{c|}{AwA1} & \multicolumn{3}{c}{AwA2}\tabularnewline
\cline{2-7}
~ & ts & tr & H & ts & tr & H\tabularnewline
\hline
\small{DCN\cite{Liu_etal_2018GZsDCN}} & - & - & - & 25.5 & 84.2 & 39.1\tabularnewline
\small{$L_{\text{CE}}$ (space C)} & 11.4 & 89.9 & 20.2 & 13.6 & 90.6 & 23.7\tabularnewline
\small{+$L_{\text{Ent}}$ (space A)} & 35.7 & 66.0 & 46.3 & 39.4 & 75.5 & 51.7\tabularnewline
\small{+$L_{\text{Ent}}$ (space B)} & 37.8 & 67.0 & 48.3 & 41.1 & 80.2 & 54.2\tabularnewline
\small{+$L_{\text{Ent}}$ (space C)} & 39.8 & 70.1 & 50.8 & 46.2 & 71.6 & 56.2\tabularnewline
\small{+$L_{\text{MI}}$ (space C)} & 39.3 & 72.9 & 51.1 & 49.5 & 70.9 & 58.2\tabularnewline
\small{+$L_{\text{MI}}$+$L_{\text{EC}}$ } & 45.2 & 72.6 & 55.7 & 52.7 & 74.1 & \textbf{61.6}\tabularnewline
\small{+$L_{\text{MI}}$+$L_{\text{EC}}$+$L_{\text{SPCE}}$} & 50.2 & 71.5 & \textbf{59.0} & 52.2 & 74.3 & 61.3\tabularnewline
\hline
\end{tabular}\label{all-tabel1}
\end{table}
\subsection{Conventional zero shot learning results}
We investigate the proposed method for conventional ZSL, which only recognizes unseen classes at the test stage, and compare the results of our
method with several state-of-the-art results from recent works \cite{Akata_etal_2015EOEFGIC}\cite{Paredes_Toor_2015AESATZs}\cite{Changpinyo_etal_2016SCZs}\cite{Annadani_Biswas_2018PSRZs}\cite{Tong2019HierarchicalDO}.
As shown in \cref{c-table}, the proposed approach compares favorably with the existing
approaches in the literature, obtaining state-of-the-art results on the SUN, AwA2
and aPY datasets. On the CUB dataset, our result is 0.2\% lower than that of DLFZRL \cite{Tong2019HierarchicalDO}.
\begin{table}
\centering
\caption{Results of Conventional Zero-Shot Learning.}
\begin{tabular}{l c c c c}
\hline
Method & SUN & CUB & AwA2 & aPY\tabularnewline
\hline
DAP\cite{Lampert_etal_2014AbCZsVBC} & 39.9 & 40.0 & 46.1 & 33.8\tabularnewline
CONSE\cite{Mohammad_etal_2014ZsLCCSE} & 38.8 & 34.3 & 44.5 & 26.9\tabularnewline
ALE\cite{Akata_etal_2013LeAbC} & 58.1 & 54.9 & 62.5 & 39.7\tabularnewline
DEVISE\cite{Frome_etal_2013DEVISE} & 56.5 & 52.0 & 59.7 & 39.8\tabularnewline
SJE\cite{Akata_etal_2015EOEFGIC} & 53.7 & 53.9 & 61.9 & 32.9\tabularnewline
ESZSL\cite{Paredes_Toor_2015AESATZs} & 54.5 & 53.9 & 58.6 & 38.3\tabularnewline
SYNC\cite{Changpinyo_etal_2016SCZs} & 40.3 & 55.6 & 46.6 & 23.9\tabularnewline
PSR\cite{Annadani_Biswas_2018PSRZs} & 61.4 & 56.0 & 63.8 & 38.4\tabularnewline
DLFZRL\cite{Tong2019HierarchicalDO} & 59.3 & \textbf{57.8} & 63.7 & 44.5\tabularnewline
\textbf{Proposed} & \textbf{62.1} & 57.6 & \textbf{64.6} & \textbf{44.7}\tabularnewline
\hline
\end{tabular}\label{c-table}
\end{table}
\begin{table*}[htbp]
\centering
\caption{Results of Generalized Zero-Shot Learning on four datasets under Proposed
Splits (PS)\cite{Xian_etal_2017ZsTGTBTU}}
\small{
\begin{tabular}{l|ccc|ccc|ccc|ccc} \hline
\multirow{2}{1cm}{\centering Methods} & \multicolumn{3}{c|}{AwA2} & \multicolumn{3}{c|}{CUB}
& \multicolumn{3}{c|}{SUN} & \multicolumn{3}{c}{aPY} \\
\cline{2-13}
~ & {\centering ts} & {\centering tr} & {\centering H}
& {\centering ts} & {\centering tr} & {\centering H}
& {\centering ts} & {\centering tr} & {\centering H}
& {\centering ts} & {\centering tr} & {\centering H} \\
\hline
\textbf{Non-Generative Models} & & & & & & & & & & & & \\
ALE\cite{Akata_etal_2013LeAbC} & {16.8 } & {76.1 } & {27.5 } & {23.7 } & {62.8 } & {34.4 } & {21.8 } & {33.1 } & {26.3 } & {4.6 } & {73.7 } & {8.7}\tabularnewline
DeViSE\cite{Frome_etal_2013DEVISE} & {13.4 } & {68.7 } & {22.4 } & {23.8 } & {53.0 } & {32.8 } & {16.9 } & {27.4 } & {20.9 } & {4.9 } & {76.9 } & {9.2}\tabularnewline
SynC\cite{Changpinyo_etal_2016SCZs} & {8.9 } & {87.3 } & {16.2 } & {11.5 } & {70.9 } & {19.8 } & {7.9 } & {43.3 } & {13.4 } & {7.4 } & {66.3 } & {13.3}\tabularnewline
ZSKL\cite{Zhang_etal_2018ZsKL} & {18.9 } & {82.7 } & {30.8 } & {21.6 } & {52.8 } & {30.6 } & {20.1 } & {31.4 } & {24.5 } & {10.5 } & {76.2 } & {18.5}\tabularnewline
DCN \cite{Liu_etal_2018GZsDCN} & {25.5 } & {84.2 } & {39.1 } & {28.4 } & {60.7 } & {38.7 } & {25.5 } & {37.0 } & {30.2 } & {14.2 } & {75.0 } & {23.9}\tabularnewline
DLFZRL \cite{Tong2019HierarchicalDO} & {- } & {- } & {45.1 } & {- } & {- } & {37.1 } & {- } & {- } & {24.6 } & {- } & {- } & {31.0}\tabularnewline
\textbf{Proposed} & 52.7 & 74.1 & \textbf{61.6} & {40.6 } & {55.1 } & \textbf{46.7}{ } & 41.7 & 37.4 & \textbf{39.5} & 31.5 & 51.8 & \textbf{39.2}\tabularnewline
\hline
\textbf{Generative Models} & & & & & & & & & & & & \tabularnewline
f-CLSWGAN\cite{Xian_etal_2018FGNZs} & {52.1 } & {68.9 } & {59.4 } & {43.7 } & {57.7 } & {49.7 } & {42.6 } & {36.6 } & {39.4 } & {- } & {- } & {-}\tabularnewline
F-VAEGAN-D2\cite{Xian2019FVAEGAND2AF} & {57.6 } & {70.6 } & {63.5 } & {48.4 } & {60.1 } & 53.6 & {45.1 } & {38.0 } & {41.3 } & {- } & {- } & {-}\tabularnewline
CADA-VAE\cite{Schonfeld_etal_2019GZsAVA} & {55.8 } & {75.0 } & {63.9 } & {51.6 } & {53.5 } & {52.4 } & {47.2 } & {35.7 } & {40.6 } & {- } & {- } & {-}\tabularnewline
CRnet\cite{Zhang2019CoRepresentationNF} & {52.6 } & {52.6 } & {63.1 } & {45.5 } & {56.8 } & {50.5 } & {34.1 } & {36.5 } & {35.3 } & {32.4 } & {68.4 } & {44.0}\tabularnewline
DLFZRL+softmax\cite{Tong2019HierarchicalDO} & {- } & {- } & {60.9 } & {- } & {- } & {51.9 } & {- } & {- } & 42.5 & {- } & {- } & {38.5}\tabularnewline
TCN \cite{Jiang2019TransferableCN} & 61.2 & 65.8 & 63.4 & 52.6 & 52.0 & 52.3 & 31.2 & 37.3 & 34.0 & 24.1 & 64.0 & 35.1\tabularnewline
GDAN \cite{Huang2019GenerativeDA} & 32.1 & 67.5 & 43.5 & 39.3 & 66.7 & 49.5 & 38.1 & 89.9 & 53.4 & 30.4 & 75.0 & 43.4\tabularnewline
IZF \cite{Shen2020InvertibleZR} & 60.6 & 77.5 & 68.0 & 52.7 & 68.0 & 59.4 & 52.7 & 57.0 & \textbf{54.8} & 42.3 & 60.5 & \textbf{49.8}\tabularnewline
DVBE \cite{Min2020DomainAwareVB} & 62.7 & 77.5 & 69.4 & 64.4 & 73.2 & \textbf{68.5} & 44.1 & 41.6 & 42.8 & 37.8 & 55.9 & 45.2\tabularnewline
IAS \cite{Chou2021AdaptiveAG} & 65.1 & 78.9 & \textbf{71.3} & 41.4 & 49.7 & 45.2 & 29.9 & 40.2 & 34.3 & 35.1 & 65.5 & 45.7\tabularnewline
\textbf{f-CLSWGAN+Proposed} & 56.4 & 83.2 & 67.2 & 52.1 & 55.8 & 53.9 & 53.3 & 35.0 & 42.3 & 37.1 & 57.7 & 45.2\tabularnewline
\hline
\end{tabular}\label{all-table}}
\end{table*}
\subsection{Generalized zero shot learning results}
\subsubsection*{Comparison with deterministic models}
We compare the performance of our proposed model with several recent deterministic models for GZSL. Taking DCN \cite{Liu_etal_2018GZsDCN} as the baseline model, as shown in \cref{all-table}, our method achieves higher accuracy than the other deterministic models on all datasets: it
obtains $21\%\thicksim64\%$ relative improvements over DCN and significantly outperforms
a previous state-of-the-art deterministic model, DLFZRL \cite{Tong2019HierarchicalDO}.
Besides, we observe that our deterministic model obtains
better results than some generative models; for example, it outperforms f-CLSWGAN on the AwA2 and SUN datasets.
\subsubsection*{Comparison with generative models}
We investigate the performance of our proposed method by incorporating a generative model, f-CLSWGAN. Seen image features and class-level attributes are used to train f-CLSWGAN,
and image features of unseen classes can then be generated from the unseen
class-level attributes. Including the generated features of the target
classes in the training set, we train the model with the loss function defined in \cref{sec:generative}.
As shown in \cref{all-table}, our proposed model significantly
outperforms the baseline model f-CLSWGAN, with $7\%\thicksim 13\%$ relative improvements. Our proposed method achieves comparable results to several recently
proposed sophisticated generative models. Moreover, unlike some generative
models, such as \cite{Jiang2019TransferableCN,Huang2019GenerativeDA,Chou2021AdaptiveAG}, which gain superior results on one or two datasets but inferior results
on the others, our proposed method obtains favorable results on all the datasets: it ranks among the top 3 $\thicksim$ top 5 methods on each dataset.
\section{Conclusion}
This paper proposes information-theoretic loss functions to quantify knowledge transfer and semantic relations for GZSL/ZSL. Leveraging
the proposed probability vector representation based on the prototypical model, the proposed losses can be evaluated effectively in simple closed forms. Experiments show that our approach yields state-of-the-art performance among deterministic approaches for GZSL and conventional ZSL tasks. Moreover, by incorporating generated data from f-CLSWGAN, the proposed method also achieves favorable performance. One limitation of this work is that considerable extra cross validation is needed to select the hyperparameters; another limitation is that the loss functions are correlated, so further study is needed to simplify the loss functions while keeping similar performance and to reduce the number of hyperparameters.
\clearpage
{\small{}{}{}{}{} \bibliographystyle{ieee_fullname}
|
1,314,259,995,652 | arxiv | \section*{Introduction}
Ohkawa proved in \cite{O} that the homotopy category of spectra has only a set
(that is, not a proper class) of distinct homological acyclic classes. The \emph{homological acyclic class} or \emph{Bousfield class} $\langle E\rangle$ of a spectrum $E$ consists of all $E_*$\nobreakdash-acyclic spectra, where $E_*$ is the reduced homology theory represented by~$E$. In other words, $\langle E\rangle$ is the collection of spectra $X$ such that $E\wedge X=0$ in the homotopy category.
The original source of this terminology is~\cite{B}.
Bousfield classes are closely related with localizations. The earliest form of localization in homotopy theory \cite{Su} was a technique to split homotopy types into their $p$\nobreakdash-primary components for all primes~$p$, thereby introducing the use of Hasse-principle methods in topology, both for spaces and for spectra. A~decade later, it was discovered that every $p$\nobreakdash-local spectrum could be further resolved into \emph{$v_n$\nobreakdash-periodic} components for $n\ge 0$. The resulting \emph{chromatic towers} and their associated spectral sequences became major tools to compute stable homotopy groups~\cite{R}.
All these are special cases of homological localizations. For each reduced homology theory $E_*$ defined on spaces or spectra there is an \emph{$E_*$\nobreakdash-localization functor} \cite{B}, which transforms the $E_*$\nobreakdash-equivalences
(that is, maps $X\to Y$ inducing isomorphisms $E_k(X)\cong E_k(Y)$ for all~$k$) into homotopy equivalences in a universal way.
Localization at a prime $p$ is obtained by letting $E_*$ be ordinary homology with $p$\nobreakdash-local coefficients, and the $n$th stage of the chromatic resolution is $E(n)_*$\nobreakdash-localization, where $E(n)=K(0)\vee\cdots\vee K(n)$ is a wedge of Morava $K$\nobreakdash-theories~\cite{JW}.
Two spectra $E$ and $F$ are called \emph{Bousfield equivalent} if $E_*$\nobreakdash-local\-iza\-tion is equivalent to $F_*$\nobreakdash-local\-iza\-tion. This happens precisely when the classes of $E_*$\nobreakdash-acyclic spectra and $F_*$\nobreakdash-acyclic spectra coincide, that is, when the Bousfield classes $\langle E\rangle$ and $\langle F\rangle$ are identical.
Thus, according to Ohkawa's theorem, Bousfield equivalence classes of spectra form a set. A~shorter proof of this fact was given by Dwyer and Palmieri in~\cite{DP}, and some consequences were described in~\cite{HP}.
In a different direction, Neeman proved in \cite{N} that Bousfield classes form a set
in the derived category of any commutative Noetherian ring.
In this context, the Bousfield class of a chain complex $A$ is defined as the collection
of chain complexes $X$ such that the derived tensor product $A\otimes X$ is zero.
Dwyer and Palmieri proved the same result in \cite{DP2} for the derived category of a truncated polynomial ring on countably many generators
over a countable field.
They asked in \cite[Question~5.9]{DP2} if Ohkawa's theorem
is in fact true in the derived category of every commutative ring. This was answered
in the affirmative by Stevenson in \cite{S} and by Iyengar and Krause in~\cite{IK}, and it also follows from the results of the present article.
Both the homotopy category of spectra and the derived category
of a commutative ring are homotopy categories of \emph{combinatorial model categories}, and their tensor product comes
from a closed monoidal structure in the model category. In this article we prove that the collection of Bousfield classes is a set under these general assumptions.
This extends the validity of Ohkawa's theorem, for example, to categories of motivic spaces or motivic spectra over any base scheme~\cite{MV}, and to categories of modules over (ordinary or motivic) ring spectra. Thus, Ohkawa's theorem also holds in the derived category of motives over any field $k$ of characteristic zero, since these are modules over a motivic Eilenberg--Mac\,Lane spectrum \cite{RO2}.
Specifically, we show that in every combinatorial model category $\calM$ (neither necessarily stable nor pointed), for every sufficiently large regular cardinal $\lambda$
there is only a set of distinct acyclic classes $\Acyclic(H)$ for functors $H\colon\calM\to\calM$ preserving $\lambda$\nobreakdash-filtered colimits and such that the terminal object of $\calM$ is $H$\nobreakdash-acyclic.
An object $X$ of $\calM$ is called \emph{$H$\nobreakdash-acyclic} if $HX$
is weakly equivalent to the terminal object, and we denote by $\Acyclic(H)$ the collection of all $H$\nobreakdash-acyclic objects.
If a model category $\calM$ is closed monoidal, combinatorial and pointed, then, since left adjoints preserve all co\-limits
and there are cofibrant replacement functors on $\calM$ preserving $\lambda$\nobreakdash-filtered colimits for sufficiently
large~$\lambda$, it follows that Bousfield classes in the homotopy category of $\calM$ form a~set.
In contrast with this fact, in the derived category of $\ZZ$ or in the homotopy category of spectra there is a proper class of distinct acyclic classes for nullification functors; see \cite[\S\,8]{St} for terminology and details. Each nullification functor $P_A$ preserves $\lambda$\nobreakdash-filtered colimits for $\lambda$ big enough, although the size of $\lambda$ increases with~$A$.
Our method of proof of Ohkawa's theorem for combinatorial model categories generalizes the argument given in~\cite{DP}.
A similar argument was used in \cite{S} for compactly generated tensor triangulated categories.
Using a different approach, it was shown in~\cite[Theorem~3.1]{IK}
that every well generated tensor triangulated category has only a set of Bousfield classes.
This result is consistent with the fact that
homotopy categories of stable combinatorial model categories are well generated.
Nevertheless, we emphasize that Ohkawa's theorem is by far not exclusively a result about triangulated categories.
For example, Corollary~\ref{discrete} below implies that there is only a set of
homological acyclic classes of simplicial sets or motivic spaces for every base scheme, and our proof just relies on the fact that these categories are locally presentable and homology functors preserve filtered colimits.
\bigskip
\noindent
\textbf{Acknowledgements}
We are indebted to Fernando Muro for frequent
exchanges of views on this topic, which made us rethink earlier versions of the article. Corollary~3.7 was kindly pointed out by Paul Arne {\O}stv{\ae}r.
We also appreciate input from George Raptis and Greg Stevenson.
\section{Combinatorial model categories}
\label{prelims}
We assume that regular cardinals are infinite.
For a regular cardinal~$\lambda$,
a small category $\calK$ is \emph{$\lambda$\nobreakdash-filtered} if it is nonempty and,
given any set of objects $\{k_i\mid i\in I\}$ where $|I|<\lambda$, there is an object $k$ and a
morphism $k_i\to k$ for each $i\in I$, and, moreover, given any set of parallel arrows
between two fixed objects $\{\alpha_j\colon k\to k' \mid j\in J\}$ where $|J|<\lambda$, there is a morphism
$\gamma\colon k'\to k''$ such that $\gamma\circ\alpha_j$ is the same morphism for all $j\in J$.
An object $X$ of a category $\calC$ is \emph{$\lambda$\nobreakdash-presentable}
if the functor $\calC(X,-)$ from $\calC$ to sets preserves $\lambda$\nobreakdash-filtered colimits.
A~cocomplete category $\calC$ is \emph{locally $\lambda$\nobreakdash-pres\-ent\-able} if the collection of isomorphism classes of $\lambda$\nobreakdash-presentable objects is a set and every object of $\calC$ is a $\lambda$\nobreakdash-filtered colimit of $\lambda$\nobreakdash-presentable objects.
A~category is called \emph{locally presentable} if it is locally $\lambda$\nobreakdash-presentable for
some regular cardinal~$\lambda$.
See \cite[Section~1.B]{AR}, \cite{GU} or \cite{MP} for further information about locally presentable categories.
The essentials of Quillen model categories can be found in \cite{H} or~\cite{Q}.
A model category is \emph{pointed} if it has a zero object, i.e.,
if the initial object and the terminal object are isomorphic.
A~model category $\calM$ is called \emph{combinatorial} if it is cofibrantly generated~\cite{Hi, H} and the underlying category is locally presentable.
Dugger proved in \cite{D} that a model category is combinatorial if and only if it is Quillen equivalent to a left Bousfield localization of a category of diagrams of simplicial sets equipped with the projective model structure. Hence, many model categories of interest in various contexts are combinatorial. Examples relevant to the present article are pointed or unpointed simplicial sets, pointed or unpointed motivic spaces \cite{DRO, MV}, symmetric spectra over simplicial sets \cite[\S\,3.4]{HSS} or over motivic spaces~\cite{J}, module spectra over a ring spectrum \cite[Theorem~4.1]{SS}, and bounded or unbounded chain complexes of modules over a ring \cite[\S\,2.3]{H}.
\begin{lemma}
\label{suff}
If $\calM$ is a combinatorial model category, then for every ordinal $\alpha$ there is a regular cardinal $\lambda>\alpha$ with the following properties:
\begin{itemize}
\item[{\rm (i)}] $\calM$ is locally $\lambda$\nobreakdash-presentable;
\item[{\rm (ii)}] there are sets of generating cofibrations and generating trivial cofibrations in $\calM$ whose domains and codomains are $\lambda$\nobreakdash-pres\-ent\-able;
\item[{\rm (iii)}] there are fibrant and cofibrant replacement functors on $\calM$ that preserve $\lambda$\nobreakdash-filtered colimits;
\item[{\rm (iv)}] the terminal object of $\calM$ is $\lambda$\nobreakdash-presentable.
\end{itemize}
\end{lemma}
\begin{proof}
Take first a regular cardinal $\mu>\alpha$
such that $\calM$ is locally $\mu$\nobreakdash-presentable. This is possible since, by \cite[Theorem~1.20]{AR},
if $\calM$ is locally $\nu$\nobreakdash-presentable and $\nu'\ge\nu$ then $\calM$ is also locally $\nu'$\nobreakdash-presentable.
Next, pick a set $\calG$ of generating cofibrations and a set $\calJ$ of generating trivial cofibrations in $\calM$
and choose a regular cardinal $\lambda\ge\mu$ big enough so that all the domains and codomains
of morphisms in $\calG$ and $\calJ$ are $\lambda$\nobreakdash-presentable, and such that the terminal object of $\calM$
is $\lambda$\nobreakdash-presentable as well.
Such a choice is possible by \cite[Proposition~1.16 and Remark~1.30(1)]{AR}.
Finally, (iii) is a consequence of (i) and~(ii), as shown in \cite[\S 7]{D} or \cite[\S 3]{R1}.
\end{proof}
For a combinatorial model category $\calM$ and a sufficiently big regular cardinal~$\lambda$ (as provided by Lemma~\ref{suff}), we use the term \emph{$\lambda$\nobreakdash-combinatorial structure} on $\calM$ to designate a choice of the following items:
a set $\calM_{\lambda}$ of representatives of isomorphism classes of $\lambda$\nobreakdash-presentable objects, including the terminal object, such that every object of $\calM$ is a $\lambda$\nobreakdash-filtered colimit of objects in~$\calM_{\lambda}$;
a set $\calG$ of generating cofibrations and a set $\calJ$ of generating trivial cofibrations whose domains and codomains are in~$\calM_{\lambda}$; and a fibrant replacement functor and a cofibrant replacement functor both preserving $\lambda$\nobreakdash-filtered colimits.
Suppose that a category $\calC$ is locally $\lambda$\nobreakdash-presentable and its terminal object is $\lambda$\nobreakdash-presentable. Then, if we endow $\calC$ with the \emph{discrete} model structure, where the weak equivalences are the isomorphisms
and all morphisms are fibrations and cofibrations, the resulting model category has a $\lambda$\nobreakdash-combinatorial structure where the set $\calG$ of generating cofibrations is the set of all morphisms between members of the chosen set $\calC_{\lambda}$; cf.\ \cite[Example~4.6]{R2}. Recall that locally presentable categories are cocomplete by definition and they are also complete by \cite[Corollary~1.28]{AR}.
The condition that the terminal object be $\lambda$\nobreakdash-presentable
holds automatically when it is a zero object, but may fail otherwise, as exemplified by the category ${\rm Set}^I$ of $I$\nobreakdash-sorted sets (i.e., functors $I\to{\rm Set}$), where $I$ is any infinite set. This category is locally $\aleph_0$\nobreakdash-presentable by \cite[Corollary~1.54]{AR}, yet its terminal object is not $\aleph_0$\nobreakdash-presentable.
\section{Main result}
\label{results}
Let $\calM$ be a combinatorial model category and suppose given a $\lambda$\nobreakdash-combinatorial structure on it for a suitable regular cardinal~$\lambda$.
Recall that, if $\calG$ is the given set of generating cofibrations, then a morphism $f\colon X\to Y$ is a trivial fibration in $\calM$ if and only if it has the right
lifting property with respect to all the morphisms in~$\calG$.
An object $X$ of $\calM$ is called \emph{contractible} if the unique morphism
from $X$ to the terminal object $*$ is a weak equivalence.
For a functor $H\colon\calM\to\calM$, an object $X$ is called \emph{$H$\nobreakdash-acyclic} if $HX$ is contractible.
We denote by $\Acyclic(H)$ the collection of all $H$\nobreakdash-acyclic objects in~$\calM$.
Given a functor $H\colon\calM\to\calM$ and a
triple $(\sigma,A,f)$ where $\sigma\colon P\to Q$ is in $\calG$ and
\[
f\colon P\longrightarrow RHA
\]
is a morphism with $A\in\calM_{\lambda}$, where $R$ is the given fibrant replacement functor, we denote by $\Trivializer_H(\sigma,A,f)$ the set of all
morphisms $t\colon A\to B$ with $B\in\calM_{\lambda}$ for which there exists a morphism $g\colon Q\to RHB$ such that
$RHt\circ f=g\circ\sigma$:
\[
\xymatrix{
P\ar[d]^-{\sigma} \ar[r]^-{f} & RHA \ar[rr]^{RHt} & & RHB. \\
Q \ar@{.>}[urrr]_g
}
\]
Note that, since the terminal object $*$ is in $\calM_{\lambda}$, if $H(*)$ is contractible then the morphism $A\to *$ is in $\Trivializer_H(\sigma,A,f)$
for every $(\sigma,A,f)$.
Finally, let $\Ohkawa(H)$ denote the set whose elements are all the distinct sets
$\Trivializer_H(\sigma,A,f)$ with $A\in\calM_{\lambda}$, $\sigma\colon P\to Q$ in $\calG$, and $f\colon P\to RHA$.
\begin{theorem}
\label{mainthm}
Suppose given a $\lambda$\nobreakdash-combinatorial structure on a model category $\calM$ for a regular cardinal~$\lambda$. Let $H_1$ and $H_2$ be endofunctors of $\calM$ that preserve $\lambda$\nobreakdash-filtered colimits. Then, if $\Ohkawa(H_2)\subseteq\Ohkawa(H_1)$ and the terminal object of $\calM$ is $H_2$\nobreakdash-acyclic, it follows that $\Acyclic(H_1)\subseteq\Acyclic(H_2)$.
\end{theorem}
\begin{proof}
Let $X$ be $H_1$\nobreakdash-acyclic. In order to prove that
$X$ is $H_2$\nobreakdash-acyclic, we need to show that for every $\sigma\colon P\to Q$ in $\calG$
and every $f\colon P\to RH_2X$ there is a morphism $g\colon Q\to RH_2X$ such that
$g\circ\sigma=f$.
Write $X\cong\colim_{\calK}\,D$ for a diagram $D\colon\calK\to\calM$ where
$\calK$ is $\lambda$\nobreakdash-filtered and $Dk$ is in $\calM_{\lambda}$ for all $k\in\calK$.
Then $H_1X\cong\colim_{\calK}\,(H_1\circ D)$ and $H_2X\cong\colim_{\calK}\,(H_2\circ D)$.
Suppose given $f\colon P\to RH_2X$ for a morphism $\sigma\colon P\to Q$ in~$\calG$.
Since $P$ is $\lambda$\nobreakdash-presentable, $f$ factors as
\[
\xymatrix{
P \ar[r]^-{f'} & RH_2Dk \ar[rr]^{RH_2\delta_k} & & RH_2X
}
\]
for some $k\in\calK$, where $\delta_k\colon Dk\to X$ denotes the corresponding cocone morphism.
Thus, we may consider the set $\Trivializer_{H_2}(\sigma,Dk,f')$ in $\Ohkawa(H_2)$, which is nonempty
since $Dk\to *$ is in it, as $H_2(*)$ is contractible.
By assumption, $\Trivializer_{H_2}(\sigma,Dk,f')$ is then a member of $\Ohkawa(H_1)$, so there
is an object $A\in\calM_{\lambda}$ and there are morphisms
$\tau\colon U\to V$ in $\calG$ and $u\colon U\to RH_1A$ such that
\begin{equation}
\label{trivializers}
\Trivializer_{H_2}(\sigma,Dk,f')=\Trivializer_{H_1}(\tau,A,u).
\end{equation}
This forces, by definition, that $A=Dk$.
Since $H_1X$ is contractible, the morphism $RH_1X\to *$ is a trivial fibration
and hence there is a morphism $v\colon V\to RH_1X$ such that $v\circ\tau=RH_1\delta_k\circ u$.
Since $V$ is $\lambda$\nobreakdash-presentable, there is an object $k'\in\calK$
such that $v$ factors as
\[
\xymatrix{
V \ar[r]^-{w} & RH_1Dk' \ar[rr]^-{RH_1\delta_{k'}} & & RH_1X.
}
\]
Since $\calK$ is filtered,
there is an object $k''\in\calK$ together with morphisms $\alpha\colon k\to k''$ and $\beta\colon k'\to k''$.
Furthermore, since $U$ is $\lambda$\nobreakdash-presentable and
\[
RH_1\delta_{k''}\circ RH_1D\alpha\circ u = RH_1\delta_{k''}\circ RH_1D\beta\circ w\circ\tau,
\]
there is an object $k'''\in\calK$ and a morphism $\gamma\colon k''\to k'''$ such that the two composites
\[
\xymatrix{
U \ar[r]^-{u} & RH_1Dk \ar[rr]^-{RH_1D(\gamma\circ\alpha)} & & RH_1Dk'''
}
\]
and
\[
\xymatrix{
U \ar[r]^-{\tau} & V \ar[r]^-{w} & RH_1Dk' \ar[rr]^-{RH_1D(\gamma\circ\beta)} & & RH_1Dk'''
}
\]
coincide. Then $D(\gamma\circ\alpha)$ is in $\Trivializer_{H_1}(\tau,Dk,u)$
and therefore, by~\eqref{trivializers}, it is also in $\Trivializer_{H_2}(\sigma,Dk,f')$,
which means that the composite
\[
\xymatrix{
P \ar[r]^-{f'} & RH_2Dk \ar[rr]^-{RH_2D(\gamma\circ\alpha)} & & RH_2Dk'''
}
\]
factors through $\sigma\colon P\to Q$. Hence $f\colon P\to RH_2X$ also factors through $\sigma$ and this fact concludes the proof.
\end{proof}
\section{Consequences}
\label{consequences}
\begin{corollary}
\label{aset}
If a model category $\calM$ admits a $\lambda$\nobreakdash-combinatorial structure for a regular cardinal~$\lambda$, then there is only a set of distinct classes $\Acyclic(H)$ where $H$ runs over all functors
$\calM\to\calM$ that preserve $\lambda$\nobreakdash-filtered colimits and such that
the terminal object is $H$\nobreakdash-acyclic.
\end{corollary}
\begin{proof}
Suppose that there is a proper class of functors $H_{i}$
preserving $\lambda$\nobreakdash-filtered colimits,
such that the classes $\Acyclic(H_{i})$ are all distinct and contain the terminal object.
Then, by Theorem~\ref{mainthm}, after any choice of a $\lambda$\nobreakdash-combinatorial structure on $\calM$ the sets
$\Ohkawa(H_{i})$ will be distinct. This is impossible, since all sets $\Ohkawa(H_{i})$
are contained in the power set of the union of $\calM(A,B)$ for all $A,B\in\calM_{\lambda}$, where $\calM_{\lambda}$ denotes the chosen set of representatives of isomorphism classes of $\lambda$\nobreakdash-presentable objects in~$\calM$.
\end{proof}
Observe that this argument yields a bound on the cardinality of the set
of distinct classes $\Acyclic(H)$ for each sufficiently large regular cardinal~$\lambda$, namely
$2^{2^{\kappa}}$ where $\kappa$ is the cardinality of the set of all morphisms between objects of~$\calM_{\lambda}$.
As pointed out in \cite{DP}, the cardinality of the set of homological acyclic classes in the homotopy category of spectra is bounded above by $2^{2^{\aleph_0}}$, since there are only countably many isomorphism classes of finite spectra.
Homological acyclic classes of spectra form a lattice, whose precise size is not known. Its cardinality is at least $2^{\aleph_0}$, since distinct sets of primes $J$ yield distinct acyclic classes represented by Moore spectra $M\mathbb{Z}[J^{-1}]$.
Another set of distinct homological acyclic classes of spectra of cardinality $2^{\aleph_0}$ was displayed in~\cite[Lemma~3.4]{DP}, namely those represented by $\bigvee_{n\in A}K(n)$ for every subset $A$ of $\NN\cup\{\infty\}$. Lattices of homological acyclic classes have been calculated in several localized categories of spectra, including the harmonic category; see~\cite{W}.
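To spell out the first of these examples, recall the standard computation that for Moore spectra $\pi_*\bigl(MA\wedge MB\bigr)$ is concentrated in degrees $0$ and $1$, where it equals $A\otimes B$ and $\mathrm{Tor}(A,B)$, respectively. Hence
\[
M\ZZ[J^{-1}]\wedge M\ZZ/p\simeq 0
\;\Longleftrightarrow\;
\ZZ[J^{-1}]\otimes\ZZ/p=0=\mathrm{Tor}(\ZZ[J^{-1}],\ZZ/p)
\;\Longleftrightarrow\;
p\in J,
\]
so $M\ZZ/p$ belongs to $\langle M\ZZ[J^{-1}]\rangle$ precisely when $p\in J$, and distinct sets of primes indeed yield distinct acyclic classes.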
\begin{corollary}
\label{bset}
If $\calM$ is a pointed combinatorial model category,
then there is only a set of distinct classes $\Acyclic(H)$ where
$H\colon\calM\to\calM$ has a right adjoint.
\end{corollary}
\begin{proof}
Left adjoints preserve all colimits and, in particular, the initial object (which is also terminal, since $\calM$ is pointed).
Hence, we may pick a regular cardinal $\lambda$ such that $\calM$ admits a $\lambda$\nobreakdash-combin\-atorial structure and the result follows from Corollary~\ref{aset}.
\end{proof}
Let $\calM$ be a monoidal model category in the sense of \cite[\S 4.2]{H}, so we tacitly assume that it is closed, but not necessarily symmetric.
For an object $E$ of~$\calM$, the \emph{Bousfield class} $\langle E\rangle$ is the class of all objects $X$
such that the derived tensor product of $E$ and $X$ is isomorphic to the terminal object in the homotopy category $\Ho(\calM)$.
Thus, the following statement generalizes Ohkawa's theorem.
\begin{corollary}
\label{bousfield}
If $\calM$ is a pointed combinatorial monoidal model category,
then there is only a set of distinct Bousfield classes in~$\Ho(\calM)$.
\end{corollary}
\begin{proof}
Let $\lambda$ be a regular cardinal such that $\calM$ has a $\lambda$\nobreakdash-combinatorial structure and let
$Q$ be the chosen cofibrant replacement functor
that preserves $\lambda$\nobreakdash-filtered colimits on~$\calM$.
For each object~$E$, consider the functor $H_E\colon\calM\to\calM$ defined as $H_EX=QE\wedge QX$. Then $H_E$
preserves $\lambda$\nobreakdash-filtered colimits
for all~$E$, since the functor $QE\wedge (-)$ has a right adjoint ${\rm Hom}_{\ell}(QE,-)$ and hence it preserves all colimits, including the zero object.
Moreover, the Bousfield class $\langle E\rangle$ is equal to $\Acyclic(H_E)$, as $QE\wedge QX$ represents the derived tensor product of $E$ and~$X$.
Since, by Corollary~\ref{aset}, there is only a set of distinct
classes $\Acyclic(H)$ where $H$ preserves $\lambda$\nobreakdash-filtered colimits
and the zero object, the claim follows.
\end{proof}
\begin{corollary}
\label{derived}
For every commutative ring $R$ there is only a set of distinct Bousfield classes in the derived category $\calD(R)$.
\end{corollary}
\begin{proof}
For every ring $R$,
the category $\calD(R)$ is the homotopy category of the model category $\Ch(R)$ of unbounded chain complexes of $R$\nobreakdash-modules with the standard model structure \cite[Definition~2.3.3]{H}. This structure is combinatorial \cite[Theorem~2.3.11]{H} and it is symmetric monoidal if the ring $R$ is commutative \cite[Prop\-osi\-tion~4.2.13]{H}.
\end{proof}
According to \cite[IV.2]{EKMM} or \cite[Theorem~5.1.6]{SS2}, the category $\calD(R)$ is equivalent to the homotopy category of (strict) $HR$\nobreakdash-module spectra for each commutative ring $R$, where $HR$ denotes the Eilenberg--Mac\,Lane spectrum of ordinary cohomology with coefficients in~$R$. Thus, the following result extends Corollary~\ref{derived}. By a commutative ring spectrum we mean a commutative monoid in the category of symmetric spectra over simplicial sets~\cite{HSS}.
\begin{corollary}
\label{Emodules}
For every commutative ring spectrum $E$ there is only a set of distinct Bousfield classes in the homotopy category of $E$\nobreakdash-module spectra.
\end{corollary}
\begin{proof}
Modules over a commutative ring spectrum $E$ admit a symmetric monoidal model category structure which is combinatorial; see \cite[Theorem~4.1]{SS} for details.
\end{proof}
Let $S$ be a Noetherian scheme of finite Krull dimension and denote by ${\rm Sm}/S$ the category of smooth schemes of finite type over~$S$.
Let $\Mot_S$ be the category of pointed simplicial presheaves on ${\rm Sm}/S$, that is, contravariant functors from ${\rm Sm}/S$ to pointed simplicial sets.
Each pointed simplicial set is viewed as a constant presheaf, and each object of ${\rm Sm}/S$ is treated as a discrete simplicial presheaf via the Yoneda embedding, with an added disjoint basepoint.
Since ${\rm Sm}/S$ is equivalent to a small category, $\Mot_S$ is locally finitely presentable by \cite[Corollary~1.54]{AR}. Moreover, as shown in \cite[\S 2]{DRO} or \cite[Theorem~1.2]{J}, the Nisnevich topology on ${\rm Sm}/S$ endows $\Mot_S$ with a proper, cofibrantly generated, monoidal model category structure (with object\-wise smash product), whose associated homotopy category is equivalent to the pointed motivic homotopy category ${\rm H}_*(S)$ of Morel--Voevodsky \cite{MV, V} over the base scheme~$S$.
The category $\Mot_S$ can be stabilized into a monoidal stable model category by considering \emph{motivic symmetric spectra} with respect to the Thom space $T=\mathbb{A}_S^1/(\mathbb{A}_S^1-\{0\})$ of the trivial line bundle over~$S$ as in~\cite{J}, or \emph{motivic $\mathbb{S}$\nobreakdash-modules} as in~\cite{Hu}, or \emph{motivic functors} as in~\cite{DRO}.
All these stable model categories are Quillen equivalent, and their homotopy categories are equivalent to the stable motivic homotopy category~$\SH$.
It is important to make a distinction between Bousfield classes and homological acyclic classes in the motivic context. Namely, if $E$ and $X$ are motivic spectra, the reduced $E$\nobreakdash-homology groups of $X$ are defined for $p,q\in\ZZ$~as
\[
E_{p,q}(X)=\pi_{p,q}(E\wedge X)=[S_s^{p-q}\wedge S_t^q,E\wedge X],
\]
where $S_s^1$ is the simplicial circle $\Delta^1/\partial\Delta^1$ and $S^1_t$ is the algebraic circle $\mathbb{A}_S^1-\{0\}$, and
no notational distinction is made between a motivic space and its associated suspension spectrum. The \emph{homological acyclic class} of $E$ is the class of those $X$ such that $E_{p,q}(X)=0$ for all $p$ and~$q$, while the \emph{Bousfield class} of $E$ is the class of those $X$ such that $E\wedge X=0$ in~$\SH$. As explained in \cite[\S 9]{DI} or \cite[\S 3.2]{J}, the latter condition is equivalent to $\pi_{p,q}(U_+\wedge E\wedge X)=0$ for all $p$ and $q$ and all smooth schemes $U$ of finite type over~$S$, where $U_+$ denotes the disjoint union of $U$ and~$S$.
Hence, $E\wedge X=0$ is a stronger statement than $E_{*,*}(X)=0$. Note, however, that if the homological acyclic classes of $E$ and $F$ coincide then their Bousfield classes coincide as well.
As we next state, motivic Bousfield classes form a set. The same result for homological acyclic classes is proved in Corollary~\ref{motivic2}.
\begin{corollary}
\label{motivic1}
For each Noetherian scheme $S$ of finite Krull dimension there is only a set of distinct Bousfield classes in the stable motivic homotopy category $\SH$ with base scheme~$S$.
\end{corollary}
\begin{proof}
As shown in \cite{J}, the category of motivic symmetric spectra admits a proper, cofibrantly generated, monoidal model category structure whose homotopy category is equivalent to~$\SH$.
Hence, Corollary~\ref{bousfield} applies.
\end{proof}
According to \cite[Theorem~13]{NS} or \cite[Proposition~5.5]{V}, the full subcategory of compact objects in $\SH$ is countable if ${\rm Sm}/S$ is countable (where a category is called \emph{countable} if it is equivalent to a category with only countably many morphisms). This implies that, if $S$ can be covered by affine open subsets ${\rm Spec}(R_i)$ where each ring $R_i$ is countable, then the cardinality of the lattice of Bousfield classes in $\SH$ is bounded above by $2^{2^{\aleph_0}}$.
This bound also follows from tensor triangulated category arguments; cf.\,\cite[Theorem~2.3]{IK}.
\begin{corollary}
\label{ostvaer}
There is only a set of distinct Bousfield classes in the derived category ${\rm DM}(k)$ of motives over any field $k$ of characteristic zero.
\end{corollary}
\begin{proof}
As shown in \cite[Theorem~1]{RO2}, the category ${\rm DM}(k)$ is equivalent to the homotopy category of modules over the commutative symmetric ring spectrum $M\ZZ$ that represents motivic cohomology for the given base field~$k$. According to \cite[Proposition~38]{RO2}, such modules form a symmetric monoidal model category. Since this model category is indeed combinatorial by \cite[Theorem~4.1]{SS}, we may use again Corollary~\ref{bousfield}.
\end{proof}
If $\calC$ and $\calD$ are any two categories and $\calD$ has a terminal object~$*$, then the \emph{kernel} of a functor $H\colon \calC\to\calD$ is the class of objects $X$ in $\calC$ such that $HX\cong *$.
Suppose that $\calC$ is locally $\lambda$\nobreakdash-presentable
and its terminal object is $\lambda$\nobreakdash-presentable.
Then, as mentioned in Section~\ref{prelims}, if we endow $\calC$ with the discrete model structure, the resulting model category has a $\lambda$\nobreakdash-comb\-in\-atorial structure.
For a functor
$H\colon\calC\to\calC$, the acyclic class $\Acyclic(H)$ is the kernel of~$H$.
Hence, Corollary~\ref{aset} specializes to the statement that, if $\lambda$
is a regular cardinal such that $\calC$ is locally $\lambda$\nobreakdash-presentable and its terminal object is $\lambda$\nobreakdash-presentable,
then there is only a set of distinct kernels of
functors $\calC\to\calC$ preserving $\lambda$\nobreakdash-filtered colimits and the terminal object.
The following variant is more useful.
\begin{corollary}
\label{discrete}
Let $\calC$ and $\calD$ be locally $\lambda$\nobreakdash-presentable
categories, where $\lambda$ is a regular cardinal. Suppose that the terminal object of $\calC$ is $\lambda$\nobreakdash-pres\-ent\-able and $\calD$
has a zero object. Then there is only a set of distinct kernels of
functors $H\colon\calC\to\calD$ that preserve $\lambda$\nobreakdash-filtered colimits and terminal objects.
\end{corollary}
\begin{proof}
Note that, since $\calD$ is locally $\lambda$\nobreakdash-presentable, an object $Y$ of $\calD$
is isomorphic to the zero object $0$ if and only if each morphism $P\to Y$
with $P\in\calD_{\lambda}$ factors through~$0$.
For each functor $H\colon\calC\to\calD$, consider the set $\Ohkawa(H)$
whose elements are the sets
\[
T_H(f)=\{t\in\calC(A, B) \mid \text{$B\in\calC_{\lambda}$ and $Ht\circ f$ factors through $0$}\},
\]
where $f$ runs over all morphisms $P\to HA$ in which $A\in\calC_{\lambda}$ and $P\in\calD_{\lambda}$.
Then it follows as in the proof of Theorem~\ref{mainthm} that
an equality $\Ohkawa(H_1)=\Ohkawa(H_2)$ implies that the kernels of $H_1$ and $H_2$ coincide,
if $H_1$ and $H_2$ preserve $\lambda$\nobreakdash-filtered colimits and terminal objects.
Since there is only a set of distinct sets~$\Ohkawa(H)$, the claim is proved.
\end{proof}
If $E_*$ denotes the reduced homology theory on pointed simplicial sets represented by a spectrum~$E$, then the condition $E_*(X)=0$ on a given $X$ is equivalent to $E\wedge\Sigma^{\infty}X=0$ in the homotopy category of spectra. Hence, it follows from Ohkawa's theorem that the collection of distinct homological acyclic classes of pointed simplicial sets is also a set.
This result can be inferred directly from Corollary~\ref{discrete} without passing to the category of spectra, since representable homology theories preserve $\aleph_0$\nobreakdash-filtered colimits if viewed as functors from pointed simplicial sets to graded abelian groups.
The same argument is valid in motivic homotopy theory:
\begin{corollary}
\label{motivic2}
There is only a set of distinct homological acyclic classes in the unstable motivic homotopy category and in the stable motivic homotopy category over any base scheme $S$.
\end{corollary}
\begin{proof}
This follows from Corollary~\ref{discrete}, both in the stable case and in the unstable case, by viewing $E_{*,*}$ as a functor to bigraded abelian groups for each motivic spectrum~$E$. This functor preserves $\aleph_0$\nobreakdash-filtered colimits since smashing with a cofibrant replacement of $E$ has a right adjoint and the circles $S^1_s$ and $S^1_t$ are finitely presentable.
\end{proof}
Note that Corollary~\ref{motivic1} also follows from Corollary~\ref{discrete} by letting $\pi_{*,*}$ take values in the category of presheaves of bigraded abelian groups on~${\rm Sm}/S$, which is locally finitely presentable by \cite[Corollary~1.54]{AR}.
|
1,314,259,995,653 | arxiv | \section{Introduction}
AM Her systems (polars) are semidetached binaries that consist of
strongly magnetic white dwarf (WD) primaries and red dwarf (RD)
secondaries. Polars were first recognized in 1976 with the discovery
of circular polarization in AM Her (Tapia 1976, 1977).
Magnetic fields play a crucial role in determining
the system's parameters. The field strength of the primary is
so high that the material flowing from the companion does not form an
accretion disk around the WD, but is guided
along the field lines to an accretion column that forms near the magnetic pole of the primary.
The flux distribution from the column consists of
hard X-ray bremsstrahlung, an approximately
blackbody spectrum in the UV and soft X-ray and cyclotron emission, which is the primary source of optical radiation.
The brightest polar AM Herculis (RX J1816.2 +4952 $\equiv$ EUVE J1816+49.8 $\equiv$ 3U 1809+50 $\equiv$ H
1816+49) was classified as a cataclysmic variable (CV) by Berg \& Duthie (1977).
It is not an eclipsing binary despite the observed large-amplitude minima in the
light curves. The system has been studied for different wavelengths. Each study revealed different characteristics of the polar.
The system shows long-term, non-periodic variations where the brightness of the system varies by about 2 mag (see, Hessman, 2000),
known as high and low states. High and low states of polars are thought
to be due to the variation in the mass transfer rate from the RD to the WD.
In a recent study on the high and low states of the system by Wu \& Kiss (2008)
it was found that the magnetic field of the primary is a crucial parameter in
regulating these states.
Previously, Livio \& Pringle (1994) explained the observed low state of the system with the
starspots migrating under the inner Lagrangian point (L1).
\begin{figure*}
\includegraphics{f1}
\caption{Long (more than 25 years) optical light curve of AM Her from AAVSO data.
The times of the RTT150 and ROTSE IIId observations are shown with vertical lines (see Table~1 for the data). (A color version of this figure is available in the online journal.)}\label{f1}
\end{figure*}
\begin{figure*}
\includegraphics[angle=90]{f2}
\caption{Light curves of AM Her obtained in the period 2004--2007. All light curves are in R$_{c}$ except (O), which was obtained in I$_{c}$.}\label{f2}
\end{figure*}
Observational properties of polars generally depend on the
observed state and wavelength. For instance, short-term, low-amplitude variations in the X-ray/optical bands, known as flickering, are
detected when the system is in high
and intermediate states (King, 1989; Bonnet-Bidaud et al. 1991).
Different mechanisms have been suggested to explain these variations. Szkody \& Margon (1980), using the cross-correlation
functions of high state observations of AM Her,
reported a strong correlation between the Johnson U, V, and $\lambda$4686 features
and discussed ionizing radiation as a mechanism responsible for the observed
small-amplitude brightness variations. Another mechanism is the oscillation of the magnetic flux tubes (Tuohy et al. 1981). Larsson (1988), on the other hand,
proposed an oscillatory shock height model to explain
optical variations with periods of a few seconds. King (1989) argued that X-ray
irradiation of the accretion flow below the L1 point produces
oscillating ionization fronts. These ionization fronts modulate the
accretion rate through L1. The timescale of these oscillations is
the dynamical timescale near the L1 point, which is about 8 minutes for AM Her.
Besides the high state, the low state of the system has also been the subject of
interest because of its physical properties and the poorly defined characteristics
that can change over time.
The complex and unpredictable observational properties of polars prompted us to
obtain the long-term optical variation of AM Her.
Studying light variations obtained over a long period of time is necessary to
reveal this complex structure.
The study of AM Her was carried out with the 1.5m RTT150 and ROTSE IIId (Robotic Optical Transient Search Experiment-IIId) telescopes of
the T\"UB\.ITAK National Observatory (TUG). The results are presented in Section 2. In Section 3 we analyze the observations and discuss them in Section 4.
\section{New observations and light variation}
Optical photometry of the system was obtained using the Russian--Turkish 1.5m telescope (RTT150)
over 19 nights in the period 2003--2007 (Table~\ref{amhertab1}).
All the images were obtained using the Andor CCD. The Andor CCD camera is
equipped with a set of Cousins (R$_{c}$, I$_{c}$) and Johnson ($V$) filters. During the data reduction a few comparison and check stars were chosen
in the same CCD frame including GSC~3533~1026 and GSC~3533~1021.
All CCD reductions were done with the IRAF\footnote{IRAF is distributed by
National Optical Astronomy Observatories, which is operated by the
Association of Universities for Research in Astronomy, Inc., under
cooperative agreement with the National Science Foundation, U.S.A. } package.
In Fig.~\ref{f1} the times of the RTT150 and ROTSE IIId observations are indicated by vertical lines on the optical light curve of AM Her from the data of the American Association of Variable Star Observers (AAVSO). Light variations of AM Her obtained with RTT150 over long periods of time show that the light-variation amplitude changes with time.
The light curves in Fig.~\ref{f2} are plotted as a function of the Julian Date (JD) to make clear any possible variation between successive orbital phases (see Kalomeni et al. 2005 for the light variation of the system obtained in 2003).
\begin{table}
\begin{center}
\scriptsize
\caption{Summary of the observations of AM Her with the RTT150 and ROTSE IIId* telescopes. HJD$^{+}$ denotes
HJD$-$2400000 of the run start and end}\vspace{0.25cm} \label{amhertab1}
\begin{tabular}{lllcc}
\hline
Run & HJD$^{+}$ & Filter & N$_{\rm{Obs}}$ & State \\
\hline
1 & 52858.46-52858.59 &R$_{c}$ & 162 &Intermediate \\
2 & 52859.47-52859.59 &R$_{c}$ & 195 &Intermediate \\
3 & 53037.55-53037.67 &R$_{c}$ & 148 &Active \\
4 & 53193.29-53193.57 &R$_{c}$ & 444 &Low \\
5 & 53194.30-53194.52 &R$_{c}$ & 261 &Active \\
6 & 53195.33-53195.59 &R$_{c}$ & 371 &Low \\
7 & 53425.51-53425.55 &R$_{c}$ & 61 &Low \\
8 & 53426.47-53426.53 &R$_{c}$ & 70 &Low \\
9 & 53450.53-53450.63 &R$_{c}$ & 129 &Low \\
10* & 53464.95-53582.84 &$-$ & 387 &Low + active + high \\
11 & 53489.45-53489.60 &R$_c$ & 229 &Active \\
12 & 53682.20-53682.33 &R$_{c}$ & 90 &Low \\
13 & 53682.20-53682.30 &V & 51 &Low \\
14 & 53683.19-53683.35 &R$_{c}$ & 266 &Active \\
15 & 54014.23-54014.31 &R$_{c}$ & 138 &Active \\
16 & 54015.26-54015.40 &R$_{c}$ & 200 &Low \\
17 & 54102.66-54102.68 &R$_{c}$ & 14 &Low \\
18 & 54103.17-54103.20 &R$_{c}$ & 16 &Low \\
19 & 54324.40-54324.56 &I$_{c}$ & 253 &High \\
20 & 54324.40-54324.60 &R$_{c}$ & 297 &High \\
21* & 54012.22-54402.21 &$-$ & 1020 &Low + active + high \\
22* & 54089.21-54573.53 &$-$ & 70 & High \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\includegraphics{f3}
\caption{(a) Long-term light variation of AM Her spread over 205 nights, obtained with ROTSE IIId in the period 2005--2008; (b) the magnitude excess observed at HJD 2453545.}\label{f3}
\end{figure*}
\begin{figure*}
\includegraphics{f4}
\caption{Expanded
views of the R$_{c}$-band brightness variation during the low state of
AM Her: (a) April 28, 2005,
(b)--(c) November 8, 2005, and (d) October 5, 2006.}\vspace{+5cm}\label{f4}
\end{figure*}
ROTSE IIId\footnote{The details of the ROTSE III system are described in Akerlof et al. (2003).} was used to follow the long-term variation of the system (Table~\ref{amhertab1}). Fig.~\ref{f3}a shows the long-term light variation of AM Her spread over 205 nights, obtained with ROTSE IIId in the period 2005--2008. During the low state two noticeable brightness variations were detected. Low-state observations of AM Her exhibit rather weak brightness variations with respect to the high and intermediate states.
During the low state the mass transfer from the secondary is thought to decrease or cease.
If the accretion is almost negligible, then the characteristic features of the components can show up in the
observed light curves (Kafka et al. 2005). Therefore, because the secondary is a late-type main-sequence star, we can expect to detect
stellar activity in the low state. Such a brightness variation in AM Her was observed by Shakhovskoy et al. (1993), who reported an approximately 2 mag flare event with a 20 minute duration (see also Bonnet-Bidaud et al. 2000). We detected a similar event of 2.14 mag lasting 1 hour and 20 minutes in 2005 with ROTSE IIId (Fig.~\ref{f3}b). Following this variation, ROTSE IIId detected a 1.37 mag excess lasting about 30 minutes on July 20, 2005 (see Table~\ref{amhertab3}). During AM Her's low state, relatively small-amplitude flaring, with amplitudes of 0.2--0.6 mag and lasting 15--90 min, was reported by Kafka et al. (2005). Likewise, similar variations were detected during the observing runs performed between 2004 and 2006 (Table~\ref{amhertab3}, Fig.~\ref{f4}).
Magnitude excesses with respect to the quiescence level, owing to the possible flare events, are shown in Table~\ref{amhertab3}. From the individual RTT150 light curves we have also determined times of minima, derived by the Kwee--van Woerden method (Kwee \& van Woerden, 1956) and, for the asymmetric minima, by a freehand curve. The times of minima are shown in Table~\ref{tab4}. The errors in the times of minima are of the order of $0^\textrm{d}.0002-0^\textrm{d}.0007$.
\begin{table}
\begin{center}
\scriptsize \caption{Magnitude variations for AM Herculis. HJD$^*$ denotes
HJD$-$2400000 of the event}\label{amhertab3}
\begin{tabular}{llllllll}
\hline
HJD$^*$ & Filter & Phase & Magnitude excess
\\
\hline
53037.5712 &R$_{c}$& 0.1& 0.7 \\
53037.6200 &R$_{c}$& 0.5& 0.71 \\
53194.4846 &R$_{c}$& 0.3& 0.12 \\
53489.5381 &R$_{c}$& 0.8& 0.5\\
53545.8400 &-& 0.52& 2.14\\
53571.8293 &-& 0.01& 1.37\\
53683.2006 &R$_{c}$& 0.8& 0.45\\
53683.2484 &R$_{c}$& 0.2& 0.7\\
54014.2494 &R$_{c}$& 0.56& 0.7\\
\hline
\end{tabular}\\
\end{center}
\end{table}
\section{Period Change Analysis}
Times of minima in AM Her, as well as in other polars, are known to shift (e.g. Bailey et al. 1993). The nature of the observed shift in times is poorly understood. However, if the orbital period of the system is determined accurately and the WD is synchronized with the orbital period, then these shifts are generally attributed to the oscillation of the magnetic pole (Bailey \& Axon, 1981; Bailey et al. 1993). Any variation in the mass accretion rate alters the accretion geometry. In this case, while the location of the pole remains fixed, the position of the spot with respect to the pole changes (Cropper 1989).
On the other hand, the orbital periods of interacting binaries are known to change because of different processes. One of these is the mass transfer between the components. In AM Her systems, the RD component loses mass to the primary WD star. In semidetached binary systems, a period change produces a parabolic variation, either upward or downward, in the diagram of the difference between the observed (O) and calculated (C) times of minima. Thus, analysis of the observed times of minima is important to determine any variation in the orbital period due to mass transfer. Unfortunately, there are almost no studies of the (O--C) variation of other polars in the literature. Previous (O--C) studies of AM Her were performed by Young \& Schneider (1979) and Mazeh et~al. (1986). The study of Young \& Schneider (1979) shows no evidence for any continuous period variation. The latter study by Mazeh et al. (1986) shows a downward curved parabola with $\dot P/{P}=-5\times10^{-14}$\,s$^{-1}$. We collated the additional minima times obtained since then with those obtained in this study (Table~\ref{tab4}) to revise the (O--C) variation. We assigned the same weight to all minima points during the analysis. The starting epoch for the primary minimum was adopted from Szkody \& Brownlee (1977).
The (O--C) diagram of AM Her constructed with an initial light element
can be represented by the relation,
\begin{equation}
\begin{array}{l}
HJD\,\textrm{Min}I=2\,443\,014.7136(2) +0.128927048(2)\times E \\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+1.33(20)\times10^{-12}\times E^2\label{amher1}.
\end{array}
\end{equation}
\begin{figure}
\includegraphics{f5a}\\
\includegraphics{f5b}
\caption{a) Observed--calculated times of minima of AM Her vs. epoch, and b) the residuals of parabolic variation vs. epoch (see text for details). (A color version of this figure is available in the online journal.)}\label{f5}
\end{figure}
The (O--C) variation can be represented either by a parabola or by two broken lines.
First, if the long-term variation is fitted by a parabola, as expected for binaries where mass transfer takes place, the solid curve in Fig.~\ref{f5}a shows the best fit to the (O--C) variations. The resulting quadratic term is then an indication of the effect of mass transfer in AM Her. The mass transfer in polars is from the less massive secondary to the more massive primary WD star. In the conservative case this increases the orbital period, and the resulting (O--C) curve is a parabola with a positive quadratic term. On the other hand, as a result of nonconservative mass transfer and the magnetic activity of the late-type component, mass loss from the system may also occur
in polars. However, the upward-curving parabolic variation in the (O--C) diagram of AM Her indicates that the dominant effect is the mass transfer from the RD to the WD. If this variation is fitted by a parabola, then the corresponding rate of period increase is $dP/dt=7.5(1.2)\times10^{-9}$\,days\,yr$^{-1}$ with a conservative mass transfer rate of $\dot M = ({\dot P}/{3P}) [{M_{1}M_{2}}/({M_1-M_2})]=7.6(2.3)\times10^{-9}\,\rm{M_{\odot}}$\,yr$^{-1}$, for a WD with a mass of 0.88$M_{\odot}$ (Bailey et al. 1988), which agrees with the maximum mass transfer rate given by Hessman et al. (2000), derived under the assumption that the mass-transfer variations are caused by stellar spots. On the other hand, the $P/\dot P$ value is of the same order as that of another recently studied polar (Andronov \& Baklanov, 2007). We can also fit the data with two broken lines, one of which is horizontal while the other indicates an increase in the period. Nevertheless, observations in the coming decades are necessary to clarify the true shape of the (O--C) diagram.
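For completeness, the numbers quoted above follow directly from the quadratic term of Eq.~(\ref{amher1}); the short script below is only an illustrative check (it is not part of the original reduction pipeline) and uses the adopted values $M_1=0.88\,M_{\odot}$ and $q=M_2/M_1=0.31$.
\begin{verbatim}
# Sketch: period derivative and conservative mass-transfer rate from
# the quadratic ephemeris  HJD MinI = T0 + P*E + c2*E^2.
P   = 0.128927048        # orbital period [days]
c2  = 1.33e-12           # quadratic coefficient [days / cycle^2]
M1  = 0.88               # adopted WD mass [Msun]
q   = 0.31               # mass ratio M2/M1
M2  = q * M1

dP_dt = 2.0 * c2 / P * 365.25          # [days/yr], ~7.5e-9
Pdot_over_P = dP_dt / P                # [1/yr]

# Conservative case: Mdot = (Pdot / 3P) * M1*M2 / (M1 - M2)
Mdot = Pdot_over_P / 3.0 * M1 * M2 / (M1 - M2)   # [Msun/yr], ~7.6e-9

print(f"dP/dt = {dP_dt:.2e} d/yr   Mdot = {Mdot:.2e} Msun/yr")
\end{verbatim}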
The accretion luminosity of the WD is $L=GM_{1}\dot M/R_{1}$, where $M_{1}$ is the WD mass, $R_{1}$ is the radius of the WD, and $\dot M$ is the mass accretion rate.
Using the $\dot M$ estimated from the possible parabolic (O--C) variation, we can determine the accretion luminosity to derive the Alfv\'{e}n radius. The Alfv\'{e}n radius is given by (Frank et al. 2002)
\begin{equation}
r_{\mu} = 2.9\times10^8M_1^{1/7}R_{6}^{10/7}L_{37}^{-2/7} B_{12}^{4/7},
\label{amher3}
\end{equation}
where $R_6$ is the radius of the WD in units of $10^6$\,cm, $L_{37}$ its luminosity in units of $10^{37}$ erg s$^{-1}$, and $B_{12}$ is the surface magnetic field strength in units of $10^{12}$ G. The estimated Alfv\'{e}n radius for the adopted value of $M=0.88M_{\odot}$ is $2.02\times10^{10}$\,cm.
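As a rough numerical check (not given in this form in the original analysis), the quoted $r_{\mu}$ can be recovered from Eq.~(\ref{amher3}) once the accretion luminosity is evaluated with the $\dot M$ derived above. The WD radius ($\approx6.6\times10^{8}$ cm for $0.88\,M_{\odot}$) and surface field ($\approx1.4\times10^{7}$ G) used below are typical literature values for AM Her and are assumed here, since they are not quoted explicitly in this section.
\begin{verbatim}
# Sketch: Alfven (magnetospheric) radius from Eq. (2), with assumed
# WD radius and surface field.
G, Msun, yr = 6.674e-8, 1.989e33, 3.156e7     # cgs units

M1   = 0.88 * Msun
R_wd = 6.6e8                    # [cm]  assumed WD radius
B    = 1.4e7                    # [G]   assumed surface field (~14 MG)
Mdot = 7.6e-9 * Msun / yr       # accretion rate from the (O-C) fit [g/s]

L = G * M1 * Mdot / R_wd        # accretion luminosity [erg/s], ~8e34

m1, R6, L37, B12 = 0.88, R_wd / 1e6, L / 1e37, B / 1e12
r_mu = 2.9e8 * m1**(1/7) * R6**(10/7) * L37**(-2/7) * B12**(4/7)
print(f"L = {L:.2e} erg/s   r_mu = {r_mu:.2e} cm")   # ~2e10 cm
\end{verbatim}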
AM Her shows evidence for brightness variations on a timescale of minutes (approximately 4.5 minutes; e.g., Bonnet-Bidaud et al. 1991).
One of the explanations for the origin of the quasiperiodic X-ray variations observed in AM Her was discussed by Tuohy et al. (1981), who presented the oscillation of the magnetic flux tubes, through which matter flows to the stellar surface, as a possible mechanism for these variations. If these oscillations occur at the point where the matter is channeled by the magnetic field, they can lead to quasi-periodic variations in the accretion rate. The timescale for an Alfv\'{e}n wave to cross the magnetosphere is
\begin{equation}
P_{osc}(r) = 2\times10^{-3} r_8^{11/4} L_{34}^{1/2}f_{-2}^{-1/2}M_1^{-3/4}R_{8}^{-2}B_7^{-1} \,\, \textrm{s},
\label{amher4}
\end{equation}
where $r_8$ is the radius, in units of $10^8$ cm, at which the flowing matter is channeled and where the quasiperiodic variations may arise; $f$ is the fraction of the stellar surface over which the accreting matter flows, with $f_{-2}=f/10^{-2}$; $L_{34}$ is the luminosity in units of $10^{34}$ erg s$^{-1}$; $R_8$ is the WD radius in units of $10^8$ cm; and $B_7$ is the surface magnetic field strength in units of $10^7$ G. Adopting $f=9.1\times10^{-3}$ as the characteristic dimensionless size of the flow (Hessman, 2000), we find the corresponding $r$ on the timescales of interest to be $2.03\times10^{10}$ cm. The $r$ estimated for the 4-minute oscillations agrees very well with the estimated Alfv\'{e}n radius, indicating that this model adequately describes the observed brightness variations.
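The quoted channeling radius can be recovered by inverting Eq.~(\ref{amher4}) for $r$ at the observed oscillation period. The sketch below reuses the same assumed WD radius and field as above and the accretion luminosity implied by the (O--C) mass transfer rate, so the result is illustrative only:
\begin{verbatim}
import numpy as np

# Inputs in the units of the oscillation-timescale formula (assumed, see text)
M1, R8, B7 = 0.88, 6.6, 1.4      # M_sun, 10^8 cm, 10^7 G
L34, f_2   = 8.5, 0.91           # 10^34 erg/s, f / 10^-2
P_osc      = 270.0               # observed oscillation period [s]

# P_osc = 2e-3 * r8^(11/4) * sqrt(L34/f_2) * M1^(-3/4) * R8^(-2) * B7^(-1)
prefac = 2e-3 * np.sqrt(L34 / f_2) * M1**(-0.75) * R8**(-2) / B7
r8 = (P_osc / prefac)**(4.0 / 11.0)
print("r = %.2e cm" % (r8 * 1e8))   # ~2e10 cm, close to the Alfven radius
\end{verbatim}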
The Lagrangian radius of the WD is given by $R_{L1}/a=0.5-0.227\,\log q$ (Plavec \& Kratochvil, 1964), with $q$ the mass ratio of the components and $a$ the orbital separation. For $q=0.31$ and $a=7.85\times10^{10}$cm, this yields $R_{L1}=4.8\times10^{10}$cm and $r_{\mu}\approx0.42R_{L1}$. Therefore, as expected, both the magnetospheric radius and the radius at which the matter is channeled toward the WD are smaller than the Lagrangian radius (Ferrario et al. 1989). The results obtained in this study are in agreement with those presented by Bonnet-Bidaud et al. (1991), who argued that their observed 270 s variations correspond to a radius $r_{\mu} \approx 2.1\times10^{10}$cm for AM Her. The flickering timescale depends on $\dot M$ and $f$, since $M_1$, $R$ and $B$ cannot change on timescales of 3--8 minutes. Hence, as Bonnet-Bidaud et al. (1991) reported, any inhomogeneities in the accreting matter can produce these brightness variations on the timescales of interest.
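The Roche geometry numbers can be checked in the same way, using only the $q$, $a$ and Alfv\'{e}n radius quoted above:
\begin{verbatim}
import numpy as np

q, a = 0.31, 7.85e10                         # mass ratio, separation [cm]
R_L1 = a * (0.5 - 0.227 * np.log10(q))       # Plavec & Kratochvil (1964)
r_mu = 2.02e10                               # Alfven radius estimated above [cm]
print("R_L1 = %.2e cm, r_mu/R_L1 = %.2f" % (R_L1, r_mu / R_L1))  # ~4.8e10, ~0.42
\end{verbatim}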
\section{Results and Discussion}
AM Her-type systems show large-amplitude variations over years (Figs.~1--3)
as well as short-term low-amplitude variations.
In this study, five years of observations obtained in both states of the system are presented. Low-state photometry reveals weak orbital modulation but occasional flaring-type variability of the secondary. Large flare events, as well as smaller amplitude flares, are detected.
Three of the nine magnitude-excess events detected fell within the primary minimum and one within the secondary minimum. Durations of these brightness variations range from tens of minutes to an hour.
Flickering with an amplitude of 0.01--0.60 mag occurs on a timescale of at least a few minutes.
We have obtained a total of nine times of minima; using them, we could perform a period analysis of the system. The times of minima in AM Her show a shift, which is generally assumed to be due to obscuration of the post-shock radiation of the main accretion column (Bailey et al. 1993). These changes are thought to be responsible for the observed shift in the (O--C) diagram. On the other hand, AM Her systems consist of a mass-donating RD and a magnetic WD star, and are therefore classified as semidetached binaries. Semi-detached binaries are known to show a parabolic variation in the (O--C) diagram due to the mass transfer between the components. Using the available times of minima, we find, for the first time, evidence for an upward parabola, a general property observed in semi-detached binaries where matter flows from the less massive to the more massive component. Using this variation we derive an orbital period evolution timescale of about 1.7$\times10^7$ yr. In addition, the (O--C) variation can be described with two broken lines.
A total of 30 years of times of minima are collated. This baseline is long enough to reveal long-term variation in the (O--C) diagram. However, it is apparent from Fig.~\ref{f5}a that the upward parabola is not as clear as that seen in binaries with nondegenerate components (e.g., Kalomeni et al. 2007). If the period variation is due to conservative mass transfer, the mass transfer rate between the components is $\dot M = 7.6\times 10^{-9}\,M_{\odot}\,\mathrm{yr}^{-1}$. This mass transfer rate is small in comparison with binaries with non-degenerate components (\emph{ibid}). Using the timescales for gravitational radiation and magnetic braking (Yakut et al., 2008),
we calculate the gravitational radiation and
magnetic braking timescales of AM Her to be about 7 Gyr and 1.3 Gyr, respectively.
The mass accretion timescale of the system is much shorter than these. Thus
the gravitational radiation is not as important as the other mechanisms for AM
Her. On the other hand, if the orbital period of AM Her were half of its present
value then gravitational radiation and magnetic braking would be much more
important. At periods of 1h, 2h, and 8h (e.g., V1309 Ori) the gravitational radiation
timescales are about 0.4 Gyr, 2.4 Gyr, and 96 Gyr, respectively.
In AM Her systems, the binary geometry, period change rate, angular momentum loss mechanisms, etc., can all
change because of the strong magnetic field of the WD and the interaction between the components' magnetic fields
(see also Wickramasinghe \& Wu, 1994; Webbink \& Wickramasinghe, 2002).
\begin{table}
\begin{center}
\scriptsize \caption{The times of minima of
AM Her in HJD* (HJD - 2\,400\,000).}\label{tab4}
\begin{tabular}{llllll}
\hline
HJD$^*$ & Passband & Ref. & HJD$^*$ & Passband & Ref.
\\
\hline
43014.71266 & V & 1& 44133.02540 & V & 4 \\
43014.84127 & V & 1& 45591.32259 & V & 5 \\
43015.74554 & V & 1& 45600.34559 & V & 5 \\
43015.87731 & V & 1 & 46000.2788 & V & 5 \\
43031.72862 & V & 1& 46001.31151 & V & 5 \\
43031.86055 & V & 1 & 46132.55800 & V & 5\\
43032.6336 & I & 2& 51277.5182 & - & 6 \\
43033.661 & I & 2 & 51708.45993 & 980-1180{\AA} &7 \\
43062.5439 & I & 2& 52858.5548 & Rc &8 \\
43062.8024 & V & 1& 52859.5259 & Rc &8\\
43069.6354 & I & 2 & 53193.3160 & Rc &8 \\
43083.5591 & I & 2 & 53193.5168 & Rc &8 \\
43635.88762 &6900-7400{\AA} & 3 & 53195.4500 & Rc &8 \\
43636.78853 & 8200-8700{\AA} & 3 &53450.5989 & Rc &8 \\
43704.73386 & 8000-8350{\AA} & 3 & 53489.5276 & Rc &8 \\
43704.86349 & 8000-8350{\AA} & 3 & 53683.2436 & Rc &8\\
43705.89342 & 7500-7800{\AA} & 3 &54015.2955 & Rc &8 \\
\hline
\end{tabular}
\end{center}
{References for Table ~\ref{tab4}: 1-Szkody \& Brownlee (1977), 2-Olson (1977), 3-Young \& Schneider (1979),
4-Young et al. (1981) based on Bailey \& Axon (1981) observations, 5-Mazeh et al. (1986),
6-Safar \& Zejda (2002), 7-Hutchings et al. 2002 (the average minimum time is used), 8-This study.}
\end{table}
\acknowledgements
We are indebted to E. R. Pek\"unl\"u, C. A. Tout and the anonymous referee for their valuable comments.
We thank J. Eldridge for reading the final version
of the MS, and V. Keskin and \"U. K{\i}z{\i}lo\v{g}lu for their support with the ROTSE data processing and reduction. We acknowledge that in this study we have used data from the AAVSO Database.
This work has been partly supported by T\"UB\.ITAK National Observatory and T\"UB\.ITAK-B\.IDEB.
|
1,314,259,995,654 | arxiv | \section{Introduction}
The $\Lambda$CDM cosmological model does a good job of reproducing the current cosmological observations. In this model, the standard model of particle physics is supplemented by a cosmological constant $\Lambda$ and a dark matter particle. This dark matter particle is assumed to interact purely due to the influence of gravity and to have a negligible (initial) velocity dispersion, thus the name Cold Dark Matter (CDM). In perturbative calculations this is typically modelled as a pressure-less perfect fluid. As a result, many cosmological constraints on the dark matter density are, more correctly, constraints on the density of this pressure-less perfect fluid. More generally, CDM is evolved by solving the collision-less Boltzmann equation. This is done on large scales using cosmological perturbation theory (implemented in Boltzmann codes such as \texttt{class} and \texttt{camb}) and on smaller scales using N-body simulations and other non-linear methods.
Since we are entering the era of so-called ``precision cosmology,'' in which many cosmological parameters have been measured with 1\% accuracy or better, it is timely to consider whether such an idealised and simple dark matter model is sufficient when analysing the data. There are many physical dark matter models that do not yield precisely CDM, for example Warm Dark Matter (WDM) \citep{DodelsonWidrow1994,Armendariz-PiconNeelakanta2014,PiattellaCasariniFabrisEtal2015} or ultra light axions \citep{HuBarkanaGruzinov2000,HlozekGrinMarshEtal2015}, which are one example of Fuzzy Dark Matter (FDM). In addition, recent work on the Effective Field Theory of Large Scale Structure (EFTofLSS) \citep{BaumannNicolisSenatoreEtal2012} shows that even an ideal CDM candidate develops a more complicated energy momentum tensor, even on linear scales, once the non-linearities that inevitably form on small scales back-react on the large scales. This causes an effective pressure and viscosity on large scales. From a non-cosmological perspective, despite a large number of direct and indirect detection experiments for dark matter, no convincing detections have been made, and many theoretically favoured regions of parameter space have been ruled out \citep{Xenon1002012,Xenon1002014,BuckleyCowenProfumo2013,OliveEtal2014,CRESST2015,LUX2015}. Thus, there are strong reasons to go beyond the simplest ways of modelling dark matter.
In \citet{KoppSkordisThomas2016}, the Generalised Dark Matter (GDM) model (first proposed in \citet{Hu1998a}) was examined in some detail, notably how it relates to particular physical models. GDM adds to the CDM energy momentum tensor a background pressure, pressure perturbation and anisotropic stress. Closure relations are then postulated to match qualitative properties of known models, like massive neutrinos, and in order to de-correlate background and perturbative properties. GDM encompasses WDM, FDM and the EFTofLSS effects as well as other physical models, so it is sufficiently versatile for examining dark matter properties in a model independent fashion. In \citet{ThomasKoppSkordis2016}, all GDM parameters were constrained using Cosmic Microwave Background (CMB) data, supported by additional data on the cosmological expansion history (see section 4.3 in this paper and references therein for comparison to earlier works constraining partial or similar parameters to those we consider here, such as \citet{Muller2005, CalabreseMigliaccioPaganoEtal2009,XuChang2013}). The results showed no evidence for any non-CDM properties of dark matter. This was expanded on in \citet{KoppEtal2018}, where an improved freedom was given to one of the GDM parameters; this was used to demonstrate for the first time that there is no cosmological epoch where the data would favour a nonzero equation of state, and furthermore that there is no cosmological epoch where the data is consistent with zero dark matter density, thus showing the strength of the GDM approach to testing the CDM paradigm. An independent group subsequently verified \citep{KunzNesserisSawicki2016} some of the results in \citet{ThomasKoppSkordis2016}, as well as using some late time matter clustering data; we will comment further on this later in the paper. Further work constraining the GDM parameters is now ongoing by other groups, see e.g. \citet{TutusausLamineDupaysEtal2016}.
It was noted in \citet{ThomasKoppSkordis2016} that matter power spectrum data could not only improve the constraints on the GDM parameters, but also has the potential to break a degeneracy between two of them (see section \ref{sec_gdm}). The robust use of such data requires a non-linear extension of the GDM model, which is not present in the literature. It was also noted in \citet{ThomasKoppSkordis2016} that the inclusion of a non-linear extension to perturbation theory for $\Lambda$CDM makes a difference to the CMB lensing potential. This effect is of a similar magnitude, but opposite sign, to that of GDM with parameters saturating the constraints found in \citet{ThomasKoppSkordis2016}. In this paper we develop a halo model for GDM which allows us first to test the robustness of the results in \citet{ThomasKoppSkordis2016}, and second to use matter power spectrum data from the WiggleZ survey \citep{ParkinsonEtal2012} to improve the constraints on the GDM parameters. The paper is laid out as follows: in section \ref{sec_gdm} we briefly review the GDM model and previous constraints, before constructing the GDM halo model in section \ref{sec_halo}. We then present the methodology for our constraints in section \ref{sec_method} and present our resulting constraints and robustness tests in section \ref{sec_results}. We conclude in section \ref{sec_conc}.
\section{Brief review of GDM model and previous constraints}
\label{sec_gdm}
The GDM model was first proposed as an extension to the standard CDM model in \citet{Hu1998a}. Here we give brief details of the model, following \citet{KoppSkordisThomas2016}; see both this work and \citet{Hu1998a} for further details of the model, its motivation and the different physical models that it can encompass.\\
The standard CDM energy momentum tensor is given by
\begin{equation}
T_{\mu \nu}=\rho u_\mu u_\nu \,\text{,}
\end{equation}
i.e. the fluid is specified entirely by its density, $\rho$, and velocity $u^\mu$. This is then typically divided into a background part that is homogeneous and a perturbation. The GDM parameterisation adds pressure and anisotropic stress to this, giving
\begin{equation}
T_{\mu \nu}=(\rho+P) u_\mu u_\nu+Pg_{\mu\nu} +\Sigma_{\mu \nu} \,\text{.}
\end{equation}
The pressure and density perturbations are divided into background quantities (denoted by an overbar) and perturbed quantities as usual, and the additional scalar perturbations of $P$ and $\Sigma_{\mu \nu}$ are controlled by the equation of state $w$ (background pressure), the sound speed $c^2_s$ (pressure perturbation) and the viscosity $c^2_{\text{vis}}$ (anisotropic stress). The equation of state relates the background quantities in the usual way: $w=\bar{P}/\bar{\rho}$, and the additional perturbations are governed by the closure equations
\begin{eqnarray}
\Pi &=& c_a^2 \delta + \left( c_s^2 - c_a^2 \right) \hat{\Delta}\\
\dot{\Sigma} &=& - 3 {\cal{H}}\Sigma+ \frac{4}{1+w} c^2_\text{vis} \hat{\Theta} \text{.}
\end{eqnarray}
Here, $\hat{\Delta}$ and $\hat{\Theta}$ are (a particular choice of) gauge invariant density and velocity perturbations for GDM, and $\Pi$ and $\Sigma$ are the pressure and (scalar) anisotropic stress perturbations. The adiabatic sound speed is $c^2_a=\dot{\bar{P}}/ \dot{\bar{\rho}}=w-\frac{\dot{w}}{3{\cal{H}}(1+w)}$.
Note that overdots refer to conformal time $\eta$. See \citet{KoppSkordisThomas2016} for an in depth explanation of our notation and of this choice of closure equations. Note that $w=c^2_s=c^2_\text{vis}=0$ recovers the pressureless perfect fluid, and therefore the standard $\Lambda$CDM cosmological model.
Both $c^2_s$ and $c^2_\text{vis}$ cause a decay in the gravitational potential power spectrum on scales below $k^{-1}_\text{dec}(\eta)\approx\eta\sqrt{c^2_s+\frac{8}{15}c^2_\text{vis}}$ in a GDM dominated universe \citep{KoppSkordisThomas2016}. In addition, if $c^2_s$ is sufficiently larger than $c^2_\text{vis}$, then it causes oscillations in the density perturbation below the Jeans length. Although we refer to $c^2_s$ as the sound speed, this is technically only true if $c^2_s\gg c^2_\text{vis}$ (see \citet{KoppSkordisThomas2016}). The viscosity $c^2_\text{vis}$ damps the density perturbations without causing any oscillations. As expected, the equation of state $w$ changes the expansion history of the universe for a fixed $\Omega_m$. In particular, in \citet{KoppSkordisThomas2016} it was shown that the main effect is to change the time of matter-radiation equality, and thus to change the relative heights of the peaks in the CMB. In addition, $w$ changes the distance to the last scattering surface.
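To get a feel for the scales involved, the decay scale above can be evaluated directly. A minimal sketch, assuming a conformal age today of roughly $1.4\times10^{4}$ Mpc (an assumed round number, not an output of our analysis):
\begin{verbatim}
import numpy as np

def k_dec(eta, cs2, cvis2):
    """Decay wavenumber: modes with k > k_dec are suppressed,
    k_dec = 1 / (eta * sqrt(cs2 + 8/15 * cvis2))."""
    return 1.0 / (eta * np.sqrt(cs2 + 8.0 / 15.0 * cvis2))

eta0 = 1.4e4                        # conformal time today [Mpc] (assumed)
print(k_dec(eta0, 3e-6, 0.0))       # ~0.04 / Mpc for c_s^2 near the CMB bound
\end{verbatim}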
The aforementioned phenomenology of the parameters is all manifest in \citet{ThomasKoppSkordis2016}. In that work we took simple forms of the GDM parameters, giving them a single value with no time or scale dependence. We then constrained these parameters using Planck CMB data (temperature, polarisation and the lensing potential) \citep{PlanckCollaborationXI2015}, BAO data \citep{BeutlerBlakeCollessEtAl2011,AndersonAubourgBaileyEtal2014} and an $H_0$ prior from the HST key project \citep{RiessMacriCasertanoEtal2011}. We found upper bounds on $c^2_s$ and $c^2_\text{vis}$ of $3.21\times10^{-6}$ and $6.06\times10^{-6}$ respectively (at the 99\% confidence level), in line with the degeneracy expected if $k^{-1}_\text{dec}$ was primarily constrained by the CMB. We also put constraints on $w$ and found degeneracies between $w$ and $H_0,\Omega_m$ due to the effects above. In all cases we found no evidence for a non-zero value of any of these parameters. The first three rows of table \ref{table_results} summarise the constraints obtained in previous work. The main conclusions of \citet{ThomasKoppSkordis2016} were independently verified in \citet{KunzNesserisSawicki2016}. Furthermore, the assumption of a single time independent value for $w$ was relaxed in \citet{KoppEtal2018}, which showed that a non-vanishing dark matter background density is required at every epoch, thus demonstrating the power of the GDM formalism for constraining extensions to $\Lambda$CDM.
In \citet{ThomasKoppSkordis2016} it was noted that the phenomenology of the GDM model suggests that the use of late time clustering data, such as the matter power spectrum, could significantly improve the constraints on $c^2_s$ and $c^2_\text{vis}$. In principle, if the data is precise enough to determine oscillations in the decaying region, then such data could also break the $k_\text{dec}$ degeneracy between $c^2_s$ and $c^2_\text{vis}$. However, the use of matter power spectrum data requires going beyond the scales where perturbation theory is valid. The GDM model as described in this section is only defined perturbatively and thus must be extended in order to be applicable on smaller scales. One of the main results of this paper is the development of a halo model extension to the GDM model (see section \ref{sec_halo}): As well as seeking to improve the constraints on GDM using matter power spectrum data, we also seek to understand how \textit{safe} this process is, i.e. how robust it is to non-linear modelling. In \citet{KunzNesserisSawicki2016}, the authors used late time matter clustering data (weak lensing data) which probes the same underlying potential power spectrum as the matter power spectrum does. However, they did not consider non-linear modelling of GDM in that work; instead they used \textit{halofit} \citep{SmithEtal2003} as a sanity check, whilst noting themselves that \textit{halofit} has limitations when applied outside of a $\Lambda$CDM context. Our goal is thus not just to improve the constraints on the GDM parameters using matter power spectrum data, but also to be able to quantify how much we can trust any such results.
A further goal is to investigate the robustness of the constraints obtained in \citet{ThomasKoppSkordis2016}. More precisely, it was noted that even in $\Lambda$CDM, using \textit{halofit} makes a small difference to the lensing potential and thus to the lensed temperature and polarisation $C_\ell$s. We thus wish to determine whether inclusion of a non-linear prescription for GDM would strengthen or weaken the constraints previously obtained. Nonlinearities typically act to increase the matter power spectrum, which is the opposite effect to that caused by increasing GDM parameters, see figure \ref{fig_lensingphi}. Hence we expect that the constraints on $c^2_s$ and $c^2_\text{vis}$ could be weakened once nonlinearities are included.
Since we are focussing on the complexities introduced by the non-linearities and additional datasets, we will work with single, constant values of the GDM parameters as in \citet{ThomasKoppSkordis2016}. In particular, the assumption of time independence means that $c^2_a=w$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./plots/michi_lensingphi}\\
\includegraphics[width=\columnwidth]{./plots/michi_lensed_tt_ratio}
\caption{Effect of the GDM parameters and non-linear prescriptions on the $\Lambda$CDM lensing. The upper panel shows the lensing potential and the lower panel shows the fractional change to the lensed temperature spectrum ($C^{TT}_\ell/C^{TT, \Lambda \mathrm{CDM}\text{ linear}}_\ell-1$). In both panels, the black curve shows the linear $\Lambda$CDM spectrum, orange (dotted) is $\Lambda$CDM with \textit{halofit}, red is $\Lambda$CDM with our halo model, blue (dashed) is linear GDM ($c^2_s=0.000003$) and green is GDM ($c^2_s=0.000003$) with our halo model. The data points with errors in the upper panel correspond to the Planck data. There are several important points to note here. Firstly, the halo model has a significantly smaller effect on GDM than on $\Lambda$CDM. Secondly, the effects of GDM and of the non-linear prescriptions on the $\Lambda$CDM spectrum are opposite (and of a similar order of magnitude for the lensed temperature spectrum). We also note that there are some small differences between our halo model and \textit{halofit} (as is expected for the halo model \citep{SmithEtal2003}).
}
\label{fig_lensingphi}
\end{figure}
\section{GDM halo model}
\label{sec_halo}
As stated in section \ref{sec_gdm}, the GDM model is only defined for linear perturbations and the homogeneous background, and thus cannot be constrained by all of the currently available cosmological data. One framework that has been used to make predictions on non-linear scales is the halo model \citep{CooraySheth2002}. The halo model is a semi-analytic method for computing the matter power spectrum on non-linear scales that works from the premise that the matter is organised into halos $\rho_m(\mathbf{x}) = \sum_i \rho_{\rm halo}(\mathbf{x} - \mathbf{x}_i)$ and that averaging over all of the halos gives the mean matter density in the universe $\langle \rho_m(\mathbf{x}) \rangle = \int dM M \frac{dn}{dM} = \bar{\rho}_m$, where $\frac{dn}{dM} $ is the halo mass function. The two point correlation of the matter field will thus depend on the halo density profile and also the halo mass function. For the former we will use the empirical Navarro-Frenk-White (NFW) profile and for the latter excursion set theory to predict the so-called multiplicity function $f(\sigma)$ and relate it to the halo mass function through $\frac{dn}{d \ln \sigma^{-1}}=\frac{\bar{\rho}}{M}f(\sigma)$.
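For readers who prefer code to integrals, the structure of the halo-model prediction can be summarised schematically as the usual one-halo plus two-halo sum. The sketch below is generic; \texttt{dndM}, \texttt{bias} and \texttt{u} are hypothetical stand-ins for the mass function, halo bias and Fourier-space density profile defined in appendix \ref{sec_LCDMhalo}:
\begin{verbatim}
import numpy as np

def halo_model_pk(k, P_lin, M, dndM, bias, u, rho_bar):
    """Schematic 1-halo + 2-halo matter power spectrum at a single k.
    M, dndM, bias: arrays over halo mass; u(k, M): profile transform."""
    # 1-halo term: Poisson contribution of individual halos
    p1h = np.trapz(dndM * (M / rho_bar)**2 * u(k, M)**2, M)
    # 2-halo term: linear spectrum weighted by the bias-averaged profile
    I = np.trapz(dndM * (M / rho_bar) * bias * u(k, M), M)
    p2h = I**2 * P_lin
    return p1h + p2h
\end{verbatim}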
For more details we refer to appendix \ref{sec_LCDMhalo}, where we present the $\Lambda$CDM halo model \citep{Seljak2000}. This also serves to introduce our notation and perspective, as these can vary between presentations of the halo model. We also present our mass function in this appendix. There is another commonly used non-linear correction for $\Lambda$CDM: \textit{halofit} \citep{SmithEtal2003}, which is an extension of the halo model that is calibrated against N-body simulations. As was already noted in \citet{KunzNesserisSawicki2016}, it does not make sense to use \textit{halofit} for GDM, as GDM is not part of the cosmologies that have been used to calibrate it. Furthermore, the numerical implementation in Boltzmann codes (e.g. \texttt{class}) simply crashes for values of GDM parameters where the linear power spectrum falls off too quickly. See figures \ref{fig_lensingphi} and \ref{fig_spectra} for the differences between \textit{halofit} and the halo model as implemented in this paper for a $\Lambda$CDM cosmology; it is known that \textit{halofit} and the halo model differ for $\Lambda$CDM \citep{SmithEtal2003}. This difference is largest, up to 15\%, in the interval $0.1 <k [h/\mathrm{Mpc}] < 1$, visible in figure \ref{fig_spectra} by comparing the red and dotted orange lines. However, for $k< 0.1\, h/\mathrm{Mpc}$, which is the range relevant for our applications, the agreement between our halo model and \textit{halofit} is better than 2\%, which is sufficient.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./plots/michi_spectra}
\caption{Matter power spectra for different cosmologies and non-linear prescriptions. Black is linear $\Lambda$CDM, orange (dotted) is $\Lambda$CDM with \textit{halofit}, red is $\Lambda$CDM with our halo model, blue (dashed) is linear GDM ($\Lambda$CDM with $c^2_s=0.000003$) and green is GDM ($\Lambda$CDM with $c^2_s=0.000003$) with our halo model. For GDM, the non-linear effects do not become significant until smaller scales than for $\Lambda$CDM, but the difference between the GDM linear and non-linear spectra increases much more sharply. The difference between the cosmologies is dominated by the linear theory decay in GDM. Also note the small difference between \textit{halofit} and the halo model for $\Lambda$CDM.}
\label{fig_spectra}
\end{figure}
Examining the standard $\Lambda$CDM halo model, we can see that there are several obstacles to simply applying the framework ``as is'' to GDM. Firstly, due to the large drop in power on small scales when $c^2_s$ and $c^2_\text{vis}$ are non-zero, the equation defining $M_\star$ does not necessarily have a solution. In addition, many of the formulas that are used are calibrated against $\Lambda$CDM N-body simulations. We thus implement the halo model for GDM using a similar approach to the \texttt{Warm and Fuzzy} code \citep{Marsh2016}, which is designed for warm dark matter and axions (note that both of these can fit into the GDM parameterisation \citep{KoppSkordisThomas2016}). In this approach, four modifications are made relative to a $\Lambda$CDM cosmology: A) a modified linear spectrum, B) a modified halo concentration, C) a mass dependent barrier related to the critical density for spherical collapse, and D) a modified mass function to account for the non-Markovian corrections to the standard Press-Schechter expression. We implement these corrections differently to the \texttt{Warm and Fuzzy} code; we detail our implementation here. Readers not interested in the construction and definition of our halo model can skip to section \ref{sec_method}.
\subsection{Modified linear spectrum}
The simplest and most obvious modification is that the appropriate $\Lambda$-GDM linear theory power spectrum is used as an input for the halo model, rather than a $\Lambda$CDM spectrum. While this is done via fitting functions for the transfer functions in \texttt{Warm and Fuzzy}, we instead directly use the output from the full Boltzmann calculation from the modified \texttt{class} code.
\subsection{Modified concentration}
Following \citet{Marsh2016}, which itself follows \citet{SchneiderSmithMaccioEtal2012} for WDM, we calculate the $\Lambda$CDM value of the concentration. We then apply a correction according to
\begin{equation}
c_\text{GDM}=c_\text{$\Lambda$CDM}\left(1+\gamma_1\frac{M_{1/2}}{M} \right)^{-\gamma_2}\text{,}
\end{equation}
where $\gamma_1=15$, $\gamma_2=0.3$ and $M_{1/2}$ is the half-mode mass defined by
\begin{eqnarray} \label{halfmodemass}
M_{1/2}&=&\frac{4\pi \bar{\rho}}{3}\left( \frac{\pi}{k_\text{1/2}}\right)^3 \\
\sqrt{\frac{P_\text{GDM}(k_\text{1/2})}{P_\text{$\Lambda$CDM}(k_\text{1/2})}}&=&0.5 \text{.}
\end{eqnarray}
Note that this functional form and the specific values were found to give good fits to FDM and WDM simulations. As GDM contains these two models as limiting cases, we adopt this ``as is'' for GDM. The results are not sensitive to this choice. Furthermore, in the absence of a non-perturbatively defined GDM model and of cosmological GDM simulations, this is the best and most conservative choice we can make, as it is known to work in two limiting cases of the GDM model.
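A minimal sketch of how the half-mode mass and the concentration correction can be evaluated from tabulated linear spectra (the function names, and the assumption that the transfer-function ratio decreases monotonically with $k$, are ours and for illustration only):
\begin{verbatim}
import numpy as np

def half_mode_mass(k, P_gdm, P_lcdm, rho_bar):
    """M_1/2 from the scale where the GDM transfer function is half of LCDM."""
    ratio = np.sqrt(P_gdm / P_lcdm)
    # assumes ratio decreases monotonically with k, hence the reversal
    k_half = np.interp(0.5, ratio[::-1], k[::-1])
    return (4.0 * np.pi / 3.0) * rho_bar * (np.pi / k_half)**3

def c_gdm(c_lcdm, M, M_half, gamma1=15.0, gamma2=0.3):
    """Concentration suppression calibrated on WDM/FDM simulations."""
    return c_lcdm * (1.0 + gamma1 * M_half / M)**(-gamma2)
\end{verbatim}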
\subsection{Mass dependent spherical collapse density $\delta_\text{crit}$ and how it relates to the mass function}
\label{sec_gdmdeltacrit}
The central object in the excursion set theory is the so-called multiplicity function $f(\sigma)$, determined by the first upcrossing rate of the smoothed random field $\delta(x, R)$ through a barrier as a function of the smoothing scale $R$. These crossings are then identified with proto-halos of size $R$, and thus fixed mass $M$, so that $f(\sigma)$ determines the mass function $dn/dM$. In its most rudimentary form that mass function is mostly sensitive to the ratio
\begin{equation}
\label{eq_timedepdcrit}
\frac{\delta_c(z, z_{\rm ini}, R)}{\sigma_R(z_{\rm ini})} \,,
\end{equation}
where $\sigma_R(z_{\rm ini})$ is the standard deviation, Eq.\,\eqref{sigma}, of the (non-relativistic and participating in structure formation) matter perturbations smoothed on a scale $R$, and $\delta_c(z, z_{\rm ini}, R)$ is the spherical collapse barrier. The redshift $z_{\rm ini}$ is a time where linear perturbation theory applies to all scales $R$ of interest and thus the statistics of the density field is gaussian, but well after radiation domination (i.e. such that a spherical collapse threshold can be obtained neglecting the radiation component).
It is important to note that the $z$-dependence in $\delta_c$ is determined by non-perturbative fluid dynamics; it is not a density field, or some linear extrapolation of it. Rather, $\delta_c$ assigns a collapse redshift $z$ to each region (assumed for simplicity to be a spherically symmetric tophat profile) of the gaussian density field filtered at scale $R$ at time $z_{\rm ini}$; this collapse redshift $z$ is then identified with the formation time of the halo of mass $M(R)$. This assignment involves some linear dynamics (at times not much later than $z_{\rm ini}$) when the field is still linear, but more importantly it includes the fully non-perturbative collapse that defines the collapse redshift at the time $z$ when formally the density contrast $\delta(z) \rightarrow \infty$. This formal divergence is then associated with the time of halo formation, which is thus approximated as an instantaneous event.
\subsubsection{Standard $\Lambda$CDM linear extrapolation}
Typically, the mass function is not written in terms of equation \eqref{eq_timedepdcrit} but instead in terms of
\begin{equation}
\frac{\delta_\text{crit}}{\sigma_R(z)} \,,
\end{equation}
where
\begin{equation} \label{eq_deltacritLCDM}
\delta _{\rm crit} := \delta_c(z, z_{\rm ini}, R) \frac{D(z)}{D(z_{\rm ini})} \text{.}
\end{equation}
This ``linear extrapolation'' from $z_{\rm ini}$ to $z$ is possible, because in a purely CDM and $\Lambda$ dominated universe the growth $D(z)$ is scale independent. Thus,
\begin{eqnarray}
\label{eq_strongtimedep}
&&\frac{\delta^{\Lambda \rm CDM} _c(z, z_{\rm ini}, R)}{\sigma_R(z_{\rm ini})}
= \frac{\delta^{\Lambda \rm CDM} _c(z, z_{\rm ini}, R) D(z)/D(z_{\rm ini})}{\sigma_R(z_{\rm ini}) D(z)/D(z_{\rm ini})} \nonumber\\
&&= \frac{\delta^{\Lambda \rm CDM} _c(z, z_{\rm ini}, R) D(z)/D(z_{\rm ini})}{\sigma_R(z)} = \frac{\delta^{\Lambda \rm CDM} _{\rm crit}}{\sigma_R(z)}\,.
\end{eqnarray}
Writing things in this way in $\Lambda$CDM is convenient because it happens (see \citet{Weinberg2008CosmologyBook}, Chapter 8.2) that the linearly extrapolated $\delta_c$, $\delta_{\rm crit}$, is a constant $ \delta_{\rm crit}^{\rm EdS}=\frac{3}{20} (12 \pi)^{2/3} \simeq 1.686$ in an EdS universe, and is only very mildly dependent on $z$ (but still independent of $R$) in a $\Lambda$CDM dominated universe. More precisely
\begin{align}
\delta_{\rm crit}^{\Lambda \rm CDM} &\simeq \frac{3}{20} (12 \pi)^{2/3}(1+0.012299 \log_{10}(\Omega_m(z))) \label{eq_defDeltacritLCDM}\,\text{.}
\end{align}
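For reference, these two threshold values are trivial to evaluate numerically (a sketch; $\Omega_m(z)$ is simply an input number here):
\begin{verbatim}
import numpy as np

def delta_crit_lcdm(omega_m_z):
    """Linearly extrapolated spherical-collapse threshold; the EdS value
    3/20 * (12*pi)^(2/3) ~= 1.686 is recovered for omega_m_z = 1."""
    return 0.15 * (12.0 * np.pi)**(2.0/3.0) * (1.0 + 0.012299 * np.log10(omega_m_z))

print(delta_crit_lcdm(1.0))    # ~1.686 (EdS)
print(delta_crit_lcdm(0.31))   # only slightly smaller for Omega_m(z=0) ~ 0.31
\end{verbatim}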
This is the reason why it is common in the literature to use this linearly extrapolated $\delta_c$ (denoted here by $\delta_{\rm crit}$) and the variance $\sigma_R(z)$ of a fictitious linearly extrapolated density field, even though there is no physical interpretation for such an extrapolation. In order to more clearly separate linear from non-linear physics, and to make the least amount of guessing to arrive at our GDM mass function, we avoid here using the linearly extrapolated $\delta_c$, $\delta_{\rm crit}$, and let the excursion set theory unfold at $z_{\rm ini}$. { In other words, we are explicitly separating the purely linear effects of the GDM parameters on structure formation due to the initial density field in which halos begin to form (i.e. the accumulated changes to $P(k, z_{\rm ini})$ from times $z>z_{\rm ini}$), from the expected changes during the non-linear stages of collapse ($z<z_{\rm ini}$). These latter effects are incorporated in the changes to the spherical collapse density $\delta^{\Lambda \rm GDM}_c(z, z_{\rm ini}, R)$ in the next section (\ref{sec_gdm_deltacrit}).}
In order to proceed this way we rewrite the multiplicity function $f(\sigma)$, defined in Eq.\,\eqref{massfunc}, appearing in the mass function (see appendix \ref{sec_LCDMhalo}) using \eqref{eq_strongtimedep}. The relevant term appearing in the mass function is $\bar{B}/\sigma(z)$, see equation \eqref{asphericalBarierAchizcollapse}. Inspired by equations \eqref{eq_timedepdcrit} and \eqref{eq_strongtimedep} we multiply numerator and denominator by $\sigma(z_\text{ini})$, and define
\begin{eqnarray}
\bar{B}'=\frac{\sigma(z_\text{ini})}{\sigma(z)}\bar{B}=\delta^{\Lambda \rm CDM}_c(z, z_{\rm ini}, R)+\beta\,\sigma(z_\text{ini})\,\sigma(z)\nonumber\\
=\delta^{\Lambda \rm CDM}_c(z, z_{\rm ini}, R)+\beta' \sigma^2(z_\text{ini})\text{,}
\end{eqnarray}
where $\beta'=\beta \sigma(z)/\sigma(z_\text{ini})$, the barrier is now written in terms of the ``strongly time-dependent'' spherical collapse barrier $\delta_c(z, z_{\rm ini}, R)$ and $z_\text{ini}$, and in $f(\sigma)$, $\bar{B}/\sigma(z)$ is replaced with $\bar{B}'/\sigma(z_\text{ini})$, making it manifest that the excursion set theory is applied to the random field at $z_{\rm ini}$. The Markovian part $f_0$ (see later) of the multiplicity function $f$ of the mass function is thus given by \eqref{MarkovianAchizcollapse}, replacing $\sigma(z)$ with $\sigma(z_\text{ini})$ and $\bar{B}$ with $\bar{B}'$,
\begin{align} \label{MarkovianAchizini}
f_{0}(\sigma(z_{\rm ini}),z)&=\frac{\bar B'- \sigma^2(z_{\rm ini}) d \bar B'/ d\sigma^2(z_{\rm ini})}{\sigma(z_{\rm ini})}\sqrt{\frac{2 a_b}{\pi}}e^{-\frac{a_b}{2} \left(\frac{\bar{B}'}{\sigma(z_{\rm ini})}\right)^2} \,\text{.}
\end{align}
At this point, this mass function is generic and is not derived in the context of any particular extension to $\Lambda$CDM cosmology.
To consider how this mass function relates to the standard CDM case, note that if the $d\bar{B}'/d\sigma^2$ term is ignored then this mass function reduces exactly to equation \eqref{MarkovianAchizcollapse}, just without the assumption of scale independent growth. This re-writing thus makes it clearer how scale dependent growth should manifest in the formalism. For the case of scale independent growth, the new derivative term $d\bar{B}'/d\sigma^2$ in \eqref{MarkovianAchizini} reduces to the previous form
\begin{equation}
\label{eqn_barrier}
\bar{B}'-\sigma^2_R(z_\text{ini})\frac{d\bar{B}'}{d\sigma^2_R(z_\text{ini})}=\delta^{\Lambda \rm CDM}_c = \delta^{\Lambda \rm CDM}_\text{crit}\frac{\sigma_R(z_\text{ini})}{\sigma_R(z)}\text{,}
\end{equation}
thus the mass function reduces exactly to the standard form for the $\Lambda$CDM case.
\subsubsection{GDM approach}
\label{sec_gdm_deltacrit}
We will now postulate the spherical collapse barrier $\delta^{\Lambda \rm GDM}_c$ for GDM by reversing the direction of definition in equation \eqref{eq_strongtimedep}. While in $\Lambda$CDM, $\delta_{\rm crit}^{\Lambda \rm CDM}$ is defined via \eqref{eq_strongtimedep}, leading to \eqref{eq_deltacritLCDM}, we now assume for $\Lambda$GDM the validity of equation \eqref{eq_strongtimedep}
\begin{equation}
\label{eq_strongtimedep_DefGDM}
\delta^{\Lambda \rm GDM}_c(z, z_{\rm ini}, R)
:= \delta_{\rm crit}^{\Lambda \rm CDM} \frac{\sigma_R(z_{\rm ini})}{\sigma_R(z)}\,
\end{equation}
but use it to define $\delta^{\Lambda \rm GDM}_c(z, z_{\rm ini}, R)$ while fixing $\delta_{\rm crit}^{\Lambda \rm CDM}$ on the right hand side to be given by \eqref{eq_defDeltacritLCDM}.
In the following we will drop the superscript $\Lambda$GDM and simply write $\delta_c(z, z_{\rm ini}, R)$ for the GDM spherical collapse barrier. The idea behind equation \eqref{eq_strongtimedep_DefGDM} is that this definition implements the intuitive idea that if in GDM power is removed in a scale dependent way at $z<z_{\rm ini}$ then the collapse should be inhibited compared to CDM, and thus the spherical collapse barrier should be increased. It is at this point unclear how to judge whether the threshold defined in this way is correct, given the absence of spherical collapse simulations within a (yet to be) non-linearly defined GDM model. However, in addition to increasing the threshold whenever power is removed in GDM compared to CDM, this definition smoothly and naturally reduces to the $\Lambda$CDM prescription in the CDM limit of GDM, and can thus be considered conservative for small GDM parameters.
The mean barrier for collapse at $z$ in $\Lambda$CDM can be approximated by (and we postulate this to hold in GDM too)
\begin{equation} \label{GDMmeanBarrier}
\bar B'(z , z_{\rm ini}, R) =\delta_c(z , z_{\rm ini}, R) + \tilde \beta(z) \sigma^2_R(z_{\rm ini})
\end{equation}
where $\tilde \beta(z)$ is a purely time dependent function. The multiplicity function \eqref{MarkovianAchizini} takes into account that the mean barrier $\bar B$ for a randomly selected point deviates from the spherical collapse barrier $\delta_{\rm c}$ because collapse is not spherically symmetric, and also that the barrier is diffusive, rather than 100\% absorbing. The former is parameterized by $\beta$, the latter by $a_b$. We saw above that a consistent choice in the context of scale dependent growth is $\beta'(z)=\beta\sigma(z)/\sigma(z_\text{ini})$. A conservative choice for the GDM mass function thus is
\begin{align}
\tilde \beta &= \beta \frac{\tilde D(z)}{\tilde D(z_{\rm ini})} \\
\tilde D(z) &\equiv \sigma_{R_{\rm max}}(z) \,,
\end{align}
where $R_{\rm max}$ is the largest smoothing scale used in the halo model code, and for all reasonably small values of GDM parameters, $\tilde{D}$ reduces to the growth function $D$. We remove the scale dependence from $\tilde {\beta}$ for two reasons. Firstly, it is not clear what it would mean to include scale dependence here; it is not done in any version of the excursion set that we know of. In addition, we wish our halo model to act like a standard $\Lambda$CDM halo model in the limit that the GDM parameters are zero. As we are working directly from a Boltzmann code and not performing the standard linear extrapolation to redshift zero, the radiation component of the universe will cause a scale dependent growth \textit{even in} $\Lambda$CDM. Thus we explicitly remove the scale dependence from this term and use a growth factor that is defined to be scale-independent even in GDM.
In order to calculate the mass function for GDM (or any other cosmology with scale-dependent growth), we need to evaluate the $d\bar{B}'/d\sigma^2$ term. The result is
\begin{equation}
\frac{d \delta_c}{d\sigma^2(z_\text{ini})} = \frac{\delta_c}{2\sigma^2_{R}(z_{\rm ini})}\left(1- \frac{ \sigma^2_{R}(z_{\rm ini})}{ \sigma^2_{R}(z)} \frac{ d\sigma^2_{R}(z)/dR}{ d\sigma^2_{R}(z_{\rm ini})/dR}\right) \text{.}
\end{equation}
Thus, the moving barrier term is given by
\begin{equation}
\bar{B}'-\sigma^2(z_\text{ini})\frac{d\bar{B}'}{d\sigma^2(z_\text{ini})}= \delta_c \frac{1}{2}\left(1+\frac{ \sigma^2_{R}(z_{\rm ini})}{ \sigma^2_{R}(z)} \frac{ d\sigma^2_{R}(z)/dR}{ d\sigma^2_{R}(z_{\rm ini})/dR}\right) \text{,}
\end{equation}
which reduces to $\delta_c$ in the case where the initial and final $\sigma_R$ have the same shape. This is often taken to be exactly true in a $\Lambda$CDM universe. More correctly, it depends on the value of $z_{\rm ini}$: for example, if $z_{\rm ini} =50$, there is still enough radiation ``contamination'' left to modify the shape of the matter transfer function, such that the ratio $\frac{\sigma_R(z_{\rm ini})}{\sigma_R(z)}$ is not $R$ independent. The resulting modification of the mass function is however less than $1\%$ for $\Lambda$CDM and can be neglected, see figure \ref{fig_sigmaratio}. However, for GDM, where in general growth is scale dependent even during matter domination, we do not expect the second term in the bracket to be close to 1.
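The moving-barrier combination above is straightforward to evaluate from $\sigma^2_R$ tabulated at the two redshifts. A minimal sketch, assuming $\sigma^2_R$ is supplied on a grid in $R$ (the function name and interface are ours):
\begin{verbatim}
import numpy as np

def barrier_term(R, sig2_ini, sig2_z, delta_crit_lcdm=1.686):
    """B' - sigma^2 dB'/dsigma^2 for scale-dependent growth.
    R: grid of smoothing scales; sig2_ini, sig2_z: sigma_R^2 at z_ini and z."""
    delta_c = delta_crit_lcdm * np.sqrt(sig2_ini / sig2_z)  # GDM collapse barrier
    dsig2_z = np.gradient(sig2_z, R)
    dsig2_ini = np.gradient(sig2_ini, R)
    # reduces to delta_c when the two sigma_R^2 curves have the same shape
    return 0.5 * delta_c * (1.0 + (sig2_ini / sig2_z) * dsig2_z / dsig2_ini)
\end{verbatim}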
We will assume that GDM does not change the values of $\beta$ and $a_b$, which amounts to the assumption that collapse inhibition from asphericity and the scatter of the barrier due to environmental and stochastic processes are unchanged. In principle, these quantities could be measured in FDM and WDM simulations. We are not aware of any such measurements and so using the $\Lambda$CDM values seems to be the most sensible approach.
\subsection{Non-Markovian corrections and asphericity of collapse}
So far we only looked at the mass and time dependent spherical part of the collapse barrier. Now we turn to the mass and time dependent contributions to the barrier due to the asphericity of the collapse as well as the non-Markovian corrections to the mass function. For GDM, the Markovian part of the mass function, \eqref{MarkovianAchizcollapse} is replaced by \eqref{MarkovianAchizini}.
The non-Markovian corrections will be implemented in a similar fashion as done in \citet{KoppApplebyAchitouvEtal2013}. There a mass dependent spherical collapse barrier was obtained due to a modification of gravity that left the background cosmology unchanged. It was shown that the non-Markovian corrections could be included through a simple relation $f(\sigma) = f_0(\sigma) f^{\rm \Lambda CDM}(\sigma)/f_0^{\rm \Lambda CDM}(\sigma)$, where $f^{\rm \Lambda CDM}(\sigma)$ includes the known and calculable non-Markovian corrections in $\Lambda$CDM.
In our case we cannot use $\Lambda$CDM as the reference, because the background might be different in GDM due to $w$.
For that reason we will define another non-Markovian reference mass function $f^{\rm ref}$, using a mass-independent spherical collapse barrier closer to $\delta_{\rm c}^{\rm GDM}$.
We choose that reference mass-independent spherical collapse barrier to be $\delta_{\rm c, max}=\delta_{\rm c}^{\rm GDM}(M_{\rm max})$, i.e. the GDM spherical collapse barrier evaluated at the largest mass scale $M_{\rm max}$ used in the halo model code. The reference mass function thus corresponds to a fictitious GDM model in which the spherical collapse barrier is mass independent. The reason for choosing $M_{\rm max}$ is that this barrier will be similar to that of a GDM model with $c_s^2=c_{\rm vis}^2=0$ (since in the limit $k\rightarrow 0$ the effects of $c_s^2$ and $c_{\rm vis}^2$ disappear). This way we do not need to run \texttt{class} twice for each model. The reference multiplicity function is given by
\begin{equation}
f^{\rm ref}(\sigma)=f^{\rm ref}_0(\sigma)+f_{1,\tilde \beta=0}^{m-m}(\sigma)+
f_{1,\tilde \beta^{(1)}}^{m-m}(\sigma)+f_{1,\tilde \beta^{(2)}}^{m-m}(\sigma)\,,\label{ftot}
\end{equation}
where
\begin{align*}
f_0^{\rm ref}(\sigma) &=\frac{\delta_{\rm c, max}}{\sigma}\sqrt{\frac{a_b}{2\pi}}e^{-\frac{a_b}{2} \left(\frac{\delta_{\rm c, max} + \tilde \beta \sigma^2}{\sigma}\right)^2} \\
f_{1,\tilde \beta=0}^{m-m}(\sigma)&=-\kappa a_b\dfrac{\delta_{\rm c, max}}{\sigma}\sqrt{\frac{2a_b}{\pi}}\left[\exp\left[-\frac{a_b \delta_{\rm c, max}^2}{2\sigma^2}\right]-\frac{1}{2} \Gamma\left(0,\frac{a_b\delta_{\rm c, max}^2}{2\sigma^2}\right)\right]\\
f_{1,\tilde \beta^{(1)}}^{m-m}(\sigma)&=- a_b\,\delta_{\rm c, max}\,\tilde \beta\left[\kappa a_b\,\text{Erfc}\left( \delta_{\rm c, max}\sqrt{\frac{a_b}{2\sigma^2}}\right)+ f_{1,\tilde \beta=0}^{m-m}(\sigma)\right]\\
f_{1,\tilde \beta^{(2)}}^{m-m}(\sigma)&=-a_b\,\tilde \beta\left[\frac{\tilde \beta}{2} \sigma^2 f_{1,\tilde \beta=0}^{m-m}(\sigma)+\delta_{\rm c, max} \,f_{1,\tilde \beta^{(1)}}^{m-m}(\sigma)\right] \\
\delta_{\rm c, max} & \equiv \delta_c(z,z_{\rm ini},R_{\rm max}) \\
\kappa &= 0.465 \\
a_b &= 0.7143 \\
\beta & = 0.12\,.
\end{align*}
We will also include a further correction to the mass function that has been observed to fit mass functions measured in warm dark matter simulations \citep{SchneiderSmithMaccioEtal2012,Marsh2016}. The origin of that correction is likely to also be non-Markovian in nature, and it arises due to the absence of power below the scale $k^{-1}_{\rm dec}$.
If the density field does not perform a random walk as function of $R$ it can happen that the mass function suffers a cutoff, see \citet{ParanjapeLamSheth2012}.
If the power is sharply dropping for scales $R < k^{-1}_{\rm dec}$, then the density field at a fixed point no longer performs a random walk for varying $R$ for $R < k^{-1}_{\rm dec}$.
Thus we expect the mass function to be more non-Markovian for those small scales, implying a cutoff determined by mass scale related to $k^{-1}_{\rm dec}$.
We follow the fit of \citet{SchneiderSmithMaccioEtal2012}, which works well for WDM.
The final expression for the multiplicity function entering \eqref{massfunc} then is
\begin{align}
\label{eqn_fullgdmmultiplicity}
f^{\rm GDM} =\left(1+ \frac{M_{1/2}}{M} \right)^{-0.6}\, \frac{f^{\rm ref}}{f_0^{\rm ref}}\,f_0^{\rm GDM}
\end{align}
where $M_{1/2}$, the half mode mass \eqref{halfmodemass}, is used instead of $M(k^{-1}_{\rm dec})$, see \citet{SchneiderSmithMaccioEtal2012}.
To summarize: the first two factors take into account non-Markovian effects. The first accounts for the fact that the random walk below $k^{-1}_{\rm dec}$ is highly non-Markovian, the second for the standard non-Markovian corrections for a diffusive barrier of the form const$_1$+const$_2\,\sigma^2$. The last factor is the Markovian mass function for the moving diffusive barrier \eqref{GDMmeanBarrier}. As part of the halo model, this provides a non-linear prescription for computing the GDM matter power spectrum and comprises one of the main results of this paper. When the GDM parameters are zero, the halo model reduces to a $\Lambda$CDM halo model, as expected.
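Schematically, the assembly of \eqref{eqn_fullgdmmultiplicity} from the pieces defined above can be written as follows (a sketch only; the inputs are assumed to be precomputed arrays over mass, and the function names are ours):
\begin{verbatim}
import numpy as np

def f0_markovian(sigma_ini, barrier, dbarrier_dsig2, a_b=0.7143):
    """Markovian multiplicity function for a moving diffusive barrier,
    evaluated with the variance at z_ini."""
    return (barrier - sigma_ini**2 * dbarrier_dsig2) / sigma_ini \
        * np.sqrt(2.0 * a_b / np.pi) \
        * np.exp(-0.5 * a_b * (barrier / sigma_ini)**2)

def f_gdm(M, M_half, f0_gdm, f_ref, f0_ref):
    """Final GDM multiplicity: small-mass cutoff times the non-Markovian
    correction ratio times the Markovian GDM piece."""
    return (1.0 + M_half / M)**(-0.6) * (f_ref / f0_ref) * f0_gdm
\end{verbatim}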
We make one final modification to the $\Lambda$CDM prescription in appendix \ref{sec_LCDMhalo}: In order to apply the compensation for the 1-halo term (see appendix \ref{sec_LCDMhalo}) in GDM, we need to define a scale independent growth for GDM. In the code, we do so by replacing $\sigma_8(z)$ with $\sigma_8(z=0)\frac{\tilde{D}(z)}{\tilde{D}(z=0)}$, where $\tilde{D}(z)$ is defined to be the growth on the scale $R_{\rm max}$, which corresponds to the largest mass value computed by the code inside the halo model routine. This is chosen to be consistent with the definition of $\tilde{\beta}$.
{ In figure \ref{fig_hmf_z0}, we show the full halo mass function described here, for $\Lambda$CDM and also for several choices of constant GDM parameters.}
\subsection{$\Lambda$CDM reference model for GDM halo model}
In order to implement the halo model as described above, we need a reference $\Lambda$CDM cosmology when \texttt{class} is run for GDM (this is used to calculate the half mode mass; see equation \ref{halfmodemass}). For this, we use the Eisenstein-Hu fitting formula, for a cosmology with the same $\Omega_m$, $\Omega_b$, $\Omega_\Lambda$, $n_s$ and $H_0$ as the GDM cosmology. The two key references for the fitting formulas are \citet{EisensteinHu1998} and \citet{EisensteinHu1997}. The first of these includes the effect of baryon oscillation but not neutrinos, whereas the second takes an average over the oscillations but includes the damping effects of massive neutrinos. We implement the former of these, as used for \texttt{HMCODE} \citep{MeadPeacockHeymansEtal2015} to calculate the $\Lambda$CDM spectrum at the $k$- and $z$- values required in \texttt{class}, which are then stored in an array. This spectrum is normalised to have the same value as the \texttt{class} GDM spectrum at $k_\text{ref}$, as this scale should be above the scales that are affected by GDM.
The $\Lambda$CDM growth function from \citet{LahavLiljePrimackEtal1991} is used as part of the Eisenstein-Hu formulas,
\footnotesize
\begin{eqnarray}
D(z)&=&\frac{1+z_\text{eq}}{1+z}\frac{5}{2}\Omega_m(z)\left(\Omega_m(z)^{4/7}-\Omega_\Lambda(z)\right.\nonumber\\
&&\left.+\left(1+\frac{\Omega_m(z)}{2} \right)\left(1+\frac{\Omega_\Lambda}{70} \right) \right)^{-1}\\
\Omega_m(z)&=&\frac{\Omega_m(1+z)^3}{\Omega_m(1+z)^3+\Omega_\Lambda}\\
\Omega_\Lambda(z)&=&\frac{\Omega_\Lambda}{\Omega_m(1+z)^3+\Omega_\Lambda} \text{.}
\end{eqnarray}
\normalsize
Here, $z_\text{eq}$ is a parameter from the Eisenstein-Hu formulas and is included for completeness, however note that it is irrelevant once the growth factor is normalised to unity today. The reference wavenumber $k_\text{ref}$ is set to be the largest wavenumber in the table used by \texttt{class} that is less than $0.002$, and $z_\text{ref}$ is set to be the smallest redshift in the table used by \texttt{class} that is greater than 50.
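A small sketch of this growth factor (the overall $(1+z_{\rm eq})$ factor is dropped since $D$ is later normalised to unity today; evaluating the $\Omega_\Lambda/70$ term at redshift $z$ is an assumption on our part):
\begin{verbatim}
import numpy as np

def growth_unnorm(z, om0, ol0):
    """Lahav et al. (1991) approximation to D(z), up to an overall constant."""
    e2 = om0 * (1.0 + z)**3 + ol0
    om_z = om0 * (1.0 + z)**3 / e2
    ol_z = ol0 / e2
    return 2.5 * om_z / (1.0 + z) / (
        om_z**(4.0/7.0) - ol_z + (1.0 + 0.5 * om_z) * (1.0 + ol_z / 70.0))

def growth(z, om0=0.31, ol0=0.69):
    """Growth factor normalised to unity at z = 0."""
    return growth_unnorm(z, om0, ol0) / growth_unnorm(0.0, om0, ol0)

print(growth(50.0))   # ~0.025 for a Planck-like cosmology (illustrative values)
\end{verbatim}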
\subsection{Comparison to WDM and FDM halo models}
\subsubsection{Theoretical comparison}
In \citet{BarkanaHaimanOstriker2001} the cutoff of the mass function at small masses for WDM was achieved by an additional mass dependence of the barrier (see also \citet{Marsh2016,BensonEtal2013}). This mass dependence of $\delta_c$ (a steep increase for small masses) was argued to be caused by the velocity dispersion, however it is unlikely that this is the true physical mechanism that suppresses the mass function below $M_{1/2}$ since WDM simulations have shown that the velocity dispersion is irrelevant for the large scale structure and the mass function \citep{SchneiderSmithReed2013,Vieletal2012}. This disparity was explained in \citet{SchneiderSmithReed2013} (p. 4 last paragraph before section 3) by splitting the effects of the velocity dispersion into two distinct time periods: the accumulated effect from times $z>z_{\rm ini}$, (which manifests in the usual linear theory matter power spectrum cutoff), and the late time velocity dispersion (as should be present but turns out to be negligible in N-body simulations).
In our GDM halo model, we have allowed for the possibility of both a cut-off of the matter power spectrum due to accumulated effects from $z>z_{\rm ini}$, \textit{and} a steepening of the barrier due to effects related to times $z<z_{\rm ini}$. Physically, $\delta^{\Lambda \rm GDM}_c(z , z_{\rm ini}, R)$, equation \eqref{eq_strongtimedep_DefGDM}, takes into account pressure and viscous effects that hinder collapse at $z<z_{\rm ini}$, whereas $\sigma^2_R(z_{\rm ini})$ is the integrated effect due to $z>z_{\rm ini}$. If $z_{\rm ini}$ is chosen during matter domination (as in our halo model) then only the latter (integrated) effect matters for WDM, and the cutoff in the WDM mass function must originate independently of late time velocity dispersion effects on $\delta_c$, since it is observed in simulations without any added velocity dispersion as in \citet{SchneiderSmithMaccioEtal2012}. Thus the correct implementation of the mass function cut-off\footnote{In \citet{SchneiderSmithMaccioEtal2012} it was argued that this mass function cutoff is due to a non-hierarchical structure formation for masses below $M_{1/2}$ that is in conflict with the excursion set picture. However, it might be possible to show that this effect is the same as the strongly non-Markovian random walk, responsible for a mass function cut-off in \citet{ParanjapeLamSheth2012}, such that the cut-off can be understood within the excursion set theory.} due to non-Markovian behaviour caused by the linear theory power spectrum cutoff is not via the steep increase of the barrier for small masses when $z_{\rm ini}$ is chosen during matter domination. Instead it manifests through the phenomenological prefactor that is present in equation \eqref{eqn_fullgdmmultiplicity}, which depends on $M_{1/2}$.
{
Furthermore, when the WDM halo model is expressed in terms of our halo model, and $z_{\rm ini}$ is deep in the matter dominated era (as our $z_{\rm ini}=50$) then
$$\delta^{\rm \Lambda WDM}_c(z , z_{\rm ini},R) \simeq \delta^{\rm \Lambda CDM}_c(z , z_{\rm ini})\,,$$ i.e. the spherical collapse threshold reduces to the scale-independent $\Lambda$CDM spherical collapse threshold. This is a special (approximate) property of all GDM models in which the GDM parameters grow strongly with redshift, like the WDM scaling $(1+z)^2$ of pressure and viscosity, such that the late universe dynamics is guaranteed to be more CDM-like compared to early times. In more general GDM models, and in particular for constant GDM parameters, we do not expect this approximation to hold, which is why we allow our halo model to have both a cut-off due to integrated earlier time behaviour and a later time change of the dynamics modelled by a modified barrier.
}
Note that the mass dependence of $\delta_c^{\Lambda \rm GDM}$ we have introduced in section \ref{sec_gdmdeltacrit} for GDM, equation \eqref{eq_strongtimedep_DefGDM}, is due to the evolution of the shape of the linear power spectrum after $z_{\rm ini}$. There is (approximately) no shape change for WDM and thus our mass function is very similar to the one used in \citet{SchneiderSmithMaccioEtal2012}. The difference is that we allow for evolution of the shape of the power spectrum after $z_{\rm ini}$, which is expected for constant GDM parameters, and thus we have mass dependent $\delta_c^{\Lambda \rm GDM}$, which then requires using a mass function that can deal with mass dependent barriers.
\subsubsection{Qualitative numerical comparison}
In principle, we can compare our non-linear matter power spectrum to the spectra in the literature (e.g. \citet{SmithMarkovic2011}, \citet{Vieletal2012}, \citet{SchneiderSmithMaccioEtal2012} and \citet{Marsh2016}). However, we note that we focus our work here on constant values of the GDM parameters, whereas WDM and FDM correspond to time and scale dependent parameters. Thus the impact of the non-linearities will be different, although we can nonetheless perform a qualitative comparison of how the halo model affects the predictions of our GDM model compared to how it affects the predictions for the specific cases of WDM and FDM.
For all models, we see the same qualitative behaviour that the non-linear corrections increase the matter power spectrum and reduce the differences between $\Lambda$CDM and the modified matter content compared to the linear theory. However, there are differences in detail. For example, consider WDM with a mass of 0.25keV, where meaningful changes to the linear spectrum compared to $\Lambda$CDM begin to occur on scales of $k\geq1h \text{Mpc}^{-1}$, and these changes begin on even smaller scales as the mass increases. For FDM, in line with \citep{Marsh2016}, the changes are on even smaller scales, similar to WDM with mass 1keV. Whereas, at $k\geq1h \text{Mpc}^{-1}$, our GDM models consistent with Planck constraints differ from $\Lambda$CDM by two orders of magnitude in linear theory. This means that the GDM linear spectrum differs from $\Lambda$CDM on scales larger than those where the non-linear corrections matter for $\Lambda$CDM, whereas these two scales are swapped for the WDM and FDM models studied in the literature. This is because we have time-independent GDM parameters, whereas WDM would correspond to having them decay as $a^{-2}$, which causes their effects to appear only on small scales where nonlinearities are important. We expect that, once we allow for time dependent GDM parameters, the halo model will make a bigger difference relative to the linear spectrum, due to this reduced suppression in the linear theory at late times.
\section{Description of datasets and methodology}
\label{sec_method}
In this section we explain the data and methodology that were used to generate our results. We used the \texttt{class} code \citep{BlasLesgourguesTram2011}, modified as detailed in \citet{ThomasKoppSkordis2016,KoppSkordisThomas2016}, to evolve the GDM perturbation equations. We have added a module implementing the halo model as described in the previous section.
Our parameter constraints were obtained using the same basic methodology as in \citet{ThomasKoppSkordis2016}, see there for further details. We used the MCMC code MontePython \citep{AudrenLesgourguesBenabedetal2013} and established convergence of the chains using the Gelman-Rubin criterion~\citep{GelmanRubin1992}. We constrain a 6 parameter $\Lambda$CDM model $\{\omega_b, \omega_g, H_0,n_s, \tau, \ln 10^{10} A_s\}$, where $\omega_g$ is the density of the dark matter fluid, which is CDM in the $\Lambda$CDM case and GDM otherwise. We set uniform priors on $\tau$ and $H_0$ such that $0.01<\tau$. The helium fraction was set to $Y_{\rm He}=0.24667$ \citep{PlanckCollaborationXIII2015} and we assumed adiabatic initial conditions. We used two massless and one massive neutrino with mass $0.06$ eV keeping the effective number of neutrinos to $N_\mathrm{eff} = 3.046$ (thus for simplicity we refer to ``neutrino mass'' during the analysis, although this is equivalent to the sum of the neutrino masses for this choice of parameters). The base parameter set is augmented by 3 GDM parameters $\{w,c^2_s,c^2_{\text{vis}} \}$ for the GDM runs, and additionally also the neutrino mass $m_\nu$ (for the single massive neutrino species) for some runs.
We perform runs for $\Lambda$CDM that are purely linear, linear+halofit and linear+halo model (where the halo model is as documented in the previous section, which is why it is important that our GDM halo model reduces to $\Lambda$CDM for vanishing GDM parameters). The \textit{halofit} \citep{SmithEtal2003,TakahashiEtal2012} runs are performed using the \textit{halofit} model built into \texttt{class}. For GDM, we perform purely linear runs and linear+halo model runs; the latter are referred to as ``HM'' in the results table.
Our primary dataset is the Planck 2015 data release \citep{PlanckCollaborationXI2015} of the CMB anisotropy power spectra, consisting of the low-$l$ T/E/B likelihood and the TT/TE/EE high-$l$ likelihood with the full ``not-lite'' set of nuisance parameters.\footnote{For full details, see the Planck papers and wiki http://wiki.cosmos.esa.int/planckpla2015/index.php/.} These likelihoods combined are referred to as Planck Power Spectra (PPS). We also use BAO\footnote{In appendix \ref{sec_continuityappendix} we examine a possible subtlety with the use of BAO data, which would also be relevant if GDM were constrained using redshift space distortion data.} measurements from the 6dF Galaxy Survey~\citep{BeutlerBlakeCollessEtAl2011} and the Baryon Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey~\citep{AndersonAubourgBaileyEtal2014} (collectively referred to as BAO hereafter), and the Planck CMB lensing likelihood (Lens).
The key additional dataset that we use here is the WiggleZ matter power spectrum \citep{ParkinsonEtal2012} (referred to as MPS in the results table). This includes galaxy power spectrum measurements at four redshifts, $z=\{0.22, 0.41, 0.60, 0.78 \}$. We follow the procedure laid out in \citet{ParkinsonEtal2012} for the likelihood from this data, as implemented in MontePython, with the following exceptions. We do not use the WiggleZ GiggleZ non-linear prescription, as this is not valid for GDM. Instead, we use both \textit{halofit} and our halo model for $\Lambda$CDM runs, as detailed above. For $\Lambda$GDM, we perform purely linear runs and runs using our halo model. Note that the WiggleZ likelihood processes the input theory spectrum, including convolving with the window function and other transformations. In particular, an analytic marginalisation over the linear bias is performed, see \citet{ParkinsonEtal2012} for details. We also consider two different subsets of the whole WiggleZ data: a conservative cut, where we use the provided $k$-bands up to $k=0.1 h$Mpc$^{-1}$, and a less-conservative cut using the complete data up to $k=0.3 h$Mpc$^{-1}$, the latter of which was used by the WiggleZ collaboration for their $\Lambda$CDM results.
\section{Constraints}
\label{sec_results}
We divide our constraints into two groupings: those without MPS data that focus on examining the robustness of previous constraints, and those using MPS data that aim to improve the constraints on GDM. The main results from these two groupings are that the previously obtained constraints are indeed robust and that the MPS data improves the constraints on $c^2_s$ and $c^2_\text{vis}$ by a factor of three. The constraints from the different runs can be found in table \ref{table_results}.
\subsection{Robustness of previous results}
\label{sec_robust}
Our first result follows from looking at table \ref{table_results}. Here we show our previous constraints on the GDM parameters using the data combinations PPS and PPS+Lens, both with and without the inclusion of the halo model (HM). It is clear that the constraints are essentially independent of the halo-model correction. This is because the halo model implemented as described above only has a small effect in the GDM matter power spectrum relative to the purely linear theory for the scales relevant to the upper limits of the constraints (see figures \ref{fig_lensingphi} and \ref{fig_spectra}). This is partly due to the choice of constant GDM parameters; as explained above we expect that the impact relative to the linear spectrum would be more important if we chose time dependent forms for the parameters, e.g. an $a^{-2}$ time dependence, or general binned functions. The relative effect of our halo model on the linear spectrum is decreased as the $c^2_s$ and $c^2_\text{vis}$ parameters are increased, as can be seen by comparing the GDM and $\Lambda$CDM halo model curves in figure \ref{fig_spectra}. This is caused by the strong effects on the linear spectrum from the $k_{\text{dec}}$ phenomenology, which dominate over any changes to the matter power spectrum due to the non-linear effects. Thus, we expect that any constraints on $c^2_s$ and $c^2_\text{vis}$ obtained from the CMB temperature, polarisation and lensing spectra on these scales are actually more robust to potential non-linear complications than the standard $\Lambda$CDM parameters.
\begin{table*}
\caption{Constraints on the GDM parameters for the two types of models and different combinations of experiments, for the $95\%$ and $99\%$ credible regions.}
\centering
\begin{mytabular}[1.8]{|l||cc|cc|cc||}
\hline
\hline
Likelihood \hfill Model & \multicolumn{2}{|c|}{$10^2w$} & \multicolumn{2}{|c|}{$10^6c_s^2$ (upper bound)} & \multicolumn{2}{|c||}{$10^6c^2_\text{vis}$ (upper bound)} \\
(PPS+...) \hfill ($\Lambda$-GDM+...) & $95\%$ & $99\%$ & $95\%$ & $99\%$ & $95\%$ & $99\%$ \\
\hline
& $-0.040^{+0.473}_{-0.468}$ & $-0.040^{+0.700}_{-0.701}$ & $ 3.31$ & $ 6.31$ & $ 5.70$ & $ 11.3$ \\
+ Lens & $0.066^{+0.434}_{-0.427} $ & $0.066^{+0.654}_{-0.642}$ & $ 1.92$ & $ 3.44$ & $ 3.27$ & $ 5.99$ \\
+ Lens + BAO & $0.074^{+0.111}_{-0.110} $ & $0.074^{+0.164}_{-0.163}$ & $ 1.91$ & $ 3.21$ & $ 3.30$ & $ 6.06$ \\
\hline
\hfill + HM & $-0.029^{+0.477}_{-0.481}$ & $-0.029^{+0.716}_{-0.690}$ & $ 3.11$ & $ 5.39$ & $ 5.62$ & $ 11.1$ \\
+ Lens \hfill + HM & $-0.087^{+0.448}_{-0.461}$ & $-0.087^{+0.668}_{-0.649}$ & $ 1.92$ & $ 3.83$ & $ 3.13$ & $ 5.79$ \\
\hline
+ Lens + BAO \hfill $+\ m_\nu$ & $0.101^{+0.159}_{-0.143}$ & $0.101^{+0.248}_{-0.201}$ & $ 1.90$ & $ 3.54$ & $ 2.86$ & $ 4.82$ \\
\hline
+ Lens + BAO + MPS ($k<0.1h \text{Mpc}^{-1}$)& $0.040^{+0.109}_{-0.108}$ & $0.040^{+0.164}_{-0.157}$ & $ 0.667$ & $ 1.21$ & $ 1.10$ & $ 1.91$ \\
+ Lens + BAO + MPS ($k<0.1h \text{Mpc}^{-1}$)+ HM & $0.045^{+0.106}_{-0.109}$ & $0.045^{+0.161}_{-0.161}$ & $ 0.633$ & $ 1.11$ & $ 0.953$ & $ 1.83$ \\
\hline
+ Lens + BAO + MPS ($k<0.3h \text{Mpc}^{-1}$)& $0.035^{+0.112}_{-0.112}$ & $0.035^{+0.175}_{-0.168}$ & $ 0.0616$ & $ 0.103$ & $ 0.0958$ & $ 0.16 $ \\
+ Lens + BAO + MPS ($k<0.3h \text{Mpc}^{-1}$)+ HM & $0.046^{+0.113}_{-0.111}$ & $0.046^{+0.169}_{-0.163}$ & $ 0.201$ & $ 0.254$ & $ 0.333$ & $ 0.428$ \\
\hline
\hline
\end{mytabular}
\label{table_results}
\end{table*}
Also in table \ref{table_results} we show the constraints for the data combination PPS+Lens+BAO, when the neutrino mass is either fixed or varied ($m_\nu$). The differences between the posteriors with and without the inclusion of the neutrino mass can be seen in figure \ref{fig_mnudegen}. The perturbative GDM parameters $c^2_s$ and $c^2_\text{vis}$ are affected little by the inclusion of the neutrino mass; indeed, the constraints improve very slightly, whereas the inclusion of the neutrino mass does noticeably worsen the constraints on the equation of state $w$. These effects are both caused by the degeneracies between the neutrino mass and the GDM parameters, which can also be seen in figure \ref{fig_mnudegen}; we shall now explore these in more detail.
The neutrino mass correlates with $c_s^2$ and $c_{\rm vis}^2$ in the same way that they are correlated with each other: the neutrino velocity dispersion is $c_\nu^2 = 2.78 \times 10^{-7}\, a^{-2} \left(1 \mathrm{eV}/m_\nu\right)^2$ \citep{LesgourguesPastor2006}, which causes a reduction in the lensing potential just like $c_s^2$ and $c_{\rm vis}^2$. This is not surprising since massive neutrinos can be described by a GDM fluid \citep{BlasLesgourguesTram2011}. This similarity between the effects of the neutrino mass and these GDM parameters can be seen by comparing the second and third panels of the left plot in figure \ref{fig_neumass}, showing the ratio of the spectra to $\Lambda$CDM with and without the lensing contribution. This is also shown in the lower plots of figure \ref{fig_neumass}, showing that all of these parameters result in a substantial reduction of the CMB lensing potential spectrum. The right plots of figure \ref{fig_mnudegen} show the 3D posterior of $c_s^2$, $c^2_{\rm vis}$ and $\sum m_\nu$. The lower of the two plots is colour coded according to the probability density, which peaks in the $\Lambda$CDM corner, as expected due to the lack of a detection of GDM parameters and neutrino mass. The upper of the two insets shows the 90\% confidence level contour in orange, which is well modelled by a surface of constant $c_s^2+0.6 c_{\rm vis}^2 + 3.9\times 10^{-6} \sum m_{\nu}[\mathrm{eV}]$. This is in rough agreement with the expression for $c_\nu^2$, as expected if the degeneracy is indeed due to the reduction of the lensing potential described here.
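As a rough numerical illustration of the magnitudes involved (evaluating the fitting formula quoted above at $a=1$; illustrative only):
\begin{verbatim}
def c_nu_squared(m_nu_eV, a=1.0):
    # Neutrino velocity dispersion from the fitting formula quoted above.
    return 2.78e-7 * a**(-2) * (1.0 / m_nu_eV)**2

print(c_nu_squared(0.06))   # ~7.7e-5 for the minimal mass used in our runs
print(c_nu_squared(0.35))   # ~2.3e-6, comparable to the c_s^2 upper bounds
\end{verbatim}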
Note that the geometry of this situation is slightly non-trivial: we are dealing with a ``corner'' of a multi-dimensional parameter space where the parameters are all required by physics to be non-negative. As noted in \citet{HeavensSellentin2018}, marginalising over parameters in such situations can cause some subtle effects, and this may be the source of the slight improvement to the constraints on $c_s^2$ and $c_{\rm vis}^2$ when the neutrino mass is included and marginalised over.
Figure \ref{fig_mnudegen} also shows a clear degeneracy between $m_\nu$ and the equation of state $w$, which is due to the ability to generate cosmologies with identical $\theta$ but different $\omega_g$ when $w\neq0$. This can be seen by comparing the different panels in figure \ref{fig_neumass}, which show the CMB temperature power spectrum for different sets of parameters. Comparing the increased-neutrino-mass cosmology (red dashed line) in the third and fourth panels of the left plot, it can be seen that for fixed $\theta$ (the angular scale of the acoustic oscillations) and $\omega_c$ the main effect of the increased neutrino mass on a $\Lambda$CDM cosmology (aside from the lensing effect discussed above) is a reduction in the ISW effect and a tilt at higher $l$, see the ``no-lensing'' panel. The reduction of the ISW effect (both early and late) is caused by the increased abundance of non-relativistic matter (compared to radiation and the cosmological constant) when the neutrino mass is increased. One cannot simply compensate for this with a change to $\omega_c$, because this would adversely affect the expansion history at early times. However, when the parameter $w$ is introduced, it is possible to vary $\omega_g$, whilst also changing $w$ (and $H_0$) to keep the expansion history approximately fixed. This allows a GDM cosmology with increased neutrino mass to have the same ISW effect and high-$l$ tilt of $C^{TT}_l$ as a $\Lambda$CDM cosmology with lower neutrino mass, as can be seen by comparing the red (dashed) and blue (short-dashed) lines in the third panel on the left in figure \ref{fig_neumass}. This ability of $w$ and $\omega_g$ to counteract these two effects of increasing $m_\nu$ drives the degeneracies between $m_\nu$, $w$ and $\omega_g$.
Note that the degeneracies between $m_\nu$ and the GDM parameters mean that if tighter constraints are put on the neutrino mass from other experiments, then using these results as a prior on CMB analysis could further improve the constraints on the GDM parameters.
\begin{figure*}
\centering
\includegraphics[width=6.in]{./plots/larger_triangle_GDM+nu-PPS+lens+BAO.pdf}\hspace{-7.cm} \raisebox{8.9cm}{\includegraphics[width=3.3in]{./plots/cs2cv2mnucontoursplot}} \hspace{-5.9cm}\raisebox{4.8cm}{\includegraphics[width=2.6in]{./plots/cs2cv2mnudensityplot}}
\caption{Posteriors for the neutrino mass and GDM parameters (plus other parameters of interest), where the red contours are for fixed neutrino mass and the blue contours are for when it is allowed to vary as an MCMC parameter. The 2D contours correspond to the $68\%$ and $95\%$ confidence levels. Changes to the 1D posteriors on the GDM parameters are visible when the neutrino mass is included as a parameter. This is due to the degeneracies that can be seen in the 2D posteriors: the neutrino mass is correlated with $w$ due to their similar impacts on the expansion history, and with the sound speeds due to their similar impacts on CMB lensing. The right panels show the 3D posterior for $m_\nu, c_s^2$ and $c_\text{vis}^2$, which is peaked close to the origin and decreases further from this point, showing the expected degeneracy between all three parameters. For this posterior, the surfaces of constant confidence level are approximately planes, and are an extension of the $k_\text{dec}$ phenomenology found in \citet{ThomasKoppSkordis2016}, see section \ref{sec_robust} for details.
}
\label{fig_mnudegen}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=4.0in]{./plots/CompTTClcs2wmnu.pdf}\begin{minipage}[b]{3.1in}
\includegraphics[width=2.9in]{./plots/CompMPScsmnuCls} \\
\vspace{1.0cm}
\\
\\
\includegraphics[width=3.in]{./plots/CompPhiPhicsmnuCls}
\end{minipage}
\caption{
On the left, we show the effect of GDM parameters and neutrino mass on the temperature power spectrum (left panel), the matter power spectrum (upper right panel) and the lensing potential (lower right panel). In all cases $\theta$ is kept fixed, so that the peak position remains at the same $l$ value. The three lower panels on the left show the ratio of the different spectra to the $\Lambda$CDM spectrum, where the lensing contribution has been removed from the second of these, and both the lensing and ISW contributions have been removed from the lowest panel. The full (green) and long-dashed (red) lines change the model by either turning on $c_s^2$ or increasing the neutrino mass from 0.06 eV to 0.35 eV, while keeping the DM abundance $\omega_g = \omega^{\rm \Lambda CDM}_c$ fixed. Comparing the ratio of these models with the fiducial $\Lambda$CDM model in the second and third panels in the left plot shows why $c_s^2$ and $\sum m_\nu$ are degenerate: they reduce the amount of CMB lensing in a similar fashion. This is also clear by looking at the lower right plot displaying the lensing potential spectrum. The short-dashed (blue) line maintains the increased neutrino mass and reduces the DM abundance, while at the same time increasing the DM equation of state from 0 to $w=0.002$. This shows why $w$ and $\sum m_\nu$ are degenerate: adjusting $w$ can make the expansion history of the massive neutrino cosmology more similar to the fiducial $\Lambda$CDM model, which can be seen by comparing the long-dashed (red) line with the short-dashed (blue) line in the third and fourth panels. The thinner versions of these lines in the plots indicate the models that have been calculated using the halo model. The change of $C^{\phi \phi}_l$ for these models compared to $\Lambda$CDM is a direct consequence of the changes they cause in $P(k)$, see the upper right plot.
}
\label{fig_neumass}
\end{figure*}
\subsection{Use of MPS data}
\subsubsection{Conservative cut - No detection of GDM}
In the lower half of table \ref{table_results} we show the constraints obtained when including the WiggleZ matter power spectrum data (MPS), for two ranges of wavenumbers: $k<0.1h \text{Mpc}^{-1}$ and $k<0.3h \text{Mpc}^{-1}$. We discuss the former (more conservative) of these first. Note that for $\Lambda$CDM, the WiggleZ team found that the linear theory appears to give a better fit than \textit{halofit} (see figure 3 in \citet{ParkinsonEtal2012}), and we obtain the same result both for \textit{halofit} and the halo model, although the halo model has a smaller effect on the large scales than \textit{halofit}. We will focus the discussion on the perturbative parameters $c^2_s$ and $c^2_\text{vis}$, because the inclusion of the WiggleZ data has little effect on the constraints on $w$. This is because the effect of $w$ for the scales under consideration is primarily a small change to the amplitude, with little change to the shape of the spectrum, see \citet{KoppSkordisThomas2016}. Thus, the effect of $w$ will be almost entirely removed by the marginalisation over the bias in the WiggleZ likelihood, although note that future matter power spectrum data could constrain $w$ due to its effect on the peak location.
The inclusion of the matter power spectrum with the conservative cut has a strong effect on the perturbative parameters $c^2_s$ and $c^2_\text{vis}$, improving the constraints by a factor of three, see table \ref{table_results}. This can also be seen in figure \ref{fig_cs2andcv2HM}, where the 1D posteriors narrow considerably when the WiggleZ data is included. As this figure shows, this improvement is the only significant change to the posteriors due to the extra data. The green contours in this figure are from the linear implementation of GDM; however, we note that the changes due to the extra data are approximately independent of whether the linear or halo model implementation of GDM is used. This can be seen both from the constraints in table \ref{table_results} and in the right panel of figure \ref{fig_cs2andcv2HM} by comparing the green (linear) and black dotted (halo model) contours. This is important because it implies that the constraints using the conservative cut come from physics that is well understood, and are not sensitive to detailed considerations of the non-linear regime.
\begin{figure*}
\centering
\includegraphics[width=6in]{./plots/triangle_GDM-PPS+Lens+MPS_GDM-PPS+Lens+BAO.pdf}\hspace{-5.2cm}\raisebox{8cm}{\includegraphics[width=2.5in]{./plots/cs2andcv2HM}}
\caption{Posteriors of GDM and some cosmological parameters when WiggleZ data is used. The 2D contours correspond to the $68\%$ and $95\%$ confidence levels. The triangle compares our previous constraints \citep{ThomasKoppSkordis2016} (red) to those obtained when we include MPS data with a conservative cut $k<0.1h/$Mpc (green; these contours are for the linear implementation of GDM). The primary effect is a tightening of the 1D posteriors for $c_s^2$ and viscosity $c_{\rm vis}^2$.
The right plot shows a more detailed comparison of constraints on these two parameters for the conservative and less-conservative cuts, for both linear and halo model modelling of the GDM matter power spectrum. Here, the green (filled) contours and black (dotted line) contours show the constraints obtained for the conservative cut, for linear and halo model GDM respectively. These contours show that including quasi-linear scales $k<0.1 h/$Mpc is robust: the constraints are not sensitive to the inclusion of the halo model. The two smaller sets of contours show the constraints for the less-conservative cut ($k<0.3 h/$Mpc): the blue contours are for linear modelling of GDM and the yellow contours are for the constraints obtained with the halo model. The GDM parameters are now more strongly constrained for both sets of less-conservative contours; however, the halo model shows a clear preference for $\Lambda$GDM over $\Lambda$CDM ($c_s^2=c_{\rm vis}^2=0$), see the yellow contours, while there is no such preference if we use the linear theory to fit the data (the blue contours). This is an indication that we currently cannot robustly constrain GDM parameters using these smaller scales and that more work needs to be done. The white dashed lines indicate the direction of constant $c_s^2 + 0.6 c_{\rm vis}^2$ following the $k_\text{dec}$ phenomenology and the direction perpendicular to this, which is the most strongly constrained direction.
}
\label{fig_cs2andcv2HM}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./combine_plots/combine_gdmoldvsnew_0pt3_a.png}\\
\includegraphics[width=\columnwidth]{./combine_plots/combine_gdmvslcdm_halomodel_0pt3_a.png}
\caption{Comparison of GDM and $\Lambda$CDM theoretical spectra to WiggleZ data for the lowest redshift bin; In both plots, the red points show the data.
\textit{Upper:} GDM curves with $c^2_\text{vis}=0$ and non-zero $c^2_s$ (note that the plot for $c^2_s=0$ and non-zero $c^2_\text{vis}$ is essentially identical, since the degeneracy between these parameters has not been broken). The (dashed) blue and green curves correspond to the linear and halo-model predictions respectively, for parameters corresponding to our previous (linear) constraints in \citet{ThomasKoppSkordis2016} ($c^2_s=0.000003$). The (solid) orange and yellow curves correspond to the linear and halo-model predictions respectively, with GDM parameters corresponding to the improved constraints in this paper when the MPS data is used with the conservative cut ($c^2_s=0.000001$). It can be seen that the previous constraints have some tension with the WiggleZ data, and thus that its inclusion improves the constraints on the GDM parameters by requiring smaller values of the parameters to reduce the tension. For all parameter choices here, the difference between the linear and non-linear spectra is small.
\textit{Lower:} Theoretical spectra constructed using the best fit parameters from the MCMC runs with the less-conservative cut for the WiggleZ data. The spectra correspond to linear $\Lambda$CDM (black dot-dashed), $\Lambda$CDM with the halo model (orange dotted), linear GDM (blue dashed) and GDM with the halo model (green solid). The linear GDM best fit parameters are $c^2_s=6.284\times10^{-10}$ and $c^2_\text{vis}=2.55\times10^{-8}$ and the GDM plus halo model best fit parameters are $c^2_s=1.6\times10^{-7}$ and $c^2_\text{vis}=4.3\times10^{-8}$. Similarly to WiggleZ \citep{ParkinsonEtal2012}, we get a better fit for linear $\Lambda$CDM than when the halo model is included. The best fit model for linear GDM has small GDM parameters and a spectrum that is very similar to the $\Lambda$CDM spectrum, as expected since the constraints in this case are consistent with $\Lambda$CDM. For the GDM halo model case, the best fit model has a much larger sound speed, and thus deviates more from the $\Lambda$CDM best fit.
}
\label{fig_mpscs2halomodel}
\end{figure}
The improvement on the $c^2_s$ and $c^2_\text{vis}$ constraints is due to the decay of the matter power spectrum for $k>k_{\rm dec}$, which creates a slope in the matter power spectrum that is inconsistent with the data for larger values of $c^2_s$ and $c^2_\text{vis}$. This can be seen in the upper panel of figure \ref{fig_mpscs2halomodel}, which compares theoretical spectra with various upper limits of GDM parameters to the lowest redshift bin in the WiggleZ data. Note that in both panels of this figure the theory spectra are transformed in line with how the likelihood is computed, in order to be compared to the data. This includes convolving with the survey window function, marginalising over the linear bias and accounting for the difference in background relative to the fiducial cosmology. See \citet{ParkinsonEtal2012} for a full description and explanation of these processes. The upper panel of figure \ref{fig_mpscs2halomodel} compares GDM spectra computed with non-zero $c^2_s$ values corresponding to upper limits of previous constraints (``old'') and the constraints when the matter power spectrum data is included (``new''). There is a tension between the slope of the theoretical spectra and the data that is reduced when the lower value (associated with the constraints from the WiggleZ data) is used. The equivalent plot for non-zero $c^2_\text{vis}$ is essentially identical, since the degeneracy between these parameters has not been broken (see below). In accordance with what was noted earlier, the use of the halo model makes little difference to the theoretical spectra in figure \ref{fig_mpscs2halomodel}, and thus to the constraints obtained with and without the halo model. The constraining power here does not come from the measured amplitude of the matter power spectrum because of the marginalisation over the linear bias. We expect that these constraints could improve even further if the matter power spectrum is measured on larger scales, particularly the turnover around the peak. This would have the additional advantage of staying inside our conservative regime, which we have seen is robust to non-linear modelling. Considering how the difference in slopes shown in figure \ref{fig_mpscs2halomodel} continues for $k>0.1h \text{Mpc}^{-1}$, we expect that the less-conservative cut for the WiggleZ data will improve the constraints further, see below.
One of the motivations for considering matter clustering data for GDM constraints is the attempt to break the degeneracy between $c^2_s$ and $c^2_\text{vis}$ caused by the $k_\text{dec}$ phenomenology. The scales we are probing with the WiggleZ data here are insufficient to break this degeneracy, see figure \ref{fig_gdmspectra}: up to $k=0.1h \text{Mpc}^{-1}$, there is little difference between the effects of $c^2_s$ and $c^2_\text{vis}$. Note that from \citet{KoppSkordisThomas2016} we expect the value where the oscillations start to be $k_J = 1/(0.2 c_s \tau)\simeq 0.2$, which is in agreement with what we see here. Even on scales down to $k=0.3h \text{Mpc}^{-1}$ (i.e. to the level of our less-conservative cut; see below), the difference between the spectra generated by the two parameters is not large, although beyond $k=0.3h \text{Mpc}^{-1}$ the difference between the linear spectra increases substantially. Interestingly, the non-linear corrections act to recreate the degeneracy between the $c^2_s$ and $c^2_\text{vis}$ spectra on scales below approximately $k=0.5h \text{Mpc}^{-1}$ (see figure \ref{fig_gdmspectra}). If this modelling of GDM non-linearities is accurate, then there is only a small range of scales (around $k=0.3h \text{Mpc}^{-1}$) in which data could allow us to distinguish the effects of these two parameters.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./plots/spectra_gdm.png}
\caption{The linear and non-linear matter power spectra for the two perturbative GDM parameters. The black curve shows the linear $\Lambda$CDM spectrum. The blue (dashed) and green (solid) curves show the linear and non-linear spectra for non-zero $c^2_s$ and the red (dotted) and orange (solid) curves show the linear and non-linear spectra for non-zero $c^2_\text{vis}$. The values of $c^2_s$ and $c^2_\text{vis}$ are chosen to produce the same value of $k_\text{dec}$. Up to the level of our conservative cut ($k=0.1h \text{Mpc}^{-1}$), there is little difference between the two linear spectra and between the two non-linear spectra, and this difference only begins to manifest close to the smallest scales in our less-conservative cut ($k=0.3h \text{Mpc}^{-1}$). Note that on larger scales, the non-linear modelling acts to recreate the degeneracy between $c^2_s$ and $c^2_\text{vis}$.
}
\label{fig_gdmspectra}
\end{figure}
\subsubsection{Less-conservative cut - Possible detection of GDM}
As mentioned above, we also consider a less-conservative cut, with $k<0.3h \text{Mpc}^{-1}$. The constraints are presented in the final two lines of table \ref{table_results}, where it can be seen that a significant gap opens between the constraints obtained with and without the halo model. The constraints including the halo model improve by a factor of 3 compared to the more conservative cut, amounting to a combined improvement compared to previous constraints of an order of magnitude. However, the constraints without the halo model are another factor of 3 or so stronger still. This weakening of the constraints due to the inclusion of the halo model naively seems to match our expectations of weaker results when the non-linear effects are included; however, there is a deeper story here.
The right panel of figure \ref{fig_cs2andcv2HM} shows the 2D posterior contour plots for $c^2_s$ and $c^2_\text{vis}$ for the conservative and less-conservative cuts, for both linear and halo model implementations of GDM. As discussed above, the linear and halo model versions of GDM result in very similar constraints for the conservative cut to the matter power spectrum data. The less-conservative cut for linear GDM results in a similar looking set of contours, in the sense that the contours are all essentially right-angled triangles with a similar slope on the side joining the two axes. That is, the contours are solely upper bounds with a particular slope, and the point where the two GDM parameters are both zero is consistent with the data. The only difference is that the upper bounds have been significantly reduced.
However, the shape of the less-conservative cut contours for GDM with the halo model is significantly different; the lower contour is now apparent, resulting in a trapezoidal contour and a clear inconsistency of the $\Lambda$CDM point ($c^2_s=0$ and $c^2_\text{vis}=0$) with the contours. There is now a clear preference for a non-zero GDM parameter. This difference between the linear and halo model GDM results is further seen by looking at the lower panel of figure \ref{fig_mpscs2halomodel}. Here we plot the spectra generated using the best fit parameters from the different MCMC runs. The linear GDM best fit parameters are $c^2_s=6.284\times10^{-10}$ and $c^2_\text{vis}=2.55\times10^{-8}$ and the GDM plus halo model best fit parameters are $c^2_s=1.6\times10^{-7}$ and $c^2_\text{vis}=4.3\times10^{-8}$. The best fit model for linear GDM has small GDM parameters and a spectrum that is very similar to the $\Lambda$CDM spectrum, as expected since the constraints in this case are consistent with $\Lambda$CDM. For the GDM halo model case, the best fit model has a much larger sound speed, and thus deviates more from the $\Lambda$CDM best fit. Thus we can see the discrepancy between the linear GDM and halo model GDM manifesting here as well. Interestingly, the GDM halo model run returns a best fit spectrum that is closer to the linear $\Lambda$CDM spectrum than the halo model $\Lambda$CDM spectrum is. The $\chi^2$ values for the two GDM curves and the linear $\Lambda$CDM curve are almost indistinguishable,\footnote{We should caution here that an MCMC code such as MontePython is not optimised in terms of finding the lowest possible value of the likelihood, so we expect that the best fit values we have found here are close to the absolute minimum, but not precisely the lowest values.} showing further that the inclusion of the halo model into GDM can compensate for higher values of the GDM parameters.
Despite the preference for a non-zero GDM parameter shown in the GDM+HM contours in the right panel of figure \ref{fig_cs2andcv2HM}, the individual 1D posteriors (not shown here as their interpretation is dubious; see below) show no preference to be non-zero due to the degeneracy between the two parameters. This means that marginalising over the other parameter results in no ``detection'' of a non-zero value of either parameter. Despite the resulting large difference in upper bounds between the less-conservative results with and without the halo model, the maximal width of the contours along the $45^\circ$ line marked in the plot is similar in the two cases.
The slope of the degeneracy in these contours is well understood as the direction along which $k_\text{dec}$ remains fixed, and it is the direction perpendicular to this (the $45^\circ$ line marked in the plot) that is most constrained by the data. In principle, it would be possible to create a new parameter that describes this direction (i.e. essentially $c^2_+ \equiv c^2_s+8/15\, c^2_\text{vis}$), and then the second coordinate (i.e. $d\equiv (1 + 8/15\, c_{\rm vis}^2/c_s^2)^{-1}$, $0\leq d \leq 1$) of this 2D space can be marginalised over, in order to create a 1D posterior for $c^2_+$. If we had chosen uniform priors on $c^2_+$ and $d$, we would expect the 1D posterior of this new $c^2_+$ parameter to peak at non-zero values for the yellow contour in Fig.~\ref{fig_cs2andcv2HM}, and to peak at zero for the blue contour. This parameter would thus allow us to compactly quantify the detection of perturbative GDM parameters. However, we note that our current choice of priors (uniform priors on $0\leq c_s^2<0.1$ and $0\leq c_{\rm vis}^2<0.1$) would make $c^2_+$ peak at a non-zero value for any sensible 2D posterior, in particular also for the blue contour in Fig.~\ref{fig_cs2andcv2HM}. We will explore these issues in a forthcoming paper.
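A minimal Python sketch of this change of variables, applied to MCMC samples, is given below (illustrative only; as discussed above, the prior induced on $c^2_+$ by the original priors must be accounted for before interpreting the resulting 1D posterior):
\begin{verbatim}
import numpy as np

def to_plus_coordinates(cs2, cvis2):
    # Map samples of (c_s^2, c_vis^2) to (c_+^2, d) as defined in the text.
    # Assumes cs2 > 0 so that d is well defined, with 0 < d <= 1.
    c_plus2 = cs2 + (8.0 / 15.0) * cvis2
    d = 1.0 / (1.0 + (8.0 / 15.0) * cvis2 / cs2)
    return c_plus2, d

# The 1D posterior of c_+^2 then follows from histogramming c_plus2,
# i.e. marginalising over d, with the prior volume taken into account.
\end{verbatim}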
We note that this difference between the linear and non-linear results for the less-conservative cut is a strong justification of the motivation behind this paper, namely that correctly modelling these non-linear scales will be crucial for using late-time clustering data to constrain the GDM parameters. At the level of the modelling we have done here, we cannot be sure of our results with either the linear or non-linear modelling. Instead, we take these results to show that there is a need to look into the non-linear modelling in substantially more detail before a detection of non-zero GDM parameters using late-time clustering data could be claimed. Furthermore, even if the less-conservative linear contours agreed with the non-linear contours, we would be hesitant to claim a detection of non-zero GDM parameters because of the caveats related to marginalising over parameter subspaces with special geometries. Nonetheless, our results show that late-time matter clustering data can strongly constrain the GDM parameters, to the level where a detection is possible.
\section{Conclusion}
\label{sec_conc}
We have investigated how considerations around large scale structure can affect constraints on the generalised dark matter parameters. The main results of this work are the development of the halo model for GDM presented in section \ref{sec_halo} and the improved constraints on the GDM parameters presented in table \ref{table_results}.
In section \ref{sec_halo} we argued for modifying the $\Lambda$CDM halo model in a particular way, by backtracking the ``linearly-extrapolated'' critical density for collapse. This allows the mass dependence of the collapse barrier to be implemented in a natural way. This halo model reduces to a standard $\Lambda$CDM halo model in the case of scale independent growth, and produces qualitatively similar results to \textit{halofit}. Having derived the halo model for GDM, we note that the non-linear corrections are much less significant for GDM (with constant $c_s^2$ and $c_{\rm vis}^2$) than for $\Lambda$CDM, because the strong linear decay dominates over the corrections from the halo model.
We use this halo model to test the robustness of previously obtained constraints based on Planck CMB power spectra, as seems prudent considering the magnitude of the $\Lambda$CDM non-linear corrections and the difference between $\Lambda$CDM and GDM spectra in figure \ref{fig_lensingphi}. We find that the GDM constraints change little, as expected from the aforementioned phenomenology of the GDM halo model. Interestingly, the perturbative GDM parameters $c^2_s$ and $c^2_\text{vis}$ are less sensitive to the non-linear corrections than the standard $\Lambda$CDM parameters, which will be increasingly important for future CMB lensing surveys, such as the Simons Observatory \citep{Simons2018}. We additionally checked the changes to previous GDM constraints when the neutrino mass is allowed to vary as a free parameter, primarily finding a worsening of the constraints on the equation of state (of GDM) $w$, due to the degenerate effects on the expansion history. We also elucidate the three-way degeneracy between $m_\nu$, $c^2_s$ and $c^2_\text{vis}$, and note that the geometry of this situation requires that marginalisation over these parameters is done carefully \citep{HeavensSellentin2018}.
We examined the effect of including the WiggleZ matter power spectrum data when constraining the GDM parameters, finding a factor of three improvement on the sound speed $c^2_s$ and viscosity $c^2_\text{vis}$ constraints when a conservative cut in wavenumber $k$ is used. When increasing the $k$-range that is included, these constraints improve by a further factor of three, for a total improvement from the use of matter power spectrum data of an order of magnitude. This shows the value of datasets that constrain the $k_\text{dec}$ phenomenology of GDM. Since we analytically marginalise over the linear bias, we expect that once large scale structure measurements reach the peak of the matter power spectrum, the total constraining power will be sufficient to either constrain the GDM parameters to the point of cosmological irrelevance or yield a detection of beyond $\Lambda$CDM physics.\footnote{Note that, to be more precise, this ``beyond $\Lambda$CDM physics'' could just be a more careful and precise modelling of the $\Lambda$CDM universe, according to the EFTofLSS interpretation of GDM.} These improved constraints are one of the key results of this work.
The results from extending the $k$-range that is included show some important features for future work. The first is that there is a difference between the constraints obtained by linear and non-linear modelling, thus showing the importance of robust non-linear modelling of the GDM model for the use of late-time matter clustering data. This suggests that a re-evaluation of the tight constraints obtained in \citet{KunzNesserisSawicki2016} could be interesting. This also shows that future surveys, combined with a more detailed analysis of the non-linear completion of GDM, have the potential to show that the matter power spectrum is not consistent with $\Lambda$CDM. Furthermore, we have shown that GDM with the halo model yields a 2D contour that is clearly inconsistent with the $\Lambda$CDM point ($c^2_s=c^2_\text{vis}=0$), although we advise caution in the interpretation of this due to the difference between the linear and non-linear results. Even in this case, neither $c^2_s$ nor $c^2_\text{vis}$ is individually detected, due to the degeneracy between these two parameters, and we have noted that it is not straightforward to quantify a detection in such cases, because marginalisation in areas of parameter space with corners can lead to biases. Doing so requires a careful analysis of priors, see e.g. \citet{HeavensSellentin2018}.
We plan to explore this issue further in the future.
As the $\gamma_1,\gamma_2$ parameters in the halo model were originally calibrated for WDM, it may be preferable to treat them as nuisance parameters and vary them in an MCMC analysis for GDM. However, this further increases the computational demands of the already expensive codes and is unlikely to make a difference to our results, as we have made no detection of GDM. Furthermore, many other aspects of the halo model, like the precise form of the spherical collapse barrier, currently remain an educated guess and require a detailed study using numerical simulations of a suitably defined non-linear GDM model. Thus we leave an investigation into these issues for future work.
Throughout this work we used the simplest parameterisation of the GDM parameters: a single value with no redshift or scale dependence. We expect the halo model to have a larger impact for time-dependent GDM parameters, although we leave this investigation to future work. In particular, we expect that for $c^2 \propto a^{-2}$, corresponding to WDM and FDM, the halo model has a larger impact. This is because the late-universe GDM parameters will have a much smaller impact on the DM dynamics for fixed early-universe values, so that the dynamics can be approximated by CDM in the late universe, allowing non-linearities to develop despite the small scale decay of the linear matter power spectrum imprinted during early times. The assumption of constant GDM parameters was relaxed in \citet{KoppEtal2018}, where the equation of state $w$ was measured in multiple redshift bins. The importance of the WiggleZ data considered here for the constraints on $c^2_s$ and $c^2_\text{vis}$ suggests that this data could be crucial for putting strong constraints on redshift and scale dependent forms for $c^2_s$ and $c^2_\text{vis}$. Given the results here for $w$ when including $m_\nu$, it would be interesting to revisit the results using time-dependent $w$ bins from \citet{KoppEtal2018}. In particular, it may be the case that the neutrino mass is only degenerate with $w$ over certain redshift ranges, and thus the time variation of $w$ allows the degeneracy with $m_\nu$ to be broken, and most of the constraining power on $w$ to be recovered. In addition, we note that if the GDM parameters are given specific time (and scale) dependence corresponding to either FDM or WDM, then it would be interesting to perform an in-depth quantitative comparison of the existing FDM and WDM halo models with the halo model presented here for general GDM models. That comparison is beyond the scope of this paper.
\section*{Acknowledgements}
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC Grant Agreement n. 617656 ``Theories
and Models of the Dark Sector: Dark Matter, Dark Energy and Gravity''. The Primary Investigator is C. Skordis. DM acknowledges support from the UK Science \& Technology Facilities Council through grant ST/N000668/1 and from the UK Space Agency through grant ST/N00180X/1. DBT acknowledges support from Science and Technology Facilities Council (STFC) grant ST/P000649/1. MCMC chains for this analysis were partially run on the Sciama High Performance Compute (HPC) cluster which is supported by the ICG, SEPNet and the University of Portsmouth. We thank the developers of \texttt{class}, \texttt{montepython} and \texttt{getdist} for making their codes public. We thank the WiggleZ collaboration for making their data and likelihood public, and thank D. Parkinson for help with understanding these. We thank C. Skordis and S. Ili\'c for helpful discussions. This research has made use of NASA's Astrophysics Data System.
\bibliographystyle{mnras}
\subsection{Data}\label{sec:Dataset}
Our analyses rely on daily adjusted closing prices and daily numbers of traded shares (volumes) for 12 representative constituents of the S\&P100 index in the period from December 31\textsuperscript{st}, 2014 to November 29\textsuperscript{th}, 2021 ($V= 1,741$ trading days). The data is retrieved from Yahoo Finance. These 12 stocks are selected based on their market capitalization and their market sector. For each sector we select the two stocks with the highest (and mutually comparable) capitalization, a practice well-supported by financial theory \cite{fama1993common}.
Market sectors provide a natural grouping for securities: analyses conducted at a sector level are a common practice for granting comparability and robustness of the results, as across market sectors the dynamics of economic variables are well-known to be asymmetric. Table~\ref{tb:data} lists our stock selection.
Each stock is expressed as a trivariate time series consisting of daily prices, volumes, and returns. This way, CRPs express temporal similarities in joint terms of the price level, traded volume, and daily return, providing a generalized definition of similarity in time-series dynamics at a multivariate level.
For our bivariate analysis on two time series we have $(12^2-12)/2 = 66$ pairs of stocks.
For each stock pair, we use the first $70\%$ of the data for training ($V_{train} = 1,218$ days) and the last $30\%$ for testing ($V_{test} = 523$ days). As the future input instances should not affect the training process, the order of the input data during the training is fixed. The input and targets of the train data and the test data are, respectively,
\begin{align*}
\text{Inputs:}& \;\left\lbrace CRP_{(\mathcal{N}^w_{i} ,\mathcal{M}^w_{i})} \right\rbrace_{i \in I}\text{,}\\
\text{Targets:}& \;\left\lbrace\text{diag}\left(CRP_{(\mathcal{N},\mathcal{M})}\right)_i \right\rbrace_{i \in T} \text{,}
\end{align*}
where $I = w,\dots,V_\text{train}$ and $T =w+1,\dots, V_\text{train}+1$ for the training set and $I = V_\text{train}+1,\dots,V-1$ and $T =V_\text{train}+2,\dots, V$ for the test set.
We train the neural network once over the data for all the picks of the stock pairs. This pooled approach is a common practice in closely-related Machine Learning literature \cite[e.g.]{ntakaris2018benchmark,tran2018temporal} and is supported, e.g., by the empirical findings of \cite{Sirignano2019universal}, suggesting the existence of a universal price formation mechanism (model), and thus price dynamic, not specific to individual assets. In practice, the input and output data is the concatenation of the individual pairs' inputs-targets. For example, for a set window size $w$, the train-set input-target data consists of $(V'_\text{train}-w) \times 66$ examples, that is $(V'_\text{train}-w) \times 66$ pairs of cross-recurrence matrices and (scalar) targets, where $V'_\text{train} = (V_\text{train}-\tau (k-1))$.
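To make the indexing explicit, the following schematic Python sketch assembles the input-target examples for a single stock pair (names and containers are illustrative, and the cross-recurrence matrices are assumed to be precomputed):
\begin{verbatim}
import numpy as np

def build_examples(window_crp, target_diag, w, v_train, v_total):
    # window_crp(i): the w x w CRP of the two length-w windows ending at day i.
    # target_diag[i]: diagonal entry of the full CRP at day i
    #                 (a day-indexed mapping, days numbered 1..V).
    I_train = range(w, v_train + 1)           # index set I for the training set
    I_test = range(v_train + 1, v_total)      # index set I for the test set
    X_tr = np.stack([window_crp(i) for i in I_train])
    y_tr = np.array([target_diag[i + 1] for i in I_train])  # day-ahead targets
    X_te = np.stack([window_crp(i) for i in I_test])
    y_te = np.array([target_diag[i + 1] for i in I_test])
    return (X_tr, y_tr), (X_te, y_te)

# The pooled train and test sets are the concatenations of these arrays
# over all 66 stock pairs.
\end{verbatim}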
In the training phase, the training data is used to estimate the optimal weights of the CNN. The test data is then parsed to the estimated CNN and the quality of the network outputs is evaluated against the actual targets. Details are provided in the following two subsections.
\begin{table}
\caption{List of selected stocks.}
\centering
\begin{tabular}{lll}
\toprule
\multicolumn{1}{c}{Sector} & \multicolumn{1}{c}{Ticker} & \multicolumn{1}{c}{Stock name} \\
\midrule
Electronic Technology (ET) & INTC & Intel Corporation \\
Electronic Technology & QCOM & Qualcomm Inc. \\
Energy Minerals (EM) & XOM & Exxon Mobil Corporation \\
Energy Minerals & CVX & Chevron Corporation \\
Finance (F) & JPM & JP Morgan Chase \& Co. \\
Finance & V & Visa Inc. \\
Health Technology (HT) & JNJ & Johnson \& Johnson \\
Health Technology & PFE & Pfizer, Inc. \\
Retail Trade (RT) & HD & Home Depot, Inc. (The) \\
Retail Trade & WMT & Walmart Inc. \\
Technology Services (TS) & MSFT & Microsoft Corporation \\
Technology Services & GOOG & Alphabet Inc. \\
\bottomrule
\end{tabular}
\label{tb:data}
\end{table}
For the training of the CNN we adopt the ADAM optimizer with the following hyper-parameters: learning rate $0.01$ (reduced by a factor of $5$ every $40$ epochs), momentum parameters $0.9$ and $0.999$, batch size $128$ and $300$ epochs. Across the epochs we keep track of the F1-score on the validation set, which consists of the last 15\% of the training set. For our classification task we adopt the binary cross-entropy loss. As the target classes are unbalanced, the loss is weighted by the class proportions of the targets. Details on the filter sizes, kernel sizes and the max pooling size are provided in Figure \ref{fig:model}.
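The corresponding training loop can be sketched as follows (a PyTorch-style illustration of the hyper-parameters listed above; the actual framework and implementation details may differ, and \texttt{model}, \texttt{train\_loader} and \texttt{y\_train} are placeholders):
\begin{verbatim}
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=0.01, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.2)
# binary cross-entropy weighted by the (imbalanced) class proportions
pos_weight = torch.tensor([(y_train == 0).sum() / (y_train == 1).sum()])
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)

for epoch in range(300):
    for xb, yb in train_loader:      # mini-batches of 128 examples
        optimizer.zero_grad()
        loss = criterion(model(xb), yb.float())
        loss.backward()
        optimizer.step()
    scheduler.step()                 # learning rate divided by 5 every 40 epochs
    # the F1-score on the validation split (last 15% of the training data)
    # is tracked here across epochs
\end{verbatim}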
With respect to the CRP computations, throughout our analyses the embedding dimension $k$ is set to 2 or 3 (estimated via the false nearest neighbours (FNN) method) depending on the input type, and the delay parameter $\tau$ is set to $1$. Values $0.45$, $0.55$, $0.65$, and $0.75$ are used for the threshold $\varepsilon$. These hyper-parameters are selected according to the guidelines and discussion in \cite{schinkel2008selection} and \cite{wallot2019multidimensional}. The same values are applied both to the computation of the CRP related to the targets and to the CRPs related to the inputs.
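For reference, the construction of a thresholded cross-recurrence matrix from two (possibly multivariate) series can be sketched in Python as follows (schematic only; the distance measure and any normalisation follow the standard CRP recipe and may differ in detail from the implementation used in our experiments):
\begin{verbatim}
import numpy as np

def delay_embed(x, k, tau):
    # Time-delay embedding: row t is [x_t, x_{t+tau}, ..., x_{t+(k-1)tau}];
    # for a multivariate series x (shape T x d) the delayed copies are
    # concatenated column-wise.
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    T = x.shape[0] - (k - 1) * tau
    return np.hstack([x[i * tau : i * tau + T] for i in range(k)])

def cross_recurrence(x, y, k=2, tau=1, eps=0.45):
    # Binary CRP: entry (i, j) is 1 when the embedded states of the two
    # series are within distance eps of each other.
    X, Y = delay_embed(x, k, tau), delay_embed(y, k, tau)
    dist = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return (dist <= eps).astype(np.uint8)
\end{verbatim}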
In our experiments we consider five different choices for the window-size hyperparameter, namely $w \in \{10, 30, 50, 60, 80\}$ days.
With the above settings, $V=V'=1,741$ days, $V_\text{train} = V'_\text{train} =1,218$, and $V_\text{test} = V'_\text{test} = 523$ days. For $i = w,\dots,V-1$, $CRP_{(\mathcal{N}^w_i,\mathcal{M}^w_i)}$ are square matrices of size $w'=w$ and $CRP_{(\mathcal{N},\mathcal{M})}$ is a square matrix of size $V$ on whose diagonal are found the relevant targets, i.e. $\text{diag}\left(CRP_{(\mathcal{N},\mathcal{M})}\right)_i$, $i = w+1,\dots,V$.
\subsection{Experiments Results}\label{subsec:experiments}
Stock pairs drawn from the same sector or from two different sectors, with different co-movement behaviors, provide comprehensive experimental data for assessing the ability of the proposed method to predict the state of synchronization.
To evaluate the performance of the proposed method, all pairs of stocks are used as its input. For each pair, we follow the steps of the proposed method (Fig.~\ref{fig:method}) to create the inputs and targets, and we stack the pair-specific input-target data to create a single train and test set for all pairs.
Tables \ref{tb:rez-p-V} and \ref{tb:rez-p-V-R} show the performance of our proposed approach for all pairs of stocks using two types of input: (price, volume) and (price, return, volume), respectively. Given that the target classes are generally imbalanced, the preferred reference performance metric is the F1 score. Yet, we also include accuracy, precision, and recall to give a clearer overview of the classification performance.
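For clarity, the reported metrics are the standard ones computed from the binary confusion matrix (a minimal sketch; in practice any standard library implementation can be used):
\begin{verbatim}
def classification_metrics(tp, fp, tn, fn):
    # Standard binary classification metrics from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return accuracy, precision, recall, f1
\end{verbatim}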
For robustness, we run our experiments over a range of values for the window-size $w$ and threshold $\varepsilon$ hyperparameters, a setup that further clarifies the effect of these hyperparameters on prediction performance.
Results for the (price, volume) time-series input are provided in Table~\ref{tb:rez-p-V}, results for the (price, return, volume) input in Table~\ref{tb:rez-p-V-R}.
In general, our results show that the task of predicting the state of synchronization is not only feasible, but, under our setup, quite satisfactory. Indeed, our preferred F1 performance metric is as high as 84\%. Yet, as expected, the results appear to be sensitive to the choice of the window size and threshold parameter. In particular, the performance metrics decrease in their values as the threshold parameter and the window size increase. This means that stricter $\varepsilon$-neighbourhoods are easier to predict and that the relevant information for the prediction of the synchronization state is found in the most recent instances of the CRP.
This suggests the existence of patterns in the data that are strongly indicative of close $\varepsilon$-neighbourhoods, for which the prediction is very satisfactory. That is, the CNN detects clear patterns indicating that the day-ahead synchronization is likely to be very strong (the $\varepsilon$-neighbourhood is tight); indeed, as $\varepsilon$ increases, the performance metrics decrease, indicating that the model detects strong evidence of \enquote{strong} day-ahead synchronization.
Regarding the window size, long-lagged CRP information appears to introduce noise into the system without providing any predictive gains, in line with the intuition that information further back in time is less and less related to the current state of the system and of little use for prediction.
Suspecting that the joint use of prices and returns might be redundant, since they are closely related to each other, we also run a second experiment in which the returns are added to the input alongside prices and volumes.
It is interesting to note that the inclusion of the returns does not seem to provide any advantage with respect to the (price, volume) input time series, but rather has the opposite effect.
It is indeed expected that the inclusion of further input variables complicates the patterns in the CRP chessboard so that under the same network architecture the performance metrics decrease.
Furthermore, and in line with the above, in additional experiments not reported here, we included squared returns (as a gross measure of daily volatility), finding that they also appear to have a detrimental effect on the performance metrics and the prediction task. This perhaps suggests that the network architecture needs to scale up with the complexity of the input data (number of time series), which reasonably induces more complex patterns in the CRP.
\begin{table}
\centering
\caption{Performance measures on the test set using (adjusted) price and volume as input variables.}
\begin{tabular}{cccccc}
$w$ & $\varepsilon$ & Accuracy & Precision & Recall & f1-score \\
\midrule
10 & 0.45 & 0.960 & 0.886 & 0.818 & \textbf{0.848} \\
10 & 0.55 & 0.981 & 0.842 & 0.762 & 0.796 \\
10 & 0.65 & 0.992 & 0.836 & 0.684 & 0.737 \\
10 & 0.75 & 0.997 & 0.999 & 0.647 & 0.727 \\
\midrule
30 & 0.45 & 0.957 & 0.877 & 0.804 & 0.836 \\
30 & 0.55 & 0.979 & 0.821 & 0.752 & 0.782 \\
30 & 0.65 & 0.991 & 0.816 & 0.684 & 0.732 \\
30 & 0.75 & 0.996 & 0.998 & 0.539 & 0.571 \\
\midrule
50 & 0.45 & 0.956 & 0.861 & 0.808 & 0.832 \\
50 & 0.55 & 0.982 & 0.907 & 0.719 & 0.784 \\
50 & 0.65 & 0.993 & 0.907 & 0.668 & 0.737 \\
50 & 0.75 & 0.997 & 0.935 & 0.665 & 0.739 \\
\midrule
60 & 0.45 & 0.954 & 0.850 & 0.814 & 0.831 \\
60 & 0.55 & 0.976 & 0.775 & 0.730 & 0.751 \\
60 & 0.65 & 0.992 & 0.937 & 0.633 & 0.703 \\
60 & 0.75 & 0.997 & 0.816 & 0.689 & 0.737 \\
\midrule
80 & 0.45 & 0.950 & 0.827 & 0.809 & 0.818 \\
80 & 0.55 & 0.979 & 0.839 & 0.725 & 0.769 \\
80 & 0.65 & 0.990 & 0.776 & 0.666 & 0.707 \\
80 & 0.75 & 0.997 & 0.820 & 0.709 & 0.753 \\
\bottomrule
\end{tabular}%
\label{tb:rez-p-V}
\end{table}%
\begin{table}
\centering
\caption{Performance measures on the test set using (adjusted) price, volume and returns as input variables.}
\begin{tabular}{cccccc}
$w$ & $\varepsilon$ & Accuracy & Precision & Recall & f1-score \\
\midrule
10 & 0.45 & 0.946 & 0.858 & 0.802 & \textbf{0.827} \\
10 & 0.55 & 0.973 & 0.810 & 0.733 & 0.765 \\
10 & 0.65 & 0.988 & 0.776 & 0.694 & 0.728 \\
10 & 0.75 & 0.995 & 0.795 & 0.664 & 0.711 \\
\midrule
30 & 0.45 & 0.943 & 0.845 & 0.800 & 0.820 \\
30 & 0.55 & 0.971 & 0.789 & 0.742 & 0.763 \\
30 & 0.65 & 0.987 & 0.760 & 0.678 & 0.711 \\
30 & 0.75 & 0.996 & 0.998 & 0.601 & 0.667 \\
\midrule
50 & 0.45 & 0.938 & 0.826 & 0.785 & 0.804 \\
50 & 0.55 & 0.971 & 0.795 & 0.731 & 0.758 \\
50 & 0.65 & 0.987 & 0.758 & 0.667 & 0.702 \\
50 & 0.75 & 0.995 & 0.998 & 0.503 & 0.505 \\
\midrule
60 & 0.45 & 0.938 & 0.823 & 0.788 & 0.804 \\
60 & 0.55 & 0.967 & 0.752 & 0.735 & 0.743 \\
60 & 0.65 & 0.987 & 0.756 & 0.654 & 0.692 \\
60 & 0.75 & 0.996 & 0.955 & 0.595 & 0.657 \\
\midrule
80 & 0.45 & 0.933 & 0.805 & 0.788 & 0.796 \\
80 & 0.55 & 0.967 & 0.755 & 0.730 & 0.742 \\
80 & 0.65 & 0.989 & 0.893 & 0.616 & 0.677 \\
80 & 0.75 & 0.995 & 0.998 & 0.548 & 0.586 \\
\bottomrule
\end{tabular}
\label{tb:rez-p-V-R}
\end{table}
\section{Introduction}
\input{03_Introduction}
\section{Financial Time Series Recurrence Analysis}\label{S:RecurrenceAnalysis}
\input{03b_RP_CRP_in_finance}
\section{Proposed Method}\label{S:Method}
\input{04_proposed_method}
\section{Experiments}\label{S:Experiments}
\input{05_experiments}
\section{Conclusion}\label{S:Conclusion}
\input{06_Conclusion}
\section*{Acknowledgments}
The research received funding from the Independent Research Fund Denmark project DISPA (project No. 9041-00004), and the European Union’s Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie project BNNmetrics (grant agreement No. 890690).
\bibliographystyle{apalike}
\section{Introduction}
\label{sec:intro}
The introduction of
WebAssembly\xspace~\cite{haas_bringing_2017}, a portable low-level language with
a focus on security and efficiency, has led to an array of
security-sensitive applications.
libsodium~\cite{libsodium} and HACL*~\cite{protzenko_formally_2019}
are a prime example of such applications.
Unfortunately, WebAssembly\xspace programs can be vulnerable to different types of
attacks~\cite{255318}, including timing side channels.
The constant-time programming discipline is a well-known practice
to defend against timing attacks~\cite{MolnarPSW05,almeida_verifiable_2016}.
The main idea is to disallow the program's control flow and the memory
access patterns that depend on program secrets.
This is surprisingly challenging because many cryptographic routines
are
human-written~\cite{libsodium,bearssl,stuber_torstenstuebertweetnacl-webassembly_2019-1}
and thus, prone to errors, while compilers that preserve constant time
are yet to emerge~\cite{libsodium,bearssl}.
This motivates the need for verification of constant-time
implementations in WebAssembly\xspace.
Drawing on the verification-friendly structure of WebAssembly\xspace, existing solutions
such as CT-wasm~\cite{watt_ct-wasm_2019-1} enrich the WebAssembly\xspace type system with security annotations to enforce constant time.
The efficiency of CT-wasm comes at the expense of a conservative analysis, e.g., by considering the whole memory as secret, thus leading to false positives or refactoring of constant-time programs.
This paper explores the use of \ac{RelSE} to verify constant-time implementations in WebAssembly\xspace.
The approach relies on an accurate modelling of the memory and other program optimizations, enabling a precise analysis that scales to real-world cryptographic
implementations.
In summary, this paper offers the following contributions:
\begin{itemize}
\item An \ac{RelSE}-based approach for verifying constant-time
implementations in WebAssembly\xspace
programs.
\item An automated invariant generation technique for
analyzing implementations with loops.
\item A thorough evaluation on 45 secure implementations and 12
insecure implementations in WebAssembly\xspace, including the previously
non-verified WebAssembly\xspace implementation of HACL* (WHACL*).
\item \textsc{Vivienne}\xspace, an open-source implementation of the
approach.
\end{itemize}
\section{Problem Setting}
\label{sec:background}
This section presents the problem setting, including the constant-time
policy, and background on WebAssembly\xspace and related works.
\subsection{Constant-time Policy}
\label{ssec:ctime}
Constant-time programming discipline is a software-based
defense against timing side-channel attacks.
This discipline relies on the constant-time
policy~\cite{almeida_verifying_2016}, which classifies values as
secret (\texttt{high}) and public (\texttt{low}).
The policy constrains the control-flow instructions and the memory
operations to solely depend on public values, thus disallowing any
secret-dependent control-flow instructions and memory accesses.
Intuitively, the policy requires that any program executions with the
same \texttt{low} values execute the same instructions and yield the
same memory access patterns, independently of \texttt{high}
values. This indicates that execution time of the program is not
affected by secret data.
\lstinputlisting[style=cstyle,
caption={C function \texttt{tls1\_cbc\_remove\_padding}},
label=lst:cfunc_loop]{code/lucky13_paper.c}
Listing~\ref{lst:cfunc_loop} reports a code snippet of the OpenSSL's
Lucky 13 timing vulnerability~\cite{al_fardan_lucky_2013} to illustrate the issue.
Function \texttt{tls1\_cbc\_remove\_padding} removes the padding from
a decrypted message that contains the plain text (secret), the
\ac{MAC} tag, and the padding.
The size of the padding affects the execution time, which in turn
reveals information about the size of the plain text.
Specifically, \texttt{rec->data} holds the
decrypted message together with the \ac{MAC} tag and the padding, and
is thus secret.
Variables \texttt{i} and \texttt{ii} (line 6) contain the last item of
array \texttt{rec->data}, which holds the padding size.
Hence, the number of iterations of the \texttt{for} loop at line 9
depends on the secret-dependent variable \texttt{i}, which affects the
execution time of the function.
Similarly, the guard of \texttt{if} statement at line 10
depends on \texttt{ii}, which is also secret.
Memory accesses also reveal information through timing due to the presence of caches.
At line 10, the access to \texttt{rec->data[j]} reveals
information about the value of index \texttt{j} by timing its presence in the cache.
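To make the two-execution reading of the constant-time policy concrete, the following small Python sketch (ours, purely illustrative and not part of the analyzed artifacts) runs a toy program twice with the same public input but different secrets and compares the recorded branch decisions and memory indices; the program is constant time on these inputs only if the two traces coincide.
\begin{lstlisting}[language=Python]
# Illustrative toy: constant time as equality of observation traces for two
# runs that share the public input but use different secrets.
def toy_program(public, secret, trace):
    data = list(range(16))                    # public "memory"
    if secret > public:                       # secret-dependent branch: leaks
        idx = secret % 16                     # secret-dependent index: leaks
        trace.append(("branch", True))
    else:
        idx = public % 16
        trace.append(("branch", False))
    trace.append(("load", idx))
    return data[idx]

def constant_time_on(public, secret_a, secret_b):
    t1, t2 = [], []
    toy_program(public, secret_a, t1)
    toy_program(public, secret_b, t2)
    return t1 == t2          # identical traces: nothing observable differs

print(constant_time_on(public=3, secret_a=1, secret_b=9))   # False
\end{lstlisting}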
\subsection{WebAssembly\xspace}
\label{ssec:wasm}
WebAssembly\xspace~\cite{haas_bringing_2017} is a stack-based typed low-level language
serving as backend for both client-side computations, e.g., web browsers, and server-side computations~\cite{255318} including
stand-alone applications~\cite{clark2019standardizing}.
With some exceptions~\cite{stuber_torstenstuebertweetnacl-webassembly_2019-1}, WebAssembly\xspace code is compiler generated, e.g., via
LLVM with support for C, C++,
and Rust. Other languages, like Python and Julia, also provide support for WebAssembly\xspace.
WASI Libc~\cite{clark2019standardizing} is
a library built on top of WASI system calls to enable I/O and memory management for WebAssembly\xspace programs.
The execution model of WebAssembly\xspace~\cite{haas_bringing_2017}
consists of 1) an execution stack $es$ that stores
the instructions, 2) a value stack $vs$ that holds the input
arguments of the instructions, 3) a linear memory, and
4) the local and the global stores.
WebAssembly\xspace has a structured control flow; for indirect calls
(\texttt{call\_indirect}), the call destination is an index to a
function table; for conditional branch (\texttt{br\_if}), the branch
destination is an index $i$ to enter (\texttt{loop}) or exit
(\texttt{block}) the $i$th scope.
Memory operations read from (\texttt{load}) and write to (\texttt{store})
the linear memory, and global variables are visible to all functions in a module. A function may also define local variables \texttt{lv$n$} including the function parameters.
Modules are collections of functions with their own linear memory,
and global variables~\cite{haas_bringing_2017}.
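As a minimal illustration of this execution model (our own simplified Python toy, not an actual WebAssembly\xspace runtime), the following fragment executes a few Wasm-like instructions against a value stack, a set of local variables, and a byte-addressed linear memory:
\begin{lstlisting}[language=Python]
# Simplified toy interpreter for three Wasm-like instructions; it only models
# the value stack vs, the locals, and the linear memory described above.
def run(instrs, locals_, memory):
    vs = []                                    # value stack
    for op, *args in instrs:
        if op == "local.get":
            vs.append(locals_[args[0]])
        elif op == "i32.add":
            b, a = vs.pop(), vs.pop()
            vs.append((a + b) & 0xFFFFFFFF)
        elif op == "i32.load8_u":
            vs.append(memory[vs.pop()])        # one unsigned byte
        else:
            raise ValueError("unsupported instruction: " + op)
    return vs

memory = bytearray(64); memory[42] = 7
locals_ = {1: 2, 6: 40}
# local.get 6 ; local.get 1 ; i32.add ; i32.load8_u  ->  memory[40 + 2]
print(run([("local.get", 6), ("local.get", 1),
           ("i32.add",), ("i32.load8_u",)], locals_, memory))   # [7]
\end{lstlisting}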
Listing~\ref{lst:wfunc_loop} shows an example WebAssembly\xspace module.
The code is a simplified compiled version (using clang-10) of the C
code in Listing~\ref{lst:cfunc_loop}.
The code consists of a module (lines 1-33), which imports a memory
instance (\texttt{"\_memory"}) from another module \texttt{\$env}
(line 3) and declares function \texttt{tls1\_cbc\_remove\_padding}
(line 4).
The function takes four input parameters of type 32-bit integer and
returns a 32-bit value (line 5).
At line 6, the function declares five local variables and the rest of
the function consists of the function body.
The block at line 8 performs multiple
initializations before the beginning of the loop (line 15).
At line 10, instruction \texttt{local.tee} stores the top value of
$vs$ (here \texttt{rec->data + 1}) to \texttt{lv6} and
pushes the same value back to $vs$.
At line 15, the loop starts by loading \texttt{lv6} and \texttt{lv1}
to $vs$.
Instruction \texttt{i32.add} adds these two values and pushes back the
result to $vs$.
Finally, instruction \texttt{i32.load8\_u} loads from the linear memory (\texttt{"\_memory"}) the value at the index taken from
the top of $vs$, i.e.\ the result of the addition.
The loop body executes until instruction \texttt{br\_if},
which reads one value from $vs$; if the value is non-zero
(\texttt{true}), the execution breaks out of the outermost block
(lines 8-31), whereas if the value is zero (\texttt{false}),
the execution continues to the next instruction, \texttt{br}, which
unconditionally jumps back to the beginning of the loop (line 15).
\lstinputlisting[style=wasmstyle,
caption={Wasm function \texttt{tls1\_cbc\_remove\_padding}},
label=lst:wfunc_loop]{code/lucky13_paper_O3.wast}
WebAssembly\xspace programs may be vulnerable to timing side-channel attacks. The
constant-time policy for WebAssembly\xspace concerns control-flow instructions, i.e.\
\texttt{br\_if}, \texttt{if},
\texttt{br\_table}, and \texttt{call\_indirect}, and the memory
operations, i.e.\ \texttt{load} and \texttt{store}.
\subsection{Related Work}
\label{ssec:relwork}
Several works have aimed at improving the security of
WebAssembly\xspace~\cite{255318,watt_ct-wasm_2019-1,
watt_weakening_2019,vassena_automatically_2020,narayan_swivel_2021}.
CT-wasm~\cite{watt_ct-wasm_2019-1} proposes a type system to check the
constant-time policy.
Type checking is very efficient but it suffers from the annotation
burden and the conservative nature of the analysis.
In CT-wasm, this is reflected by the treatment of the whole memory as
secret, e.g.\ requiring that every \texttt{load} operation returns a
\texttt{high} value, which may require refactoring of the programs to
make them amenable to the analysis (e.g., \texttt{poly1305\_blocks}
and \texttt{poly1305\_update} functions of a WebAssembly\xspace TweetNaCl
implementation~\cite{stuber_torstenstuebertweetnacl-webassembly_2019-1}).
Our approach aims at overcoming these limitations by means of
\ac{RelSE}, using a more accurate memory model and no extensive
annotation burden.
\review{ Moreover, we expect our analysis to yield fewer false positives because
it relies on symbolic execution, which is more precise than security type systems.
For example, an expression such as $\texttt{secret} - \texttt{secret}$
would be correctly identified as the constant $\texttt{0}$. }
However, as we will see, our solution comes with a computational cost
due to the increased precision.
Almeida et al.~\cite{almeida_verifying_2016} use product programs to
verify constant-time for C implementations.
A drawback of verifying the constant-time policy for
high-level languages is that the analysis does not provide guarantees
on the security of the generated code (see \texttt{ct\_select}
implementations~\cite{daniel_binsecrel_2020}).
Daniel et al.~\cite{daniel_binsecrel_2020} verify constant-time
programs at the binary level using \ac{RelSE}.
Web browsers using WebAssembly\xspace typically leverage \ac{JIT} compilation,
which does not result in binary file generation.
Moreover, the verification of constant-time at the WebAssembly\xspace level
provides opportunities for optimization due to WebAssembly\xspace's structured
design.
HACL*~\cite{zinzindohoue_hacl_2017} uses a high-level specification
language to generate a formally verified cryptographic library that is
available in different languages including C and
WebAssembly\xspace~\cite{protzenko_formally_2019}.
\section{\textsc{Vivienne}\xspace: \ac{RelSE} for WebAssembly\xspace}
\label{sec:tool}
\begin{figure}
\centering
\input{figs/tool_arch.tex}
\caption{\label{fig:arch} \textsc{Vivienne}\xspace Architecture}
\end{figure}
\textsc{Vivienne}\xspace analyzes WebAssembly\xspace implementations with respect to constant
time.
Figure~\ref{fig:arch} shows a high-level view of the tool.
\textsc{Vivienne}\xspace takes three inputs: 1) the \textit{WebAssembly\xspace modules} containing
the functions to analyze, 2) the \textit{security policy} annotating the
memory regions and the parameters of the entry function,
and 3) the \textit{entry point} describing the entry
function to analyze.
Then, \textsc{Vivienne}\xspace performs \ac{RelSE} on the entry function, reporting the
discovered constant-time
vulnerabilities (if any). We describe the
different components of \textsc{Vivienne}\xspace using Listing~\ref{lst:wfunc_loop} as a running example.
\textbf{WebAssembly\xspace Modules}
The modules include the entry function to verify and its
dependencies, possibly involving different modules.
For example, the module in Listing~\ref{lst:wfunc_loop} imports the
memory from another module \texttt{\$env} (line 3) and defines
function \texttt{tls1\_cbc\_remove\_padding} (lines 4-28).
\textbf{Security Policy and Entry Point}
The security policy specifies the parts of the memory and the
arguments of the entry function that contain public or secret
values.
Listing~\ref{lst:policyentry} reports the policy for function
\texttt{tls1\_cbc\_remove\_padding}.
The policy specifies the bytes 2000 to 2039 (i.e.\ pointer \texttt{s})
and the memory of struct \texttt{rec} as public (not shown),
and the bytes 2048 to 2111 (i.e.\ \texttt{rec->data}) as
secret, thus reflecting the specification in
Listing~\ref{lst:cfunc_loop}.
Moreover, \textsc{Vivienne}\xspace requires the
code of the modules (line
8) and the \textit{entry function} (lines 9-11).
The latter includes the security policy for its arguments
which can be either concrete or symbolic values.
Lines 9--11 specify the concrete and symbolic arguments for
analyzing function \texttt{tls1\_cbc\_remove\_padding} via
\ac{RelSE}.
The function takes four arguments: 1) the memory index of \texttt{s};
2) the memory index of struct
\texttt{rec}; 3) the block size, which is a public symbolic
value; and 4) the \texttt{\ac{MAC}} size which is also a
public symbolic value.
\textsc{Vivienne}\xspace recognizes public (secret) symbolic values
that start with letter \texttt{l} (\texttt{h}).
\begin{lstlisting}[style=wasmstyle,
caption={Security policy and Entry Function},
label=lst:policyentry]
(module $env
(memory (;0;) $memory (export "_memory") 2)
(public (i32.const 2000) (i32.const 2039));;s
...
(secret (i32.const 2048) (i32.const 2111));;data
)
;;definition of tls1_cbc_remove_padding-Listing 2
...
(symb_exec "tls1_cbc_remove_padding"
(i32.sconst 2000) (i32.sconst 2040) ;; concrete
(i32.sconst |l$_1$|) (i32.sconst |l$_2$|)) ;; symbolic
\end{lstlisting}
\begin{figure}
\[\arraycolsep=1.6pt
\begin{array}{lcl}
v~(values) &::=& h_{n} \alt l_{n} \alt c, \llspace c\in\mathbb{Z}, n\in\mathbb{N}_0 \\
\rho~(relational\ values) &::=& \langle v, v \rangle \\
e~(expressions) &::=& \rho \alt \add{e}{e} \alt \sub{e}{e} \alt ... \\
& & |~\myle{e}{e} \alt \load{e}{\mu} \\
i~(instructions) &::=& \brif{l} \alt ... \alt \loadins, \lspace l\in\mathbb{N}_0 \\
\hline
\mu~(memory) &::=& \bot \alt \store{e}{e}{\mu} \\
st~(stack) &::=& \varnothing \alt e :: st \\
pc~(path\ condition) &::=& \review{\trueval} \alt e \land pc \\
es~(execution\ stack) &::=& \varnothing \alt i::es \\
lv~(local\ variables) &::=& \{lv_0 \mapsto e, ..., lv_n \mapsto e\} \\
\end{array}
\]
\caption{\label{fig:sym} Symbolic Data Structures}
\end{figure}
\textbf{\acl{RelSE}}
\textsc{Vivienne}\xspace uses the above-mentioned inputs to initiate
\ac{RelSE}~\cite{farina_relational_2019-1} for the
entry function.
\ac{RelSE} performs symbolic execution on relational states representing
two program executions with identical public values but
different secret values.
We now describe the ingredients underpinning the constant-time analysis with \textsc{Vivienne}\xspace.
\paragraph{Symbolic State}
A symbolic state $\sigma$ consists of 1) the
execution stack $es$, that contains the WebAssembly\xspace instructions,
2) the symbolic stack $st$, 3) the symbolic memory $\mu$, 4) the
symbolic local (and global) variables $lv$, and 5) the path
condition $pc$.
Figure~\ref{fig:sym} summarizes these five components of a symbolic state $\sigma =
\state{ es}{st}{\mu}{lv}{pc}$.
By convention, the values starting with $h$ ($l$) are
secret (public). Our symbolic analysis operates on pairs of symbolic values $\rho$.
We write $\rho_{|l}$ ($\rho_{|r}$) to denote the first (second) element of a pair $\rho$.
For public values, we have that $\rho_{|l} = \rho_{|r}$ and
write $\val{v}$, while for secret values $\rho_{|l}$ and $\rho_{|r}$ may differ.
We lift this notation to expressions and
the memory as expected.
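As an illustration (a Python sketch based on z3's bindings; \textsc{Vivienne}\xspace itself is implemented in OCaml), relational values can be represented as pairs of symbolic bitvector expressions, with public values simply reusing the same symbol in both projections:
\begin{lstlisting}[language=Python]
from z3 import BitVec, BitVecVal, simplify

# A relational value is a pair <left, right> of 32-bit symbolic expressions.
def public(name):
    v = BitVec(name, 32)
    return (v, v)                        # rho_l and rho_r coincide

def secret(name):
    return (BitVec(name + "_l", 32), BitVec(name + "_r", 32))

def const(c):
    v = BitVecVal(c, 32)
    return (v, v)

def add(x, y):                           # operations are lifted component-wise
    return (simplify(x[0] + y[0]), simplify(x[1] + y[1]))

def sub(x, y):
    return (simplify(x[0] - y[0]), simplify(x[1] - y[1]))

h = secret("h1")
e = add(const(2112), sub(const(1), h))   # the index of the running example
print(e)    # the two projections differ, so e may be secret-dependent
\end{lstlisting}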
\paragraph{Execution Path Exploration}
We use small-step symbolic evaluation to analyze the instructions.
At every step, the analysis takes a symbolic state as input and
returns a list of symbolic states that correspond to the feasible
execution paths.
We visit the instructions in a depth-first search fashion and collect all path conditions $pc$ to check path feasibility using an \ac{SMT} solver.
\paragraph{Symbolic Stack}
The symbolic stack holds symbolic expressions $e$ resulting from
stack operations on symbolic values.
Consider the \texttt{get} instructions at lines 16--17 in Listing~\ref{lst:wfunc_loop} with
the current symbolic memory $\mu$ and empty symbolic stack $st$.
The program loads the symbolic expressions of \texttt{lv6}
i.e.\ $\val {2112}$, and \texttt{lv1}
i.e.\ $\sub{\val{1}}{\load{\val{2111}}{\mu}}$ to the stack $st$.
At line 18, the analysis of instruction \texttt{add} pops the two symbolic expressions off the
stack $st$ and pushes back the result,
$\add{\val{2112}}{\sub{\val{1}}{\load{\val{2111}}{\mu}}}$.
\paragraph{Memory Operations}
\label{par:mo}
When analyzing a memory operation at index $e$, as in \ $\state{
\loadins\cons es}{e\cons st}{\mu}{lv}{pc}$ or $\state{
\storeins\cons es}{e_1\cons e\cons st}{\mu}{lv}{pc}$, the analysis
generates a formula $\phi = (T(e)_{|r} \neq T(e)_{|l})$ to check
that the index is not secret-dependent.
The function $T: e \to \langle Exp, Exp\rangle$ translates the index
expression $e$ to a pair of \ac{SMT} expressions $Exp$.
If $e$ only depends on public values, then for all valuations of $e$,
$e_{|r} = e_{|l}$, thus $\phi$ is \textit{unsatisfiable} and the
memory operation is \texttt{safe}.
However, if $\phi$ is \textit{satisfiable}, then there are concrete
values, such that the memory addresses for the two executions,
$e_{|r}$ and $e_{|l}$, are different.
This is only possible if expression $e$ depends on secret values, and,
thus, the solution to $\phi$ reveals a violation of constant time.
In our example in Listing~\ref{lst:wfunc_loop}, load operation
\texttt{load8\_u} at line 19 has as index the top value of $st$,
$\add{\val{2112}}{\sub{\val{1}}{\load{\val{2111}}{\mu}}}$.
The policy in Listing~\ref{lst:policyentry} specifies
$\load{\val{2111}}{\mu}$ as secret, i.e.\ $\load{\val{2111}}{\mu} =
\dval{h_1}{h'_1}$ with $h_1 \ne h'_1$.
Thus, the generated formula $\phi = (2112 + (1 - {h_1})) \neq
(2112 + (1 - {h'_1}))$ is satisfiable for different values of
$h_1$ and $h'_1$.
This means that there exist two concrete executions
that differ in the memory index, which violates
constant time.
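The corresponding satisfiability query can be sketched with z3's Python bindings (for illustration only; the bit-widths and the memory encoding used by \textsc{Vivienne}\xspace may differ):
\begin{lstlisting}[language=Python]
from z3 import BitVec, BitVecVal, Solver, sat

h1, h1p = BitVec("h1", 32), BitVec("h1p", 32)    # secret byte, two projections
c2112, c1 = BitVecVal(2112, 32), BitVecVal(1, 32)

idx_l = c2112 + (c1 - h1)                        # load index, left execution
idx_r = c2112 + (c1 - h1p)                       # load index, right execution

s = Solver()
s.add(idx_l != idx_r)                            # phi: can the indices differ?
if s.check() == sat:
    print("constant-time violation, e.g.:", s.model())
else:
    print("safe memory access")
\end{lstlisting}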
\paragraph{Control-flow Instructions}
Like memory operations, control-flow instructions require checking
that boolean expression $e$, as in $\state{ \brif{0}\cons es}{e\cons
st}{\mu}{lv}{pc}$, is not secret-dependent. Our analysis generates a
formula to check whether the two paths of the relational state take
different branches.
WebAssembly\xspace considers value \textit{zero} as
false and \textit{any non-zero} value as true, hence
the generated formula is $\phi = (T(e)_{|r} = 0)
\land (T(e)_{|l} \neq 0)$.
Formula $\phi$ is satisfiable only if there is a valuation of $e$ such
that the two executions follow different execution paths, indicating a
violation of the constant-time policy.
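The same pattern applies to a \texttt{br\_if} guard, as in the following z3py sketch (again illustrative): a guard built only from public symbols yields an unsatisfiable formula, whereas a secret-dependent guard does not.
\begin{lstlisting}[language=Python]
from z3 import BitVec, Solver, sat

def branch_leaks(guard_l, guard_r):
    s = Solver()
    s.add(guard_r == 0, guard_l != 0)    # phi: the two runs branch differently
    return s.check() == sat

l = BitVec("l", 32)                                # public: shared symbol
h_l, h_r = BitVec("h_l", 32), BitVec("h_r", 32)    # secret: two projections

print(branch_leaks(l - 5, l - 5))       # False: public guard, no leak
print(branch_leaks(h_l - 5, h_r - 5))   # True: secret-dependent guard
\end{lstlisting}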
\textbf{Formula Simplification}
\label{sssec:fs}
When \ac{RelSE} needs to check the constant-time policy for an
expression $e$, it first passes $e$ to the simplification step
(SS).
SS translates the expression to a pair of \ac{SMT}
expressions, $e' = T(e)$, using the theory of bitvectors and arrays
(32-bit indexed byte array), \texttt{QF\_ABV}.
The transformation includes simplification and memoization
steps to reduce the recalculation overhead.
Finally, based on the type of the query, namely memory operation or
control-flow statement, this step generates formula $\phi$.
For our previous example, SS first
translates expression $e =
\add{\val{2112}}{\sub{\val{1}}{\load{\val{2111}}{\mu}}}$ to two SMT
expressions $2112 + (1 - h_1)$ and $2112 + (1 - h'_1)$, which
are then simplified to $2113 - h_1$ and $2113 - h'_1$,
hence the final formula becomes $\phi = (2113 - h_1) \neq (2113 - h'_1)$.
To solve the simplified formula, \textsc{Vivienne}\xspace invokes an
\ac{SMT} solver.
For simple formulas, however, the resulting $\phi$ may already be a concrete boolean, e.g., \texttt{false}, allowing \textsc{Vivienne}\xspace to skip a call to the \ac{SMT} solver.
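The effect of this step can be mimicked with z3's \texttt{simplify} and a memoization table (an illustrative sketch; the actual SS pass is part of \textsc{Vivienne}\xspace's OCaml code base and also handles the \texttt{QF\_ABV} memory encoding):
\begin{lstlisting}[language=Python]
from z3 import BitVec, BitVecVal, simplify, is_false

_cache = {}                                 # memoization of translated terms

def translate(expr):
    key = expr.sexpr()
    if key not in _cache:
        _cache[key] = simplify(expr)        # simplification + caching
    return _cache[key]

l, c = BitVec("l", 32), BitVecVal(2112, 32)
# For a public-only index both projections are the same term, so the
# constant-time formula reduces to the concrete boolean False: no solver call.
phi = translate((c + l) != (c + l))
print(phi, is_false(phi))                   # False True
\end{lstlisting}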
\textbf{\acs{SMT} Solver}
\label{sssec:smtsolver}
After the simplification step, \textsc{Vivienne}\xspace invokes an \ac{SMT} solver
for solving the simplified formula, $\phi$.
The \ac{SMT} solver of \textsc{Vivienne}\xspace has two modes, one for small formulas
and one for large and complex formulas.
For small formulas, \textsc{Vivienne}\xspace uses a solver that provides bindings to
the implementation language of \textsc{Vivienne}\xspace and thus, has a reduced
communication cost.
However, for larger formulas, the communication overhead is less
significant compared to the benefit of using a more powerful \ac{SMT}
solver.
In particular, for larger queries \textsc{Vivienne}\xspace uses a portfolio solver
where multiple solvers take the same formula as input and the solver that
finishes first returns the result.
To decide which solver mode to use, \textsc{Vivienne}\xspace uses the
\textit{number of expressions} in the formula.
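A schematic version of this two-mode strategy is sketched below (in Python, for exposition only); the expression-counting heuristic and the backends are simplified stand-ins, while the threshold of 1500 expressions matches the one used in the experiments of Sect.~\ref{sec:eval}.
\begin{lstlisting}[language=Python]
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
from z3 import Solver, parse_smt2_string

THRESHOLD = 1500                       # expressions triggering the portfolio

def n_expressions(smt2):
    return smt2.count("(")             # crude proxy for formula size

def solve_with_z3(smt2):               # cheap path: native bindings
    s = Solver()
    s.add(parse_smt2_string(smt2))
    return str(s.check())

def solve_portfolio(smt2, backends):   # expensive path: first result wins
    with ThreadPoolExecutor(max_workers=len(backends)) as pool:
        futures = [pool.submit(b, smt2) for b in backends]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()

def check(smt2, backends=(solve_with_z3,)):
    if n_expressions(smt2) < THRESHOLD:
        return solve_with_z3(smt2)
    return solve_portfolio(smt2, backends)

print(check("(declare-const x (_ BitVec 32)) (assert (distinct x x))"))  # unsat
\end{lstlisting}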
\textbf{Invariant Generation}
\label{sssec:inv}
\textsc{Vivienne}\xspace has an optional invariant generation step for analyzing loops.
When invariant generation is enabled and the analysis visits a
loop at location $loc$, \textsc{Vivienne}\xspace starts a preprocessing step to
automatically generate a relational invariant $I$.
The invariant defines the variables (local variables, global
variables, and memory) that are public, i.e.\ $I = \{ \forall
x \in V_p\subseteq V.~ x_{|l} = x_{|r} \}$, where $V$ is the set of all
variables modified in the loop and $V_p$ is the subset of the modified
variables that are public.
To discover whether a variable is public or secret, the preprocessing
step queries the \ac{SMT} solver about the security policies of the
modified variables, $V$, \review{after symbolically executing one loop iteration}.
\review{That is, given a variable $x \in V$, the preprocessing step
generates a query, $\phi = (x_{|l} \ne x_{|r})$.
%
If the query is unsatisfiable, then the variable is assumed to be
public and $x$ is added to $V_p$, otherwise, it is assumed to
be secret.
%
In the special case of $x_{|l} = x_{|r} = c \in \mathbb{Z}$, the
analysis assumes that $x$ has a symbolic value $c$ and adds the equality constraint $x = c$ to the
invariant, $I$.}
After generating invariant $I$, the analysis continues with verifying
this invariant.
To do that, \textsc{Vivienne}\xspace 1) \review{generates fresh symbolic variables} (havoc) for
all modified variables $x\in V$, 2) assumes that the invariant, $I$,
holds, 3) performs \ac{RelSE} on the loop body with the havoced values
and discovers possible vulnerabilities, 4) verifies that the invariant
holds by asserting $I$ on the new relational state.
\review{If the generated invariant is not a loop invariant,
then the last step will fail.
%
}
After analyzing the loop body, the analysis continues outside the loop.
\review{The invariant verification algorithm is a generalization of standard (functional) invariant checking, hence we expect the loop analysis to be sound, as supported by the experiments.}
Consider the loop at $loc=15$ in Listing~\ref{lst:wfunc_loop}.
Local variables 1 and 4 are modified in the loop body,
i.e.\ $V=\{lv1,lv4\}$.
Of these, $lv1$ stores \texttt{j} (line 24), which is secret because it
depends on \texttt{rec->data[l-1]}, and $lv4$ stores value 1, which
is public.
Thus, $V_p = \{lv4\}$, hence the invariant is $I = \{ lv4_{|l} =
lv4_{|r} \}$.
To analyze the loop, \textsc{Vivienne}\xspace 1) havocs $lv1$ and $lv4$, 2) assumes
the invariant $I$, i.e.\ that $lv4$ is initially public, 3) performs
\ac{RelSE} at the loop body to discover constant-time vulnerabilities,
and 4) asserts the invariant $I$.
Here, the program assigns $lv4$ only once in the loop body, at line
22, where, $lv4$ takes value one, which is public, and thus,
the invariant $I$ holds.
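The loop analysis on this example can be reproduced with the small self-contained sketch below (in Python with z3); the state representation, the \texttt{loop\_body} function, and the havoc operation are toy stand-ins for \textsc{Vivienne}\xspace's internal machinery.
\begin{lstlisting}[language=Python]
from z3 import BitVec, BitVecVal, Solver, unsat

_counter = 0
def fresh(name):
    global _counter
    _counter += 1
    return BitVec(name + "_" + str(_counter), 32)

def havoc(names):                 # fresh pair of symbols per modified variable
    return {x: (fresh(x + "_l"), fresh(x + "_r")) for x in names}

def loop_body(state):             # toy stand-in for one iteration of Listing 2
    secret = (BitVec("data_l", 32), BitVec("data_r", 32))   # rec->data byte
    one = (BitVecVal(1, 32), BitVecVal(1, 32))
    return {"lv1": (state["lv1"][0] + secret[0], state["lv1"][1] + secret[1]),
            "lv4": one}

def is_public(pair, assumptions=()):
    s = Solver()
    s.add(*assumptions)
    s.add(pair[0] != pair[1])
    return s.check() == unsat

# 1) infer the invariant from one symbolic iteration
post = loop_body(havoc(["lv1", "lv4"]))
invariant = [x for x in post if is_public(post[x])]
print("public after one iteration:", invariant)        # ['lv4']

# 2) havoc, assume the invariant, re-run the body, and re-check the invariant
state = havoc(["lv1", "lv4"])
assumptions = [state[x][0] == state[x][1] for x in invariant]
post = loop_body(state)
print("invariant preserved:",
      all(is_public(post[x], assumptions) for x in invariant))   # True
\end{lstlisting}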
\textbf{Output} \textsc{Vivienne}\xspace outputs the discovered constant-time
violations (\faBug), if any, as well as the \ac{SMT} solver-generated
counterexamples that witness these violations.
\textsc{Vivienne}\xspace is implemented as an extension of the WebAssembly\xspace reference
interpreter~\cite{ref-interpreter} in OCaml, using OCaml compiler 4.06.
\textsc{Vivienne}\xspace uses the OCaml interface of z3~\cite{de_moura_z3_2008-1} to
generate and simplify the constant-time formulas, and solve queries that
have a small number of expressions.
For larger formulas, \textsc{Vivienne}\xspace uses a portfolio solver consisting of
four solvers, i.e.\ Boolector~\cite{brummayer_boolector_2009},
Yices2~\cite{dutertre_yices_2014}, CVC4~\cite{barrett_cvc4_2011}, and
Z3~\cite{de_moura_z3_2008-1} running in parallel.
\textsc{Vivienne}\xspace is publicly available online at
\url{https://github.com/romits800/Vivienne}.
\section{Evaluation}
\label{sec:eval}
We evaluate \textsc{Vivienne}\xspace with respect to three research questions:
\textbf{RQ1: Can we use \ac{RelSE} for
constant-time analysis of real-world cryptographic implementations
in WebAssembly\xspace?}
To investigate the effectiveness and efficiency of \ac{RelSE} for
constant-time analysis on WebAssembly\xspace programs, we use \textsc{Vivienne}\xspace to analyze
the implementations of seven cryptographic libraries within a time
limit of 90 minutes.
\textbf{RQ2. To what extent do the automatically generated loop
invariants affect the scalability and precision of \ac{RelSE}?}
We evaluate \textsc{Vivienne}\xspace's support for automatic invariant generation on
our benchmarks and compare it to the results of RQ1.
\textbf{RQ3. How does \textsc{Vivienne}\xspace compare to existing approaches for
constant-time analysis of WebAssembly\xspace?}
We compare \textsc{Vivienne}\xspace with CT-wasm~\cite{watt_ct-wasm_2019-1} with regards to
simplicity, permissiveness, and efficiency.
\subsection{Experimental Setup and Overview of Benchmarks}
\label{ssec:es}
We run the experiments on a machine running Debian GNU/Linux 10 (buster) on an Intel
Core\texttrademark\ i9-9920X processor at 3.50GHz with 64GB of RAM.
We used the LLVM-10 compiler with WASI
libc~\cite{clark2019standardizing} and two optimization levels
(\texttt{-O0} and \texttt{-O3}) for compiling our C benchmarks to
WebAssembly\xspace.
\textsc{Vivienne}\xspace uses a time limit of 90 minutes for each benchmark and a
threshold of 1500 expressions to trigger a call to the portfolio
solver.
We evaluate \textsc{Vivienne}\xspace with seven cryptography libraries, including
both constant-time and non-constant-time implementations.
Some benchmarks have been used in prior
works~\cite{watt_ct-wasm_2019-1,daniel_binsecrel_2020} to evaluate
constant-time policies, which provides us with common ground for
comparison.
We extract the security policies for the first two libraries from the
type annotations of CT-wasm~\cite{ctwasm} and use the policies of
Binsec/Rel~\cite{binsec} for the other libraries.
The full details of our benchmarks are available at
\url{https://github.com/romits800/Vivienne_eval}.
\textbf{CT-wasm benchmarks (CTw):} Three handwritten WebAssembly\xspace benchmarks
from CT-wasm~\cite{watt_ct-wasm_2019-1}. We verify the
\texttt{encrypt} and \texttt{decrypt} functions of \texttt{Salsa20}
and \texttt{TEA}, and the \texttt{transform} and \texttt{update}
functions of \texttt{SHA256}.
\textbf{TweetNaCl WebAssembly\xspace (Tw):} WebAssembly\xspace implementation of
TweetNaCl~\cite{stuber_torstenstuebertweetnacl-webassembly_2019-1} previously verified by
CT-wasm~\cite{watt_ct-wasm_2019-1}.
We verify
\texttt{core\_hsalsa20}, \texttt{core\_salsa20}, and
\texttt{crypto\_onetimeauth}.
\textbf{WHACL* (WH):} A formally verified cryptography library compiled to WebAssembly\xspace~\cite{protzenko_formally_2019}.
We verify \texttt{Chacha20}, \texttt{Curve25519\_51},
\texttt{Poly1305\_32}, \texttt{Salsa20}, and \texttt{Hash\_SHA2} in WHACL*
v3.0.0.
To our best knowledge, this is the first time WHACL* is verified.
\textbf{Libsodium (L0, L3):} A
cryptography library written in C~\cite{libsodium}.
\textsc{Vivienne}\xspace verifies the constant-time implementations of
\texttt{crypto\_aead}, \texttt{crypto\_auth}, \texttt{crypto\_stream},
\texttt{crypto\_onetimeauth}, \texttt{crypto\_core}, and
\texttt{crypto\_hash} for Libsodium v.1.0.18-stable with
optimization levels \texttt{-O0} and \texttt{-O3}.
\textbf{BearSSL (B0, B3):} An
implementation of SSL/TLS in C.
We verify the constant-time functions \texttt{aes\_ct\_cbcenc} and
\texttt{des\_ct\_cbcenc}, and the non-constant-time functions \texttt{aes\_big\_cbcenc} and \texttt{des\_tab\_cbcenc}. B0 denotes these functions compiled with optimization level \texttt{-O0} and B3 the same functions compiled with \texttt{-O3}.
\textbf{Almeida et al.~\cite{almeida_verifying_2016} (A0, A3):}
Five constant-time and three non-constant-time implementations of
\texttt{select} and \texttt{sort}.
We analyze WebAssembly\xspace binaries compiled with optimization
levels \texttt{-O0} and \texttt{-O3}.
\textbf{Lucky 13 (Lu0, Lu3):} A known timing
vulnerability~\cite{al_fardan_lucky_2013} of TLS implementations (see
Listing~\ref{lst:cfunc_loop}).
We analyze function \texttt{tls1\_cbc\_remove\_padding} of
OpenSSL
1.0.1 \cite{repo1}
with optimization levels \texttt{-O0} and \texttt{-O3}.
\begin{table}
\centering
\input{results/merged_sum.tex}
\caption{\label{tab:eval} Verifying 57 cryptography functions with
\textsc{Vivienne}\xspace, with unrolling and with \review{invariant inference}.
The numbers in {\color{red}red} denote incomplete results.}
\end{table}
\subsection{Results}
\label{ssec:res}
This section discusses the evaluation results for each of the
research questions.
Table~\ref{tab:eval} presents the aggregated results of the analysis
with \textsc{Vivienne}\xspace.
The columns under \texttt{Bench} describe the benchmarks, i.e.\ the
abbreviated library name, \texttt{BS}, and the number of analyzed
algorithms, \texttt{A}.
The next two columns present \textsc{Vivienne}\xspace's results with loop unrolling
(\textsc{Vivienne}$_{\text{unroll}}$\xspace) and with loop invariant (\textsc{Vivienne}$_{\text{inv}}$\xspace).
We report the number of verified constant-time implementations,
\cmark, the number of vulnerable implementations \xmark, the number of
formulas subject to simplification, \texttt{\#FS}, and the number of
queries that \textsc{Vivienne}\xspace propagates to the \ac{SMT} solver,
\texttt{\#SS}.
Note that \texttt{\#SS} is the subset of \texttt{\#FS} that requires a
call to the \ac{SMT} solver.
We highlight in \textcolor{red}{red} the incomplete results.
For example, \textsc{Vivienne}\xspace with loop unrolling (\textsc{Vivienne}$_{\text{unroll}}$\xspace) was able
to verify successfully five out of six implementations of WH within
the time limit of 90 minutes.
Appendix~\ref{sec:fulleval} includes the full evaluation results
for \textsc{Vivienne}$_{\text{unroll}}$\xspace, while the results for \textsc{Vivienne}$_{\text{inv}}$\xspace are available
as supplementary material online~\cite{tsoupidisupp2021}.
\subsubsection{RQ1: Can we use \ac{RelSE} for
constant-time analysis of real-world cryptographic implementations
in WebAssembly\xspace? }
To evaluate the effectiveness of \textsc{Vivienne}\xspace in analyzing cryptographic
libraries, we consider the rate of successfully analyzed algorithms for both secure (\cmark) and insecure (\xmark) implementations.
The summarized results (Sum) in Table~\ref{tab:eval} show that
\textsc{Vivienne}$_{\text{unroll}}$\xspace analyzes successfully 44 out of 45 constant-time
implementations and 11 out of 12 non-constant-time implementations for
a total 55/57 implementations.
This corresponds to a 96\% success rate while reporting no false
positives.
The two outliers are \texttt{Hacl\_Curve25519\_51\_scalarmult} of WH
and \texttt{tls1\_cbc\_remove\_padding} of Lu0.
The former contains a
loop with 256 iterations, each generating 9108 queries.
One of these queries accounts for an increasingly large fraction of the total
execution time of each iteration.
The corresponding formula models the satisfiability of a branch condition
that depends on the stack pointer, which WHACL* stores in memory.
As a result, the formula has to encode the whole memory, which contributes 3054 new memory stores for every iteration, thus increasing the time for
the generation and simplification of the formula.
This can be inferred from the results of
Table~\ref{tab:eval}, where the total six implementations of WH
generate 126K formulas (\texttt{\#FS}), of which 80896 correspond to
\texttt{Hacl\_Curve25519\_51\_scalarmult}.
The second outlier is
\texttt{tls1\_cbc\_remove\_padding} with \texttt{-O0}, which contains
a loop with non-constant bound, as reported in line 9 in Listing~\ref{lst:cfunc_loop}.
The lack of a constant bound forces \textsc{Vivienne}$_{\text{unroll}}$\xspace to consider all possible values for \texttt{rec->data[l-1]}, which is an eight-bit value.
This leads to maximum 256 iterations for every path that visits the
loop.
We find that the optimization level \texttt{-O0} includes a number of
stack operations that modify the memory at every iteration.
As we can see in Table~\ref{tab:eval}, this leads to 25K \texttt{\#FS}
and 4K \texttt{\#SS}.
The former requires on average 0.01 seconds (4 minutes in total) for simplification, whereas the latter requires 0.87 seconds (58 minutes in total) for SMT solving.
In summary, our results show that \ac{RelSE} can be used to analyze
real-world cryptographic implementations, while the memory operations
and loops remain the main bottleneck for the SMT solver.
\textsc{Vivienne}$_{\text{inv}}$\xspace addresses the challenge of loops by generating relational
loop invariants automatically.
Further discussion of the SMT solver results of our
analysis is available as supplementary
material~\cite{tsoupidisupp2021}.
\subsubsection{RQ2. To what extent do the automatically generated loop
invariants affect the scalability and precision of \ac{RelSE}?}
Our results in Table~\ref{tab:eval} show that \textsc{Vivienne}$_{\text{inv}}$\xspace is able to
successfully analyze constant-time implementations for the first three
benchmark libraries.
It also analyzes successfully the implementations of WH and Lu0
that \textsc{Vivienne}$_{\text{unroll}}$\xspace could not handle.
Perhaps surprisingly, \textsc{Vivienne}$_{\text{inv}}$\xspace performs poorly on the benchmarks
B0, B3, L0, and L3, analyzing only 29\% of the implementations.
The main reason is that the havocing of modified variables during the
invariant generation replaces constant values with unbounded symbolic
values.
This triggers a path explosion whenever a conditional instruction is
analyzed with the new symbolic values.
Moreover, it increases the search space for the solver and the
complexity of queries whenever a symbolic value indexes the memory in
store operations.
In Table~\ref{tab:eval}, the number of solver queries, \texttt{\#SS},
for \textsc{Vivienne}$_{\text{inv}}$\xspace is larger than for \textsc{Vivienne}$_{\text{unroll}}$\xspace, which reflects the
increased complexity, since the solver queries
(\texttt{\#SS}) count the formulas that cannot be resolved during
the simplification stage.
For the benchmarks Tw and
B3, the number of queries, \texttt{\#FS}, also increases due to path
explosion.
By contrast, for the benchmarks that \textsc{Vivienne}$_{\text{inv}}$\xspace analyzes
successfully, \texttt{\#FS} decreases due to the reduction of loop
iterations by the loop invariant.
In summary, \textsc{Vivienne}$_{\text{inv}}$\xspace analyzes
successfully 56\% of the implementations, including
two implementations for which \textsc{Vivienne}$_{\text{unroll}}$\xspace failed.
This shows that \textsc{Vivienne}$_{\text{inv}}$\xspace complements \textsc{Vivienne}$_{\text{unroll}}$\xspace for
constant-time analysis.
\subsubsection{RQ3: How does \textsc{Vivienne}\xspace compare to existing approaches for constant-time analysis of WebAssembly\xspace?}
To our best knowledge, CT-wasm~\cite{watt_ct-wasm_2019-1} is the only
constant-time analysis tool for WebAssembly\xspace.
We consider three
dimensions for comparison: 1) simplicity, 2) permissiveness, and 3) efficiency.
Simplicity refers to the user effort required to verify a
target implementation.
CT-wasm relies on type annotations for the program,
which can be partially inferred~\cite{watt_ct-wasm_2019-1}.
By contrast, \textsc{Vivienne}\xspace requires only the security policies and
entry-point function, otherwise no further modifications to the
generated WebAssembly\xspace binary are needed.
This reduces the user effort for analyzing a program.
Permissiveness refers to the ability of the method
to analyze and successfully verify cryptographic implementations.
CT-wasm considers the whole memory as secret, which rules out any
secure programs that store public values in memory.
For example, CT-wasm required refactoring three functions of the
TweetNaCl~\cite{stuber_torstenstuebertweetnacl-webassembly_2019-1}
library, i.e.\ \texttt{poly1305\_blocks}, \texttt{poly1305\_update},
and \texttt{poly1305\_finish}, to make it amenable to verification.
By contrast, \textsc{Vivienne}\xspace analyzes and verifies
the whole implementation of \texttt{crypto\_onetimeauth},
with no modifications to the original code.
Moreover, \textsc{Vivienne}\xspace could analyze 57 WebAssembly\xspace implementations,
including the two libraries CTw and Tw which were verified by
CT-wasm~\cite{watt_ct-wasm_2019-1}.
With regards to efficiency, CT-wasm is clearly superior to \textsc{Vivienne}\xspace
because it relies on type checking, while \textsc{Vivienne}\xspace performs expensive
symbolic analysis and constraint solving.
However, as we have seen in RQ1, \textsc{Vivienne}\xspace was still able to analyze
real-world WebAssembly\xspace implementations within a reasonable time limit.
To summarize, \textsc{Vivienne}\xspace verifies a larger number of cryptographic
implementations than CT-wasm with no need for refactoring and with
minimal annotation efforts at the expense of an efficiency cost.
\subsubsection{Discussion}
\review{Ideally, an accurate analysis should be implemented as close to the
hardware as possible to avoid vulnerabilities introduced by compiler
transformations.
For \textsc{Vivienne}\xspace, the structured control flow of WebAssembly\xspace facilitates the
analysis, while binary-level analyses face challenges with
unstructured control flow and diversity of
architectures~\cite{balliu_automating_2014,daniel_binsecrel_2020}.
This raises the question of whether constant-time programs at WebAssembly\xspace
level preserve the property at the machine level.
The machine code generated from a WebAssembly\xspace binary relies on the
compiler of the respective runtime system.
Unfortunately, a direct analysis of this machine code with tools like
Binsec/Rel~\cite{daniel_binsecrel_2020} is not possible due to the
different calling conventions and implementation details of
Binsec/Rel.
A comparison of \textsc{Vivienne}\xspace's results at the WebAssembly\xspace level with Binsec/Rel's results at the machine level for the benchmarks L0, L3, B0, B3, A0, A3, Lu0, and Lu3
in Table~\ref{tab:eval} shows that both tools yield the same result
on all benchmarks, except of the \texttt{select} implementations of
the benchmarks of Almeida et al.~\cite{almeida_verifying_2016}.
The difference is manifested in the compilation of the \texttt{select}
implementations at optimization level \texttt{-O3}, which Binsec/Rel
identifies as insecure.
In our experiments, LLVM-10 with flag \texttt{-O3} compiles all the C
implementations of \texttt{select} (A3 in Table~\ref{tab:eval}) to
one WebAssembly\xspace \texttt{select} instruction.
The compilation from WebAssembly\xspace to machine code translates the WebAssembly\xspace
\texttt{select} instruction either to a constant-time conditional
assignment (safe), e.g.\ \texttt{cmov} for x86, or to a set of
instructions that includes a branch instruction (unsafe), depending on the
target machine and the compiler implementation.
To account for these differences, \textsc{Vivienne}\xspace provides a command-line
option for treating the WebAssembly\xspace \texttt{select} instruction as unsafe.}
\begin{comment}
\subsubsection{RQ4. How accurately does our analysis at the WebAssembly\xspace level reflect the analysis at binary level?}
Ideally, an accurate analysis should be implemented as close to the
hardware as possible to avoid vulnerabilities introduced by compiler
transformations.
For \textsc{Vivienne}\xspace, the structured control flow of WebAssembly\xspace facilitates the
analysis, while binary-level analysis face challenges with
unstructured control flow and diversity of
architectures~\cite{balliu_automating_2014,daniel_binsecrel_2020}.
This raises the question of whether constant-time programs at WebAssembly\xspace
level preserve the property at the binary level.
Inside the browser, WebAssembly\xspace code relies on
the browser's \ac{JIT} compiler which hinders the direct
analysis of the machine code.
Outside the browser, CT-wasm~\cite{watt_ct-wasm_2019-1} provides a modified
implementation of \texttt{Node.js} which preserves constant-time for the generated machine code; \textsc{Vivienne}\xspace can leverage the same implementation to preserve security.
Unfortunately, a direct comparison with tools like Binsec/Rel~\cite{daniel_binsecrel_2020} not possible due to the different calling
conventions of the runtime and the implementation details of Binsec/Rel.
Therefore, we compare \textsc{Vivienne}\xspace's results at the WebAssembly\xspace level to Binsec/Rel's results at the binary level. Both tools yield the same result, except for the Almeida et
al.~\cite{almeida_verifying_2016} benchmarks.
The difference is manifested in the compilation of \texttt{select} instruction at optimization level \texttt{-O3}, which Binsec/Rel identifies as insecure.
Specifically, LLVM-10 with flag \texttt{-O3} compiles
C implementations of \texttt{select} from A3 to one WebAssembly\xspace \texttt{select}
instruction.
The compilation from WebAssembly\xspace to binary code
translates the \texttt{select} to either a constant-time conditional
assignment (safe), e.g.\ \texttt{cmov} for x86, or to a branch
instruction (unsafe), depending on the target machine and the compiler
implementation.
To account for these differences, \textsc{Vivienne}\xspace provides a command-line
option for treating the \texttt{select} instruction as unsafe.
To summarize, constant-time verification at the WebAssembly\xspace level
should rely on constant-time compiler implementations, like the existing Node.js
implementation.
For other environments, \textsc{Vivienne}\xspace provides a command-line option to
that treats the \texttt{select} instruction
as either safe or unsafe, depending on the target architecture
and the compiler implementation.
\end{comment}
\section{Conclusion}
\label{sec:conc_fw}
This paper presented \textsc{Vivienne}\xspace, an open-source tool for analyzing
constant-time for WebAssembly\xspace programs.
\textsc{Vivienne}\xspace relies on \ac{RelSE} and leverages the structure of WebAssembly\xspace to
implement several optimizations, including automated invariant
generation.
We used \textsc{Vivienne}\xspace to analyze successfully 57 cryptographic
implementations with minimal annotation overhead and no code
refactoring.
Moreover, \textsc{Vivienne}\xspace is the first tool to verify constant time for the
WebAssembly\xspace implementation of HACL*.
\section*{Acknowledgments}
We thank anonymous reviewers for their helpful feedback. This work is partially supported by the Wallenberg AI, Autonomous Systems, and Software Program (WASP) funded by Knut and Alice Wallenberg Foundation, the TrustFull project funded by the Swedish Foundation for Strategic Research (SSF), the JointForce project funded by the Swedish Research Council (VR), and Digital Futures.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
Strong gravitational lensing (SGL) is a powerful tool to investigate a large variety of open questions in cosmology. The formation of the distorted images of background galaxies, the ``sources'', depends on the total mass of the foreground gravitational systems acting as ``deflectors'' or ``lenses''. In case the deflectors are galaxies, SGL provides us with
accurate constraints on different properties
correlated to their total mass, e.g., the mass-to-light ratio \citep{2021MNRAS.506.6144G}, the dark matter fraction \citep{2009ApJ...705.1099A,2010ApJ...721L...1T}, the slope of the total density profile (\citealt{2006ApJ...649..599K,2009ApJ...703L..51K}, \citealt{2009ApJ...705.1099A}) and its relation with other parameters (\citealt{2012ApJ...757...82B}, \citealt{2015ApJ...803...71S}; \citealt{2018MNRAS.480..431L}), the evolution history of the galaxies via merging (\citealt{2012ApJ...757...82B}, \citealt{2013ApJ...777...98S,2014ApJ...786...89S,2015AAS...22530902S}),
the initial mass function in massive ellipticals (\citealt{2011MNRAS.417.3000S, 2012MNRAS.423.1073B}), and the dark substructures around large galaxies (\citealt{ 2018MNRAS.481..819G,2019A&A...631A..40S}).
Moving to more cosmological constraints, SGL can also provide a method to measure the Hubble constant ($H_0$) and other cosmological parameters (\citealt{2013ApJ...766...70S, 2017MNRAS.468.2590S,2019MNRAS.490..613S,2020MNRAS.498.1440R,2020MNRAS.498.1420W}).
Strong lenses are generally searched for in imaging data, where one can clearly distinguish the lensing features in the form of arcs or multiple images of faraway sources, such as galaxies or quasars
(\citealt{2008ApJ...682..964B}; \citealt{2012ApJ...744...41B}; \citealt{2013ApJ...777...98S}; \citealt{2013ApJ...766...70S}). Here, a great impulse to lens hunting has been recently provided by automated tools for lens finding \citep{2014ApJ...785..144G}. In particular, machine learning (ML) techniques have lately been found to be very powerful in collecting hundreds of high-quality (HQ) candidates (Dark Energy Survey -- DES: \citealt{2019yCat..22430017J}, Kilo Degree Survey -- KiDS: \citealt{2019MNRAS.482..807P}; \citealt{2020ApJ...899...30L, 2021ApJ...923...16L}, Hyper Suprime-Cam -- HSC: \citealt{2018PASJ...70S..29S}).
After the identification of HQ candidates, a spectroscopic follow-up is needed to confirm their gravitational lensing nature (\citealt{2019A&A...625A.119M}; \citealt{2021MNRAS.508.1686W}). In practice, one needs to collect the spectra of the lens and the source and measure their respective redshifts, confirming that the lens is located in front of the source as expected from
ray-tracing lensing models
(see \citealt{2018ApJ...853..148C}, \citealt{2020ApJ...904L..31N}).
This is a severe bottleneck in SGL studies and, so far, there have been only sparse programs dedicated to these follow-up observations (\citealt{2019MNRAS.483.3888S,2019MNRAS.485.5086S}; \citealt{2020MNRAS.494.3491L}; \citealt{2020MNRAS.494.1308N}). However, future large sky spectroscopic surveys (e.g. Taipan: \citealt{2017PASA...34...47D}, 4MOST: \citealt{2012SPIE.8446E..0TD}, DESI: \citealt{2016arXiv161100036D}) will provide an unprecedented opportunity for massive
follow-ups of lensing candidates, e.g. by reserving dedicated observing niches in wide programs, or accommodating them as filler targets in large-sky, multi-purpose surveys.
More interestingly, these large spectroscopic sky surveys will offer a unique chance to be used
as a playground for lens finding, e.g. by looking for blended emission lines of background galaxies, e.g. star-forming systems, in the spectrum of a foreground massive galaxy acting as a lens.
This method has been extensively used in the last years to produce tens of discoveries of new unknown lens candidates.
The first example of a search of this kind was presented by
\citet{2004AJ....127.1860B}, within The Sloan Lens ACS (SLACS). They found 49 SGL candidates in 50\,996 Sloan Sky Digital Survey (SDSS) spectra of luminous red galaxies (LRG).
They used
the principal-component analysis to subtract the main components of the foreground LRG spectrum and a Gaussian kernel to find the best emission lines in the residual flux. They mainly focused on [OII] (3728\AA), [OIII] (4960\AA, 5007\AA) and $H_\beta$ (4863\AA) lines, hence exploring a redshift range of $z=(0.16-0.49)$ for the lenses and $z=(0.25-0.81)$ for the sources.
Later analyses increased the number of SLACS candidates to 131 (\citealt[][SLACS hereafter]{2008ApJ...682..964B}).
Within the BOSS Emission-Line Lens Survey, \citet[][BELLS hereafter]{2012ApJ...744...41B}
extended the spectroscopic search in SLACS
to higher redshift, looking for lenses up to $z=0.7$ and the background sources
up to $z=1.4$, with no color pre-selection. This allowed them to finally find
45 SGL candidates in 133\,852 SDSS galaxy spectra.
Along the same line of approaches, in the SLACS Survey for the Masses (S4TM) project, \citet[][S4TM hereafter]{2015ApJ...803...71S,2017ApJ...851...48S} have extended the search for SGL candidates to lower masses
and found 118 new lens candidates.
On the other hand, \citet[][BELLS GALLERY]{ 2016ApJ...824...86S,2016ApJ...833..264S} and \cite{2020MNRAS.499.3610C} looked for high-redshift Ly$\alpha$ emitters as background sources,
and found 361 candidates.
The main disadvantage of these spectroscopically selected samples is the lack of imaging. Indeed,
even if spectra can provide evidence of two different emitting sources located at different redshifts along the line of sight, they cannot guarantee that these represent an SGL event.
Hence, high-resolution imaging
from space telescopes or adaptive optics is needed to obtain a visual confirmation of the lenses.
Currently, there are 135 confirmed lenses with Hubble Space Telescope (HST) observations of the 294 selected using optical lines (70/131 from SLACS, 25/45 from BELLS, 40/118 from S4TM), and 17/21 Ly$\alpha$ candidates from BELLS GALLERY.
With the lesson learned from SDSS/BOSS, other experiments have combined the spectroscopic selection and imaging: \citet{2016ApJ...832..135C} matched 45 spectra from the Galaxy And Mass Assembly (GAMA) survey and confirmed 10 of them with Hyper Suprime-Cam (HSC) imaging; \citet{2022MNRAS.510.2305H} selected lens candidates in AAOmega spectra and followed up 56 of them with HST to find 9 confirmations.
The discovery power of this approach will be pushed to unprecedented limits by future surveys combining spectroscopy and imaging from space (e.g. Euclid mission and CSST)
and produce a revolution in the lensing searches.
However, this revolution will stand on the ability to effectively analyze
gigantic spectroscopic data loads,
which will imply the inspection of millions of spectra and the identification of (sometimes very faint) emission lines from background lensed systems. This is a prohibitive task for standard human-driven analyses, unless one adopts severe selections to reduce the number of spectra to visually inspect.
Machine learning techniques can provide, instead, fast and efficient methods to overcome these difficulties and systematically search for lensing features in spectra. For instance, Convolutional Neural Networks (CNNs) have been previously applied
for lens searches
by \cite[][Li+19, hereafter]{2019MNRAS.482..313L}. In particular, they have
focused on the identification of Ly$\alpha$ emitters
at higher redshift ($2<z<3$) in the spectra of lower redshift early-type galaxies ($z<0.6$), and showed that
these techniques can be efficiently used as a classifier for galaxy spectra.
In this paper, we expand this approach and develop a new CNN tool to look for SGL
in the Baryon Oscillation Spectroscopic Survey (BOSS) spectroscopic database (\citealt{2016AJ....151...44D}). Since these sources are usually star forming galaxies,
we plan to use machine learning techniques to search for higher redshift emission lines such as [OII], [OIII], $H_\alpha$, $H_\beta$ and $H_\delta$
mixed in the foreground galaxy spectra.
To do that, we build 3 CNN models:
a classifier, to search for reliable emission lines in spectra, and two regression models, to measure the foreground galaxy and the background source redshifts, respectively.
Then, we
combine the predictions of the 3 CNNs to provide a list of high probability events that we visually inspect to select HQ candidates.
Finally, we compare this {\it first deep learning spectroscopically selected sample} with the most complete spectroscopic sample of SGL candidates in BOSS observations from \citet[][T+21 hereafter]{2021MNRAS.502.4617T}, obtained with standard cross-correlation techniques. This catalog consists of 838 likely, 448 probable, and 265 possible strong lens candidates, for a total of 1551 objects. They have also obtained a preliminary confirmation of 477 of them with low-resolution imaging.
The paper is organized as follows. In \S2, we will introduce the whole idea of this project and the details of the new CNN models. In \S3, we introduce the modelled emission lines and the construction of training data. In \S4, we show the training and testing results of the new CNNs. In \S5, we will apply the new CNNs to the BOSS spectra and derive a list of candidates
that we qualify via visual inspection of their spectra, finally providing a catalog of HQ candidates. In \S6, we will discuss the results and estimate a tentative confirmation rate based on the match with ground-based imaging. We also discuss some avenues for improvement of future CNNs. In the final \S7, we draw some conclusions.
\section{Convolutional Neural Networks for 1D spectroscopy}
\subsection{The challenge of searching for strong lenses in spectra}
\label{sec:challange}
Next generation spectroscopic surveys will target tens of millions of galaxies (\citealt{2019BAAS...51c.363M}). These huge samples will allow us to
systematically search for high-probability candidates from integrated spectra, as the number of expected events is noticeable, given the large number of background galaxies potentially giving rise to lensing events.
Using the set of predictions from Collett (2015)\footnote{https://github.com/tcollett/LensPop}, we have estimated that the number of lenses with a $1''$ Einstein radius, $R_{\rm E}$, producing lensed images of the source, observable with a spectroscopic survey with a 2$''$ diameter fiber, over 15\,000 deg$^2$ of the sky, is of the order of 7\,000. This is obtained assuming that the source is bright enough in some visual band (e.g. Euclid visual mag$=24.5$) to make also the signal-to-noise ratio of the emission lines high enough to be detected from the ground for typical spectroscopic surveys (e.g. 4MOST or DESI).
This estimate is subject to different factors, including some flux loss, but it also excludes the contribution from sources with slightly larger $R_{\rm E}$s that might eventually scatter part of their light into a 2$''$ fiber. Hence, combining all these effects, this forecast is possibly not far from realistic. This is an extremely valuable wealth of data, because it provides for free the information on the lens and the source redshifts, which are crucial for the lensing modeling. Standard techniques based on sophisticated selection criteria (T+21) still require rather time-consuming visual inspection. Hence, more practical solutions to perform a systematic search for lens candidates in these datasets are mandatory.
This is possibly true also for current spectroscopic surveys.
For instance, using the same set of predictions
for the BOSS area ($\sim 10000$ deg$^2$), and assuming a fainter limiting magnitude for the sources, $r=$23.5, we get $\sim 920$ lenses within a 2$''$ fiber, which become $\sim 1470$ within a 3$''$ fiber (e.g. the one available for SDSS releases earlier than 12). Currently, the largest collection of candidate lenses with BOSS spectra consists of 477 objects with lensing evidence within low-resolution images (T+21).
Taking this sample as a {\it bona fide} high-completeness sample, this is rather far from the expected number of discoverable lenses, meaning that there might be more lenses to find in the full BOSS dataset. Given the full set of BOSS spectra available, i.e. $\sim2.6$M items (\citealt{2020ApJS..249....3A}), this means that we should expect one real blended emission line object every $\sim$3500 spectra.
In this work, we want to tackle the problem of
systematic searches of lens candidates in spectra with
deep learning, and use the BOSS dataset to test the efficiency of this approach.
\begin{comment}
Previous collection of $176$, of which only 93 have been confirmed with space observations (SLACS: \citealt{2008ApJ...682..964B}; BELLS: \citealt{2012ApJ...744...41B}), m
number of candidate lenses with SDSS/BOSS spectra is $176$, of which only 93 have been confirmed with space observations (SLACS: \citealt{2008ApJ...682..964B}; BELLS: \citealt{2012ApJ...744...41B}), meaning that there is a large portion of potential lenses yet to be discovered.
Given the full set of SDSS/BOSS spectra available, i.e. $\sim2.6$M items (\citealt{2020ApJS..249....3A}), and taking into account that visually inspecting these spectra to look for emission lines of the background source is prohibitive, the only viable solution to perform a systematic search of lens candidates
remains machine learning, even in the current datasets.
\end{comment}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{paper/fig/CNN_model_detail.png}
\caption{The CNN model adopted for GaSNet-L1, L2, and L3. The network structure is the same for the three GaSNets, except for the activation and loss functions, as reported in Table \ref{table:my activation and loss}.}
\label{CNN_model_detail}
\end{figure}
\subsection{Convolutional Neural Networks as lens classifiers in 1D spectra}
When searching for strong lens candidates in 1-dimension (1D) spectra, one needs to
identify two main features: 1) the potential emission lines from the background sources, to determine
the redshift of the source, and
2) the absorption or emission lines of the foreground galaxies, to determine the redshift of the lens and compare this with the one of the putative source to possibly qualify the whole system as a lensing candidate.
In most of the current and planned surveys, the redshift of the main galaxy (the lens) is a standard data product, hence this can be assumed to be a label of the spectra under use.
This can be either used as a first guess for the lens classifier, to estimate the lens redshift itself,
or kept fixed, asking the CNN to identify tentative background lines (see below).
\subsection{Galaxy Spectra convolutional neural Networks (GaSNets)}
\label{sec:galnets}
In this work, we present the first set of Galaxy Spectra convolutional neural Networks (GaSNets) for Lensing (-L). These are CNNs trained to identify strong lensing event candidates in 1D galaxy spectra. To perform this task, we have built
three different GaSNet models:
\begin{enumerate}
\item GaSNet-L1. This CNN is a classifier, trained to look for the presence of emission lines blended in the features of the foreground galaxy and to give the probability that the system is a lens ($P_L$). In doing this, we do not assume any specific morphology for the lens, which can be either a standard early-type galaxy (ETG), dominated by absorption line features, or a late-type galaxy (LTG), with ongoing star formation. GaSNet-L1\ will learn whether, in the spectra of either kind, there are higher-redshift emission lines, to finally give the $P_L$.
\item GaSNet-L2. This CNN is a regression algorithm, trained to identify potential emission lines, among a list of standard features from star-forming galaxies, overlapping a foreground galaxy spectrum and predict their redshift ($z_{PE}$).
\item GaSNet-L3. This CNN is also a regression algorithm, trained to predict the redshift of the foreground galaxy ($z_{PG}$) from the combination of the continuum plus a) classical absorption features from ETG spectra or b) emission lines of LTGs.
Having such an output will make the overall network general enough to be applied to spectroscopic databases, regardless of whether these have gone through a pipeline to estimate galaxy redshifts. In our analysis below, even though
we can assume that the redshifts of the lenses are given (as they are provided with the BOSS spectra, see Sect. \ref{sec:data}), we opt to use the redshift predictions of our GaSNet-L3\ for the candidate selection and use the BOSS redshifts as ground truth to assess the accuracy of the deep learning estimates.
\end{enumerate}
The three CNN models have the same structure. They are built with 6 convolutional layers and 3 fully connected layers (see Fig. \ref{CNN_model_detail}), assembled with the Python modules TensorFlow and Keras.
In the last layer in Fig. \ref{CNN_model_detail}, due to the different tasks to perform (classification vs. regression), for GaSNet-L1\ we use a ``sigmoid'' activation function (labeled as ``my activation''), while for GaSNet-L2\ and L3 we need no activation.
For the same reason, we also use different loss functions.
For GaSNet-L1\ we adopt a ``binary cross-entropy'' loss, which is commonly used for a binary classifier. For GaSNet-L2\ and GaSNet-L3, which are two regression models, instead of the commonly used MAE and MSE loss functions, we apply the ``Huber'' loss. This is defined as
\begin{equation}
L_{\delta}(a) =
\begin{cases}
\dfrac{1}{2}a^2, & |a|\leq\delta\\
\delta\left(|a|-\dfrac{1}{2}\delta\right), & {\rm otherwise},
\end{cases}
\end{equation}
where $a=y_{\rm true}-y_{\rm pred}$, $y_{\rm true}$ is the real redshift, $y_{\rm pred}$ is the redshift predicted by the CNNs, and $\delta$ is a parameter that can be preset ($0.001$ in this work). The choice of the ``Huber'' loss has been made because, as shown for CNN regression models of galaxy light profiles (i.e. the GaLNets, \citealt{2021arXiv211105434L}), it can achieve higher accuracy than MAE and MSE and better convergence.
Both ``activation'' and ``loss functions''
are summarized in Table \ref{table:my activation and loss}.
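For illustration, a minimal Keras sketch of this layout is given below. The numbers of filters, kernel sizes, pooling and dense-layer widths are illustrative assumptions rather than the actual GaSNets\ hyper-parameters; only the overall architecture (six convolutional layers, three fully connected layers, sigmoid output for GaSNet-L1, linear output with a Huber loss for GaSNet-L2\ and GaSNet-L3) follows the description above.
\begin{verbatim}
# Minimal sketch of the GaSNet layout (illustrative hyper-parameters).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_gasnet(n_pix, task="classifier"):
    net = [tf.keras.Input(shape=(n_pix, 1))]
    for n_filters in (16, 32, 32, 64, 64, 128):      # 6 convolutional layers
        net += [layers.Conv1D(n_filters, 5, padding="same", activation="relu"),
                layers.MaxPooling1D(2)]
    net += [layers.Flatten(),
            layers.Dense(256, activation="relu"),    # 3 fully connected layers
            layers.Dense(64, activation="relu")]
    if task == "classifier":                         # GaSNet-L1
        net += [layers.Dense(1, activation="sigmoid")]
        loss, metrics = "binary_crossentropy", ["accuracy"]
    else:                                            # GaSNet-L2 / GaSNet-L3
        net += [layers.Dense(1)]                     # no activation
        loss, metrics = tf.keras.losses.Huber(delta=0.001), ["mae"]
    model = models.Sequential(net)
    model.compile(optimizer="adam", loss=loss, metrics=metrics)
    return model
\end{verbatim}
The same function builds either the classifier or a regression network, switching only the output activation and the loss, as in Table \ref{table:my activation and loss}.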
\begin{table}
\footnotesize
\caption{CNN ``my activation'' and ``loss function''
}
\begin{tabular}{l | c | c | c}
\hline
\hline & GaSNet-L1\ & GaSNet-L2\ & GaSNet-L3\ \\
\hline my \ activation & sigmoid & - & - \\
\hline loss \ function & binary \ crossentropy & huber\_loss & huber\_loss \\
\hline
\hline
\end{tabular}
\label{table:my activation and loss}
\end{table}
\begin{figure}
\centering
\includegraphics[height=10cm,width=1\linewidth]{paper/fig/whole_prej.png}
\caption{Flow-chart describing the process to obtain the HQ candidates combining the output of the three GaSNets. The final step is the visual inspection of the candidates selected using the probability criterion, $P_L > P_{\rm thresh}$, combined with the presence of background emission lines, $z_{PE} > z_{PG} +0.1$.
\label{whole_CNN_model}}
\end{figure}
From Fig. \ref{CNN_model_detail} we see that the CNNs all accept a 1D spectrum (i.e. a vector of wavelengths and fluxes) as input and produce as output either a probability ($P_L$, for GaSNet-L1) or a redshift ($z_{PE}$ and $z_{PG}$, for GaSNet-L2\ and GaSNet-L3, respectively).
\subsection{Decomposing a complex CNN model}
\label{sec:dcompose of CNN}
To conclude this section, we briefly discuss the choice
to combine the outcome of three CNNs to improve the accuracy of the identification of high-quality (HQ) candidates and minimize the chance of false detection.
This task involves two steps: 1) the identification of different kinds of features that can suggest the presence of a lensing event, i.e. the coexistence of absorption and emission lines from different objects along the line-of-sight, and 2) the verification that
(some of) the emission lines come from the background system. This is a complex classification task
that can be more efficiently performed by
combining different CNNs with different specializations.
Indeed, GaSNet-L1\ is designed to identify
a specific series of emission lines at a higher redshift overlying
a lower redshift spectrum characterised either by a continuum plus absorption lines typical of ETGs or continuum plus emission lines from LTGs. Even if the training sample is made of real galaxy spectra, where the simulated emission lines from mock background sources are randomly
redshifted
with respect to the main galaxy (see \S\ref{sec: training data}), GaSNet-L1\
can only give a probability of the coexistence of a lens and a source at different redshifts, but cannot predict by how much the emissions of the source are misplaced. Since this process can be uncertain, we cannot exclude that
GaSNet-L1\ can confuse a lensing event with other ``local'' emission processes (e.g. active galactic nuclei,
ongoing star-formation, gas outflows,
etc.), and vice versa. On the other hand, GaSNet-L2\ and GaSNet-L3\ are able to predict the redshift of the tentative source and of the lens, independently,
meaning that, individually, they cannot establish whether there is another object at a different redshift, compatible with a lensing event.
Only using the outputs of these three GaSNets together, we can both give a ``high probability'' that there are two different systems contributing to the spectrum and establish that the closer one is a galaxy, with redshift $z_{PG}$, and the background one is a fainter line emitter, with redshift $z_{PE}$.
In particular, to qualify a spectrum as a candidate, we use the following conditions: 1) $P_L > P_{\rm thresh}$, and 2) $z_{PE} > z_{PG}$, where $P_{\rm thresh}$ is an appropriate lower probability threshold, to be chosen later, which defines the high-probability candidates that are passed to the visual inspection, finally producing the HQ candidate list.
The full process for the selection and grading of the HQ candidates is schematized in Fig. \ref{whole_CNN_model}.
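As a minimal illustration of this selection step, the combination of the three GaSNet outputs can be sketched as follows; the default values ($P_{\rm thresh}=0.95$ and a minimum redshift separation of 0.1) simply anticipate the choices discussed later and in Fig. \ref{whole_CNN_model}.
\begin{verbatim}
# Sketch of the pre-selection combining the three GaSNet outputs.
def is_hq_precandidate(P_L, z_PE, z_PG, P_thresh=0.95, dz_min=0.1):
    """Flag a spectrum for the subsequent visual inspection."""
    return (P_L > P_thresh) and (z_PE > z_PG + dz_min)
\end{verbatim}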
\begin{figure}
\hspace{-0.5cm}
\centering
\includegraphics[width=1\linewidth]{paper/fig/all_spe/All_specZGdis.png}
\includegraphics[width=1\linewidth]{paper/fig/all_spe/All_specSNRdis.png}
\caption{The redshift and SNR distribution of the DR16-predictive sample (see text for details).
}
\label{fig:z_dis_and_SNR}
\end{figure}
\section{Training and predictive data}
\label{sec: training data}
The construction of the training sample is a critical step of any supervised ML algorithm. Indeed, to avoid biased predictions and fictitious performances, the training samples need to be
as close as possible to real observations.
In our case we build our training sample starting from real spectra from BOSS, over which we simulate the presence of emission lines.
Here below, we first introduce the dataset we use for our analysis. Then, we describe the way we have constructed the training set.
It consists of two samples.
First, the {\it negative} sample, representing a catalog of galaxy spectra with no background sources blended-in. As mentioned earlier, we do not make any selection of galaxy types and we include ETGs and LTGs.
Second, the so-called {\it positive} sample, representing a simulated sample of spectra that emulates the presence of emission lines from a background source. This is made of the same galaxy spectra of the negative sample, but with the addition of artificial emission lines, redshifted with respect to the ``foreground'' galaxies.
\subsection{Data selection and predictive sample}
\label{sec:data}
The Sloan Digital Sky Survey (SDSS, see \citealt{2000AJ....120.1579Y})
has observed over 10\,000 deg$^2$ of the sky, performing multi-band photometry and spectroscopy \citep{1999cs........7009S}.
In 2009, before the start of the Baryon Oscillation Spectroscopic Survey \citep[BOSS,][]{2009astro2010S.314S},
in the third stage of the project (SDSS-III),
the spectrograph used for the observations was upgraded.
Compared to SDSS-I/II, the number of fibers was increased from 640 to 1000, and the fiber diameter was reduced from 3$''$ to 2$''$
\citep{2012ApJS..203...21A}.
The extended version of the BOSS survey, eBOSS \citep{2016AJ....151...44D}, has overall
produced spectra for around 2.6 million galaxies, in the wavelength range 361–1014 nm. These are
publicly available through the latest data release 16 (DR16, \citealt{2020ApJS..249....3A}). This is the dataset we use in this work\footnote{For convenience we will address this as eBOSS or DR16.}, over which we operate a series of selections to ensure the quality of the spectra to analyse.
In particular, we select only: 1) plates labeled as ``good'' quality, 2) ``Object'' flags labeled as ``galaxy'', 3) spectroscopic redshift between $0.05-0.8$, 4) spectra with SNR$>2$, 5) wavelength range 3700–9200\AA.
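For illustration, these cuts can be applied to a spectroscopic catalogue as in the sketch below; the column names are indicative and may differ from the actual eBOSS table ones, and the wavelength cut (criterion 5) is applied to the spectra themselves rather than to the catalogue.
\begin{verbatim}
# Sketch of the DR16-predictive sample selection (column names are indicative).
import numpy as np

def select_predictive_sample(cat):
    mask = ((cat["PLATEQUALITY"] == "good") &       # 1) good plates
            (cat["CLASS"] == "GALAXY") &            # 2) classified as galaxy
            (cat["Z"] > 0.05) & (cat["Z"] < 0.8) &  # 3) redshift range
            (cat["SN_MEDIAN_ALL"] > 2.0))           # 4) SNR > 2
    return cat[mask]
\end{verbatim}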
\begin{table}
\caption{Model parameters of equation \ref{eq:line flux function}.}
\centering
\begin{tabular}{l l c l l}
\hline
\hline & $\lambda_{e,1}$ & $\lambda_{e,2}$ & $h_e$ & $z_{max}$ \\
\hline $[OII]^1$ & 3726.2 & 3728.9 & [2,10] & 1.44 \\
\hline [OIII] & 4959.0 & n/a & [1,5] & 0.82 \\
\hline [OIII] & 5007.0 & n/a & [1,15] & 0.82 \\
\hline $H_\alpha$ & 6562.8 & n/a & [1,15] & 0.39 \\
\hline $H_\gamma$ & 4340.5 & n/a & [1,5] & 1.10 \\
\hline $H_\beta$ & 4861.3 & n/a & [1,5] & 0.87 \\
\hline
\hline
\end{tabular}
\label{table:model parameters}
\end{table}
This latter criterion is applied to avoid a rather noisy region of the spectra, at $\lambda>9200$\AA, where the residuals from the telluric line subtraction might be a source of spurious detections. This is a problem we expect to deal with in future developments, but one we wanted to avoid in this first test. We stress, though, that the reduced wavelength range will allow us to train tools that can be straightforwardly applied to the SDSS-I/II spectra, whose wavelength range is 3700-9200\AA. Of course, this makes our tools less sensitive to higher-$z$ systems, as many of the emission lines we want to detect will fall out of the range at lower redshift (see below).
Criterion 3 is dictated by the line observability. Indeed, we will assume that typical sources in the SGL events are star-forming galaxies characterized by emission lines as reported in Table \ref{table:model parameters}. Here, for each line, we list the central wavelength(s), the maximum redshift the emission line can reach below 9200\AA, $z_{\rm max}$,
and an intensity parameter, $h_e$, that will be used in \S\ref{sec: artificial emission model} for the simulated spectra.
According to this list, for redshift $z_E\gsim1.4$, all lines would fall out of the eBOSS wavelength range, while with $z_E\lsim1.2$, we can still retain two emission lines, i.e. [OII] and $H_\gamma$. In order to select lenses that are compatible with the visibility of the background lines and with a reasonable lens-source distance to guarantee a SGL event, we collect spectra in the range $z_G=0.05-0.8$.
Criterion 4, on the other hand,
is an optimistic lower limit
we have chosen to increase the completeness. We have considered that the emission lines from background sources have an SNR which is not necessarily correlated with that of the whole spectrum and, thus, can also be seen in noisy galaxy spectra.
The final selected sample consists of
$1\,339\,895$ spectra: in the following, we will refer to this as the {\it DR16-predictive} sample.
In Fig. \ref{fig:z_dis_and_SNR} we show the redshift and SNR distributions of the selected spectra.
\subsection{Construction of the negative sample}
\label{sec:neg sample}
The first step to produce our training dataset is the selection of the negative sample. This is chosen in order to make the CNN as general as possible, hence assuming that every type of galaxy can act as a lens, with no particular restriction in luminosity or color, as is typically done to limit the predictive samples in imaging classifiers (see e.g. Petrillo et al. 2019; Li et al. 2020).
For this purpose, we select
140\,000 galaxy spectra from the DR16-predictive sample,
with a wavelength range of 3700-9200 \AA.
We take particular care that the selected spectra uniformly cover, in number, the full $z_G$ range, by counting the spectra in redshift bins of 0.05. This is crucial to avoid any bias in the prediction of the $z_G$ from the poor sampling of one redshift bin with respect to the close ones.
To mimic the presence of emission lines from local processes, for 1/5 of the negative sample we add artificial emission lines at the same redshift as the galaxy, while the remaining 4/5 of the negative sample is left unchanged.
In particular, for these simulated ``local emissions'', we use
the same lines, reported in Table \ref{table:model parameters}, that will be used to simulate the background source emissions, which we expect the GaSNets\ to distinguish from the local ones (if any).
\begin{figure}
{\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[height=4cm,width=1\linewidth]{paper/fig/SingEmiOxII0.0.pdf}
\end{minipage}}
{\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[height=4cm,width=1\linewidth]{paper/fig/SingEmiOxII1.0.pdf}
\end{minipage}}
{\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[height=4cm,width=1\linewidth]{paper/fig/SingEmiHa0.0.pdf}
\end{minipage}}
{\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[height=4cm,width=1\linewidth]{paper/fig/SingEmiHa1.0.pdf}
\end{minipage}}
\caption{Detail of the line profiles of the simulated emissions: artificial [OII] (3727.1\AA\ and 3729.9\AA) lines (top) and $H_\alpha$ (6562.8\AA) lines (bottom), as observed at different redshifts. The line fluxes are redshifted and dimmed according to Eqs. 4-6.
\label{fig:Oxgen}}
\end{figure}
\begin{figure}
{\begin{minipage}[t]{1.0\linewidth}
\hspace{-0.5cm}
\includegraphics[width=1.08\textwidth]{paper/fig/PureEmi0.0.pdf}
\end{minipage}}
{\begin{minipage}[t]{1.0\linewidth}
\hspace{-0.5cm}
\includegraphics[width=1.08\textwidth]{paper/fig/PureEmi0.5.pdf}
\end{minipage}}
{\begin{minipage}[t]{1.0\linewidth}
\hspace{-0.5cm}
\includegraphics[width=1.08\textwidth]{paper/fig/PureEmi1.0.pdf}
\end{minipage}}
\caption{Full spectrum of artificial emission lines added to the negative sample (see Table \ref{table:model parameters} and text for details). The line fluxes are redshifted and dimmed according to Eqs. 4-6.
\label{fig:all emi}}
\end{figure}
\subsection{Artificial emission line model}
\label{sec: artificial emission model}
In this section, we give more details about the
artificial emission lines we want to add to the original eBOSS spectra to emulate both some local and higher-$z$ emissions in the negative and positive samples, to be used for the CNN training.
Following Li+19, we use a one-dimensional ``double Gaussian'' profile,
defined as:
\begin{equation}\label{eq:line flux function}
F(\lambda) = h_1 \exp\{-\frac{(\lambda - \lambda_{e,1})^2}{2 \sigma_1 ^2}\} + h_2 \exp\{-\frac{(\lambda - \lambda_{e,2})^2}{2 \sigma_2 ^2}\}
\end{equation}
where $F$ is the flux, $\lambda_{e,1}$ and $\lambda_{e,2}$ are the central wavelengths of the emission lines, and $h_1$, $h_2$, $\sigma_1$, $\sigma_2$ are the four model parameters. The $\lambda_{e,1}$, $\lambda_{e,2}$, and the amplitude parameter $h_e$, from which $h_1$ and $h_2$ are derived, are listed in Table \ref{table:model parameters}.
These parameters are further defined to satisfy the following conditions:
\begin{equation}
\label{eq:line condition}
\begin{cases}
\sigma_1 \in [0.8,1.6] \\
\sigma_2 \in \sigma_1 + [0.5\,\sigma_1 , \sigma_1 ] \\
h_1 = \frac{h_e}{(1+z_E)^2} \\
h_2 = \frac{h_e}{4(1+z_E)^2} ~{\rm if}~ \lambda_{e,2}\neq {\rm n/a}; \quad h_2 = 0 ~{\rm otherwise} \\
\end{cases}
\end{equation}
where the $\sigma_1$ is uniformly selected in the interval $[0.8,1.6]$,
the amplitude parameter, $h_e$, is given in Table \ref{table:model parameters}, and $z_E$ is the redshift of the emission line we want to simulate, assumed to be uniform in the range [$z_G+0.1$, 1.2]\footnote{As we will discuss in \S\ref{sec:build positive sample}, since $z_G$ is assumed to be also uniform for the positive sample, this condition produces a final $z_E$ distribution which is pseudo log-normal with a cut at 1.2. For the negative sample, discussed in \S\ref{sec:neg sample}, this condition produces a $z_E$ distribution that follows the distribution of the $z_G$ as in Fig. \ref{fig:z_dis_and_SNR}.}.
The $\sigma_1$ range is determined under the assumption
that the emission lines from the sources are broadened by rotation. Hence, the line broadening in wavelength can be written as $\Delta \lambda \approx 2 \lambda_0 v_r/c $, where $\lambda_0$ is the central wavelength of the emission line, $v_r$ is the maximum velocity along the line-of-sight, and $c$ is the speed of light.
Then, we can approximate $2 \sigma_1 \approx \Delta \lambda $, which, for a rotation $v_r \sim 100$ km/s and $\lambda_0 \approx 370$ nm, gives $\sigma_1 \approx 1.2$\AA. Taking into account a larger wavelength range and range of rotation velocities, we can reasonably let $\sigma_1$ vary over a further $\pm0.4$\AA\ range, i.e. the one we have assumed in Eqs. \ref{eq:line condition}.
Finally, we remark that the absolute amplitude of the emission lines, depending on $h_e$, is not of major importance, as the final SNR of the line strongly depends on the continuum of the spectrum the lines are added to. On the other hand, two other important features are 1) the relative distance of the line central wavelengths ($\lambda_{e,1}$ and $\lambda_{e,2}$) and 2) their relative full width half maximum (FWHM), connected to the $\sigma_1$ and $\sigma_2$ parameters.
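A minimal Python sketch of this line model, with the parameters drawn as in Eqs. \ref{eq:line condition}, is given below; the central wavelengths and the amplitude $h_e$ (drawn from the ranges of Table \ref{table:model parameters}) are passed as inputs, and the function names are purely illustrative.
\begin{verbatim}
# Sketch of the rest-frame double-Gaussian line profile (Eq. 1) with
# parameters drawn according to the conditions of Eq. 2.
import numpy as np

def line_profile(lam, lam_e1, lam_e2, h_e, z_E, rng=np.random.default_rng()):
    sigma1 = rng.uniform(0.8, 1.6)
    sigma2 = sigma1 + rng.uniform(0.5 * sigma1, 1.0 * sigma1)
    h1 = h_e / (1.0 + z_E) ** 2
    flux = h1 * np.exp(-(lam - lam_e1) ** 2 / (2.0 * sigma1 ** 2))
    if lam_e2 is not None:                       # doublet (e.g. [OII])
        h2 = h_e / (4.0 * (1.0 + z_E) ** 2)
        flux += h2 * np.exp(-(lam - lam_e2) ** 2 / (2.0 * sigma2 ** 2))
    return flux
\end{verbatim}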
\begin{figure}
\centering
{\begin{minipage}[t]{1.0\linewidth}
\centering
\includegraphics[width=1\textwidth]{paper/fig/AddEmi0.0.pdf}
\end{minipage}}
{\begin{minipage}[t]{1.0\linewidth}
\centering
\includegraphics[width=1\textwidth]{paper/fig/AddEmi0.5.pdf}
\end{minipage}}
{\begin{minipage}[t]{1.0\linewidth}
\centering
\includegraphics[width=1\textwidth]{paper/fig/AddEmi1.0.pdf}
\end{minipage}}
\caption{Spectra after adding artificial emission lines. The mock emission lines are added at several different redshifts ($z_E$) to a real spectrum, in order to simulate the positive sample.
\label{fig:simulate galaxy spec}}
\end{figure}
Simulated lines are first randomly generated at $z_E$=0 and then randomly redshifted to $z_E>z_G+0.1$, where $z_G$ is the redshift of the negative spectrum from which the positive is generated (see \S\ref{sec:build positive sample}).
The flux at the redshift $z_E$ is then defined according to the standard equation:
\begin{equation}
F_{z_E}(\lambda) = F\left(\frac{\lambda}{1+z_E}\right)
\end{equation}
where the function $F$ is the rest frame emission line flux function (Eq. \ref{eq:line flux function}).
The central wavelength of $F_{z_E}$, $\lambda_{cz}$, is defined as
\begin{equation}
\frac{\lambda_{cz}}{1+z_E} = \lambda_{c0} \ \to \ \lambda_{cz} = (1+z_E)\lambda_{c0}.
\end{equation}
while the wavelength interval transforms as
\begin{equation}
\frac{\mathrm{d}\lambda'}{1+z_E} = \mathrm{d}\lambda \ \to \ \mathrm{d}\lambda' = (1+z_E)\mathrm{d}\lambda
\end{equation}
where $\lambda_{c0}$ is the central wavelength in the rest frame. According to the equations above, $\lambda_{c0}$ is shifted to $(1+z_E)\lambda_{c0}$, and the rest-frame wavelength interval $\mathrm{d}\lambda$ broadens to $(1+z_E)\mathrm{d}\lambda$.
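In code, this redshifting step simply amounts to evaluating the rest-frame profile at $\lambda/(1+z_E)$, e.g.:
\begin{verbatim}
# Observed-frame line flux: F_zE(lambda) = F(lambda / (1 + z_E)), which shifts
# the line centre to (1+z_E)*lambda_c0 and broadens the line by (1+z_E).
def observed_line_flux(lam_obs, rest_frame_profile, z_E):
    return rest_frame_profile(lam_obs / (1.0 + z_E))
\end{verbatim}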
Figs. \ref{fig:Oxgen}
and \ref{fig:all emi} show how typical emission lines from Table \ref{table:model parameters} are simulated according to the random parameters from Eqs. \ref{eq:line condition} and shifted to redshifts 0.5 and 1.
\subsection{Simulating the positive sample}
\label{sec:build positive sample}
The next step is to build a positive sample by adding simulated emission lines to the negative sample.
As anticipated, we use the same lines as in Table \ref{table:model parameters}, this time redshifted to $z_E>z_G+0.1$, with the condition that $z_E\lsim 1.2$.
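Schematically, a positive spectrum can be assembled as in the following sketch, which combines the line model of \S\ref{sec: artificial emission model} (the {\tt line\_profile} function sketched there) with a randomly drawn $z_E$; the structure of {\tt line\_table} (one row per entry of Table \ref{table:model parameters}) is an illustrative assumption.
\begin{verbatim}
# Sketch of the positive-sample construction: mock background lines added to a
# real (negative) spectrum at a random z_E in (z_G + 0.1, 1.2].
import numpy as np

def make_positive(wave, flux_negative, z_G, line_table,
                  rng=np.random.default_rng()):
    z_E = rng.uniform(z_G + 0.1, 1.2)
    flux = flux_negative.copy()
    for lam_e1, lam_e2, h_e_range, z_max in line_table:
        if z_E <= z_max:                    # line still falls below 9200 A
            h_e = rng.uniform(*h_e_range)   # amplitude drawn from the Table range
            flux += line_profile(wave / (1.0 + z_E), lam_e1, lam_e2, h_e, z_E)
    return flux, z_E
\end{verbatim}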
\begin{figure}
\centering
{\begin{minipage}[t]{1\linewidth}
\centering
\includegraphics[width=1.01
\textwidth]{paper/fig/len spectrum/393-51794-456.pdf}
\end{minipage}}
{\begin{minipage}[t]{1\linewidth}
\centering
\includegraphics[width=1.01\textwidth]{paper/fig/len spectrum/436-51883-633.pdf}
\end{minipage}}
{\begin{minipage}[t]{1.01\linewidth}
\centering
\includegraphics[width=1\textwidth]{paper/fig/len spectrum/391-51782-88.pdf}
\end{minipage}}
\caption{Spectra of confirmed gravitational lenses from \citet{2008ApJ...682..964B}; the red vertical lines mark the locations of the identified emission lines.
\label{fig:lens spec}}
\end{figure}
In Fig. \ref{fig:simulate galaxy spec} we show three simulated positive spectra
for a single moderate SNR ($\sim 7$) negative spectrum (SDSS-468-51912). Here, we
have marked the location of the simulated emission lines at different redshifts, on top of the continuum of the real eBOSS galaxy spectrum.
Looking at these spectra, one can visually grasp the major challenges in identifying the ``ground truth'' emission lines in them. First, the line SNR, which depends not only on $h_1$ and $h_2$ but also on the intrinsic spectrum noise.
Second, the contamination from residual sky lines, e.g. at $\lambda>8000$ \AA. Third, the effect of the source redshift, which can shift most of the relevant emission lines from Table \ref{table:model parameters} out of the spectral range (at $\lambda > 9200$\AA, e.g. in Fig. \ref{fig:simulate galaxy spec}-c). This latter issue could be in principle solved by considering more emission lines in our reference catalog.
We will consider this for the next developments of the GaSNets. However, here we stress that adding more lines, which in most cases have much lower SNRs in real galaxies, might introduce more uncertainties in the predictions of GaSNet-L2, as they might be more easily confused with random noise, especially in low-SNR spectra.
As a comparison with real lensing events, in Fig. \ref{fig:lens spec} we report some spectra of confirmed lenses from \citet[][see also \S\ref{sec:confirmed_lens}]{2008ApJ...682..964B}. Here
the locations of the emission lines of the background sources are marked, again, as red vertical lines.
In particular, in the figure
we show spectra with different SNRs to visualize the impact of the spectral quality on the recognisability of the lines. In SDSS-393-51794 all lines are clearly visible and show a pattern similar to the simulated lines in Fig. \ref{fig:all emi}. In SDSS-436-51883, although the spectrum SNR is comparable to the one above, the lower signal of the background lines leaves some of them buried in the noise, while some others still stick out rather clearly. Here, the number of visible lines is reduced by the higher redshift of the source ($z_E=0.452$). Finally, in SDSS-391-51782, the redshift of the source ($z_E=0.931$) allows the observation of only two lines, which are nonetheless rather easy to spot because of the decent SNR of the spectrum and the high signal of the lines. Overall,
these examples show the kind of features the CNN needs to be trained on identifying in the spectra and the impact of the spectra quality and SNR of the background emission on the final
line detection and redshift determination.
Similarly, these examples provide textbook cases of HQ candidates we will visually grade among the high probability candidates provided from the GaSNets (see \S\ref{subsec:visual}).
\subsection{Confirmed lenses from previous spectroscopic searches} \label{sec:confirmed_lens}
As anticipated in the previous section, we also collect candidate/confirmed lenses from previous spectroscopic searches in
SDSS/BOSS, using standard techniques, to be used as a real test sample for our deep learning tools. In particular, we have collected 131 objects from \cite{2008ApJ...682..964B}, 45 from \cite{2012ApJ...744...41B} and 118 from \citet{2017ApJ...851...48S}, for which HST follow-up observations are available to assess their lensing nature.
This ``test sample'' made of real systems is useful for two main purposes: 1) to measure the {\it completeness} of our tool, by checking how many of these lenses are recovered by GaSNet-L1; 2) to test how {\it accurate} the GaSNet-L2\ and GaSNet-L3\ are in determining the $z_E$ and $z_G$, respectively.
We will also compare our final catalog of HQ candidates vs. the latest highly complete sample of spectroscopic selected candidates in eBOSS from T+21. This will allow us to check the presence of candidates missed by standard techniques, and draw conclusions about the different approaches.
\begin{figure}
\centering
\includegraphics[height=6cm,width=1\linewidth]{paper/fig/Build_train_data.png}
\caption{Summary of the training data building process. After constructing the negative and positive samples, they are labeled before being fed into the training process of the three GaSNets. In this scheme we illustrate the steps taken to add the labels to the two training samples.} \label{fig:build data}
\end{figure}
\section{Training and Testing}
\label{sec:test of CNN}
To proceed with the construction of the training and test samples, we collect a total of 140\,000 positives and the same number of negatives. These samples are further split into three datasets: 100\,000 for training, 20\,000 for validation and 20\,000 for testing. The first two samples are used to train the GaSNets\ and to evaluate how well the models predict the ground truth targets on data unseen during the training process. The last sample is used to qualify the final performance of the GaSNets. Finally, we also test the performance against real candidates from the literature, as discussed in \S\ref{sec:confirmed_lens}.
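For clarity, the split can be sketched as follows (a simple random permutation of the indices, applied to each class; the seed is arbitrary):
\begin{verbatim}
# Sketch of the 100k/20k/20k split applied to each class (positives, negatives).
import numpy as np

def split_sample(spectra, labels, seed=0):
    idx = np.random.default_rng(seed).permutation(len(spectra))
    i_tr, i_va, i_te = idx[:100000], idx[100000:120000], idx[120000:140000]
    return ((spectra[i_tr], labels[i_tr]),
            (spectra[i_va], labels[i_va]),
            (spectra[i_te], labels[i_te]))
\end{verbatim}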
\begin{figure*}[t]
\centering
{\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=1.03\linewidth]{paper/fig/CNN_fig/CNN1.png}
\end{minipage}}
{\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=1.03\linewidth]{paper/fig/CNN_fig/CNN2.png}
\end{minipage}}
{\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=1.03\linewidth]{paper/fig/CNN_fig/CNN3.png}
\end{minipage}}
\caption{GaSNets\ training results. Left: GaSNet-L1; middle: GaSNet-L2; right: GaSNet-L3. The left panel shows the accuracy and the loss of GaSNet-L1: the training and evaluation curves converge to the same point and reach a high accuracy. The middle panel shows the MAE and the loss of GaSNet-L2: the convergence is less tight, but still acceptable, and the MAE is as low as expected. The right panel shows the MAE and the loss of GaSNet-L3, which converge well, since $z_G$ is easier to predict than $z_E$.
\label{fig:CNN1}
\label{fig:CNN2}
\label{fig:CNN3}}
\end{figure*}
\begin{figure}
\centering
{\begin{minipage}{0.95\linewidth}
\centering
\includegraphics[width=1\linewidth]{paper/fig/CNN_fig/new_fpr_tpr.pdf}
\end{minipage}}
\caption{ROC curve of training samples where the true-positive rate (TPR) is plotted against the false-positive rate (FPR) (see text for details).}
\label{fig:ROCcuve}
\end{figure}
\subsection{Training the Networks}
\label{sec:Training the Networks}
According to the description in \S\ref{sec:galnets} and the tasks they are expected to fulfill, during the training, the GaSNets\ are fed
with the training spectra to produce accurate predictions of the ``target'' quantities. For GaSNet-L1\, the inputs are the spectra of the positives and negatives as well as their labels to give as output the probabilities ($P_L$) to be lens candidates. For GaSNet-L2, the inputs are the simulated positive spectra with their labels, while the outputs are the predicted redshifts of the emission lines $z_E$. For GaSNet-L3, the inputs are the labeled spectra of positives and negatives and the output are the redshifts of the foreground spectra ($z_G$).
The full process of the training sample building and labelling is summarized in Fig. \ref{fig:build data}.
\begin{table}[t]
\caption{Statistical properties of the predicted parameters}
\label{table:Statistical properties}
\begin{center}
\footnotesize
\begin{tabular}{c c c c c c c}
\hline
\hline
Sample & var. & $R^2$ & Out. fract. & NMAD & MAE &MSE \\
\midrule
Test & $z_{PE}$ & $0.941$ & $0.0121$ & $0.0029$ & $0.0164$ & $0.0032$ \\
Test & $z_{PG}$ & $0.998$ & $0.0003$ & $0.0009$ & $0.0017$ &$0.0001$\\
Real & $z_{PE}$ & $0.770$ & $0.0840$ & $0.0062$ & $0.0535$ & $0.0173$\\
Real & $z_{PG}$ & $0.989$ & $0.0047$ & $0.0012$ & $0.0033$ & $0.0004$\\
All & $z_{PG}$ & $0.988$ & $0.0012$ & $0.0009$ & $0.0020$ & $0.0002$\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
Regarding the training step, for GaSNet-L1\ and GaSNet-L3\ we use the 120\,000 positive (training+validation data) and 120\,000 negative samples, i.e. a total of 240\,000 spectra. Since GaSNet-L2\ only predicts the $z_E$, in this case the training+validation sample is made of the 120\,000 positive spectra only.
For each GaSNet, we use the training data to train for 30 epochs with a learning rate of 0.0001 and use the validation data to evaluate the performance. We have found that this produces rather stable validation results. During the training process, we optimize the three GaSNets\ with the Adam optimizer (\citealt{Friedman99+huberloss}).
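A minimal sketch of this training set-up is shown below; it assumes the {\tt build\_gasnet} sketch and the {\tt split\_sample} outputs from the previous sections, and the batch size, which is not specified above, is an illustrative choice.
\begin{verbatim}
# Sketch of the training set-up: 30 epochs, Adam with learning rate 1e-4.
from tensorflow.keras.optimizers import Adam

model = build_gasnet(n_pix=x_train.shape[1], task="classifier")  # e.g. GaSNet-L1
model.compile(optimizer=Adam(learning_rate=1e-4),   # overrides the default optimizer
              loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(x_train, y_train, epochs=30, batch_size=64,
                    validation_data=(x_valid, y_valid))
\end{verbatim}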
\subsection{Testing on simulation data}
\label{sec: test_simul}
\begin{figure*}
\centering
\vspace{-0.3cm}
{\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=1.1\linewidth]{paper/fig/CNN_fig/TrainSamplePL.png}
\end{minipage}}
{\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=1.1\linewidth]{paper/fig/CNN_fig/TrainZE.png}
\end{minipage}}
{\begin{minipage}[t]{0.32\linewidth}
\centering
\includegraphics[width=1.1\linewidth]{paper/fig/CNN_fig/TrainZG.png}
\end{minipage}}
\caption{GaSNets\ results on the test sample. Left: the $P_L$ distribution from GaSNet-L1; Center: predicted emission line redshifts from GaSNet-L2, $z_{PE}$, vs. ground truth values, $z_{E}$; Right: predicted lens galaxy redshifts from GaSNet-L3, $z_{PG}$, vs. ground truth values, $z_{G}$.
\label{fig:pos_test}
\label{fig:neg_test}}
\end{figure*}
After training, we first test the GaSNets' performances using the simulated ``test'' spectra. As anticipated, the test sample is made of 20\,000 positive and 20\,000 negative spectra for GaSNet-L1\ and GaSNet-L3, and of 20\,000 positive spectra for GaSNet-L2.
In Fig. \ref{fig:CNN1} we first show the results of the training run for the three GaSNets\ to have a first evaluation of their performances.
In particular, we plot the first 30 training epochs. The solid lines in Fig. \ref{fig:CNN1} represent the evaluation metric and the loss (the average deviation of the predictions from the ground truth) computed on the training data. The dot-dashed lines represent the same quantities on the test data.
For each GaSNet, we set a different evaluation function: for GaSNet-L1,
being a classifier giving a probability as output, we use the ``accuracy'' (acc) as evaluation metric;
for GaSNet-L2\ and GaSNet-L3, as they predict the $z_E$ and $z_G$, we use the mean absolute error (MAE) as evaluation metric.
From Fig. \ref{fig:CNN1}, GaSNet-L1\ and GaSNet-L3\ both show good convergence at about the same epoch toward the end of the training, while GaSNet-L2\ shows a larger loss because of the degeneracy between noise and emission lines (see the comment above).
One possibility to improve this result might be the adoption of some spectral pre-processing, e.g. via smoothing. However, this would imply an intervention on the data characterization that is beyond the purposes of this paper, and that we rather plan to address in future analyses. Here, we just stress that the accuracy reached by GaSNet-L2\ is high enough to clearly separate the background emission lines from the foreground spectral features in lens candidates, hence more than sufficient for its actual purposes (see also \S\ref{sec:Slightly shift}).
In Table \ref{table:Statistical properties}, we report some statistical estimators to measure the GaSNets\ performances. Besides the standard MAE and MSE, we add other three estimators.
First, the R-squared ($R^2$) is used to evaluate the linear relationship between prediction and true values. It is defined as
$$
R^2 = 1 - \frac{\sum_{i} (z_{P,i}-z_{T,i})^2}{\sum_{i}(z_{T,i}-\Bar{z_T})^2}
$$
where the $z_P$ is the predicted value and $z_T$ is the true value, and the $\Bar{z_T}$ is the average value of $z_T$. The closer the $R^2$ is to 1, the better is the prediction. In Table \ref{table:Statistical properties} we see that for the test sample, $R^2$ is close to 1 for both $z_{PG}$ and $z_{PE}$, meaning that both GaSNet-L2\ and GaSNet-L3\ are expected to produce accurate results.
Second, the outlier fraction, which is defined as the fraction of predicted redshifts scattering more than 15\% from the true values:
$$
\delta Z = \frac{\left | z_P - z_T \right | }{1+z_T} > 0.15.
$$
For the test sample, in Table \ref{table:Statistical properties} we show that the outlier fractions are $\lsim1\%$, implying a very small fraction of anomalous predictions.
Third, the normalized median absolute deviation (NMAD), which is defined as:
$$
{\rm NMAD} = 1.4826 \times {\rm median}(\left |\delta Z - {\rm median}(\delta Z)\right |).
$$
It gives the absolute deviation of the predicted value from the central value of $\delta Z$. As seen in Table \ref{table:Statistical properties}, the NMAD for the test sample is close to zero, meaning again a very small deviation from the true values, i.e. very accurate predictions.
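These three estimators can be computed directly from the definitions above, as in the short sketch below.
\begin{verbatim}
# R^2, outlier fraction (|dz| > 0.15) and NMAD from predicted vs. true redshifts.
import numpy as np

def redshift_metrics(z_pred, z_true):
    dz = (z_pred - z_true) / (1.0 + z_true)
    r2 = 1.0 - np.sum((z_pred - z_true) ** 2) / np.sum((z_true - z_true.mean()) ** 2)
    out_frac = np.mean(np.abs(dz) > 0.15)
    nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
    return r2, out_frac, nmad
\end{verbatim}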
In Fig. \ref{fig:ROCcuve} we also show the receiver operating characteristic (ROC) curve
where we plot the true-positive rate (TPR) against the false-positive rate (FPR). The TPR is the fraction of lenses that are correctly classified with respect to the total number of ``ground truth'' lenses, while the FPR is the fraction of non lenses that are misclassified as lenses with respect to the total number of non lenses.
The ROC curve can be used to decide the probability threshold to adopt as a trade-off between true detection and contaminants from false-positives. In the same figure, we report the TPR-FPR for different $P_L$s. We can see that, for a $P_L=0.95$, we almost reach 90\% completeness with a negligible false positive rate. We stress here that this result is derived from simulated spectra in rather ideal conditions. Hence, both the TPR and, most of all, the FPR might be just an upper and lower limit, respectively, as compared to the real cases. However, the $P_L=0.95$ occurs before the slope of the ROC becomes flatter, meaning that the gain in the number of true detections, at lower thresholds, increases at the cost of a larger number of contaminants.
We will come back to these results later, when we will discuss the threshold to adopt to select HQ candidates in real data.
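For reference, the ROC curve and the TPR/FPR at the adopted threshold can be obtained with scikit-learn as in the sketch below, where {\tt y\_test} are the ground-truth labels of the simulated test sample and {\tt P\_test} the GaSNet-L1\ probabilities (both names are illustrative).
\begin{verbatim}
# ROC curve of GaSNet-L1 on the simulated test sample.
import numpy as np
from sklearn.metrics import roc_curve, auc

fpr, tpr, thresholds = roc_curve(y_test, P_test)
print("AUC =", auc(fpr, tpr))
i = np.argmin(np.abs(thresholds - 0.95))   # adopted threshold P_L = 0.95
print("TPR =", tpr[i], " FPR =", fpr[i])
\end{verbatim}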
Finally, in Fig. \ref{fig:pos_test} we detail the results obtained for the test sample. In the left panel we show the distribution of the $P_L$ from GaSNet-L1\ for both the negative and the positive samples. As expected, the latter tends to cluster toward a peak at $P_L=1$, but with a rather long tail toward $P_L=0$, meaning that, statistically, there is a significant fraction of true positives that have been given a low probability to be a lens
from GaSNet-L1. We have checked these latter cases and found no correlation with the overall SNR of the spectra. Instead, we have found a correlation of the low-$P_L$ objects with the $z_E$, in the sense that the larger the $z_E$, the bigger the number of objects with $P_L<0.5$. This suggests that either the lower number of lines or the intrinsically lower SNR of the lines suppresses the $P_L$ and makes the classification of the lensing event more difficult at higher-$z$.
In the central panel of the same figure, we show the output of the GaSNet-L2\ by comparing the predicted $z_{PE}$, against the ground truth values, $z_E$.
Overall, the majority of the predicted values are tightly distributed around the one-to-one relation, as also quantified by the large $R^2$ value found in Table \ref{table:Statistical properties} (Test/$z_{PE}$). There are some predictions that scatter significantly away from the perfect correlation, because of the degeneracy between noise and background emission lines, as mentioned above. However, these are statistically irrelevant, as the estimated outlier fraction in Table \ref{table:Statistical properties} is close to 1\%.
Finally, in the right panel of Fig. \ref{fig:pos_test}, we show the predicted $z_{PG}$ from GaSNet-L3\ against the ground truth values, $z_G$. In this case, the correlation is nearly perfect and the outlier fraction is negligible ($<0.1\%$, see Table \ref{table:Statistical properties} -- Test/$z_{PG}$), both for the positive and the negative sample. Indeed, for this latter test, we have also input the negative sample to check the performance of GaSNet-L3\ as a pure automatic spectroscopic redshift tool, in the absence of artificial emission lines. This shows that the ability of GaSNet-L3\ to predict the galaxy redshift is not driven by the emission lines, which are easier to spot, but by the overall features of the spectrum (i.e. continuum and absorption/emission lines).
\subsection{Test on HST confirmed samples}
\label{sec:real_data}
Previous analyses of the SDSS/BOSS spectra have led to the collection of 294 strong lens candidates:
131 from SLACS, 45 from BELLS and 118 from S4TM (see \S\ref{sec:intro}).
Being candidates based on spectroscopic features, these samples contain both real lenses and contaminants. Indeed, space imaging follow-ups have confirmed 70/131 SLACS candidates (the Grade-A objects in Table 4 of \citealt{2008ApJ...682..964B}), 25/45 BELLS candidates (Grade-A objects in Table 3 of \citealt{2012ApJ...744...41B}) and 40/118 S4TM (Grade-A in Table 1 of \citealt{2017ApJ...851...48S}).
As also commented in \S\ref{sec:intro}
these correspond to an average confirmation rate of 46\%. Note, though, that the HST samples often tend to optimize the confirmation rate by pre-selecting targets with low-resolution imaging (see e.g. \citealt{2004AJ....127.1860B}; \citealt{2016ApJ...824...86S}), hence this can be considered an optimistic upper limit estimate.
These are the main statistical samples that have been systematically followed-up to collect space imaging confirmations of spectroscopically selected SGL candidates, using optical lines. As such, these represent the most secure sample to check our results against.
These data can be used for two main purposes: 1) to compare the classification of the GaSNets\ against human selection and help us setting a reasonable threshold to optimize the chance of finding real lenses with the minimal contamination from false positives; 2) to forecast the success rate we might expect from our set-up, since we have a reference sample of ``candidates'' and ``confirmed'' events.
Since this literature sample is far from complete (see \S\ref{sec:challange}), it cannot be fully used to draw firm conclusions about the completeness of the GaSNets; however, it is the only sample we can use to benchmark the GaSNets' performances, with a necessary grain of salt.
On the other hand, the large sample from T+21, having no space observations, cannot be used for the same purpose as the ones above. As anticipated, we will use it for an {\it a posteriori} test to assess the differences (if any) between the standard and the deep learning approaches.
\begin{figure*}
\centering
\vspace{-0.3cm}
{\begin{minipage}[t]{0.33\linewidth}
\includegraphics[width=1.07\linewidth]{paper/fig/real_len/AllLenPL.png}
\end{minipage}}
{\begin{minipage}[t]{0.33\linewidth}
\includegraphics[width=1.07\linewidth]{paper/fig/real_len/AllLenZE.png}
\end{minipage}}
{\begin{minipage}[t]{0.33\linewidth}
\includegraphics[width=1.07\linewidth]{paper/fig/real_len/AllLenZG.png}
\end{minipage}}
\caption{Results of the GaSNets\ applied to spectra of strong gravitational lens candidates (in blue) and HST-confirmed events (in red) from SLACS, BELLS and S4TM samples (see text for details). Left: the $P_L$ distribution from GaSNet-L1; Center: predicted emission line redshifts from GaSNet-L2, $z_{PE}$, vs. literature redshifts, $z_{E}$, for the candidate objects (top) and the HST confirmed (bottom); Right: predicted lens redshifts from GaSNet-L3, $z_{PG}$, vs. literature redshifts, $z_{G}$, for the candidate objects (top) and the HST confirmed (bottom).
\label{fig:real_len}}
\end{figure*}
\begin{figure*}
\vspace{-0.5cm}
\centering
{\begin{minipage}{0.33\linewidth}
\centering
\includegraphics[width=1.07\linewidth]{paper/fig/all_spe/All_specPL.png}
\end{minipage}}
{\begin{minipage}{0.33\linewidth}
\centering
\includegraphics[width=1.07\linewidth]{paper/fig/all_spe/DeltaZvsZG.png}
\end{minipage}}
{\begin{minipage}{0.33\linewidth}
\centering
\includegraphics[width=1.07\linewidth]{paper/fig/all_spe/All_specZG.png}
\end{minipage}}
\caption{Predictions of the three GaSNets\ on the DR16-predictive sample ($\sim 1.3M$ objects): the $P_L$ from GaSNet-L1\ (left), the $z_{PE}-z_{PG}$ (center) and the $z_{PG}$ (right), from GaSNet-L2\ and GaSNet-L3\ outputs, vs. $z_G$, the galaxy redshift from eBOSS/DR16 catalog (see text for the details). In the center and right panels we show the $P_L>0.95$ sample (in red) and the $P_L<0.95$ in blue.
\label{fig:all_spec}}
\end{figure*}
To proceed with the test of the HST confirmed catalogs against GaSNets, we first select the literature spectra that are located in the predictive range of our CNNs (i.e. $0.05<z_G<0.8$ and $0.15<z_E<1.2$). These are 264/294 candidates and 121/135 confirmed objects.
In Fig. \ref{fig:real_len} we show the probability predicted by GaSNet-L1\ (left panel), the redshift of the source predicted by GaSNet-L2\ (central panel), and the redshift of the lens galaxies predicted by GaSNet-L3\ (right panel), for the candidate and the confirmed literature objects side by side.
In particular, we see that
GaSNet-L1\ predicts high probabilities for most of the lenses: e.g., 69\% of the candidates and 80\% of the confirmed objects have $P_L>0.95$, which becomes 81\% of the candidates and 90\% of the confirmed objects for $P_L>0.8$.
More importantly, the ratio of the confirmed/candidates increases dramatically from $0.8<P_L<0.95$ to $P_L>0.95$, as we have 12/33, i.e. 36\% for the former and 97/182, i.e. 53\%, for the latter,
vs. the overall 46\% estimated for the full sample (see above). On the other hand, for $P_L<0.8$ the confirmation rate drops to 12/49, i.e. 24\%, which is too low for successful space observations and still uneconomical for lens searches in spectra. Indeed, as discussed in \S\ref{sec: test_simul}, at $P_L<0.8$ the FPR becomes prohibitive, producing massive false detections in large samples that would need to be cleaned with tedious visual inspections.
Interestingly enough,
for $P_L>0.95$ the fraction of true SGL events recovered (80\%) is rather close to the TPR ($\sim89\%$) predicted by the ROC curve (Fig. \ref{fig:ROCcuve}) for an idealized mock population of strong lenses. This means that the performance of the GaSNets\ on real data might not be far from the expectations based on simulated data.
However, in Fig. \ref{fig:real_len} (left)
a misalignment between the deep learning and the human-filtered selections is further demonstrated by the fact that
some confirmed lenses
have received a small probability from GaSNet-L1. As discussed in the previous section, these are mainly low-SNR emission line spectra or higher-$z$ systems that, even if accounted for in the training sample, are difficult for GaSNet-L1\ to score highly, but might have been picked by the human eye with higher confidence. Hence, we conclude that a $P_L=0.95$ threshold is very likely to produce an effective completeness higher than the 80\% obtained above on a complete and unbiased true SGL sample.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{paper/fig/candida/7164-56597-275.pdf}
\includegraphics[width=1\textwidth]{paper/fig/candida/7143-56572-678.pdf}
\includegraphics[width=1\textwidth]{paper/fig/candida/4486-55588-218.pdf}
\caption{HQ candidate spectra. The SDSS/BOSS spectra are plotted with their main spectral features highlighted. Red vertical lines indicate the emission lines of the background source at the redshift $eyez_E$ (i.e. the one corrected during the visual inspection). Blue vertical lines indicate the spectral features of the lens at redshift
$z_G$. Green lines show the rest-frame ($z=0$) location of the emission lines from the sky. On top of each spectrum, from left to right, we report the probability from GaSNet-L1, the average SNR of the spectrum, the catalogued redshift of the galaxy from SDSS/BOSS, the predicted redshifts of the galaxy and of the emission lines (from GaSNet-L3\ and GaSNet-L2, respectively) together with the redshift corrected in the visual inspection, and finally RA, DEC and ID of the target.
\label{fig:candida_spec}}
\end{figure*}
The middle and the right panel of Fig. \ref{fig:real_len} show that both GaSNet-L2\ and GaSNet-L3\ can make good predictions on the redshift of the emission lines and the lens galaxies.
In general, GaSNet-L3\ performs better than GaSNet-L2\ (see Table \ref{table:model parameters}), possibly because the spectra of the lens galaxies can provide more information, both from the continuum and the absorption or emission lines,
while GaSNet-L2\ relies only on a few emission lines,
which provide intrinsically less information.
We also see that the confirmed objects generally show a smaller scatter and outlier fraction than the candidates, especially in $z_{PE}$, and also that the highest probability objects show tighter one-to-one predictions. This suggests that misclassifications of SGL events might be related to uncertainties on the redshift of the background sources, which tend to be placed further away than they actually are, i.e. confusing ``local'' emissions with background ones. However, the chance of such misclassification is reduced for $P_L>0.95$ systems.
All in all,
Fig. \ref{fig:real_len}
indicates that the $P_L>0.95$ sample is accurate enough to produce reliable lens candidates from the DR16-predictive sample.
\section{New strong lensing candidates from eBOSS spectra}
\label{sec:applying_BOSS}
In this section, we apply the trained GaSNets to the DR16-predictive sample, introduced in \S\ref{sec:data}. This is made of 1\,339\,895 galaxy spectra and represents the sample among which we want to find new strong lens candidates and, for them, determine
the redshift of the background source, $z_E$.
\subsection{Predictions on the eBOSS spectra}
\label{sec:DR16data}
According to the workflow described in Fig. \ref{whole_CNN_model}, the first step to perform is the classification of candidates using GaSNet-L1. In Fig. \ref{fig:all_spec} (left) we report the probability $P_L$ distribution obtained from GaSNet-L1 for the DR16-predictive sample.
From this histogram, we see that
adopting $P_L>0.8$, which, according to the ROC curve, would return almost 95\% of the true lenses,
would produce a list of about 10\,000 candidates. This sample is hard to handle for two main reasons: 1) it is time consuming to visually inspect and 2) it is expected to be severely contaminated by false detections.
This latter point has been confirmed by randomly inspecting $100$ candidates with $0.8<P_L<0.95$ and finding that about 90\%
are very poor candidates.
{On the other hand, choosing $P_L>0.95$, which, for the true lens cases, allowed us to recover $80\%$ of the confirmed lenses known in SDSS/BOSS, would produce a more manageable sample of $\sim4000$ candidates.
Hence, at the cost of some acceptable incompleteness, for this first test, we decide to adopt a more conservative approach and search for
high-quality candidates among the ones with $P_L > 0.95$. }
We can now look into the predictions of the GaSNet-L2\ and GaSNet-L3\ in order to finalize the sample to visually inspect.
In Fig. \ref{fig:all_spec} (center) we report the redshift gap between the lens and the source, $\Delta Z=z_{PE}-z_{PG}$, as a function of the lens redshift $z_G$ for the full predictive sample. Here we highlight the objects with $P_L>0.95$ against all the other spectra in the predictive sample. We can distinguish a few features: 1) the upper limit imposed on $z_E$ produces a zone of avoidance in the upper-right side of the figure; 2) there is a crowded sequence of high $P_L$ in the box defined by $z_G$=[0.5,0.6] and $\Delta z$=[0.4,0.6]. This is due to the presence of a rather persistent residual emission line from the sky subtraction in the SDSS pipeline at $\lambda\sim5600$\AA\ (see Fig. \ref{fig:candida_spec}), which is most often ignored by GaSNet-L2\ but in many cases is mistaken for a real emission. As we will see later, this sequence is easily filtered out by the visual inspection, but it has to be better accounted for in the training sample in order to reduce its impact in future analyses.
A similar effect is produced by the residual sky lines at $\lambda>8000$\AA, which also produce a sequence of spurious $z_E$ predictions (see $z_G\sim0.2$ and $\Delta Z\sim0.9$). These have a small $P_L$, according to GaSNet-L1, and thus they are not a concern, as they are excluded from the following analysis.
Overall, the $P_L>0.95$ sample looks rather unbiased, as seen by the $z_G$ estimates from GaSNet-L3\ in the right panel of Fig. \ref{fig:all_spec}, where the predicted $z_{PG}$ is extremely tightly correlated to the eBOSS catalog values (see also the statistical estimators in Table \ref{table:Statistical properties}).
However, before proceeding with the visual inspection of the background emissions estimated by GaSNet-L2, in order to minimize the heterogeneity in the human grading, we pre-select the spectra whose average SNR, computed at the expected positions of the reference lines from Table \ref{table:model parameters},
$\langle$SNR$_{\rm lines}\rangle$,
is larger than one.
This further selection gives us 931 potential candidates to pass to the visual inspection.
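For illustration, the two selection steps above (the $P_L>0.95$ cut and the $\langle$SNR$_{\rm lines}\rangle>1$ cut) can be sketched as follows; this is a minimal sketch only, and the array names, window width and use of the peak pixel SNR are illustrative assumptions rather than the actual implementation of our pipeline:
\begin{verbatim}
import numpy as np

# Hypothetical inputs: P_L from GaSNet-L1, z_PE from GaSNet-L2, the observed
# spectrum (wave, flux, noise) and the rest-frame wavelengths of the
# reference emission lines (e.g. [OII], H_gamma).
def mean_line_snr(wave, flux, noise, z_pe, rest_lines, half_width=5.0):
    """Average SNR measured at the expected (redshifted) line positions."""
    snrs = []
    for lam0 in rest_lines:
        lam = lam0 * (1.0 + z_pe)          # expected observed wavelength
        win = (wave > lam - half_width) & (wave < lam + half_width)
        if win.any():
            snrs.append(np.max(flux[win] / noise[win]))
    return np.mean(snrs) if snrs else 0.0

def preselect(p_l, snr_lines):
    """Keep a spectrum only if P_L > 0.95 and <SNR_lines> > 1."""
    return (p_l > 0.95) & (snr_lines > 1.0)
\end{verbatim}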
\subsection{Visual inspection of spectra}
\label{subsec:visual}
The 931 candidates are visually inspected by the three authors, according to
an ABCD ranking scheme, with A=``sure positive'', B=``maybe positive'', C=``maybe not a positive'' and D=``sure negative''. To combine the human grading with the $P_L$, we have turned the ranking above into a score according to the conversion A=10, B=7, C=3, D=0 (see also Li+21). We finally select the spectra for which we have obtained an average score $\ge$7 as the final high-quality candidate sample. This is made of 497 objects in total.
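Schematically, the grade combination and the total probability $P_T$ reported in the final catalog (see \S\ref{sec:HQ_catalog}) can be written as in the following sketch; the function and variable names are illustrative and are not those of our actual code:
\begin{verbatim}
import numpy as np

GRADE_SCORE = {"A": 10, "B": 7, "C": 3, "D": 0}

def combined_selection(p_l, grades):
    """grades: ABCD grades assigned by the three inspectors."""
    mean_score = np.mean([GRADE_SCORE[g] for g in grades])
    p_t = p_l * 0.1 * mean_score          # total probability, P_T
    return mean_score >= 7, p_t

# e.g. combined_selection(0.98, ["A", "B", "A"]) -> (True, 0.882)
\end{verbatim}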
Some spectra of this ``high quality'' sample are plotted in Fig. \ref{fig:candida_spec}. Here we clearly see the emission lines, marked as red vertical lines, from background lensed star-forming galaxies.
During the visual inspection process, besides grading, we also check whether
the predicted values $z_{PE}$ and $z_{PG}$ given by the GaSNets\ are perfectly aligned with the visible spectral features. This is often not the case, as the prediction process has some intrinsic uncertainty. For instance, the two GaSNets need to interpolate across a grid of training spectra that have been shifted with a coarse sampling (i.e. 0.05 in redshift, see Sect. \ref{sec:neg sample}). However, other sources of error possibly cause even more significant shifts, as we will discuss in more detail in Sect. \ref{sec:Slightly shift}. Using an interactive GUI developed by one of us (ZF), we then determine by eye the shift needed to obtain a perfect visual alignment and a ``corrected'' redshift for
the $z_{PE}$, assuming the $z_{G}$ from the eBOSS catalog as an unbiased estimate of the main galaxy redshift.
Finally, to qualify a spectrum as a
lensed galaxy candidate we check that 1) the emission lines do not belong to the sky lines (green lines in Fig. \ref{fig:candida_spec}) and 2) the identified emission lines, i.e. red lines in Fig. \ref{fig:candida_spec}, having redshift $z_{PE}$ from GaSNet-L2, do not correspond to any line from the galaxy (i.e. blue lines in Fig. \ref{fig:candida_spec} at redshift $z_{PG}$ from GaSNet-L3). In other words, $\Delta Z=z_{PE}-z_{PG}$ has to be larger than 0.1, as shown in Fig. \ref{fig:candidate_distr}, where it is plotted as a function of the estimated $z_{PG}$.
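A schematic version of these two checks, in code form, could read as below; the sky-line list, the matching tolerance and the function name are purely illustrative assumptions:
\begin{verbatim}
def qualifies(z_pe, z_pg, line_obs_waves, galaxy_rest_lines,
              sky_lines=(5577.3, 6300.3), tol=5.0):
    for lam in line_obs_waves:
        # 1) reject lines overlapping known residual sky features
        if any(abs(lam - s) < tol for s in sky_lines):
            return False
        # 2) reject lines matching galaxy features at the lens redshift
        if any(abs(lam - g * (1.0 + z_pg)) < tol for g in galaxy_rest_lines):
            return False
    # require the redshift gap Delta Z = z_PE - z_PG > 0.1
    return (z_pe - z_pg) > 0.1
\end{verbatim}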
In Fig. \ref{fig:candidate_distr} we also see that $\Delta z$
decreases with $z_{PG}$ because, the more distant the lens, the smaller the difference in redshift with the background source.
From Fig. \ref{fig:candidate_distr} it is clear that this is mainly
a selection effect due to our condition $z_{PE}<1.2$; however, since the high-quality candidates do not cluster toward the upper bound of the zone of avoidance, we conclude that the candidate distribution becomes incomplete as $z_{PE}\sim1.2$ is approached. This is consistent with the correlation between low $P_L$ and higher $z_{PE}$ discussed in \S\ref{sec: test_simul}. An encouraging feature, in the same figure, is that the combination of the $\langle$SNR$_{\rm lines}\rangle>1$ cut and the visual inspection allows us to drop the stripe of spurious detections from residual sky lines discussed in \S\ref{sec:DR16data}.
\begin{figure}
\vspace{-0.3cm}
\centering
\includegraphics[width=1\linewidth]{paper/fig/candida/DeltaZvsZGHQ.png}
\caption{$z_{PE} - z_{PG}$ vs. $z_{PG}$ for the 497 visually inspected good potential candidates and for all the $P_L>0.95$ spectra.
\label{fig:candidate_distr}}
\end{figure}
\subsection{Deep learning vs. traditional methods}
\label{sec:T+21}
We end this section by comparing our HQ catalog, based on deep learning, with the catalog of 1551 candidates selected with the rest-frame optical bands from T+21, using traditional selection methods. They used the complete eBOSS/DR16 database and applied the standard spectroscopic detection method introduced in the eBOSS Emission-Line Lens Survey (BELLS), adding Gaussian-fit information, grading, and further inspection observables and methods to improve the BELLS selection.
They used a total of 2 million objects with no selection on the redshift of the lenses.
Furthermore, they used a larger database of reference lines, including also [NII]a/b and [SII]a/b: these are best suited for low-redshift detections, being all placed at $\lambda>6500$\AA, leaving only the [OII] doublet as a feature for the identification of background sources at $z\gsim1.2$.
As such, their predictive sample is wider in parameter space than the DR16-predictive sample we have adopted. For a proper comparison, we have selected the T+21 candidates that fall in the GaSNets\ predictive space (i.e. $z_{G}=0.05-0.8$, spectra SNR$>2$, $z_E\lsim1.2$, $z_E>z_G+0.1$) and finally obtain 778 ``compatible'' candidates ($\sim 50\%$ of the original sample). We have checked the excluded 773 and found that 739 detections are, indeed, based on a single line (generally in spectra with SNR$>2$), while 29 and 5 are based on 2 and 3 lines, respectively (all with spectra SNR$<2$), according to the T+21 catalog. Hence, the majority of these ``known candidates'' would have been missed anyway in our HQ catalog because of the conservative selection in the number of lines used for the classification, either in the deep learning training or in the visual ranking.
We have, then, matched the compatible 778 candidates with our HQ sample of 497 entries and, surprisingly, we have found a match for only 68 objects!
The positive note is that {\it GaSNets\ have found $\sim 430$ new HQ candidates that have been missed by standard techniques}. The negative note is that the GaSNets\ seem to have missed $710$ candidates from T+21!
{Is this true?} To answer this question we need first to check how many of these objects are lost by the GaSNets\ according to the criteria imposed on their outputs, i.e. they do not meet the criteria $P_L>0.95$ and $z_E-z_G>0.1$: they are 327, i.e. $42\%$ of the compatible sample. This is larger than the fraction of lost objects found in the test against the real systems in \S\ref{sec:real_data} (i.e. $100-69=31$\% of ``candidates'' and $100-80=20$\% of confirmed ones having $P_L<0.95$). One explanation of this excess of lost objects with low $P_L$ is that these are mainly optimistic candidates in T+21, to which the GaSNets\ have assigned low reliability. To confirm this, we have checked that 215/327 are single-line detections, according to T+21, and, furthermore, only 87/327 have scored A+ or A in their check against low-resolution imaging\footnote{As we will comment later, the image quality of the low-resolution DES imaging used by T+21 does not allow a firm classification, except for very clear features. Hence, we have conservatively assumed the A+ and A scores sufficient to preliminarily quantify the confirmation rate.}. Hence, we can fairly conclude that this sample of lost candidates is of overall low value, having a small (albeit uncertain) confirmation rate. This also implies that the fraction of lost ``real'' SGL events in our catalog is in line with the one estimated in \S\ref{sec:real_data}, reported above (i.e. 20\%).
Turning to the remaining lost candidates ($710-327=383$), in Fig. \ref{fig:missed_plot} we show the spectrum (not line) SNR
vs. the estimated redshift of the background lines from T+21, color-coded by the number of detected lines.
From this figure, we observe that:
\begin{figure}
\vspace{+0.3cm}
\centering
\includegraphics[width=1.05\linewidth]{paper/fig/SNR_z_T+21.jpeg}
\caption{Sample of missing candidates in the HQ catalog from GaSNets\ + visual inspection. In this figure we show the distribution of the missing candidates in the parameter space adopted for the training of the GaSNets\ (i.e. spectra SNR$>2$ and $z_E<1.2$). Each candidate is color coded by the number of detected lines in its spectrum (according to T+21). Most of the missed candidates are single-line detections and did not qualify for our HQ sample.
\label{fig:missed_plot}}
\end{figure}
\begin{figure*}
\vspace{+0.3cm}
\centering
\includegraphics[width=0.895\linewidth]{paper/fig/T+21-fig/3856-55269-309.pdf}
\includegraphics[width=0.9\linewidth]{paper/fig/T+21-fig/3861-55274-393.pdf}
\includegraphics[width=0.91\linewidth]{paper/fig/T+21-fig/4207-55475-933.pdf}
\includegraphics[width=0.92\linewidth]{paper/fig/T+21-fig/4269-55502-479.pdf}
\caption{Sample of missing candidates in the HQ catalog from GaSNets\ + visual inspection and found in T+21. The red vertical lines are the features identified as multiple lines in T+21, but excluded by us because they are either too faint or embedded in noisy regions, making them too unreliable to qualify as HQ candidates.
\label{fig:missed_spec}}
\end{figure*}
1) The majority (286/383, i.e. 75\%) of the missing candidates have single-line detections, thus they are lost from our HQ catalog because we excluded them in our filtering (either because of the $\langle$SNR$_{\rm lines}\rangle$ cut or the visual inspection, see \S\ref{sec:DR16data} and \ref{subsec:visual}). According to the T+21 low-resolution grading, 164/286 of the single-line detections have A or A+ scores, which implies a rather large confirmation rate, $\sim$60\%, if confirmed by higher-quality imaging. This is a sample we can easily intercept with GaSNets, by simply relaxing the conservative criterion against single-line detections. From Fig. \ref{fig:missed_plot}, we see that above $z\sim1.05$ we lose some 2-line candidates, which further supports the conclusion in \S\ref{subsec:visual} that we are incomplete at $z_E\lsim1.2$.
2) The remaining 97 multi-line objects in Fig. \ref{fig:missed_plot} are of greater concern, as, according to their $P_L$ and number of lines, they should have been picked up by the GaSNets\ + visual inspection. First, we have found 10/97 objects classified as quasar or unknown in DR16, so these could not be in our catalog. For all the other 87 we have visually inspected the spectra and found that, despite being classified as multi-line in T+21, no line except the [OII] doublet had an acceptable SNR. Hence, these are all candidates that have been substantially treated as single-line detections by us or given a rather poor visual grade. We give some examples of these spectra in Fig. \ref{fig:missed_spec}. Since 60/97 have received A or A+ scores
from the low-resolution confirmation in T+21, i.e. 60\%, this is a sample that is likely to be valuable and should not be missed.
However, we need to point out that this sample was not lost by the GaSNets\ but by human selection.
\subsection{First catalog of new HQ strong lensing candidates in eBOSS from Deep Learning}
\label{sec:HQ_catalog}
After having subtracted the 68 candidates already found in T+21, we obtain a final catalog of 429 new HQ candidates in eBOSS, the first fully derived using deep learning.
The full catalog
is reported in Appendix \ref{appendix}.
This includes information about 1) RA/DEC coordinates; 2) plate ID; 3) MJD (Modified Julian Day), the observation date; 4) the GaSNet-L1\ probability, $P_L$; 5) the redshift of the galaxy from the eBOSS catalog; 6) the predicted redshift of the galaxy from GaSNet-L3; 7) the predicted redshift of the background source from GaSNet-L2; 8) the corrected redshift of the source from the visual inspection (see Sect. \ref{sec:Slightly shift}); and 9) the total probability, $P_T=P_L\times0.1\times$(visual score), i.e. the combination of the GaSNet and human probabilities of being a lens.
\section{Discussion}
In the previous section, we have presented the final list of 429 new strong galaxy lensing candidates, obtained by applying the three GaSNets\ to the latest eBOSS database (DR16), and further cleaning the sample via visual inspection.
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{paper/fig/FIG_HSC_image.jpeg}
\caption{Some examples of ground-based color cutouts ($20'' \times 20''$) of GaSNet candidates. Top row: matches with known HSC candidates, re-graded as indicated in the bottom-left corner; the GaSNets\ have found them independently as HQ candidates.
Bottom 4 rows: A, B, C and D ranked HSC counterparts of GaSNet candidates from the HQ sample in \S\ref{sec:HQ_catalog}.
\label{fig:HSC_match}}
\end{figure*}
Strictly speaking, the GaSNets' candidates consist of systems where, in the spectrum of a foreground galaxy,
we have found emission lines that are incompatible with belonging to the same galaxy. We have assumed, so far, that all these lines come from lensing events. In reality, they can be emitted by other kinds of sources, like overlapping galaxies along the line of sight, outflows in late-type galaxies, interacting systems, etc., although we have set a redshift gap, $\Delta z$, that might have prevented the confusion with some ``local'' phenomena. Hence, to fully assess the new catalog, we need to estimate a fiducial confirmation rate based on space observations or high-quality ground-based imaging. Such a confirmation rate is important 1) to compare with the one from standard techniques, to see whether Deep Learning can outperform them in terms of reliability of the candidates; 2) to check whether the large spectroscopically selected samples accumulated so far are compatible with the expected numbers of SGL events from theoretical predictions (see e.g. \S\ref{sec:challange}), or whether we might expect to find more events with more refined tools.
Besides the confirmation rate, in this section, we also discuss
the possibility of using GaSNet-L2\ and GaSNet-L3\ as automatic tools for redshift estimates and spectra classification.
We will conclude this discussion with some perspectives on the next improvements of the GaSNets.
\subsection{Confirmation rate via ground based imaging}
\label{sec:confir_rate}
To properly derive a fiducial confirmation rate for the 429 HQ candidates in \S\ref{sec:HQ_catalog}, we have checked the HST archive observations to look for serendipitous matches with our newly discovered candidates, but found no matches. Hence, the only remaining check we can perform is against archival observations from the ground.
There are three datasets potentially useful for the test: 1) DECaLS\footnote{https://portal.nersc.gov/cfs/cosmo/data/legacysurvey/dr7/};
2) KiDS\footnote{https://kids.strw.leidenuniv.nl/DR4/access.php} and 3) HSC\footnote{https://hsc-release.mtk.nao.ac.jp/das\_cutout/pdr3/}. We have found 279 matches with DECaLS, 16 with KiDS and 63 with HSC, however: 1) the quality of the DECaLS {\it grz} color images from the public data is rather poorer than that of the other surveys, making the identification of the lensing features extremely uncertain (see Appendix \ref{app:B}); 2) the number of KiDS matches is too small to provide fair statistics, and we decided to leave the few convincing candidates for future analyses; 3) the HSC sample is the one with sufficiently large statistics, image quality and uniformity to make a fair estimate of the fraction of convincing lenses without strong biases.
Looking at this latter sample, we find that 7 candidates have corrupted color images or are too close to some bright source to be used with sufficient confidence. Hence, we finally inspect 56 systems. Of these, our HQ candidates match 8 known lens candidates from HSC \footnote{http://www-utap.phys.s.u-tokyo.ac.jp/~oguri/sugohi/} (e.g., \citealt{2018PASJ...70S..29S, 2019A&A...630A..71S}), although they are all C-graded, based on imaging only, in those catalogs.
We have visually inspected them again and, applying the ABCD scheme as in \S\ref{subsec:visual} and taking into account the spectroscopic evidence, we have reclassified 3 of them with A-grade and 5 with B-grade.
Of the remaining 49 matches,
we have classified 7 candidates
as A-grade and 17 as B-grade systems.
Taking the A-grade as {\it bona fide} confirmed lenses and weighting the B-grade ones by a 0.5 factor to account for the possibility that they may not be lenses,
we conclude that the lens confirmation rate is 21/56 or 38\%, which is lower than the confirmation rate estimated in \S\ref{sec:real_data} using space imaging.
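Explicitly, weighting the A grades by 1 and the B grades by 0.5, the counts above give $[(3+7)\times 1+(5+17)\times 0.5]/56=21/56\simeq 38\%$.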
In Fig. \ref{fig:HSC_match} we show a gallery of the ``confirmed'' lenses and, for comparison, the ``unconfirmed'' ones (i.e. those C- and D-graded). In the first row {we report some of the lenses previously found in the HSC imaging and confirmed and re-graded by us;} in the second and third rows, some examples of new GaSNet confirmed lenses with A-grade
and B-grade, respectively. The final two rows show the unconfirmed C and D cases. These clearly show the variety of potential contaminants, including arc-like features of unclear nature, blue/faint background galaxies similar to other objects in the field-of-view, interacting systems and large late-type or lenticular galaxies. In these latter examples, especially the large galaxies, once we exclude the cases where the background emissions found in the spectra likely come from unlensed faint background systems visible in the field-of-view, it is difficult to identify any other potential high-$z$ emitter, which leaves the nature of these emissions unresolved. In principle we cannot exclude that, given the small area covered by the eBOSS fibers ($2''$, see also Fig. \ref{fig:HSC_match}), some very low-separation arc, embedded in the bright foreground galaxy light, remains undetected in the seeing-confused HSC images. In this case, we can argue that the confirmation rate estimated above (38\%) might represent a lower limit.
If this conclusion is correct, we can attempt to derive a prediction of the total number of true SGL events in eBOSS, based on the current candidates from T+21 and this work. Put together they are 1551+429=1980. Assuming a pessimistic confirmation rate of 38\%, they make
$752$
real SGL events, {while the more optimistic $46\%$ confirmation rate of SLACS+BELLS+S4TM makes $911$ real SGL events.} If we add the other candidates found in BOSS from BELLS (25) and BELLS GALLERY (17\footnote{Note that more can still be found in their sample of 155 candidates remaining unconfirmed. Assuming a $\sim 50\%$ confirmation rate, these can be $\sim70$.}) we reach 794 and 953 real SGL events, which nicely bracket the expected number we have estimated in \S\ref{sec:challange} for BOSS ($\sim920$). This suggests that we have possibly reached the full completeness of lens finding in the largest spectroscopic database currently available.
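In numbers: $1980\times0.38+25+17\simeq794$ and $1980\times0.46+25+17\simeq953$.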
\subsection{Statistical errors of GaSNet-L2\ and GaSNet-L3}
\label{sec:Slightly shift}
GaSNet-L2\ and GaSNet-L3\ are two CNNs that can perform the generic task of estimating the redshift of given features in 1D spectra. As such, they can be applied to spectroscopic databases regardless of the specific task of looking for strong gravitational lenses.
Certainly, the search for lenses requires a much lower accuracy in $z_{PG}$ and $z_{PE}$, because the only condition to flag potential events is $\Delta z = z_{PE}-z_{PG}>0.1$, which is rather larger than the typical spectroscopic redshift errors from human measurements. However, this condition is physically meaningful only if
$\Delta z$ is larger than the combination of the typical errors on $z_{PE}$ from GaSNet-L2\ and $z_{PG}$ from GaSNet-L3, which also include the uncertainties that a deep learning process might introduce (activation, loss, training, etc.).
Hence, if, on one hand, the assessment of the ``bias'' and typical ``statistical errors'' of the two GaSNets\ (L2 and L3) is needed to validate the pre-condition for the HQ candidates, on the other hand, it can also quantify
the accuracy of the individual CNNs as ``automatic tools'' for redshift measurements. In this latter case, we would possibly require the typical errors to be of the order of $<1\%$, and systematics smaller than this precision. At the same time, we should expect a negligible fraction of outliers/catastrophic events.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{paper/fig/candida/Candidate_eyeZE-PZE.png}
\caption{$eyez_E$ vs. $z_{PE}$ for the 429 visually inspected HQ candidates, together with the distributions of $eyez_E$ and $z_{PE}$.
\label{fig:ZE_accuracy}}
\end{figure}
As mentioned in \S\ref{subsec:visual}, during the visual inspection we had the chance to check the accuracy of the $z_{PE}$ estimates
and correct them by hand. This process is not itself error-free, as the resulting $eyez_E$ is a combination of a subjective identification of the line center and the accuracy of the by-eye line alignment.
However, we can confidently use these corrections, together with the nominal $z_G$ given in the SDSS catalogs, to compute the scatter of the GaSNet-L2\ and GaSNet-L3\ predictions and derive systematics and statistical errors for $z_{PE}$ and $z_{PG}$.
For the $z_{PG}$ we can use all the galaxies in the predictive catalog as shown in Fig. \ref{fig:all_spec} (right), for which $z_G$ is known, to determine the $\delta _{z_{PG}}=z_{PG}-z_{G}$.
The scatter in this case is $\sigma(\delta_{z_{PG}})=0.015$, and the outliers, defined as the spectra for which $|z_{PG}-z_{G}|/(1+z_{G})>0.15$, are about 0.113\%.
For the $z_{PE}$ we can use the spectra that we have visually inspected and for which we have collected the average $eyez_E$ estimated by the three of us. These are shown against the $z_{PE}$ in Fig. \ref{fig:ZE_accuracy} and used to estimate the $\delta_{z_{PE}}=z_{PE}-eyez_{E}$. In this case, we have estimated the scatter $\sigma(\delta_{z_{PE}})=0.046$, while the outliers are about 1.21\%.
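These statistics correspond to the simple computation sketched below; the array names are illustrative, and here the ``scatter'' is taken to be the standard deviation of the redshift differences:
\begin{verbatim}
import numpy as np

def scatter_and_outliers(z_pred, z_ref, out_thr=0.15):
    """Scatter of (z_pred - z_ref) and fraction of outliers, defined as
    |z_pred - z_ref|/(1 + z_ref) > out_thr."""
    delta = z_pred - z_ref
    sigma = np.std(delta)
    out_frac = np.mean(np.abs(delta) / (1.0 + z_ref) > out_thr)
    return sigma, out_frac

# scatter_and_outliers(z_PG, z_G)      -> (~0.015, ~0.0011)
# scatter_and_outliers(z_PE, eye_z_E)  -> (~0.046, ~0.012)
\end{verbatim}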
In both cases, the scatter and accuracy are reasonably good, and so is the outlier fraction. This result confirms that the adopted $\Delta z>0.1$ condition is conservative enough to account for the nominal statistical errors of the predicted redshifts. Furthermore, if we consider that the SNR is generally poor for the majority of the emission lines of the background galaxies, then we believe that both GaSNets\ (L2 and L3) are a very promising start and can possibly already be used to automatically provide a first accurate guess of the redshift of galaxies in large surveys, while a more dedicated training would possibly improve the overall accuracy. We will dedicate future analyses to testing the GaSNets\ on this and further applications, including specialized tasks for spectra classification (e.g. starburst galaxies, active galactic nuclei, irregular systems etc.).
\begin{figure}
\centering
\vspace{+0.5cm}
\includegraphics[width=1\linewidth]{paper/fig/new_CNN.png}
\caption{A possible scheme for increasing the interplay between the three GaSNets. Here, we foresee feeding GaSNet-L1\ with the outputs of GaSNet-L2\ and -L3, as shown by the yellow line, to improve the $P_L$.}
\label{fig:new_CNN_model}
\end{figure}
\subsection{Improvements of CNN model}
In this work we have used
3 independent CNN models and combined their outputs according to some physically meaningful conditions (see Fig. \ref{whole_CNN_model}), to identify strong lens candidates.
In fact, because of
the physics of the SGL, which involves the position of the source and lens with respect to the observer, the properties of the projected potential, etc., the 3 outputs of the GaSNets\ are not fully independent. Rather,
they must be connected via the ray-tracing equation of the SGL. For instance, one can define a more meaningful probability for a spectrum to contain a lens candidate by looking at the relative distance between $z_{PG}$ and $z_{PE}$, or at the absolute value of $z_{PG}$ (e.g. giving a lower $P_L$ if the galaxy is at a very low redshift), etc.
One possible future development is to
connect different individual CNN networks (just like the neurons in our brain), for example, as in Fig. \ref{fig:new_CNN_model}, to make a more educated probability for a spectrum to be an SGL system.
In this figure, we
suggest using the prediction of $z_{PG}$ as conditional information for the prediction of $P_L$, and then using the predictions of $z_{PG}$ and $P_L$ as auxiliary information for the prediction of $z_E$.
If, on one hand, this architecture can help to improve the accuracy, the cost to pay is
a higher model complexity, including a larger correlation among the different branches and a heavier back-propagation. This would make the overall model more time consuming in terms of training and prediction, but likely more accurate and less prone to false detections.
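A minimal sketch of such an interconnected architecture, written here with the Keras functional API purely for illustration, is given below; the layer types, sizes and names are illustrative assumptions and do not reproduce the actual GaSNet configuration:
\begin{verbatim}
from tensorflow.keras import layers, Model, Input

spec_in  = Input(shape=(4000, 1), name="spectrum")
features = layers.Conv1D(32, 9, activation="relu")(spec_in)
features = layers.GlobalAveragePooling1D()(features)

# branch 1: lens-galaxy redshift (role of GaSNet-L3)
z_pg = layers.Dense(1, name="z_PG")(features)

# branch 2: lens probability, conditioned on z_PG (role of GaSNet-L1)
p_l = layers.Dense(1, activation="sigmoid", name="P_L")(
    layers.Concatenate()([features, z_pg]))

# branch 3: source redshift, using z_PG and P_L as auxiliary inputs
z_pe = layers.Dense(1, name="z_PE")(
    layers.Concatenate()([features, z_pg, p_l]))

model = Model(spec_in, [z_pg, p_l, z_pe])
\end{verbatim}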
\section{Conclusions}
In this paper, we have presented a novel deep learning tool to search for strong gravitational lensing (SGL) events in 1D galaxy spectra. This is the first attempt to use multiple emission lines, after Li+19 used Ly$\alpha$ only.
The new algorithm is made of different CNNs, dubbed Galaxy Spectra convolutional neural Networks (GaSNets). These are optimized to work together to provide SGL candidates, but can also perform classification and regression tasks independently. As such, they are extremely suitable for further applications to large databases of tens to hundreds of millions of spectra, such as the ones expected from the next generation spectroscopic surveys (4MOST, DESI, EUCLID, CSST).
In this paper, we have started by applying these new tools to strong lensing searches in the eBOSS/DR16 database (\citealt{2020ApJS..249....3A}). To this aim we have introduced: 1) GaSNet-L1, giving to each eBOSS spectrum the probability of being an SGL event ($P_L$); 2) GaSNet-L2, estimating the redshift of background sources ($z_{PE}$) from a series of pre-selected emission lines (see Table \ref{table:model parameters}); and 3) GaSNet-L3, estimating the redshift of the galaxy itself ($z_{PG}$), using the information it learns from the continuum, including local absorption/emission features. Only by working together can the three GaSNets\ efficiently pinpoint SGL candidates, combining a high $P_L$ with the condition $z_{PE}>z_{PG}$, as expected for typical strong lensing configurations.
In particular, by testing the GaSNets on a list of known spectroscopically selected gravitational lenses in SDSS/BOSS (from \citealt{2008ApJ...682..964B}, \citealt{2012ApJ...744...41B} and \citealt{2017ApJ...851...48S}) we have found that using a $P_L>0.95$ we can recover about 80\% of the strong lenses confirmed by HST. This very conservative probability threshold provided a reasonable trade-off between a significant completeness and a reasonably small sample to visually inspect,
with low contamination from false-positive detections.
Using this set-up, with the condition that $z_{PE}>z_{PG}+0.1$, we have applied the GaSNets\ to $\sim 1.3$ million spectra from SDSS-DR16, after having imposed some appropriate cuts to guarantee a good spectrum quality and the visibility of at least two emission lines from the putative sources (namely, [OII] and $H_\gamma$), assumed to be star-forming galaxies.
We have collected $\sim 930$ candidates that have been further cleaned of misclassified SGL events via visual inspection. The final sample of visual HQ candidates is made of 497 spectroscopically selected objects. This catalog has been {\it a posteriori} compared to the most extended catalog of spectroscopically selected lens candidates from \citet{2021MNRAS.502.4617T}, finding an overlap of only 68 candidates, meaning that 429 of our candidates are newly found. On the other hand, we have demonstrated that the GaSNets\ did not recover the remaining T+21 sample because of the conservative constraints we have adopted on the number of lines to be detected ($>2$). Relaxing these constraints, half of the T+21 sample (i.e. the part for which the GaSNets\ give $P_L>0.95$) remains within the GaSNets' discovery reach.
For the new HQ catalog we provide RA, DEC, the probability, $P_L$, the redshift of the galaxy from the eBOSS catalog, the predicted redshift of the galaxy from GaSNet-L3, the predicted redshift of the background source from GaSNet-L2, the corrected redshift of the source from the visual inspection, in Appendix \ref{appendix}.
To estimate a tentative confirmation rate of these candidates, we have matched the coordinates with archive HST observations and found no matches. Instead, we have found optical counterparts in DECaLS, KiDS and HSC observations, but only HSC has provided sufficient statistics and image quality to confidently confirm a first sample of GaSNets' candidates. Among these, we have independently confirmed 8 SGL candidates from previous HSC lens imaging searches, thus providing spectroscopic evidence of lensing events, even though for only 3 of them we have found convincing features in the imaging to grade them as ``sure lens''.
Besides these ``known'' lenses, we have found a preliminary optical confirmation of further 24 GaSNet HQ candidates, although, also in this case, for 17 of them the HSC images allowed only a ``maybe lens'' B-grade and only 7 have a ``sure lens'' A-grade. {Taking the A-grade as {\it bona fide} lenses and giving a 0.5 weight to the B-grade candidates,
we have estimated a confirmation rate of $38\%$ for our HQ catalog.}
Some examples of the HSC matches are shown in Fig. \ref{fig:HSC_match}, where we also show low-graded imaging of GaSNet candidates. The possible contaminants are higher-redshift galaxies overlapping in the fiber spectra, or maybe local phenomena (outflows?) mimicking an SGL event.
In this paper, we have demonstrated that Deep Learning represents a very efficient method to search for strong lenses in galaxy spectra.
This can be applied to next generation spectroscopic surveys in a fast and automated way. This first application to the eBOSS database has confirmed that the spectroscopic selection of SGL candidates is complementary to the imaging-based SGL searches. For instance, of the 32 A/B grade candidates from the GaSNets\ matching with HSC imaging, only 8 were found previously on HSC images. This over-performance of the spectroscopic searches with respect to imaging is particularly evident for ground-based observations, where the typical seeing has no impact on the emission lines of background sources in the spectra, but makes it hard to resolve low-separation gravitational arcs of the same sources.
For this first application, we have made conservative choices regarding: 1) the number of features to use for the training of the GaSNets; 2) the overall network architecture, e.g. limiting the interconnections between the three GaSNets; 3) the probability threshold, chosen to optimize the sample to visually inspect and keep the false positives under control. These are all directions to consider for future improvements.
As a final positive note, we have discussed that GaSNet-L3, in particular, has reached an accuracy and scatter of its predictions sufficient to be used to automatically measure galaxy redshifts in large spectroscopic surveys.
\section*{Acknowledgements}
We thank Dr. C. Tortora and Dr. Y. Shu for useful comments on the manuscript. RL acknowledges the science research grants from the China Manned Space Project (No. CMS-CSST-2021-B01, CMS-CSST-2021-A01). NRN acknowledges financial support from the ``One hundred top talent program of Sun Yat-sen University'' grant N. 71000-18841229.
\section*{Data Availability}
The data that support the findings of this study are available at the URLs provided in the text and in the Table in Appendix A. All other data that are not provided in the paper can be requested from the authors.
\bibliographystyle{mnras}
|
1,314,259,995,658 | arxiv | \section{Introduction}
Throughout this paper the term {\it operator}\/ means a bounded linear
transformation of a Hilbert space into itself$.$ An operator is {\it posinormal}
if its range is included in the range of its adjoint$.$ The class of
posinormal operators includes the class of hyponormal operators$.$ An
operator is {\it quasiposinormal }if the closure of its range is included in the
closure of the range of its adjoint---if ether of these ranges is closed,
then they are both closed and the concepts of posinormal and quasiposinormal coincide.
\vskip6pt
A necessary and sufficient condition for the product of a pair of
closed-range operators to have a closed range is given in Theorem 1 of
\cite{B}, which also provides an example of a closed-range
operator whose square does not have closed range \cite[Corollary 5]{B}. A simpler example is given in \cite[Example 1]{BKT}. The square of a
posinormal operator is not necessarily posinormal \cite[Example 1]{KVZ} but
every positive-integer power of a posinormal operator with closed range is posinormal
with closed range \cite[Corollary 14]{JV} (and
so powers of a hyponormal operator with closed range have closed range).
The closed-range assumption is crucial even if the operator and its adjoint
are both posinormal: Proposition 4.3 of \cite{BT} describes examples of non-closed-range posinormal operators having posinormal adjoints for which all sufficiently large powers fail to be
posinormal. The fact that every power of
a closed-range posinormal operator is again a closed-range posinormal
operator prompts the following question.
\begin{question}\label{Q:1.1}
{\it Is the product of two commuting posinormal operators, both with closed
range, a posinormal operator with closed range}$\,?$
\end{question}
Motivated by the preceding (still open) question, we explore necessary conditions and sufficient conditions for posinormal operators to have closed range, as well as investigate the structure of matrix representations of a pair of commuting, closed-range posinormal operators on a Hilbert space ${\mathcal H}$ relative to a natural orthogonal decomposition of ${\mathcal H}$. We note that Question \ref{Q:1.1} above has an affirmative answer if ``posinormal'' is replaced by ``normal'' because (i) the product of two commuting normal operators is normal, thanks to Fuglede's Theorem, and (ii) the product of two commuting normal operators $A$ and $B$ having closed range will also have closed range by, e.g., Proposition~\ref{P:2.2} below: the kernel of $A$ will be reducing for $B$ by Fuglede's Theorem, showing that the operator $Y$ of part (a) of Proposition~\ref{P:2.2} must be the zero operator, so that the hypotheses of part (b) of Proposition~\ref{P:2.2} hold. The product of two closed-range normal operators that do not commute need not have closed range---see Example~\ref{TE} below.
\vskip6pt
{\it A normal operator has closed range if and
only if\/ $0$ is not a limit point of its spectrum}\/ (e.g., set
${\lambda=0}$ in \cite[Proposition XI.4.5]{CFA})$.$ In Section 3, we identify three properties of Hilbert-space operators, each one of which normal operators possess, such that if $T$ is an operator on a Hilbert space ${\mathcal H}$ having these three properties, then $T$ has closed range if and only if $0$ is not a limit point of the spectrum of $T$. As corollaries of this result, we show that
\begin{itemize}
\item if $T$ is a hyponormal operator such that $0$ is not a limit point of the spectrum of $T$, then the range of $T$ is closed (Corollary \ref{C:3.3}),
\item if $T$ is a posinormal operator such that $0$ is not a limit point of the spectrum of $T$ and the restriction of $T$ to the orthogonal complement of the kernel of $T$ is isoloid, then the range of $T$ is closed (Corollary \ref{C:3.4}), and
\item if $T$ is a posinormal operator with closed range, then $0$ is not a limit point of the spectrum of $T$ if and only if the adjoint of $T$ is also posinormal (Proposition \ref{P:3.5}).
\end{itemize}
\vskip6pt
If $A$ is a posinormal operator on a Hilbert space ${\mathcal H}$, then, by definition, the range of $A$ is contained in the range of $A^*$, and, upon taking orthogonal complements, we see that the kernel of $A$ is a subset of the kernel of $A^*$. Thus, for a posinormal operator $A$ on ${\mathcal H}$, the kernel ${\mathcal N}(A)$ of $A$ is a subspace of ${\mathcal H}$ that reduces $A$.
In general, if $B$ is an operator in the commutant of $A$ and if the kernel ${\mathcal N}(A)$ of $A$ reduces $A$, then relative to the orthogonal decomposition ${\mathcal H}={{\mathcal N}(A)^\perp\oplus{\mathcal N}(A)}$, the operators $A$ and $B$ have the following matrix representations:
$$
A=\big(\smallmatrix{A' & O \cr
O & O \cr}\big)
\quad\hbox{and}\quad
B=\big(\smallmatrix{B' & O \cr
Y & Z \cr}\big).
$$
In Section 4 (Corollary \ref{C:4.3}), we give a sufficient condition
for the product of closed-range commuting posinormal operators to be
posinormal with closed range:
\vskip6pt\noi
{\narrower{\it
If\/ $A$ and\/ $B$ are commuting posinormal operators with closed range,
then\/ $AB$ is posinormal with closed range if\/ $B'$ and\/ $Z^*$ are
posinormal\/.
}\vskip6pt}
\noindent Moreover, using the matrix representations described above, we show (Theorem \ref{T:4.6}) that if $A$ and\/ $B$ are commuting posinormal operators with closed range and one of $A$ and $B$ has posinormal adjoint, then\/ $AB$ is posinormal with closed range, a result that generalizes \cite[Theorem 3]{DD}.
\vskip6pt
Of course, products of noncommuting posinormal operators can be posinormal. One can check that for the posinormal operators $G=\big(\smallmatrix{1 & 1 \cr
0 & 1 \cr}\big)
\quad\hbox{and}\quad
P=\big(\smallmatrix{1 & 0 \cr
0 & 0 \cr}\big)$ on ${\mathbb C\kern.5pt}^2$, the operator $GP$ is posinormal, but $PG$ is not (and all these operators have closed range because all linear operators on ${\mathbb C\kern.5pt}^n$ have closed range). Another example: on any complex Hilbert space a pair of non-commuting unitary operators will be a pair of non-commuting, closed-range normal operators whose product is normal with closed range (in fact, unitary).
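For the pair $G$ and $P$ above, the verification is a direct computation:
$$
GP=\big(\smallmatrix{1 & 0 \cr
0 & 0 \cr}\big)=P
\quad\hbox{and}\quad
PG=\big(\smallmatrix{1 & 1 \cr
0 & 0 \cr}\big),
$$
so $GP$ is an orthogonal projection (hence posinormal), while the range of $PG$ is spanned by $(1,0)$ and the range of $(PG)^*$ is spanned by $(1,1)$, so that the range of $PG$ is not included in the range of $(PG)^*$.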
The product of two closed-range normal operators need not have closed range:
\begin{example} \label{TE} There exist normal operators $A$ and $B$ having closed range such that $AB$ does not have closed range.\end{example}
Let ${(e_j)_{j=0}^\infty}$ be the natural basis of $\ell^2$ so that the
sequence $e_j$ has $1$ as its $j$-th term and zeros elsewhere. Let ${{\mathcal M}}_{1}$ be the closure of the span of $\{e_{2k}: k = 0,1,...\}$ and ${{\mathcal M}}_{2}$ be the closure of the span of $\{g_k := e_{2k} + e_{2k+1}/(2k+1): k =0,1,\ldots\}$. (Note that $\{g_k: k \ge 0\}$ is orthogonal.) Set $A = (I-P_{{{\mathcal M}}_{1}})$ (i.e. the orthogonal projection onto the orthogonal complement of ${{\mathcal M}}_{1}$) and $B = P_{{{\mathcal M}}_{2}}$. We use Bouldin's criterion, stated below, to establish that $AB$ does not have closed range.
\begin{quotation} Bouldin's Criterion \cite{B}: {\it If $S$ and $T$ are operators on $\mathcal{H}$ having closed range then $ST$ also has closed range if and only if the angle between ${\rm ran\,} T$ and $\ker S\cap(\ker S \cap {\rm ran\,} T)^\perp$ is positive. }
\end{quotation}
Observe that $\ker A = {{\mathcal M}}_{1}$ and ${\rm ran\,} B = {{\mathcal M}}_{2}$, so that $\ker A \cap {\rm ran\,} B = {{\mathcal M}}_{1} \cap {{\mathcal M}}_{2} = \{0\}$. Thus, $(\ker A \cap {\rm ran\,} B)^\perp = {\mathcal H}$. We show that the angle between ${\rm ran\,} B = {{\mathcal M}}_{2}$ and $\ker A \cap (\ker A \cap {\rm ran\,} B)^\perp = \ker A = {{\mathcal M}}_{1}$ is $0$, showing the range of $AB$ is not closed by Bouldin's Criterion.
The angle $\theta$ between ${{\mathcal M}}_{1}$ and ${{\mathcal M}}_{2}$ of $\mathcal{H}$ is given by
$$
\theta = \cos^{-1}\left(\rule{0in}{.15in}\sup\{|\langle f, g\rangle|: f\in {{\mathcal M}}_{1}, g\in {{\mathcal M}}_{2}, \|f\| =1= \|g\|\}\right),
$$
where $\langle \cdot, \cdot\rangle$ denotes the inner product of $\ell^2$. For $n\ge 0$, let
$$
f_n = e_{2n}\quad \text{and} \quad g_n = \frac{\left(e_{2n}+ \frac{e_{2n+1}}{2n+1}\right)}{\sqrt{1+ 1/(2n+1)^2}}
$$
and observe that $(f_n)$ and $(g_n)$ are sequences of unit vectors such that $\langle f_n, g_n\rangle = 1/\sqrt{1+ 1/(2n+1)^2}\rightarrow 1$, as $n\to \infty$. We see the angle between ${{\mathcal M}}_{1}$ and ${{\mathcal M}}_{2}$ is $0$, as desired.\qed
\vskip6pt
For the normal operators $A$ and $B$ of the preceding example, it's easy to show that $AB$ is not normal. In general, it's possible to show that for orthogonal projections $A$ and $B$, the product $AB$ is normal if and only if $A$ and $B$ commute (and the projections $A$ and $B$ of Example~\ref{TE} do not commute).
The paper is organized into four more sections$.$ Notation, terminology
and auxiliary results are considered in Section 2$.$ The results
summarized above are treated in Sections 3 and 4$.$ Section 5 presents a
detailed discussion of EP operators and matrices and how they relate to
posinormal operators and matrices, concluding with a discussion of, as well
as a new proof of, the Hartwig--Katz Theorem, which characterizes when the
product of two posinormal matrices is a posinormal matrix.
\section{Notation, Terminology, and Auxiliary Results}
Let ${\mathcal H}$ be an infinite-dimensional complex Hilbert space$.$ If ${\mathcal M}$ is a
subspace of ${\mathcal H}$, then we let ${\mathcal M}^-$ denote its closure and ${\mathcal M}^\perp$ its orthogonal complement. The
algebra of all operators on ${\mathcal H}$ will be denoted by ${\B[\H]}.$ For any operator
${T\kern-1pt\in{\B[\H]}}$, let ${\mathcal N}(T)$ stand for the kernel of $T$, which is a
closed subspace of ${\mathcal H}$, and let ${\mathcal R}(T)$ stand for the range of $T$. Let ${T^*\!\in{\B[\H]}}$ denote the adjoint of
${T\in{\B[\H]}}$. Posinormal operators are \hbox{defined as follows}.
\vskip6pt
An operator ${T\kern-1pt\in{\B[\H]}}$ is {\it posinormal}\/ if
$$
{\mathcal R}(T)\sse{\mathcal R}(T^*)
\qquad\hbox{(which implies $\;{\mathcal N}(T)\sse{\mathcal N}(T^*)\kern.5pt$)},
$$
and {\it quasiposinormal}\/ if
$$
{\mathcal R}(T)^-\!\sse{\mathcal R}(T^*)^-\!
\qquad\hbox{(equivalently, $\,{\mathcal N}(T)\sse{\mathcal N}(T^*)\kern.5pt$)}.
$$
Posinormal operators are quasiposinormal and the concepts coincide if
${\mathcal R}(T)$ is closed$.$ If $T$ is injective, then it is quasiposinormal; if
$T^*$ is surjective (equivalently, if $T$ is injective with closed range), then $T$ is posinormal$.$ For equivalent definitions of
posinormal operators, see, e.g., \cite[Theorem 2.1]{R1},
\cite[Theorem B]{I}, \cite[Theorem 1]{JKKP}, \cite[Proposition 1]{KD},
\cite[Definition 1]{KVZ}$.$ An operator ${T\kern-1pt\in{\B[\H]}}$ is called
{\it coposinormal}\/ or {\it coquasiposinormal}\/ if its adjoint
${T^*\!\in{\B[\H]}}$ is posinormal or quasiposinormal, respectively.
\vskip6pt
Posinormal operators were introduced and systematically investigated by
Rhaly in \cite{R1}, which appeared in 1994$.$ The class of posinormal operators
includes the hyponormal operators but is not included in the class of
normaloid operators$.$ For a comprehensive exposition on posinormal
operators see, e.g., \cite{R1} and \cite{KD}$.$ For basic properties of
posinormal operators, see, e.g., \cite[Corollary 2.3]{R1},
\cite[Propositions 3 and 4]{JKKP}, \cite[Lemma 1, Remark 2]{KD}, and
\cite[Proposition 1]{KVZ}. Those properties
required in this paper are summarized below.
\begin{proposition}\label{P:2.1}
{\it Let $T$ be a Hilbert-space operator}\/.
\begin{description}
\item{$\kern-4pt$\rm(a)}
{\it If\/ $T$ is quasiposinormal\/ $($in particular, posinormal\/$)$, then\/
${\mathcal N}(T)$ reduces}\/ $T$.
\vskip4pt
\item{$\kern-4pt$\rm(b)}
{\it The restriction of a posinormal\/ $($quasiposinormal\/$)$ operator to a closed
invariant subspace is posinormal\/ $($quasiposinormal\/$)$}\/.
\vskip4pt
\item{$\kern-4pt$\rm(c)}
{\it If\/ $T$ is quasiposinormal\/ $($in particular, posinormal\/$)$,
then}\/ ${{\mathcal N}(T^2)={\mathcal N}(T)}$.
\end{description}
\end{proposition}
\noindent Regarding part (a) of the preceding proposition, we note that, in fact, a Hilbert space operator $T$ is quasiposinormal if and only if ${\mathcal N}(T)$ reduces $T$:
\begin{quotation}
$T$ is quasiposinormal$\iff {\mathcal N}(T)\subseteq {\mathcal N}(T^*) \iff {\mathcal N}(T)$ reduces $T$.
\end{quotation}
The next lemma facilitates our exploration of properties of matrix representations of commuting posinormal operators.
\begin{lemma}\label{L:2.2}
Let $B$ be an operator with closed range on ${\mathcal H}$. Suppose that with respect to an orthogonal decomposition ${\mathcal H}={\mathcal H}_1\oplus {\mathcal H}_2$,
\[B = \begin{bmatrix}
B' & 0 \\
Y & Z
\end{bmatrix}.\]
Then
\begin{itemize}
\item[(a)] ${\mathcal R}(B'^{*})^{-}\subseteq {\mathcal R}(B'^{*}) + {\mathcal R}(Y^{*}|_{{\mathcal N}(Z^{*})})$. As a result, if ${\mathcal R}(Y^{*}|_{{\mathcal N}(Z^{*})})\subseteq{\mathcal R}(B'^{*})$, then ${\mathcal R}(B'^{*})$ is closed and hence, ${\mathcal R}(B')$ is closed.
\item[(b)] If $B$ is also assumed to be posinormal, then
\[{\mathcal R}(Z)^{-} \subseteq {\mathcal R}(Z^{*}).\]
As a result, if $Z^{*}$ is quasiposinormal, then ${\mathcal R}(Z^{*})^{-}\subseteq{\mathcal R}(Z)^{-}$ and hence ${\mathcal R}(Z^{*}) = {\mathcal R}(Z)^{-}$, which implies that ${\mathcal R}(Z^{*})$ is closed.
\end{itemize}
\end{lemma}
\begin{proof}
(a) Let $x\in{\mathcal R}(B'^{*})^{-}$ and let $(x_n)$ be a sequence in ${\mathcal R}(B'^{*})$ that converges to $x$. For each $n$, there exists $u_n\in{\mathcal H}_1$ such that
${x_n = {B'}^*u_n}$ and we have
${B^*(u_n,0)=({B'}^*u_n, 0)=(x_n,0)}$. So
${(x_n,0)\in{\mathcal R}(B^*)}$ for each $n$ and $(x_n,0)\to(x,0)$. Since
${\mathcal R}(B^*)$ is closed (because ${\mathcal R}(B)$ is closed) it follows that
${(x,0)\in{\mathcal R}(B^*)}$. Thus there exists $(u,v)\in{\mathcal H}_1\oplus{\mathcal H}_2$
such that
\[(x,0)=B^*(u,v)=({B'}^*u+Y^*v, Z^*v).\]
This implies $Z^{*}v = 0$, which shows that $Y^{*}v\in {\mathcal R}(Y^{*}|_{{\mathcal N}(Z^{*})})$. Since
$x = B'^{*}u + Y^{*}v$, we conclude that
\[x\in {\mathcal R}(B'^{*}) + {\mathcal R}(Y^{*}|_{{\mathcal N}(Z^{*})}).\]
(b) Now assume that $B$ is posinormal. Let $y\in{\mathcal R}(Z)^{-}$. Then $(0,y)\in{\mathcal R}(B)^{-} = {\mathcal R}(B)\subseteq{\mathcal R}(B^{*})$ because $B$ is posinormal with closed range. Then there exists $(u,v)\in{\mathcal H}_1\oplus{\mathcal H}_2$ such that
\[(0,y) = B^{*}(u,v) = ({B'}^*u+Y^*v, Z^*v).\]
This shows that $y\in{\mathcal R}(Z^{*})$.
\end{proof}
\vskip6pt
Let $A$ and $B$ be operators on a Hilbert space ${\mathcal H}.$ According to
Proposition \ref{P:2.1}(a), if $A$ is quasiposinormal (or posinormal), then ${\mathcal N}(A)$
reduces $A$.
\begin{proposition}\label{P:2.2}
{\it Suppose that $A$ and\/ $B$ commute and that ${\mathcal N}(A)$ reduces $A$.
\vskip6pt\noi
{\rm(a)}$\,$
With respect to the decomposition\/
${\mathcal H}={\mathcal N}(A)^\perp\oplus{\mathcal N}(A)$,
$$
A=\big(\smallmatrix{A' & O \cr
O & O \cr}\big)
\quad\hbox{\it and}\quad
B=\big(\smallmatrix{B' & O \cr
Y & Z \cr}\big)
\quad\hbox{\it with}\quad
Y\!A'=O,
$$
and\/ ${Y=O}$ if and only if\/ ${\mathcal N}(A)$ reduces\/ $B$.
\vskip6pt\noi
{\rm(b)}
If\/ ${\mathcal R}(A)$ and ${\mathcal R}(B)$ are closed and
${\mathcal R}(Y^*|_{{\mathcal N}(Z^*)})\!\sse\!{\mathcal R}({B'}^*)$, then
\hbox{$\kern-.5pt{\mathcal R}(AB)\kern-1pt$ is closed}}\/.
\end{proposition}
\begin{proof}
(a)
Consider the decomposition ${{\mathcal H}\kern-1pt=\kern-1pt{\mathcal N}(A)^\perp\!\oplus{\mathcal N}(A)}.$
Since ${\mathcal N}(A)$ \hbox{reduces $A$,}
$$
A=\big(\smallmatrix{A' & O \cr
O & O \cr}\big)=A'\oplus\,O,
\quad
B=\big(\smallmatrix{B' & X \cr
Y & Z \cr}\big),
\quad
BA=\big(\smallmatrix{B'\!A' & O \cr
Y\!A' & O \cr
\quad
AB=\big(\smallmatrix{A'B' & A'X \cr
O & O \cr}\big)
$$
with $A'\!=A|_{{\mathcal N}(A)^\perp}$ and $B'$ in ${\mathcal B}[{\mathcal N}(A)^\perp].$ Thus if $A$
and $B$ commute, then
$$
AB=BA=\big(\smallmatrix{B'\!A' & O \cr
O & O \cr}\big)
=\big(\smallmatrix{A'B' & O \cr
O & O \cr}\big)
=B'\!A'\oplus\,O=A'B'\oplus\,O,
$$
with $A'X=O$ and $Y\!A'=O.$ Since $A'$ is injective and ${A'X=O}$ we get
${X=O}.$ So
$$
B=\big(\smallmatrix{B' & O \cr
Y & Z \cr}\big)
\quad\hbox{and so}\quad
B^*=\big(\smallmatrix{{B'}^* & Y^* \cr
O & Z^* \cr}\big),
$$
and hence\/ ${Y=O}$ if and only if\/ ${\mathcal N}(A)$ also reduces\/ $B$.
\vskip6pt\noi
(b)
Suppose ${\mathcal R}(B)$ is closed in ${\mathcal H}$ and ${\mathcal R}(Y^*|_{{\mathcal N}(Z^*)})\sse{\mathcal R}({B'}^*)$. By Lemma \ref{L:2.2}(a), we conclude that ${\mathcal R}(B')$ is closed in ${\mathcal N}(A)^{\perp}$.
Next suppose ${\mathcal R}(A)$ is closed in ${\mathcal H}$$.$ Since ${\mathcal R}(A)={\mathcal R}(A')\oplus\{0\}$ and
${\mathcal R}(A)$ is closed,
$$
\hbox{${\mathcal R}(A')$ is closed in ${\mathcal N}(A)^\perp$}.
$$
Then $A'$ is injective with closed range, which means it is bounded below---there is a positive constant $c$ such that $\|A'x\| \ge c\|x\|$ for all $x\in {\mathcal N}(A)^\perp$.
Now let $(x_n)$ be an arbitrary convergent sequence in ${\mathcal R}(A'B')$. Then, for each $n$, $x_n = A'B'u_n$ for some $u_n\in {\mathcal N}(A)^\perp$. Because $(x_n)$ is convergent $(A'B'u_n)$ is Cauchy, which implies, because $A'$ is bounded below, that $(B'u_n)$ is Cauchy. Thus $(B'u_n)$ converges and, because ${\mathcal R}(B')$ is closed, there is a $u\in {\mathcal R}(B')$ such that $\lim (B'u_n) = B'u$. We have $A'B'u = \lim (A'B'u_n) = \lim (x_n)$ and we see that ${\mathcal R}(A'B')$ is closed, as desired.
\end{proof}
\section{Closed-Range Posinormal Operators}
In this section, we identify three properties of Hilbert-space operators, each one of which normal operators possess, such that if $T$ is an operator on a Hilbert space ${\mathcal H}$ having these three properties, then $T$ has closed range if and only if $0$ is not a limit point of the spectrum of $T$.
Let $T$ be a bounded linear operator on a complex Hilbert space, let
$\sigma(T)$ and $\rho(T)={{\mathbb C\kern.5pt}\backslash\sigma(T)}$ denote the spectrum and
the resolvent set of $T$, respectively, and consider the standard partition
of the spectrum into $\sigma_{\kern-1ptP}(T)$, the point spectrum,
$\sigma_{\kern-1ptR}(T)$, the residual spectrum, and
$\sigma_{\kern-.5ptC}(T)$, the continuous spectrum. An operator $T$ is
{\it isoloid}\/ if every isolated point of the spectrum $\sigma(T)$ is an
eigenvalue$.$ In particular, if the spectrum $\sigma(T)$ is a
singleton $\{\lambda\}$, then $T$ is isoloid if and only if
${\sigma_{\kern-1ptP}(T)=\{\lambda\}}.$ Vacuously, every operator whose
spectrum has no isolated \hbox{point is isoloid}$.$ There exist posinormal
operators that are not isoloid, e.g., an injective weighted unilateral shift
$T_+$ on $\ell_+^2$ with weight sequence $\left(\frac{1}{k}\right)$ is a
(compact, quasinilpotent) posinormal operator (cf$.$ \cite[Section 3]{KD})
whose spectrum $\sigma(T_+)=\{0\}$ coincides with the residual spectrum
$\sigma_{\kern-1ptR}(T_+)\kern.5pt$.
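(For completeness, here is a quick check of the quasinilpotence claim; it is a routine estimate, recorded only for the reader's convenience, and the indexing of the weights is ours. Writing ${T_+e_k=\frac{1}{k}\,e_{k+1}}$ for ${k\ge1}$, one finds
$$
T_+^n e_k=\frac{1}{k(k+1)\cdots(k+n-1)}\,e_{k+n},
\quad\hbox{so that}\quad
\|T_+^n\|=\frac{1}{n!}
\quad\hbox{and}\quad
\|T_+^n\|^{1/n}\to0.
$$
Hence the spectral radius of $T_+$ is zero, ${\sigma(T_+)=\{0\}}$, and since $T_+$ is injective, $0$ is not an eigenvalue; so $T_+$ is indeed not isoloid.)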
{\it A normal operator has closed range if and only if zero is not a
limit point of its spectrum}\/$.$ This is a particular case of
\cite[Proposition XI.4.5]{CFA}, whose proof is based on the
Spectral Theorem as well as the Open Mapping Theorem. The preceding characterization of closed-range normal operators doesn't extend to posinormal operators; in fact, it doesn't extend to hyponormal operators. For example, the forward shift operator on $\ell^2$ is hyponormal with closed range but its spectrum is the entire closed unit disk (so that $0$ is a limit point of the spectrum). We seek additional conditions that ensure a posinormal operator has closed range if and only if $0$ is not a limit point of its spectrum.
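To make the failure concrete, write $S$ for the forward shift, ${Se_j=e_{j+1}}$ on the standard basis ${(e_j)_{j\ge0}}$ of $\ell^2$ (the notation in this aside is ours). A routine computation gives ${S^*S=I}$ and ${SS^*=I-P_0}$, where $P_0$ is the orthogonal projection onto the span of $e_0$; thus ${S^*S-SS^*=P_0\ge O}$ (so $S$ is hyponormal), ${\|Sx\|=\|x\|}$ for every $x$ (so ${\mathcal R}(S)$ is closed), and yet $\sigma(S)$ is the closed unit disk, so $0$ is a limit point of $\sigma(S)$. By contrast, the normal (diagonal) operator defined by ${De_j=\frac{1}{j+1}\,e_j}$ has $0$ as a limit point of its spectrum and, in accordance with the characterization above, its range is not closed: the vector $\sum_j\frac{1}{j+1}\,e_j$ lies in ${\mathcal R}(D)^-$ but not in ${\mathcal R}(D)$.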
If $T$ is a posinormal operator (or even a quasiposinormal operator), we have ${\mathcal N}(T) \subseteq {\mathcal N}(T^*)$, which ensures that ${\mathcal N}(T)$ reduces $T$. Suppose that for some $T\in {\B[\H]}$, we know ${\mathcal N}(T)$ reduces $T$. Then, we have
$$
\quad \quad T=T'\oplus 0
\quad\hbox{on}\quad
{\mathcal H}={\mathcal N}(T)^\perp\oplus{\mathcal N}(T),
$$
where ${T'=T|_{{\mathcal N}(T)^\perp}\!\in{\mathcal B}[{\mathcal N}(T)^\perp\kern-1pt]}$. Assuming that $T$ has the representation $T=T'\oplus 0$ above, we will show that if we want the condition ``$T$ has closed range'' to imply $0$ is not a limit point of the spectrum of $T$, then it's sufficient to assume that $0$ does not belong to the residual spectrum of $T'$, i.e., $0\not\in \sigma_R(T')$. We will also show that if we want the condition ``$0$ is not a limit point of $\sigma(T)$'' to imply ${\mathcal R}(T)$ is closed, then it's sufficient to assume that $T'$ is isoloid.
If $T$ is a normal operator, observe that
\begin{itemize}
\item[(i)] ${\mathcal N}(T)$ reduces $T$,
\item[(ii)] $0\not\in \sigma_R\left(T|_{{\mathcal N}(T)^\perp}\right)$,
\item[(iii)] $T|_{{\mathcal N}(T)^\perp}$ is isoloid.
\end{itemize}
As for property (i), not only does ${\mathcal N}(T)$ reduce $T$ when $T$ is normal, we have ${\mathcal N}(T) = {\mathcal N}(T^*)$. To see that normal operators satisfy (ii), let $T$ be normal and observe $T|_{{\mathcal N}(T)^\perp}$ is normal because ${\mathcal N}(T)^\perp$ reduces $T$; thus, $0$ is either an eigenvalue of $T|_{{\mathcal N}(T)^\perp}$ (so that $0\not\in \sigma_R\left(T|_{{\mathcal N}(T)^\perp}\right)$) or fails to be an eigenvalue of both $T|_{{\mathcal N}(T)^\perp}$ and its adjoint, and $0$'s failing to be an eigenvalue of $T|_{{\mathcal N}(T)^\perp}$ means $T|_{{\mathcal N}(T)^\perp}$ has dense range (so that $0\not\in \sigma_R\left(T|_{{\mathcal N}(T)^\perp}\right)$). We have already noted that if $T$ is normal, then $T|_{{\mathcal N}(T)^\perp}$ is also normal and because all normal operators are isoloid, we see $T|_{{\mathcal N}(T)^\perp}$ is isoloid; i.e., (iii) holds. That normal operators are isoloid is a consequence of the Spectral Theorem, and we note that with the help of the Riesz Decomposition Theorem isoloidness can be extended to hyponormal operators \cite[Theorem 2]{S}. We say a Hilbert-space operator $T$ is of {\it class} $\mathcal{N,\hspace{-.08in}L}$ (``normal like'') provided $T$ satisfies conditions (i)--(iii) above.
We have already pointed out that property (i) of $\mathcal{N,\hspace{-.08in}L}$ operators is satisfied by any quasiposinormal operator (and hence by any posinormal and hyponormal operator). Also, hyponormal operators satisfy property (iii) of $\mathcal{N,\hspace{-.08in}L}$ operators (because the restriction of a hyponormal operator to a reducing subspace is hyponormal and, as we noted in the preceding paragraph, hyponormal operators are isoloid).
\begin{theorem}\label{T:3.1}
{\it An operator of class\/ $\mathcal{N,\hspace{-.08in}L}$ has closed range if and only if zero is not
a limit point of its spectrum}\/.
\end{theorem}
\begin{proof}
Suppose ${T\kern-1pt\in{\mathcal B}[{\mathcal H}]}$ is a closed-range operator on ${\mathcal H}$ of class $\mathcal{N,\hspace{-.08in}L}$. Because $T$ is of class $\mathcal{N,\hspace{-.08in}L}$, we know (i) ${\mathcal N}(T)$ reduces $T$ and (ii) $0\not\in \sigma_{\kern-1ptR}(T|_{{\mathcal N}(T)^\perp})$. Because ${\mathcal N}(T)$ reduces $T$, we have the decomposition
$$
T=T'\oplus 0
\quad\hbox{on}\quad
{\mathcal H}={\mathcal N}(T)^\perp\oplus{\mathcal N}(T),
$$
where ${T'=T|_{{\mathcal N}(T)^\perp}\!\in{\mathcal B}[{\mathcal N}(T)^\perp\kern-1pt]}$, so that
${\mathcal N}(T')=\{0\}$ and ${\mathcal R}(T')$ is closed because ${\mathcal R}(T)$ is closed. Also observe that the representation $T=T'\oplus\ 0 \quad\hbox{on}\quad
{\mathcal H}={\mathcal N}(T)^\perp\oplus{\mathcal N}(T)$ shows that for $\lambda \ne 0$, the operator $T-\lambda I$ is invertible on ${\mathcal H}$ if and only if $T'-\lambda I'$ is invertible on ${\mathcal N}(T)^\perp$, where $I$ is the identity on ${\mathcal H}$ and $I'$ is the identity on ${\mathcal N}(T)^\perp$.
Because $0$ is not an eigenvalue of $T'$ and $0\not\in \sigma_{\kern-1ptR}(T|_{{\mathcal N}(T)^\perp})$, the range of $T'$ must be dense in ${\mathcal N}(T)^\perp$. But the range of $T'$ is closed, so that $T'$ is surjective. Hence, $T'$ is invertible; that is, $0\in \rho(T')$. Because $\rho(T')$ is open, there is an $\epsilon > 0$ such that $T'-\lambda I'$ is invertible on ${\mathcal N}(T)^\perp$ whenever $|\lambda| < \epsilon$. Thus, $T-\lambda I$ is invertible whenever $0< |\lambda| < \epsilon$ and we see $0$ is not a limit point of the spectrum of $T$.
Conversely, suppose that $0$ is not a limit point of the spectrum of $T$ where $T$ is of class $\mathcal{N,\hspace{-.08in}L}$. In particular, we know that (i) ${\mathcal N}(T)$ reduces $T$ and (iii) $T|_{{\mathcal N}(T)^\perp}$ is isoloid. As we discussed in the first paragraph of the proof, because (i) holds, we have the representation $T = T' \oplus\ 0$ on ${\mathcal H}={\mathcal N}(T)^\perp\oplus{\mathcal N}(T)$, where ${\mathcal N}(T') = \{0\}$ and for nonzero $\lambda$, the operator $T-\lambda I$ is invertible on ${\mathcal H}$ if and only if $T'-\lambda I'$ is invertible on ${\mathcal N}(T)^\perp$. Because $0$ is not a limit point of the spectrum of $T$, we see that $0$ is not a limit point of the spectrum of $T'$. Thus $0$ is either not in the spectrum of $T'$ or it's an isolated point of the spectrum; however, the latter is not a possibility---because $T|_{{\mathcal N}(T)^\perp}$ is isoloid, if $0$ were an isolated spectral point, then it would be an eigenvalue but we know ${\mathcal N}(T') = \{0\}$. Thus $T'$ is invertible and it follows that ${\mathcal R}(T) = {\mathcal N}(T)^\perp$ is closed.
\end{proof}
\vskip6pt\noi
The class $\mathcal{N,\hspace{-.08in}L}$ is constructed so that the following holds.
\begin{corollary}\label{C:3.2}
{\it If $T$ is a posinormal operator on ${\mathcal H}$ such that $T|_{{\mathcal N}(T)^\perp}$ is an isoloid operator whose residual spectrum does not contain $0$, then ${\mathcal R}(T)$ is closed if and only if $0$ is not a limit point of $\sigma(T)$. }
\end{corollary}
\vskip2pt
Let $T$ be hyponormal; then for every $\lambda \in {\mathbb C\kern.5pt}$, the operator $T-\lambda I$ is hyponormal. By Proposition \ref{P:2.1}(a), we know ${\mathcal N}(T-\lambda I)$ reduces $T-\lambda I$ and by \cite[Theorem 2]{S}, we know $T-\lambda I$ is isoloid. Thus $T-\lambda I$ satisfies (i) and (iii) of class $\mathcal{N,\hspace{-.08in}L}$. Hence, the last paragraph of the proof of Theorem \ref{T:3.1} yields the following.
\begin{corollary}\label{C:3.3}
{\it If $T$ is a hyponormal operator on ${\mathcal H}$, then whenever $\lambda$ is not a limit point of $\sigma(T)$ the range of $T-\lambda I$ is closed.}
\end{corollary}
Similarly, the hypotheses of the next corollary imply that $T$ satisfies (i) and (iii) of class $\mathcal{N,\hspace{-.08in}L}$.
\begin{corollary}\label{C:3.4}
{\it If $T$ is a posinormal or quasiposinormal operator such that $0$ is not a limit point of $\sigma(T)$ and $T|_{{\mathcal N}(T)^\perp}$ is isoloid, then ${\mathcal R}(T)$ is closed.}
\end{corollary}
We now characterize when a posinormal operator with closed range satisfies condition (ii) of class $\mathcal{N,\hspace{-.08in}L}$.
\begin{proposition}\label{P:3.5}
{\it Let $T$ be a posinormal operator on ${\mathcal H}$ having closed range. The following are equivalent:
\begin{itemize}
\item[(a)] $0\not\in \sigma_R\left(T|_{{\mathcal N}(T)^\perp}\right)$;
\item[(b)] $T|_{{\mathcal N}(T)^\perp}$ is invertible;
\item[(c)] $T$ is coposinormal;
\item[(d)] 0 is not a limit point of $\sigma(T)$.
\end{itemize}}
\end{proposition}
\begin{proof}
(a)$\implies$(b): Let $T$ be a posinormal operator on ${\mathcal H}$ having closed range such that $0\not\in \sigma_R\left(T|_{{\mathcal N}(T)^\perp}\right)$. Because ${\mathcal N}(T)$ reduces $T$, recall that we have the decomposition
$$
T=T'\oplus 0
\quad\hbox{on}\quad
{\mathcal H}={\mathcal N}(T)^\perp\oplus{\mathcal N}(T),
$$
where ${T'=T|_{{\mathcal N}(T)^\perp}\!\in{\mathcal B}[{\mathcal N}(T)^\perp\kern-1pt]}$, so that
${\mathcal N}(T')=\{0\}$ and ${\mathcal R}(T')$ is closed because ${\mathcal R}(T)$ is closed. Because $0$ is not an eigenvalue of $T'$ and we are assuming $0\not\in\sigma_{R}(T')$, the range of $T'$ must be dense. However the range of $T'$ is closed and thus $T'$ is surjective as well as injective---it is invertible.
(b)$\implies$(c): Because $T'$ is invertible, we have ${\mathcal R}(T') = {\mathcal N}(T)^\perp = {\mathcal R}(T^*)$, with the latter equality holding because ${\mathcal R}(T^*)$ is closed (because ${\mathcal R}(T)$ is closed). Because ${\mathcal R}(T') = {\mathcal R}(T^*)$, we see ${\mathcal R}(T) = {\mathcal R}(T^*)$, which, by definition, yields $T$ is coposinormal.
(c)$\implies$(d): Because $T$ is coposinormal as well as posinormal, and the range of $T^*$ is closed, we have ${\mathcal R}(T) = {\mathcal R}(T^*) = {\mathcal N}(T)^\perp = {\mathcal R}(T')$. Thus $T'$ is both injective and surjective--it is invertible. Recall that for $\lambda\ne 0$, $T-\lambda I$ is invertible if and only if $T'-\lambda I'$ is invertible. Because $\rho(T')$ is open, there is an $\epsilon > 0$ such that $T'-\lambda I'$ is invertible on ${\mathcal N}(T)^\perp={\mathcal R}(T^*)$ whenever $|\lambda| < \epsilon$. Thus, $T-\lambda I$ is invertible whenever $0< |\lambda| < \epsilon$ and we see $0$ is not a limit point of the spectrum of $T$.
(d)$\implies$(a): We establish the contrapositive implication. Suppose that $0\in\sigma_R(T')$. Because we are assuming ${\mathcal R}(T)$ is closed, ${\mathcal R}(T')$ is also closed; moreover, it's injective. Thus, $T'$ is bounded below. Because $0$ is in the spectrum of $T'$ (in fact in the residual spectrum), it cannot be in the boundary of $\sigma(T')$ because that would put $0$ in the approximate point spectrum of $T'$ (implying $T'$ is not bounded below). Thus, $0$ is an interior point of $\sigma(T')$, so that there is an $\epsilon > 0$ such that $T'- \lambda I'$ is not invertible whenever $|\lambda| < \epsilon$. Hence, $T-\lambda I$ is not invertible whenever $0<|\lambda| < \epsilon$. Thus $0$ is a limit point of $\sigma(T)$, completing the proof.
\end{proof}
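\vskip6pt\noi
As a concrete illustration of Proposition \ref{P:3.5} (an aside, using the forward shift mentioned earlier in this section), take $T$ to be the forward shift on $\ell^2$. Then $T$ is posinormal with closed range, ${\mathcal N}(T)=\{0\}$, and ${T|_{{\mathcal N}(T)^\perp}=T}$. All four conditions fail simultaneously, as the equivalence requires: $0\in\sigma_R(T)$ because $T$ is injective with closed, nondense range; $T$ is not invertible; ${\mathcal R}(T^*)=\ell^2\not\sse{\mathcal R}(T)$, so $T$ is not coposinormal; and $0$ is a limit point of $\sigma(T)$, the closed unit disk.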
\section{Posinormal Product of Posinormal Operators}
Recall the matrix representations developed in Section 2, for a pair of commuting operators $A$ and $B$ on a Hilbert space ${\mathcal H}$ for which ${\mathcal N}(A)$ reduces $A$:
\begin{center}\begin{minipage}{4in}
\noindent With respect to the decomposition\/
${\mathcal H}={\mathcal N}(A)^\perp\oplus{\mathcal N}(A)$,
\begin{equation*}
\label{Eqn:ddager}
\tag{$\ddagger$} \qquad A=\big(\smallmatrix{A' & O \cr
O & O \cr}\big)
\quad\hbox{\it and}\quad
B=\big(\smallmatrix{B' & O \cr
Y & Z \cr}\big)
\quad\hbox{\it with}\quad
Y\!A'=O,
\end{equation*}
and\/ ${Y=O}$ if and only if\/ ${\mathcal N}(A)$ reduces\/ $B$.
\end{minipage}\end{center}
In this section, we use these matrix representations to obtain necessary conditions and sufficient conditions ensuring the product of two posinormal operators will be posinormal.
\begin{lemma}\label{L:4.1}
{\it If\/ $A$ and\/ $B$ are commuting quasiposinormal operators on ${\mathcal H}$ having the matrix representations \eqref{Eqn:ddager} with respect to the decomposition ${\mathcal H}={\mathcal N}(A)^\perp\oplus{\mathcal N}(A)$, then}
$$
({\rm a}) \quad {\mathcal N}(Z)\sse{\mathcal N}(Z^*)\cap{\mathcal N}(Y^*)
\qquad\hbox{\it and}\qquad
({\rm b}) \quad {\mathcal N}(B')\cap{\mathcal N}(Y)\sse{\mathcal N}({B'}^*).
$$
{$\kern17pt${\rm (c)}}$\kern8pt$
{\it If $B^*$ is also quasiposinormal, then the above inclusions become
identities}\/.
\end{lemma}
\begin{proof}
Suppose that a quasiposinormal $A$ commutes with $B$; then $A$ and $B$ have matrix representations \eqref{Eqn:ddager} and
$B^*\!=\big(\smallmatrix{{B'}^* & Y^* \cr
O & Z^* \cr}\big).$
Because $B$ is quasiposinormal, ${{\mathcal N}(B)\sse{\mathcal N}(B^*)}$, so that for an arbitrary ${(u,v)\in{\mathcal N}(A)^\perp\oplus{\mathcal N}(A)}$,
$$
(B'u,Yu+Zv)=(0,0)
\quad\limply\quad
{({B'}^*u+Y^*v,Z^*v)=(0,0)}.
$$
\vskip6pt\noi
(a)
Set ${u=0}$ in ${\mathcal N}(A)^\perp.\!$ Then ${(0,Zv)}={(0,0)}$ implies
${(Y^*v,Z^*v)}={(0,0)}$ for any ${v\in{\mathcal N}(A)}.$ Thus
${{\mathcal N}(Z)\kern-1pt\sse\kern-1pt{\mathcal N}(Z^*)\cap{\mathcal N}(Y^*)}$.
\vskip6pt\noi
(b) Now set ${v=0}$ in ${\mathcal N}(A).$ Then ${(B'u,Yu)=(0,0)}$ implies
${({B'}^*u, 0)=(0,0)}$ for any ${u\in{\mathcal N}(A)^\perp}.\!$ Thus
${{\mathcal N}(B')\cap{\mathcal N}(Y)\sse{\mathcal N}({B'}^*)}$.
\vskip6pt\noi
(c)
If ${{\mathcal N}(B)={\mathcal N}(B^*)}$, then the above argument shows that
\[
{\mathcal N}(Z)={\mathcal N}(Z^*)\cap{\mathcal N}(Y^*)
\qquad\hbox{and}\qquad
{\mathcal N}(B')\cap{\mathcal N}(Y)={\mathcal N}({B'}^*). \qedhere
\]
\end{proof}
\vskip-2pt
\begin{theorem}\label{T:4.2}
{\it Let\/ $A$ and\/ $B$ be commuting quasiposinormal operators having the matrix representations \eqref{Eqn:ddager}}\/.
\begin{description}
\item{$\kern-4pt$\rm(a)}
{\it If\/ $B'$ is quasiposinormal, then\/ $AB$ is quasiposinormal}.
\vskip4pt
\item{$\kern-4pt$\rm(b)}
{\it If\/ $Z^*$ is quasiposinormal, then\/ $AB$ has closed range whenever\/
$A$ and\/ $B$ have closed range}.
\item{$\kern-4pt$\rm(c)}
{\it If\/ $Z^*$ is quasiposinormal and $A$ and\/ $B$ have closed range, then
$B'$ and $Z$ have closed range}\/.
\end{description}
{\it In other words}\/,
\vskip4pt\noi
\begin{description}
\item{$\kern-6pt$\rm(a$'$)}
{\it If\/ $B^*|_{{\mathcal N}(A)^\perp}\!$ is coquasiposinormal, then\/ $AB$ is
quasiposinormal}.
\vskip4pt
\item{$\kern-6pt$\rm(b$'$)}
{\it If\/ $B|_{{\mathcal N}(A)}$ is coquasiposinormal, then\/ $AB$ has closed range
whenever\/ $A$ and\/ $B$ have closed range}.
\item{$\kern-6pt$\rm(c$'$)}
{\it If\/ $B|_{{\mathcal N}(A)}$ is coquasiposinormal and $A$ and\/ $B$ have closed
range, then the above coquasiposinormal restrictions have closed range}\/.
\end{description}
\end{theorem}
\begin{proof} Suppose that a quasiposinormal $A$ commutes with $B$; then $A$ and $B$ have matrix representations \eqref{Eqn:ddager}:
$$
\!A=\big(\smallmatrix{A' & O \cr
O & O \cr}\big)
\quad\hbox{and}\quad
B=\big(\smallmatrix{B' & O \cr
Y & Z \cr}\big)
\quad\hbox{with}\quad
B^*=\big(\smallmatrix{{B'}^* & Y^* \cr
O & Z^* \cr}\big),
$$
\vskip-4pt\noi
$$
\hbox{where}\quad
{B'}^*\!=B^*|_{{\mathcal N}(A)^\perp}
\quad\hbox{and}\quad
Z=B|_{{\mathcal N}(A)}.
$$
Thus (a), (b) and (c) are equivalent to (a$'$), (b$'$) and (c$'$),
respectively$.$ Also,
$$
\hbox{$A$ is quasiposinormal $\iff$ $A'$ is quasiposinormal}.
$$
\vskip-2pt
\vskip6pt\noi
(a)
Note that ${AB=BA=B'\!A'\oplus\,O=A'B'\oplus\,O.}$ Moreover, since
${{\mathcal N}(A')=\{0\}}$,
$$
{\mathcal N}(B'\!A')={\mathcal N}(B').
$$
Indeed
$x\in{\mathcal N}(B'A')\Leftrightarrow x\in{\mathcal N}(A'B')\Leftrightarrow A'B'x=0
\Leftrightarrow B'x=0\Leftrightarrow x\in{\mathcal N}(B').$ Now suppose
$B'$ is quasiposinormal, that is,
$$
{\mathcal N}(B')\sse{\mathcal N}({B'}^*).
$$
In this case, since ${\mathcal N}(B')\sse{\mathcal N}({B'}^*)$ and ${\mathcal N}(B'\!A')={\mathcal N}(B')$,
$$
{\mathcal N}(B'\!A')={\mathcal N}(B')\sse{\mathcal N}({B'}^*)\sse{\mathcal N}({A'}^*{B'}^*)={\mathcal N}((B'\!A')^*),
$$
so that $B'\!A'$ is quasiposinormal$.$ Thus $A'B'$ is quasiposinormal and so
is $AB.$ Hence
$$
\hbox{$B'$ quasiposinormal $\;\limply$ $A'B'$ quasiposinormal $\iff$ $AB$
quasiposinormal}.
$$
(The assumption ``$B$ is quasiposinormal'' was not \hbox{used in item (a).)}
\vskip6pt\noi
(b)
Now suppose $B$ is also quasiposinormal$.$ Then Lemma \ref{L:4.1}(a) ensures
that ${{\mathcal N}(Z)\sse{\mathcal N}(Z^*)\cap{\mathcal N}(Y^*)}$, which means
$$
\hbox{$Z\;$ is quasiposinormal \quad and \quad $Y^*|_{{\mathcal N}(Z)}=O$}.
$$
Thus if the quasiposinormal $Z=B|_{{\mathcal N}(A)}$ is coquasiposinormal as well,
then ${\mathcal N}(Z)={\mathcal N}(Z^*)$ so that $Y^*|_{{\mathcal N}(Z^*)}=O.$ Therefore $AB$ has closed
range whenever $A$ and $B$ have closed range according to Proposition \ref{P:2.2}(b).
\vskip6pt\noi
(c)
Let $A$ and $B$ have closed range$.$ Because $A$ and $B$ are commuting posinormal operators Lemma~\ref{L:4.1} yields ${\mathcal N}(Z)\sse{\mathcal N}(Z^*)\cap{\mathcal N}(Y^*) \subseteq {\mathcal N}(Y^*)$. Thus, ${Y^*|_{{\mathcal N}(Z^*)}=O}$ because ${\mathcal N}(Z^*)\subseteq {\mathcal N}(Z)$ given our assumption that $Z^*$ is quasiposinormal. Lemma \ref{L:2.2}(a) now shows that $B'$ has closed range.
Since $B$ is posinormal (i.e., quasiposinormal with closed range) and $Z^{*}$ is quasiposinormal, Lemma \ref{L:2.2}(b) shows that $Z=B|_{{\mathcal N}(A)}$ has closed range.
\end{proof}
\vskip6pt
A rewriting of Theorem \ref{T:4.2} gives a partial answer to Question \ref{Q:1.1}.
\begin{corollary}\label{C:4.3}
{\it If\/ $A$ and\/ $B$ are commuting posinormal operators with closed
range, then\/ $AB$ is posinormal with closed range if\/ $B'$ is posinormal
and\/ $Z$ is coposinormal}\/.
\end{corollary}
\begin{remark}\label{R:4.4}
If\/ $A$ and\/ $B$ are commuting quasiposinormal operators, then
$$
{\mathcal N}(B')\cap{\mathcal N}(Y)\sse{\mathcal N}({B'}^*)
$$
according to Lemma \ref{L:4.1}(b)$.$ Thus Theorem \ref{T:4.2}(a) ensures that
$$
{\mathcal N}(B')\sse{\mathcal N}(Y)
\;\;\limply\;\,
\hbox{$B'$ is quasiposinormal}
\;\;\limply\;\,
\hbox{$AB$ is quasiposinormal}.
$$
(The above holds, in particular, if ${Y\!=O}$; i.e., if ${\mathcal N}(A)$ also
reduces $B$)$.$ Thus (cf$.$ Proposition \ref{P:2.2}(b)$\kern.5pt$), {\it if\/
$A$ and\ $B$ commute and\/ ${{\mathcal N}(B')\sse{\mathcal N}(Y)}$, then}
$$
\hbox{\it $A$ and $B$ posinormal with closed range}
\;\;\limply\;\,
\hbox{\it $AB$ posinormal with closed range}.
$$
\end{remark}
We can replace the commuting assumption with coincident kernels.
\begin{proposition}\label{P:4.5}
{\it If\/ $A$ and\/ $B$ are posinormal operators with closed range, and if they
have the same kernel, then their product is posinormal with closed range.}
\end{proposition}
\begin{proof}
Let $A$ and $B$ be closed-range posinormal operators on ${\mathcal H}$ such that
${{\mathcal N}(A)={\mathcal N}(B)}$. Then ${\mathcal N}(A)$ is reducing for both $A$ and $B$ and by Proposition \ref{P:2.2}(a), $A$ and $B$ have the following matrix representations with respect to the decomposition $ {\mathcal H} = {\mathcal N}(A)^\perp \oplus {\mathcal N}(A)$:
$$
A=\big(\smallmatrix{A' & O \cr
O & O \cr}\big)
\quad\hbox{ and}\quad
B=\big(\smallmatrix{B' & O \cr
O & Z \cr}\big).
$$
Because $A$ and $B$ have closed range, the same is true of $A'$ and $B'$. Moreover both $A'$ and $B'$ are injective, which means $A'$ and $B'$ are bounded below. Thus, their products $A'B'$ and $B'A'$ are bounded below and we see both $A'B'$ and $B'A'$ have closed range. Moreover, as ${\mathcal N}(A')={\mathcal N}(B')=\{0\}$, we
get ${\mathcal N}(A'B')={\mathcal N}(B'\!A')=\{0\}$ so that $A'B'$ and $B'\!A'$ are quasiposinormal
with closed range, thus posinormal. It follows that $AB$ and $BA$ are posinormal; e.g.,
\[{\mathcal R}(AB) = {\mathcal R}(A'B') \subseteq {\mathcal R}((A'B')^*) = {\mathcal R}(B'^*A'^*) = {\mathcal R}(B^*A^*) = {\mathcal R}((AB)^*).\qedhere\]
\end{proof}
\vskip6pt\noi
If a pair of closed-range commuting posinormal operators is such that at
least one of them is coposinormal, then their product is closed-range
posinormal.
\begin{theorem}\label{T:4.6}
{\it If a closed-range operator $A$ that is both posinormal and coposinormal commutes with
a closed-range posinormal operator $B$, then\/ $AB$ is posinormal with closed
range}\/.
\end{theorem}
\begin{proof}
Let $A$ and $B$ be commuting, closed-range operators such that $A$ is both posinormal and coposinormal and $B$ is posinormal. Because ${\mathcal N}(A)$
reduces $A$ (Proposition \ref{P:2.1}(a)), $A$ and $B$ have the matrix representations \eqref{Eqn:ddager} relative to the decomposition ${\mathcal H} = {\mathcal N}(A)^\perp \oplus {\mathcal N}(A)$. Because $A$ is both posinormal and coposinormal ${\mathcal N}(A) = {\mathcal N}(A^*)$. Therefore
${\mathcal H}={{\mathcal N}(A)^\perp\oplus{\mathcal N}(A)}={{\mathcal N}(A^*)^\perp\oplus{\mathcal N}(A^*)}.$ Hence
${A'}^*=A^*|_{{\mathcal N}(A^*)^\perp}$ is injective (as well as $A'$). Because $A$ and $B$ commute, Proposition \ref{P:2.2}(a) ensures that ${Y\!A'=O}$ so that ${{A'}^*Y^*=O}$. Hence, because ${A'}^*$ is injective, $Y^* = O$ and therefore $Y = O$. Hence ${\mathcal N}(A)$ reduces $B$ by
Proposition \ref{P:2.2}(a). This implies that $B'$ is posinormal because $B$ is
posinormal. By Theorem \ref{T:4.2}(a), $AB$ is quasiposinormal; however, ${{\mathcal R}(AB)}$ is closed
by Proposition \ref{P:2.2}(b) because ${Y=O}$, and thus $AB$ is posinormal.
\end{proof}
\vskip6pt
The preceding theorem generalizes Theorem 3 of \cite{DD}, a result stated in
language different from ours:
\begin{Dthm}
If\/ ${A,B\in{\mathcal B}[{\mathcal H}]}$ are EP operators with closed ranges and\/ ${AB=BA}$,
then\/ $AB$ is the $EP$ operator with a closed range also.
\end{Dthm}
\vskip6pt
Djordjevi\'{c}'s theorem arises in a line of investigation distinct from that
started by Rhaly in 1994 when he introduced the notion of posinormality$.$
In the final section of this paper, we briefly explore the connections
between posinormal operators and EP operators, presenting a new proof of the
Hartwig-Katz Theorem \cite{HK} characterizing ``EP matrices''$.$
\vskip6pt
The following corollary of our Theorem \ref{T:4.6} above is equivalent to
Djordjevi\'{c}'s theorem:
\begin{corollary}\label{C:4.7}
{\it Suppose that \/ $A$ and\/ $B$ are commuting posinormal and coposinormal operators
with closed range; then\/ $AB$ is posinormal and coposinormal
with closed range}\/.
\end{corollary}
\begin{proof}Applying Theorem \ref{T:4.6}, we see that $AB$ has closed range and is posinormal. Applying Theorem \ref{T:4.6} with $B^*$ playing the role of $A$ and $A^*$ playing the role of $B$, we obtain $B^*A^*$ is posinormal; i.e., $AB$ is coposinormal.
\end{proof}
\section{Posinormal Operators, EP Operators, and the Hartwig--Katz Theorem}
If $T$ is a posinormal operator on ${\mathbb C\kern.5pt}^n\!$, then the range of $T$ is
necessarily closed and the inclusion ${{\mathcal R}(T)\sse{\mathcal R}(T^*)}$ implies that
${{\mathcal R}(T)={\mathcal R}(T^*)}$ because ${\mathcal R}(T)$ and ${\mathcal R}(T^*)$ have the same dimension$.$
Thus, in the finite-dimensional setting, an operator is posinormal if and
only if its range equals that of its adjoint; that is, in finite dimensions, a posinormal operator is necessarily both posinormal and coposinormal. Switching to matrix language,
we see that an ${n\times n}$ matrix (with complex entries) is posinormal
provided its range (column space) is the same as the range of its conjugate
transpose$.$ Such matrices are known as ``EP matrices.''
\vskip6pt
The modern definition of ``EP matrix'' evolved from Schwerdtfeger's notion
of ``$EP_r$ matrix'' (\cite[p$.$ 130]{Sch}, 1950)$.$ For an ${n\times n}$
matrix $A$, let $A_{(j)}$ denote the $j$-th column of $A$ and $A^{(j)}$
denote the corresponding $j$-th row (${1\le j\le n}$)$.$ Schwerdtfeger
defines an ${n\times n}$ matrix $A$ of rank $r$ to be a $P_r$ matrix
provided there exist indices ${i_1,i_2,\ldots i_r}$ with
${1\le i_1<i_2<\ldots<i_r\le n}$, such that
${\{A_{(i_1)},A_{(i_2)},\ldots A_{(i_r)}\}}$ and
${\{A^{(i_1)},A^{(i_2)},\ldots A^{(i_r)}\}}$ are both linearly independent
sets of vectors$.$ When these sets are appropriately chosen
(see p$.$ 130 of \cite{Sch}), there is reason to call the vectors these sets
contain {\it principal vectors}\/ (because they correspond to an
${r\times r}$ rank $r$ principal submatrix of $A$)$.$ Schwerdtfeger points
out that every symmetric and every skew symmetric matrix $A$ is a $P_r$
matrix$.$ Schwerdtfeger defines ``$EP_r$ matrix'' in the two sentences
following Theorem 18.1 on page 130 of \cite{Sch}:
\vskip6pt
\begin{quotation}
{\small
The notion of $P_r$ matrix may be further restricted so that it still covers
the symmetric and skew symmetric matrices as well as other types of matrices
to be mentioned later on$.$ An $n$-matrix $A$ [an $n$-matrix is an
${n\times n}$ matrix] may be called an $EP_r$ matrix if it is a $P_r$
matrix and the linear relations among its rows are the same as those among
its corresponding columns.
}
\end{quotation}
\vskip6pt\noi
Schwerdtfeger then explains what ``same linear relations'' means$.$ In
modern notation, it means that ${{\mathcal N}(A)={\mathcal N}(A^T)}.$ In fact, Schwerdtfeger's
definition of $EP_r$ matrix may be expressed: An ${n\times n}$ matrix $A$ is
an $EP_r$ matrix provided that ${\mathcal N}(A)={\mathcal N}(A^T).$ We see immediately that
Schwerdtfeger has ``further restricted so that it still covers the symmetric
and skew symmetric matrices$.$'' \hbox{Note well that, e.g.,}
$$
A=\big(\smallmatrix{1 & i \cr
i & -1 \cr}\big),
$$
being symmetric, is an $EP_r$ matrix$.$ However, $A$ is not an EP matrix
because its range, the one-dimensional subspace of ${\mathbb C\kern.5pt}^2$ spanned by
${\big(\smallmatrix{1 \cr
i \cr}\big)}$,
is {\it not}\/ the same as the range of $A^*$, which is the one-dimensional
subspace of ${\mathbb C\kern.5pt}^2$ spanned by
${\big(\smallmatrix{1 \cr
-i \cr}\big)}.$
Of course, if an $EP_r$ matrix $A$ has real entries, then
${{\mathcal N}(A)={\mathcal N}(A^T)={\mathcal N}(A^*)}$ and, upon taking orthogonal complements, we have
${{\mathcal R}(A)={\mathcal R}(A^*)}.$ Thus, an $EP_r$ matrix with real entries is an EP
matrix$.$
\vskip6pt
Pearl (\cite[p$.$ 674]{Prl}, 1966) mischaracterized Schwerdtfeger's
definition of $EP_r$ matrix, stating that
\vskip6pt
\begin{quotation}
{\small
Schwerdtfeger
(\cite[p$.$ 130]{Sch})
has called a square matrix of rank $r$ an $EP_r$ matrix if it satisfies
the condition:
$$
AX=0\;\text{if and only if}\;A^*X =0\quad[\text{i.e.,}\;{\mathcal N}(A)={\mathcal N}(A^*)].
$$
}
\end{quotation}
\vskip0pt\noi
It seems that since Pearl's paper appeared, most have assumed that
Schwerdtfeger's $EP_r$ matrices are precisely today's EP matrices, but
that's not quite true.
\vskip6pt
Generalizing the notion of ``EP matrix,'' Campbell and Meyer
(\cite{CampMeyer}, 1975) introduced EP operators, which may be characterized
as follows: a Hilbert space operator $T$ is EP provided that $T$ has closed
range and ${{\mathcal R}(T)={\mathcal R}(T^*)}.$ Thus, according to the Campbell--Meyer
definition, an EP operator in ${\B[\H]}$ is a closed-range operator that is both
posinormal and coposinormal$.$ It's not clear why Schwerdtfeger chose the
$E$ in his ``$EP_r$'' designation$.$ Fortunately, there is a useful
interpretation of ``EP'': an EP-operator is naturally associated with
``Equal Projectors.''
\vskip6pt
We now discuss why EP operators may be viewed as ``equal-projector''
operators$.$ Let ${T\in{\B[\H]}}$ have closed range so that ${\mathcal H}$ has orthogonal
decompositions ${\mathcal H}={{\mathcal R}(T^*)\oplus{\mathcal N}(T)}$ and ${\mathcal H}={{\mathcal R}(T)\oplus{\mathcal N}(T^*)}.$
Thus, in particular we have
$$
{\mathcal R}(T)=T({\mathcal H})=T\left({\mathcal R}(T^*)\oplus{\mathcal N}(T)\right)=T({\mathcal R}(T^*)),
$$
and we see that the restriction ${T|_{{\mathcal R}(T^*)}}$ is an invertible operator
mapping ${\mathcal R}(T^*)$ onto ${\mathcal R}(T).$ The {\it generalized inverse}\/ $T^\dagger$
of $T$, which is also called its pseudoinverse or its Moore-Penrose inverse,
is the operator that takes an element of ${\mathcal R}(T)$ to its unique $T$ preimage
in ${\mathcal R}(T^*)$ and takes elements of ${\mathcal N}(T^*)$ to zero; thus,
$$
T^\dagger=(T|_{{\mathcal R}(T^*)})^{-1}P_{{\mathcal R}(T)}.
$$
Observe that
$$
T^\dagger T=P_{{\mathcal R}(T^*)}
\quad\text{and}\quad
TT^\dagger=P_{{\mathcal R}(T)}.
$$
Hence, if $T$ is an operator with closed range, then $T$ is an EP operator
(${\mathcal R}(T)= {\mathcal R}(T^*)$) if and only if the two projectors $T^\dagger T$ and
$TT^\dagger$ are equal; that is, $T$ is an EP-operator if and only if $T$
commutes with its generalized inverse$.$ Pearl (\cite{Prl}, 1965), working
in the matrix setting, was the first to provide this characterization; in
addition, Pearl provides an explicit formula for $T^\dagger$ when $T$ is a
matrix---see Lemma 1 and Corollary 1 of \cite{Prl}.
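A two-dimensional sketch may help fix the notation (the matrix here is our own, chosen only for illustration). For
$$
T=\big(\smallmatrix{0 & 1 \cr
0 & 0 \cr}\big)
\quad\hbox{one finds}\quad
T^\dagger=\big(\smallmatrix{0 & 0 \cr
1 & 0 \cr}\big),
\qquad
T^\dagger T=\big(\smallmatrix{0 & 0 \cr
0 & 1 \cr}\big)=P_{{\mathcal R}(T^*)},
\qquad
TT^\dagger=\big(\smallmatrix{1 & 0 \cr
0 & 0 \cr}\big)=P_{{\mathcal R}(T)},
$$
and the two projectors differ, as they must, since ${\mathcal R}(T)\ne{\mathcal R}(T^*)$ and $T$ is not EP.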
\vskip6pt
We have seen that a closed-range operator ${T\in{\mathcal B}({\mathcal H})}$ is EP if and only if
$T^\dagger T-{TT^\dagger=0}.$ Itoh (\cite{Itoh}, 2005) introduced hypo-EP
operators, defining them as \hbox{follows}: ${T\in{\B[\H]}}$ is hypo-EP provided
that $T$ has closed range and ${T^\dagger T-TT^\dagger\ge0}.$ Itoh
\cite[Proposition 2.1]{Itoh} then provides several conditions equivalent to
an operator's being hypo-EP, including ${T\in{\mathcal B}({\mathcal H})}$ is hypo-EP if and only
if $T$ has closed range and ${{\mathcal R}(T)\kern-1pt\sse\kern-1pt{\mathcal R}(T^*\kern-1pt)}.$
$\kern-1pt$Thus a hypo-EP operator is a posinormal operator with
\hbox{closed range}.
\vskip6pt
In \cite{JV}, Johnson and Vinoth provide the following sufficient condition
for a product of operators to be EP:
\begin{JVthm}
Let\/ $A$ be a hypo-EP operator and\/ ${B\in{\B[\H]}}$ have closed range$.$ If\/
${{\mathcal R}(B)\sse{\mathcal R}(A)}$ and\/ ${{\mathcal N}(B)\sse{\mathcal N}(A)}$, then\/ $AB$ is hypo-EP.
\end{JVthm}
\vskip0pt
As an immediate corollary \cite[Corollary 14]{JV}, Johnson and Vinoth
obtain the following result, which was described in the introduction of
this paper in the following equivalent way: ``Every positive-integer power
of a posinormal operator with closed range is posinormal with closed range'':
\vskip6pt\noi
{\bf Corollary (Johnson--Vinoth).}
{\it Let\/ $A$ be a hypo-EP operator on\/ ${\mathcal H}.$ Then\/ $A^n$ is hypo-EP for
any integer}\/ ${n\ge1}$.
\vskip6pt\noi
Djordjevi\'{c}'s paper \cite{DD}, whose Theorem 3 is Corollary \ref{C:4.7} of the
preceding section, contains a generalization of the well-known Hartwig--Katz
Theorem, which characterizes when the product of two EP matrices is EP$.$
Here's Djordjevi\'{c}'s generalization (\cite[Theorem 1]{DD}, 2000):
\begin{theorem}[{Djordjevi\'{c}'s generalization of the Hartwig--Katz Theorem}]\label{T:5.1}
{\it Let\/ ${A,B\in{\mathcal B}[{\mathcal H}]}$ be EP operators$.$ Then the following statements
are equivalent}\/.
\begin{itemize}
\item[(a)]
$AB$ {\it is an EP operator}\/;
\item[(b)]
${{\mathcal R}(AB)={\mathcal R}(A)\cap{\mathcal R}(B)}$ {\it and}\/ ${{\mathcal N}(AB)={\mathcal N}(A)+{\mathcal N}(B)}$;
\item[(c)]
${{\mathcal R}(AB)={\mathcal R}(A)\cap{\mathcal R}(B)}$ {\it and\/ ${\mathcal N}(AB)$ is the closure of}\/
${{\mathcal N}(A)+{\mathcal N}(B)}$.
\end{itemize}
\end{theorem}
\vskip6pt
Keep in mind that we are assuming EP (and hypo-EP) operators have
closed range.
\vskip6pt
We now turn our attention to the Hartwig--Katz Theorem. Our goal is to provide
a new, elementary proof of it based on a characterization of ${n\times n}$
matrices $A$ such that for every $n\times n$ matrix satisfying
${{\mathcal R}(AB)\sse{\mathcal R}(B)}$,
$$
{\mathcal R}(AB)={\mathcal R}(A)\cap{\mathcal R}(B).
$$
Solving a problem that had been open for 25 years (see
\cite[p$.$ 98]{BasKatz}, 1969), Hartwig and Katz proved the following
result (\cite{HK}, 1997) in which ``RS'' denotes {\it row space}:
\begin{HKthm}
Let\/ $A$ and\/ $B$ be\/ ${n\times n}$ EP matrices$.$ The
following are equivalent:
\begin{itemize}
\item[(a)]
${{\mathcal R}(AB)={\mathcal R}(A)\cap {\mathcal R}(B)}$ and\/ ${RS(AB)=RS(A)\cap RS(B)}$;
\item[(b)]
${{\mathcal R}(AB)\sse{\mathcal R}(B)}$ and\/ ${RS(AB)\sse RS(A)}$;
\item[(c)]
$AB$ is EP.
\end{itemize}
\end{HKthm}
\vskip0pt
Upon taking complex conjugates of all elements in a row space of an
${n\times n}$ matrix $A$ and then transposing, we obtain the column space of
$A^*$, the conjugate-transpose of $A.$ Thus the condition
${RS(AB)=RS(A)\cap RS(B)}$ of part (a) of the Hartwig-Katz Theorem is
equivalent to ${{\mathcal R}((AB)^*)={\mathcal R}(A^*)\cap{\mathcal R}(B^*)}.$ Taking orthogonal
complements, we see ${{\mathcal R}((AB)^*)={\mathcal R}(A^*)\cap{\mathcal R}(B^*)}$ is equivalent to
${\mathcal N}(AB)={\mathcal N}(A)+{\mathcal N}(B).$ Similarly, we see that ${RS(AB)\subseteq RS(A)}$ is
equivalent to ${\mathcal N}(A)\subseteq {\mathcal N}(AB).$ Thus, the Hartwig--Katz equivalent
conditions may be restated as follows for EP matrices $A$ and $B$:
\begin{itemize}
\item[(a)]
${{\mathcal R}(AB)={\mathcal R}(A)\cap{\mathcal R}(B)}$ and ${{\mathcal N}(AB)={\mathcal N}(A)+{\mathcal N}(B)}$;
\item[(b)]
${{\mathcal R}(AB)\sse{\mathcal R}(B)}$ and ${{\mathcal N}(A)\sse{\mathcal N}(AB)}$;
\item[(c)]
$AB$ is EP.
\end{itemize}
We see that Djordjevi\'{c} in Theorem \ref{T:5.1} has generalized to infinite
dimensional Hilbert space the equivalence of (a) and (c) of the
Hartwig--Katz Theorem but did not speak to the issue of generalizing the
equivalence of (b) and (c)$.$ Here's an example illustrating that the
equivalence of (b) and (c) does not generalize to infinite dimensions.
\vskip6pt\noi
\begin{proposition}\label{P:5.2}
{\it There exist EP operators\/ $A$ and\/ $B$ on\/ a Hilbert space ${\mathcal H}$ for which\/
{\rm(i)} $\,{\mathcal R}(AB)\sse{\mathcal R}(B)$ and\/ {\rm(ii)} $\,{{\mathcal N}(A)\sse{\mathcal N}(AB)}$ such
that\/ $AB$ is not EP}\/.
\end{proposition}
\begin{proof}
Let ${(e_j)_{j=0}^\infty}$ be the natural basis of $\ell^2$ so that the
sequence $e_j$ has $1$ as its $j$-th term and zeros elsewhere:
${(a_0, a_1,a_2,\ldots)}=\sum_{j=0}^\infty a_j e_j.$ Let ${\mathcal M}$ be the one
dimensional subspace of $\ell^2$ spanned by $e_0$ and let $P_{{\mathcal M}}$ be the
orthogonal projection of $\ell^2$ onto ${\mathcal M}.$ Let $F$ be the forward shift on
$\ell^2$, $F\left(\sum_{j=0}^\infty a_je_j\right)= \sum_{j=0}^\infty a_je_{j+1}$, so that
$F^*$ is the backward shift,
$F^*\left(\sum_{j=0}^\infty a_je_j\right)=\sum_{j=1}^\infty a_je_{j-1}$. Note that
$F^*F=I$, while ${FF^*=P_{{{\mathcal M}}^\perp}}$.
\vskip6pt
Let ${{\mathcal H}=\ell^2\oplus\ell^2}$, let $I$ be the identity on $\ell^2$, and
define $A$ and $B$ on ${\mathcal H}$ by
$$
A=\big(\smallmatrix{0 & 0 \cr
0 & I \cr}\big)
\quad\hbox{and}\quad
B=\big(\smallmatrix{F & P_{{\mathcal M}} \cr
0 & F^* \cr}\big).
$$
Observe that $B$ is unitary (hence surjective):
$$
BB^*=\big(\smallmatrix{F & P_{{\mathcal M}} \cr
0 & F^* \cr}\big)
\big(\smallmatrix{F^* & 0 \cr
P_{{\mathcal M}} & F \cr}\big)
=\big(\smallmatrix{P_{{{\mathcal M}}^\perp}+P_{{\mathcal M}} & \;P_{{\mathcal M}}F \cr
F^*P_{{\mathcal M}} & \;I \cr}\big)
=\big(\smallmatrix{I & 0 \cr
0 & I \cr}\big),
$$
while
$$
B^*B=\big(\smallmatrix{F^* & 0 \cr
P_{{\mathcal M}} & F \cr}\big)
\big(\smallmatrix{F & P_{{\mathcal M}} \cr
0 & F^* \cr}\big)
=\big(\smallmatrix{I & \;F^*P_{{\mathcal M}} \cr
P_{{\mathcal M}}F & \;P_{{\mathcal M}}+P_{{{\mathcal M}}^\perp} \cr}\big)
=\big(\smallmatrix{I & 0 \cr
0 & I \cr}\big).
$$
Because $B$ is surjective, clearly (i) ${{\mathcal R}(AB)\subseteq{\mathcal R}(B).}$ Also,
${\mathcal N}(A)$, which equals ${\ell^2 \oplus 0}$ is clearly contained in the kernel
of
$AB=\big(\smallmatrix{0 & 0 \cr
0 & F^* \cr}\big).$
The operator $B$ is EP because it's invertible, and $A$ is EP because it's self-adjoint.
However,
$AB=\big(\smallmatrix{0 & 0 \cr
0 & F^* \cr}\big)$
is not EP, because its range is $0 \oplus \ell^2$, but the range of its
adjoint is $0 \oplus {{\mathcal M}}^\perp$, a proper subset of ${\mathcal R}(AB)$ (making $AB$
coposinormal, but not posinormal).
\end{proof}
\vskip0pt
Note that the condition ${{\mathcal R}(AB)={\mathcal R}(A)\cap{\mathcal R}(B)}$ holds whenever $A$, $B$, and $AB$ are all EP operators. We rely on the following to yield the
Hartwig--Katz Theorem as a straightforward corollary.
\begin{theorem}\label{T:5.3}
{\it Let ${A\!:{\mathbb C\kern.5pt}^n\to{\mathbb C\kern.5pt}^n}$ be linear. The following are equivalent}\/.
\begin{itemize}
\item[(a)]
${\mathcal N}(A^2)={\mathcal N}(A)$;
\item[(b)]
{\it for each linear ${B\!:{\mathbb C\kern.5pt}^n\to{\mathbb C\kern.5pt}^n}$ satisfying}\/ ${\mathcal R}(AB)\sse{\mathcal R}(B)$,
$$
{\mathcal R}(AB)={\mathcal R}(A)\cap{\mathcal R}(B);
$$
\item[(c)]
${\mathcal R}(A^2)={\mathcal R}(A)$;
\item[(d)]
$r(A^2)=r(A)$, {\it where $r$ denotes rank}\/.
\end{itemize}
\end{theorem}
\begin{proof}
(a)$\implies$(b): Suppose that ${{\mathcal N}(A^2)={\mathcal N}(A)}$ and that
${B\!:{\mathbb C\kern.5pt}^n\to{\mathbb C\kern.5pt}^n}$ is a linear mapping whose range is invariant for $A$;
i.e., ${{\mathcal R}(AB)\sse{\mathcal R}(B)}.$
\vskip6pt
Observe that ${M\!:={\mathcal R}(A)\cap{\mathcal R}(B)}$ is also invariant for $A.$ Consider
$$
A|_M\!:{\mathcal R}(A)\cap{\mathcal R}(B)\rightarrow{\mathcal R}(A)\cap{\mathcal R}(B).
$$
Suppose that ${v\in{\mathcal R}(A)\cap{\mathcal R}(B)}$ is such that ${Av=0}.$ Then, because
${v=Aw}$ for some ${w\in{\mathbb C\kern.5pt}^n}$, we see ${0=AAw=A^2w}$, so that
${w\in{\mathcal N}(A^2)={\mathcal N}(A)}$ and ${0=Aw=v}.$ Thus, $A|_M$ is injective and hence
surjective$.$ We see
$$
{\mathcal R}(A)\cap{\mathcal R}(B)=A|_M({\mathcal R}(A)\cap{\mathcal R}(B))= A({\mathcal R}(A)\cap{\mathcal R}(B))\sse A({\mathcal R}(B))={\mathcal R}(AB).
$$
The reverse inclusion, ${{\mathcal R}(AB)\sse{\mathcal R}(A)\cap{\mathcal R}(B)}$ holds because
${\mathcal R}(AB)$ is clearly contained in ${\mathcal R}(A)$ while ${{\mathcal R}(AB)\sse{\mathcal R}(B)}$ by
hypothesis$.$ Hence, (a)$\implies$(b).
\smallskip
To see that (b)$\implies$(c), apply (b) with ${B=A}$. That (c) implies
(d) is clear$.$ By the rank-nullity theorem, condition (d) implies that
${\mathcal N}(A^2)$ and ${\mathcal N}(A)$ have the same dimension, but ${\mathcal N}(A)\sse{\mathcal N}(A^2)$, so
that (a) holds. We've shown (d) implies (a), completing the proof.
\end{proof}
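\vskip6pt\noi
A small example (our own, for illustration only) shows how condition (b) fails once ${{\mathcal N}(A^2)\ne{\mathcal N}(A)}$: take
$$
A=B=\big(\smallmatrix{0 & 1 \cr
0 & 0 \cr}\big)
\quad\hbox{on}\quad {\mathbb C\kern.5pt}^2,
$$
so that ${\mathcal N}(A)$ is spanned by $e_1$ while ${{\mathcal N}(A^2)={\mathbb C\kern.5pt}^2}$. Here ${{\mathcal R}(AB)={\mathcal R}(A^2)=\{0\}\sse{\mathcal R}(B)}$, yet ${\mathcal R}(A)\cap{\mathcal R}(B)$ is the span of $e_1$, so the conclusion of (b) fails.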
\vskip0pt
Recall that for all posinormal operators $T$, we have ${\mathcal N}(T^2)={\mathcal N}(T)$ by
Proposition \ref{P:2.1}(c) (which is Proposition 3 of \cite{JKKP})$.$ Thus,
condition (a) of Theorem \ref{T:5.3} holds whenever $A$ is EP.
\begin{corollary}[{Hartwig--Katz Theorem}]\label{C:5.4}
{\it Suppose that\/ $A$ and\/ $B$ are EP operators on\/ ${\mathbb C\kern.5pt}^n\!.$ Then\/
$AB$ is EP if and only if\/ {\rm(i)} $\,{{\mathcal R}(AB)\sse{\mathcal R}(B)}$ and}\/ {\rm(ii)}
$\,{{\mathcal N}(A)\sse{\mathcal N}(AB)}$.
\end{corollary}
\begin{proof}
Suppose that $AB$ is EP$.$ Then
$$
{\mathcal R}(AB)={\mathcal R}(B^*A^*)\sse{\mathcal R}(B^*)= {\mathcal R}(B),
$$
where the final equality holds because $B$ is EP$.$ Thus, (i) holds.
Similarly
$$
{\mathcal R}(B^*A^*)={\mathcal R}(AB)\sse{\mathcal R}(A)={\mathcal R}(A^*).
$$
Thus, ${{\mathcal R}(A^*)^\perp\sse{\mathcal R}(B^*A^*)^\perp}$; that is, ${{\mathcal N}(A)\sse{\mathcal N}(AB)}$,
so that (ii) holds.
\vskip6pt
Now suppose that (i) and (ii) hold for EP operators $A$ and $B.$
\vskip6pt
Because $A$ is EP, ${{\mathcal N}(A^2)={\mathcal N}(A)}$; thus, since we are assuming (i) holds,
Theorem \ref{T:5.3}, (a)$\implies$(b) yields
\begin{equation}\label{RAB}
{\mathcal R}(AB)\sse{\mathcal R}(A)\cap{\mathcal R}(B).
\end{equation}
Because $B^*$ is EP, ${{\mathcal N}(B^{*2})={\mathcal N}(B^*)}$; taking orthogonal complements, (ii) yields ${\mathcal R}(B^*A^*) \subseteq {\mathcal R}(A^*)$.
Hence, by Theorem \ref{T:5.3}, (a)$\implies$(b),
\begin{equation}\label{RASBS}
{\mathcal R}(B^*A^*) = {\mathcal R}(B^*)\cap{\mathcal R}(A^*).
\end{equation}
Because ${{\mathcal R}(A)={\mathcal R}(A^*)}$ and ${{\mathcal R}(B)={\mathcal R}(B^*)}$ for the EP operators $A$ and
$B$, (\ref{RAB}) and (\ref{RASBS}) yield
$$
{\mathcal R}(AB)={\mathcal R}((AB)^*),
$$
so that $AB$ is EP, as desired.
\end{proof}
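\vskip6pt\noi
Conditions (i) and (ii) are not automatic, even in two dimensions. For instance (an example of our own choosing), the real symmetric matrices
$$
A=\big(\smallmatrix{1 & 0 \cr
0 & 0 \cr}\big)
\quad\hbox{and}\quad
B=\big(\smallmatrix{1 & 1 \cr
1 & 1 \cr}\big)
$$
are EP, while
$AB=\big(\smallmatrix{1 & 1 \cr
0 & 0 \cr}\big)$
is not: ${\mathcal R}(AB)$ is spanned by $e_1$ whereas ${\mathcal R}((AB)^*)$ is spanned by ${(1,1)}$. Consistent with the corollary, at least one of (i) and (ii) must fail, and in fact both do.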
\vskip0pt
We note that Koliha (\cite{Kh}, 1999) obtained a simple proof of the
Hartwig--Katz Theorem quite different from ours.
\bibliographystyle{amsplain}
Beam dump experiments have been aimed at searching for new particles, such as dark photons and axions (see, e.g. \cite{Essig:2013lka} and references therein) that decay to lepton and/or photon pairs. Electron beam dumps in particular have received a large amount of theoretical attention in recent years~\cite{Bjorken:2009mm,Andreas:2012mt}. The typical setup of an electron beam dump experiment is to dump an electron beam into a target, in which the electrons are stopped (for a discussion of proton beam dumps, which is beyond the scope of this work, see, e.g.~\cite{Blumlein:2013cua,deNiverville:2016rqh}). The new particles produced by the bremsstrahlung-like process pass through a shield region and decay. These new particles can be detected by their decay products, electron and/or photon pairs, measured by the detector downstream of the decay region. Previous work simplified the necessary phase space integral by using the Weizs\"{a}cker-Williams (WW) approximation \cite{vonWeizsacker:1934nji,Williams:1935dka} which, also known as the method of virtual quanta, is a semiclassical approximation. The idea is that the electromagnetic field generated by a fast-moving charged particle is nearly transverse, resembling a plane wave, and can therefore be approximated by real photons. The use of the WW approximation in bremsstrahlung processes was developed in Refs.~\cite{Kim:1973he,Tsai:1973py} and applied to beam dump experiments in Refs.~\cite{Bjorken:2009mm,Tsai:1986tx}. The WW approximation simplifies evaluation of the integral over phase space and approximates the 2 particle to 3 particle (2 to 3) cross section in terms of a 2 particle to 2 particle (2 to 2) cross section. For the WW approximation to work in a beam dump experiment, it requires the incoming beam energy to be much greater than the mass of the new particle, $m_\phi$, and the electron mass $m_e$.
The previous work \cite{Bjorken:2009mm} used the following three approximations:
\begin{enumerate}
\item WW approximation;
\item a further simplification of the phase space integral, see Eq. (\ref{eq:tmin tmax});
\item $m_\phi\gg m_e$.
\end{enumerate}
The combination of the first two approximations has been denoted \cite{Kim:1973he} the improved WW (IWW) approximation. The name ``improved WW'' might be somewhat misleading, since the procedure reduces the computational time but does not improve accuracy. In this paper, we will focus on examining the validity of the WW and IWW approximations (the validity of the WW approximation is also discussed in other processes, e.g.~\cite{Brodsky:1971ud}). The third approximation, used to simplify the calculation of the amplitude, is however not in our scope, because it is merely a special case, obtained by cutting off our results in the region $m_\phi\lesssim 2m_e$. Nevertheless, we should point out that without using the third approximation we can use beam dump experiments to explore a larger parameter space.
As an example, we use the beam dump experiment E137 \cite{Bjorken:1988as} and the production of a new scalar boson, which we denote $\phi$. Interest in a new scalar boson arose recently because such a particle, coupling to standard model fermions, can solve the proton radius puzzle and the muonic anomalous magnetic moment discrepancy simultaneously \cite{TuckerSmith:2010ra,Liu:2016qwd}.
The outline of this paper is as follows. In Sec.~\ref{sec:dynamics}, we calculate the squared amplitude for 2 to 3 and 2 to 2 processes. In Sec.~\ref{sec:cross section}, the cross sections for the 2 to 3 and 2 to 2 processes are calculated in the lab frame without any approximation. In Sec.~\ref{sec:WW approximation}, we introduce the WW approximation. In Sec.~\ref{sec:cross section comparison}, we derive and compare the cross sections with and without approximations. In Sec.~\ref{sec:particle production}, we compare the number of new particles produced in beam dump experiments with and without approximations. In Sec.~\ref{sec:data analysis}, we assume that this new scalar boson is observed and measured in a beam dump experiment, determine the mass and coupling constant, and compare the results with and without approximations. A discussion is presented in Sec.~\ref{sec:discussion}.
\section{dynamics---a new scalar boson as an example}\label{sec:dynamics}
For simplicity, we assume that the new scalar boson $\phi$ couples only to the electron by a Yukawa interaction, i.e., the scalar boson does not couple to standard model fermions other than the electron. The Lagrangian in the mostly-plus metric is
\begin{align}
\mathcal{L}\supset-\frac{1}{2}(\partial\phi)^2-\frac{1}{2}m_\phi^2\phi^2+e\epsilon\phi\bar\psi\psi
\end{align}
where $\epsilon=g/e$, $e$ is the electric charge, and $\psi$ is the electron field. Once the scalar boson is produced, it will decay to photon pairs through an electron loop,
\begin{align}\label{eq:decay to electrons}
\Gamma_{\phi\to\gamma\gamma}=\epsilon^2\frac{\alpha^3}{4\pi^2}\frac{m_\phi^3}{m_e^2}f\left(\frac{m_\phi^2}{4m_e^2}\right),
\end{align}
where $m_e$ is the electron mass and $f(\tau)=\frac{1}{4\tau^2}\left|1+(1-\frac{1}{\tau})\left(\sin^{-1}\sqrt{\tau}\right)^2\right|^2$. If $m_\phi>2m_e$, the scalar boson can also decay to electron pairs,
\begin{align}\label{eq:decay to photons}
\Gamma_{\phi\to e^+e^-}=\epsilon^2\frac{\alpha}{2}m_\phi\left(1-\frac{4m_e^2}{m_\phi^2}\right)^{3/2}.
\end{align}
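As a rough consistency check (the expansion here is our own and is not used in what follows), for $m_\phi\ll 2m_e$ one has $\tau=m_\phi^2/4m_e^2\to0$, $\left(\sin^{-1}\sqrt{\tau}\right)^2\approx\tau+\tau^2/3$, and hence $f(\tau)\to 1/9$, so that
\begin{align}
\Gamma_{\phi\to\gamma\gamma}\approx\epsilon^2\frac{\alpha^3}{36\pi^2}\frac{m_\phi^3}{m_e^2}\qquad (m_\phi\ll 2m_e);
\end{align}
except extremely close to threshold, the $e^+e^-$ channel dominates over the loop-induced two-photon channel once it is open, being of lower order in $\alpha$.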
\subsection{2 to 3 production}
\begin{figure}
\centering
\includegraphics[scale=0.8]{2_to_3}
\caption{\label{fig:2 to 3} Lowest order 2 to 3 production process: $e(p)+A(P_i)\rightarrow e(p')+A(P_f)+\phi(k)$. $A$, $\gamma$, $e$, and $\phi$ stand for the target atom, photon, electron, and the new scalar boson.}
\end{figure}
The leading production process is the bremsstrahlung-like radiation of the scalar from the electron, shown in Fig.~\ref{fig:2 to 3},
\begin{align}\label{eq:2 to 3 production process}
e(p)+A(P_i)\rightarrow e(p')+A(P_f)+\phi(k)
\end{align}
where $e$, $A$, and $\phi$ stand for electron, target atom, and the new scalar boson, respectively. We define the following quantities using the mostly-plus metric
\begin{align}\label{eq:2 to 3 variables}
\tilde{s}&=-(p'+k)^2-m_e^2=-2p'\cdotp k+m_\phi^2\nonumber\\
\tilde{u}&=-(p-k)^2-m_e^2=2p\cdotp k+m_\phi^2\nonumber\\
t_2&=-(p'-p)^2=2p'\cdotp p+2m_e^2\\
q&=P_i-P_f\nonumber\\
t&=q^2\nonumber
\end{align}
which satisfy
\begin{align}
\tilde{s}+t_2+\tilde{u}+t=m_\phi^2.
\end{align}
For definiteness, we assume the atom is a scalar boson (its spin is not consequential here) so that the Feynman rule for the photon-atom vertex is
\begin{align}
ieF(q^2)(P_i+P_f)_\mu\equiv ieF(q^2)P_\mu
\end{align}
where $F(q^2)$ is the form factor which accounts for the nuclear form factor \cite{DeJager:1987qc} and the atomic form factor \cite{atomic form factor}. Here, we only include the elastic form factor since the contribution of the inelastic one is much smaller and can be neglected in computing the cross section. The amplitude of the process in Fig.~\ref{fig:2 to 3} is
\begin{align}
\mathcal{M}^{2\to3}&=e^2g\frac{F(q^2)}{q^2}\bar{u}_{p',s'}\left[\slashed{P}\frac{-(\slashed{p}-\slashed{k})+m_e}{-\tilde{u}}+\frac{-(\slashed{p'}+\slashed{k})+m_e}{-\tilde{s}}\slashed{P}\right]u_{p,s}
\end{align}
where $u_{p,s}$ is the electron spinor; $s$ and $s'$ are equal to $\pm 1$. After averaging and summing over initial and final spins, we have
\begin{align}
\overline{|\mathcal{M}^{2\to3}|^2}=\left(\frac{1}{2}\sum_s\right)\sum_{s'}|\mathcal{M}^{2\to3}|^2=e^4g^2\frac{F(q^2)^2}{q^4}\mathcal{A}^{2\to3}
\end{align}
where
\begin{align}
\mathcal{A}^{2\to3}=-\frac{(\tilde{s}+\tilde{u})^2}{\tilde{s}\tilde{u}}P^2-4\frac{t}{\tilde{s}\tilde{u}}(P\cdotp k)^2-\frac{(\tilde{s}+\tilde{u})^2}{\tilde{s}^2\tilde{u}^2}(m_\phi^2-4m_e^2)\left[P^2 t+4\left(\frac{\tilde{u}P\cdotp p+\tilde{s}P\cdotp p'}{\tilde{s}+\tilde{u}}\right)^2\right].
\end{align}
\subsection{2 to 2 production}
\begin{figure}
\centering
\includegraphics[scale=0.8]{2_to_2}
\caption{\label{fig:2 to 2} Lowest order 2 to 2 production process: $e(p)+\gamma(q)\rightarrow e(p')+\phi(k)$. $\gamma$, $e$, and $\phi$ stand for photon, electron, and the new scalar boson.}
\end{figure}
We now consider the 2 to 2 process in Fig. \ref{fig:2 to 2}, a ``subprocess'' of the full 2 to 3 interaction,
\begin{align}\label{eq:2 to 2 production process}
e(p)+\gamma(q)\rightarrow e(p')+\phi(k).
\end{align}
With the same definition in Eq. (\ref{eq:2 to 3 variables}), $\tilde{s}$, $\tilde{u}$, and $t_2$ satisfy
\begin{align}
\tilde{s}+t_2+\tilde{u}&=m_\phi^2
\end{align}
and the amplitude in Fig. \ref{fig:2 to 2} is
\begin{align}
\mathcal{M}^{2\to2}=eg\epsilon^\mu_\lambda\bar{u}_{p',s'}\left[\gamma_\mu\frac{-(\slashed{p}-\slashed{k})+m_e}{-\tilde{u}}+\frac{-(\slashed{p'}+\slashed{k})+m_e}{-\tilde{s}}\gamma_\mu\right]u_{p,s}
\end{align}
where $\epsilon$ is the photon polarization vector and $\lambda=\pm 1$. After averaging and summing over the initial and final spins and polarization,
\begin{align}\label{eq:2 to 2 M}
\overline{|\mathcal{M}^{2\to2}|^2}&=\left(\frac{1}{2}\sum_s\right)\sum_{s'}\left(\frac{1}{2}\sum_\lambda\right)|\mathcal{M}^{2\to2}|^2=e^2g^2\mathcal{A}^{2\to2}
\end{align}
where
\begin{align}\label{eq:2 to 2 A}
\mathcal{A}^{2\to2}=&-\frac{(\tilde{s}+\tilde{u})^2}{\tilde{s}\tilde{u}}+2(m_\phi^2-4m_e^2)\left[\left(\frac{\tilde{s}+\tilde{u}}{\tilde{s}\tilde{u}}\right)^2m_e^2-\frac{t_2}{\tilde{s}\tilde{u}}\right].
\end{align}
\section{cross section}\label{sec:cross section}
\subsection{2 to 3}
The cross section for the 2 to 3 process, see Eq. (\ref{eq:2 to 3 production process}) and Fig. \ref{fig:2 to 3}, in the lab frame is given by
\begin{align}
d\sigma=\frac{1}{4|\textbf{p}|M}\overline{|\mathcal{M}^{2\to3}|^2}(2\pi)^4\delta^4(p'+k-p-q)\frac{d^3\textbf{p}'}{(2\pi)^3 2E'}\frac{d^3\textbf{P}_f}{(2\pi)^3 2E_f}\frac{d^3\textbf{k}}{(2\pi)^3 2E_k}
\end{align}
where $M$ is the mass of the target atom. Integrating over $\mathbf{p}'$ and changing the variable from $\mathbf{P}_f$ to $\mathbf{q}$, we have
\begin{align}
d\sigma=\frac{\overline{|\mathcal{M}^{2\to3}|^2}}{1024\pi^5|\textbf{p}|M E_f E' E_k}\delta(E'+E_k-E-q_0)d^3\textbf{q}d^3\textbf{k}.
\end{align}
In order to integrate over $\mathbf{q}$, we choose the spherical coordinate $(Q,\theta_q,\phi_q)$ where $Q=|\mathbf{q}|$, and $\theta_q$ and $\phi_q$ are the polar and azimuthal angles of \textbf{q} in the direction of $\mathbf{V}=\mathbf{k}-\mathbf{p}$. First, we use the remaining $\delta$-function to integrate out $Q$, and then change variables from $\theta_q$ to $t$. We obtain
\begin{align}
d\sigma=\frac{d^3\textbf{k}}{128\pi^4|\textbf{p}|V E_k}\int^{t_{max}}_{t_{min}}dt\left(\frac{1}{8M^2}\int_0^{2\pi}\frac{d\phi_q}{2\pi}\overline{|\mathcal{M}^{2\to3}|^2}\right)
\end{align}
where $V=|\textbf{V}|$, $t(Q)=q^2=2M(\sqrt{M^2+Q^2}-M)$, $t_{max}=t(Q_+)$, $t_{min}=t(Q_-)$, and
\begin{align}
Q_\pm=\frac{V[\tilde{u}+2M(E'+E_f)]\pm(E'+E_f)\sqrt{\tilde{u}^2+4M\tilde{u}(E'+E_f)+4M^2V^2}}{2(E'+E_f)^2-2V^2}.
\end{align}
Introducing the polar angle $\theta$ and the azimuthal angle of \textbf{k} with respect to the direction of \textbf{p}, integrating over the azimuthal angle, and then changing the variable from $|\mathbf{k}|$ to $x$ where $x\equiv E_k/E$, we have
\begin{align}\label{eq:2 to 3}
\frac{d\sigma}{dx d\cos\theta}&=\frac{|\textbf{k}|E}{64\pi^3|\textbf{p}|V}\int^{t_{max}}_{t_{min}}dt\left(\frac{1}{8M^2}\int_0^{2\pi}\frac{d\phi_q}{2\pi}\overline{|\mathcal{M}^{2\to3}|^2}\right)\nonumber\\
&=\epsilon^2\alpha^3\frac{|\textbf{k}|E}{|\textbf{p}|V}\int^{t_{max}}_{t_{min}}dt\frac{F(t)^2}{t^2}\left(\frac{1}{8M^2}\int_0^{2\pi}\frac{d\phi_q}{2\pi}\mathcal{A}^{2\to3}\right).
\end{align}
\subsection{2 to 2}
The 2 to 2 cross section, see Eq. (\ref{eq:2 to 2 production process}) and Fig. \ref{fig:2 to 2}, in the lab frame is straightforwardly expressed in terms of the amplitude,
\begin{align}\label{eq:2 to 2}
\frac{d\sigma}{d(p\cdotp k)}=2\frac{d\sigma}{dt_2}=\frac{\overline{|\mathcal{M}^{2\to2}|^2}}{8\pi\tilde{s}^2}=\epsilon^2\alpha^2\frac{2\pi}{\tilde{s}^2}\mathcal{A}^{2\to2}.
\end{align}
\section{Weizs\"{a}cker-Williams approximation}\label{sec:WW approximation}
It is explained in Ref.~\cite{Kim:1973he} that the WW approximation relies on the incoming electron energy being much greater than $m_\phi$ and $m_e$, such that the final state electron and scalar boson are highly collinear. In that case the phase space integral can be approximated by
\begin{align}\label{eq:WW}
\frac{1}{8M^2}\int\frac{d\phi_q}{2\pi}\mathcal{A}^{2\to3}\approx\frac{t-t_{min}}{2t_{min}}\mathcal{A}^{2\to2}_{t=t_{min}}.
\end{align}
With the WW approximation, Eq.~(\ref{eq:2 to 3}) can be approximated to be
\begin{align}
\frac{d\sigma}{dx d\cos\theta}\approx\epsilon^2\alpha^3\frac{|\textbf{k}|E}{|\textbf{p}|V}\frac{\mathcal{A}^{2\to2}_{t=t_{min}}}{2t_{min}}\chi
,\end{align}
where
\begin{align}\label{eq:chi}
\chi=\int^{t_{max}}_{t_{min}}dt\frac{t-t_{min}}{t^2}F(t)^2.
\end{align}
Using Eq. (\ref{eq:2 to 2}), we have
\begin{align}
\frac{d\sigma}{dx d\cos\theta}\approx\frac{\alpha\chi}{4\pi}\frac{|\textbf{k}|E}{|\textbf{p}|V}\frac{\tilde{s}^2}{t_{min}}\left.\frac{d\sigma}{d(p\cdotp k)}\right|_{t=t_{min}}.
\end{align}
Following the discussion in Refs.~\cite{Bjorken:2009mm,Tsai:1986tx}, near $t=t_{min}$ (when $\mathbf{q}$ and $\mathbf{V}=\mathbf{k}-\mathbf{p}$ are collinear), we can approximate the following quantities
\begin{align}\label{eq:WW variables}
\tilde{s}&\approx-\frac{\tilde{u}}{1-x}\nonumber\\
\tilde{u}&\approx-xE^2\theta_\phi^2-m_\phi^2\frac{1-x}{x}-m_e^2 x\nonumber\\
t_2&\approx\frac{\tilde{u}x}{1-x}+m_\phi^2\\
V&\approx E(1-x)\nonumber\\
t_{min}&\approx\frac{\tilde{s}^2}{4E^2}\nonumber
\end{align}
Using Eq. (\ref{eq:WW variables}), we arrive at the well-known equation \cite{Bjorken:2009mm,Tsai:1986tx}
\begin{align}\label{eq:WW Tsai}
\frac{d\sigma}{dx d\cos\theta}\approx\frac{\alpha\chi}{\pi}\frac{xE^2\beta}{1-x}\left.\frac{d\sigma}{d(p\cdotp k)}\right|_{t=t_{min}}
\end{align}
where $\beta=\sqrt{1-m_\phi^2/E_k^2}$. Note that in Eq. (\ref{eq:WW Tsai}) $d\sigma/d(p\cdotp k)$ is evaluated at $t=t_{min}$. So the amplitude $\mathcal{A}^{2\to2}$ in Eq. (\ref{eq:2 to 2}) evaluated at $t=t_{min}$ using Eq. (\ref{eq:WW variables}) is
\begin{align}\label{eq:2 to 2 A tmin}
\mathcal{A}^{2\to2}_{t=t_{min}}\approx\frac{x^2}{1-x}+2(m_\phi^2-4m_e^2)\frac{\tilde{u}x+m_\phi^2(1-x)+m_e^2x^2}{\tilde{u}^2}.
\end{align}
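Assembling Eqs.~(\ref{eq:2 to 2}), (\ref{eq:WW variables}), (\ref{eq:WW Tsai}), and (\ref{eq:2 to 2 A tmin}) gives the following Python sketch of the WW-approximated double-differential cross section. Natural units with energies in GeV are assumed, and \texttt{chi\_val} stands for the flux integral $\chi$ of Eq.~(\ref{eq:chi}) evaluated with the appropriate $t$ limits; the sketch is illustrative only.
\begin{verbatim}
import numpy as np

ALPHA = 1.0 / 137.036   # fine-structure constant
M_E = 0.000511          # electron mass in GeV

def ww_dsigma_dx_dcostheta(x, theta, E, m_phi, eps, chi_val):
    # WW-approximated d^2(sigma) / (dx dcos(theta)), Eq. (WW Tsai)
    E_k = x * E
    beta = np.sqrt(1.0 - m_phi**2 / E_k**2)
    u = -x * E**2 * theta**2 - m_phi**2 * (1.0 - x) / x - M_E**2 * x
    s = -u / (1.0 - x)                       # s-tilde near t = t_min
    A22 = (x**2 / (1.0 - x)
           + 2.0 * (m_phi**2 - 4.0 * M_E**2)
           * (u * x + m_phi**2 * (1.0 - x) + M_E**2 * x**2) / u**2)
    dsig_dpk = eps**2 * ALPHA**2 * 2.0 * np.pi * A22 / s**2
    return ALPHA * chi_val / np.pi * x * E**2 * beta / (1.0 - x) * dsig_dpk
\end{verbatim}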
\section{cross section comparison}\label{sec:cross section comparison}
\begin{figure}
\centering
\subfigure[\;$d\sigma/(\epsilon^2dx)$]{\includegraphics[scale=0.95]{cross_section}}
\subfigure[\;relative error of $d\sigma/(\epsilon^2 dx)$]{\includegraphics[scale=0.95]{cross_section_relative_error}}
\caption{\label{fig:cross_section} The solid green, dashed red, and dotted blue lines correspond to the differential cross section with no, WW, and IWW approximation, respectively. The relative error of $\mathcal{O}$ is defined by $(\mathcal{O}_{\rm approx.}-\mathcal{O}_{\rm exact})/\mathcal{O}_{\rm exact}$.}
\end{figure}
To test approximations of the cross section for $\phi$ production, we examine three cases.
\begin{enumerate}
\item The complete calculation, Eq. (\ref{eq:2 to 3}),
\begin{align}\label{eq:d sigma dx 1}
\frac{d\sigma}{dx}=\epsilon^2\alpha^3\frac{|\textbf{k}|E}{|\textbf{p}|}\int_0^{\theta_{max}} d\cos\theta\frac{1}{V}\int^{t_{max}}_{t_{min}}dt\frac{F(t)^2}{t^2}\left(\frac{1}{8M^2}\int_0^{2\pi}\frac{d\phi_q}{2\pi}\mathcal{A}^{2\to3}\right)
\end{align}
where $\theta_{max}$ depends on the configuration of the detector. For beam dump E137, $\theta_{max}\approx 4.4\times10^{-3}$.
\item WW: using the WW approximation, Eq. (\ref{eq:WW}),
\begin{align}\label{eq:d sigma dx 2}
\left(\frac{d\sigma}{dx}\right)_{WW}=2\epsilon^2\alpha^3|\textbf{k}|E(1-x)\int_0^{\theta_{max}} d\cos\theta\frac{\mathcal{A}^{2\to2}_{t=t_{min}}}{\tilde{u}^2}\chi
\end{align}
where $\theta_{max}$ is the same as the first case and $\chi$ is defined in Eq. (\ref{eq:chi}). Note that the upper and lower limits of $\chi$ depend on $x$ and $\theta$.
\item Improved WW (IWW): If the upper and lower limits of the $t$-integral in $\chi$ in Eq. (\ref{eq:d sigma dx 2}) are not sensitive to $x$ and $\theta$, the integration limits can be set to be independent of $x$ and $\theta$, and we can further approximate them. Similar to the argument in Ref.~\cite{Bjorken:2009mm}, we set
\begin{align}\label{eq:tmin tmax}
t_{min}=\left(\frac{m_\phi^2}{2E}\right)^2 {\rm\; and\;\;} t_{max}=m_\phi^2+m_e^2
\end{align}
which is valid when the production cross section is dominantly collinear with $x$ close to 1. The difference in $t_{max}$ between \cite{Bjorken:2009mm} and our approach is because we do not assume $m_\phi\gg m_e$. Therefore, we can pull $\chi$ out of the integral over $\cos\theta$. Then, changing variables from $\cos\theta$ to $\tilde{u}$ and extending the lower limit of $\tilde{u}$ to $-\infty$, we have
\begin{align}\label{eq:d sigma dx 3}
\left(\frac{d\sigma}{dx}\right)_{IWW}&=\epsilon^2\alpha^3\chi\frac{|\textbf{k}|}{E}\frac{1-x}{x}\int^{\tilde{u}_{max}}_{-\infty}d\tilde{u}\frac{\mathcal{A}^{2\to2}_{t=t_{min}}}{\tilde{u}^2}\\
&=\epsilon^2\alpha^3\chi\frac{|\textbf{k}|}{E}\frac{m_e^2(2-x)^2-2x \tilde{u}_{max}}{3\tilde{u}_{max}^2}
\end{align}
where $\tilde{u}_{max}=-m_\phi^2\frac{1-x}{x}-m_e^2 x$ and, in the last line, we use Eq. (\ref{eq:2 to 2 A tmin}); a numerical sketch of this closed-form expression is given just after this list. We emphasize that the name ``improved'' means reducing the computational time (because of one fewer integral than in the WW approximation above) and does not imply more accuracy.
\end{enumerate}
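As noted above, the closed-form IWW expression in the last line of Eq.~(\ref{eq:d sigma dx 3}) can be sketched numerically as follows, where \texttt{chi\_val} denotes $\chi$ evaluated with the fixed limits of Eq.~(\ref{eq:tmin tmax}) and energies are in GeV. As before, this is an illustrative sketch rather than the code used for the figures.
\begin{verbatim}
import numpy as np

ALPHA = 1.0 / 137.036   # fine-structure constant
M_E = 0.000511          # electron mass in GeV

def iww_dsigma_dx(x, E, m_phi, eps, chi_val):
    # Closed-form IWW cross section; chi_val uses the fixed limits
    # t_min = (m_phi^2 / 2E)^2 and t_max = m_phi^2 + m_e^2.
    E_k = x * E
    k_mag = np.sqrt(E_k**2 - m_phi**2)
    u_max = -m_phi**2 * (1.0 - x) / x - M_E**2 * x
    return (eps**2 * ALPHA**3 * chi_val * k_mag / E
            * (M_E**2 * (2.0 - x)**2 - 2.0 * x * u_max)
            / (3.0 * u_max**2))
\end{verbatim}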
In Fig. \ref{fig:cross_section}, we show the cross sections in each of the three cases for five values of the scalar boson mass, setting the incoming electron beam energy to 20 GeV. In both approximations, the cross section is of the same order of magnitude as that using the complete calculation. However, there are regions where there are ${\cal O}\left(1\right)$ relative errors. The WW approximation (dashed red lines in Fig.~\ref{fig:cross_section}) can differ from the complete calculation by 100\% when $m_\phi\lesssim 1$ MeV; in the IWW case (dotted blue lines in Fig.~\ref{fig:cross_section}), the approximation starts to fail when $m_\phi\gtrsim 100$ MeV.
\section{particle production}\label{sec:particle production}
There are two characteristic lengths which are crucial in beam dump experiments. The first is the decay length of the new particle in the lab frame,
\begin{align}\label{eq:absorption process}
l_\phi=\frac{E_k}{m_\phi}\frac{1}{\Gamma_\phi},
\end{align}
where $\Gamma_\phi=\Gamma_{\phi\to e^+e^-}+\Gamma_{\phi\to\gamma\gamma}$, see Eqs. (\ref{eq:decay to electrons}) and (\ref{eq:decay to photons}). In order to be observed, the new particle must survive its passage through the target and shielding and then decay before reaching the detector. If the target is thick (much thicker than a radiation length), most of the new particles will be produced in the first few radiation lengths. The number of observable particles is then approximately proportional to the probability $e^{-L_{sh}/l_\phi}(1-e^{-L_{dec}/l_\phi})$, where $L_{sh}$ is the length of the target and shield and $L_{dec}$ is the length of the decay region between the shield and the detector, in which the new particle must decay into an electron or photon pair.
The second characteristic length is the absorption length
\begin{align}
\lambda=\frac{1}{n_e\sigma_{abs}},
\end{align}
where $n_e$ is the number density of the target electrons and $\sigma_{abs}$ is the cross section of the absorption process. The leading absorption process is
\begin{align}\label{eq:absorption}
e(p)+\phi(k)\rightarrow e(p')+\gamma(q),
\end{align}
which is related to the 2 to 2 production process, Eq. (\ref{eq:2 to 2 production process}), via the crossing symmetry $\tilde{s}\leftrightarrow\tilde{u}$. Since Eq. (\ref{eq:2 to 2 A}) is symmetric in $\tilde{s}\leftrightarrow\tilde{u}$, the algebraic form of the squared amplitude for the absorption process is the same as Eq. (\ref{eq:2 to 2 A}); it differs only by a factor of 2, which comes from summing over the final state instead of averaging over the initial state as in Eq. (\ref{eq:2 to 2 M}):
\begin{align}
\mathcal{A}^{2\to2}_{abs}=2\mathcal{A}^{2\to2}.
\end{align}
The cross section for the absorption process (\ref{eq:absorption}) is
\begin{align}
\frac{d\sigma}{d\Omega}&=\frac{1}{64\pi^2 m_e}\frac{|\mathbf{q}|}{|\mathbf{k}|}\frac{\overline{|\mathcal{M}^{2\to2}_{abs}|^2}}{E_k+m_e-|\mathbf{k}|\cos\theta_\gamma}\\
\sigma_{abs}&=\frac{\pi\epsilon^2\alpha^2}{m_e|\mathbf{k}|}\int_{-1}^1 d\cos\theta_\gamma\frac{|\mathbf{q}|\mathcal{A}^{2\to2}}{E_k+m_e-|\mathbf{k}|\cos\theta_\gamma},
\end{align}
where $\theta_\gamma$ is the angle between the outgoing photon and the incoming new particle. The new particle, after being produced, must not be absorbed by the target and shield if it is to be detected. If the target is thick (much thicker than the absorption length), the number of detectable particles is approximately proportional to the probability $e^{-L_{sh}/\lambda}$.
The expression for the number of new particles produced in terms of the cross section (without considering the absorption process) can be found in, {\it e.g.}, Refs.~\cite{Bjorken:2009mm,Tsai:1986tx,Andreas:2012mt}. Using the thick target approximation and including the absorption process, we find
\begin{align}
N_\phi\approx\frac{N_eX}{M}\int_{E_{min}}^{E_0}dE\int_{x_{min}}^{x_{max}}dx\int_0^TdtI_e(E_0,E,t)\frac{d\sigma}{dx}e^{-L_{sh}\left(\frac{1}{l_\phi}+\frac{1}{\lambda}\right)}(1-e^{-L_{dec}/l_\phi}),
\end{align}
where $M$ is the mass of the target atom (aluminium); $N_e$ is the number of incident electrons; $X$ is the unit radiation length of the target; $E_0$ is the incoming electron beam energy; $E_{min}=m_e+\max(m_\phi,E_{cut})$ and $x_{min}=\frac{\max(m_\phi,E_{cut})}{E}$, where $E_{cut}$ is the measured energy cutoff, which depends on the detector; $x_{max}$ is smaller than, but very close to, 1 (it can be approximated by $1-\frac{m_e}{E}$ if the initial- and final-state electron and the new particle are collinear); $T=\rho L_{sh}/X$, where $\rho$ is the density of the target; $l_\phi$ is the decay length of the new particle in the lab frame; $\lambda$ is the absorption length of the new particle passing through the target and shield; and $I_e$, derived in Ref.~\cite{Tsai:1966js}, is the energy distribution of the electrons after passing through a medium of $t$ radiation lengths
\begin{align}
I_e(E_0,E,t)=\frac{\left(\ln\frac{E_0}{E}\right)^{bt-1}}{E_0\Gamma(bt)},
\end{align}
where $\Gamma$ is the gamma function and $b=4/3$. For beam dump E137, which we take as our prototypical setup, $E_0=20$ GeV, $E_{cut}=2$ GeV, $N_e=1.87\times 10^{20}$, $L_{sh}=179$ m, and $L_{dec}=204$ m. The experiment reported a null result, which translates into a 95\% C.L. upper limit on $N_\phi$ of 3 events.
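As a sketch of how the integral for $N_\phi$ is evaluated in practice, the following Python fragment performs the nested quadrature over $t$, $x$, and $E$ using Tsai's electron energy distribution $I_e$. The differential cross section \texttt{dsigma\_dx}, the decay length \texttt{l\_phi}, and the absorption length \texttt{lam} enter as placeholder functions to be built from the expressions above (lengths in meters, energies in GeV); the default parameters are the E137 values quoted in the text, and the fragment is illustrative rather than the production code.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

B = 4.0 / 3.0
M_E = 0.000511   # GeV

def I_e(E0, E, t):
    # Tsai's electron energy distribution after t radiation lengths
    return np.log(E0 / E)**(B * t - 1.0) / (E0 * gamma(B * t))

def n_phi(dsigma_dx, l_phi, lam, m_phi, N_e, X_over_M, T,
          L_sh=179.0, L_dec=204.0, E0=20.0, E_cut=2.0):
    # dsigma_dx(E, x), l_phi(E_k) and lam(E_k) are placeholders for
    # the cross section, decay length and absorption length above.
    def over_x(E):
        x_min = max(m_phi, E_cut) / E
        x_max = 1.0 - M_E / E
        def f(x):
            E_k = x * E
            geom = (np.exp(-L_sh * (1.0 / l_phi(E_k) + 1.0 / lam(E_k)))
                    * (1.0 - np.exp(-L_dec / l_phi(E_k))))
            return dsigma_dx(E, x) * geom
        return quad(f, x_min, x_max)[0]
    def over_E(E):
        # the t integral factorises since dsigma/dx does not depend on t
        shower = quad(lambda t: I_e(E0, E, t), 0.0, T)[0]
        return shower * over_x(E)
    E_min = M_E + max(m_phi, E_cut)
    return N_e * X_over_M * quad(over_E, E_min, E0)[0]
\end{verbatim}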
\begin{figure}
\centering
\subfigure[\;exclusion plot]{\includegraphics[scale=1.02]{E137}}
\subfigure[\;exclusion plot (zoomed in)]{\includegraphics[scale=1]{E137_zoom}}
\subfigure[\;exclusion plot (linear scale)]{\includegraphics[scale=1]{E137_zoom_linear}}
\subfigure[\;relative error of exclusion boundary]{\includegraphics[scale=1.03]{E137_relative_error}}
\caption{\label{fig:E137} (a)--(c) Exclusion (shaded region) plot for $\epsilon$ using the beam dump experiment E137. The solid green, dashed red, and dotted blue lines correspond to the differential cross section with no, WW, and IWW approximation, respectively. (d) The solid red and dashed blue lines correspond to the relative error of the exclusion boundary with the WW and IWW approximation, respectively. The thin and thick lines correspond to the upper and lower boundaries of the exclusion plot, respectively.}
\end{figure}
In Fig.~\ref{fig:E137}, we show regions of coupling and mass excluded by the lack of a signal at E137, using the three different ways to calculate the differential cross section, $d\sigma/dx$. Because of the exponential factor from decay and absorption lengths, the error in the exclusion plot due to making approximations to the cross section is smaller along the upper boundary, which is mainly determined by whether $\phi$ lives long enough to make it to the detector. With the WW approximation, the 100\% error in the cross section causes an error of less than 20\% along the lower boundary, and in a log-log plot spanning several decades a 20\% error is almost indistinguishable by eye. On the other hand, with the IWW approximation, the difference is clearly visible when $m_\phi\gtrsim 100$ MeV. We emphasize that the similarity of the exclusions with or without the approximations in a log-log plot means that the cross section approximations are good at the order-of-magnitude level, although the relative error can reach the ${\cal O}\left(1\right)$ level.
In Fig. \ref{fig:E137}, we see that the absorption process, Eq. (\ref{eq:absorption}), cuts off the exclusion plot around $\epsilon\sim\mathcal{O}(1)$, where the coupling of $\phi$ to electrons is of the same order as the electromagnetic coupling. In this region, therefore, there is another significant process to consider for beam dump experiments: the trapping process due to the rescattering
\begin{align}
e(p)+\phi(k)\rightarrow e(p')+\phi(k').
\end{align}
The trapping process is expected to be as important as the absorption process in this example (new scalar particle, beam dump E137), and also cuts off the exclusion plot around $\epsilon\sim\mathcal{O}(1)$. However, in Fig. \ref{fig:E137} the region where $\epsilon>10^{-3}$ has already been excluded by other experiments, such as the electron $g-2$ \cite{Pospelov:2008zw,Bouchendira:2010es} and the hydrogen Lamb shift \cite{Eides:2000xc} (both discussed in Ref.~\cite{Liu:2016qwd}), as well as by astrophysical processes \cite{Essig:2013lka}. Therefore we do not include the trapping process in this example, but it might be crucial for other experiments.
\section{A positive signal}\label{sec:data analysis}
To further explore the accuracy of the approximations to the cross section, let us imagine that there is a signal of a new particle being produced at a beam dump experiment. In such a case, the mass and the coupling of this particle can be determined by examining the data, {\it i.e.}, the distribution of events as a function of energy deposited in the detector. We perform three sets of pseudoexperiments using the setup of E137, assuming that the scalar boson exists with $(m_\phi,\epsilon)=(110{\rm\;MeV,10^{-7}})$, $(m_\phi,\epsilon)=(200{\rm\;MeV,1.3\times10^{-7}})$, and $(m_\phi,\epsilon)=(0.3{\rm\;MeV,8\times 10^{-6}})$, which lie outside of the current exclusion in Fig. \ref{fig:E137}. We increase the incoming beam luminosities by factors of 36, 36, and 137 (increasing the total number of electrons dumped into the target), so that the expected total numbers of events are around 100, 100, and 400, respectively. We assume that the resolution of the detector is 1 GeV (which means that there are 18 bins) and generate the ``observed'' number of events in each bin using a Poisson distribution with the mean value from the complete calculation. Finally, we fit the ``observed'' data with the calculations with no, WW, and IWW approximation using a $\chi^2$ test, assuming that the variance of the calculated value also follows a Poisson distribution ({\it i.e.}, we ignore systematic errors on the observed numbers of events for simplicity). The definition of $\chi^2$ is therefore
\begin{align}
\chi^2=\sum_i\frac{(N_{cal,i}-N_{obs,i})^2}{\sigma^2_i}=\sum_i\frac{(N_{cal,i}-N_{obs,i})^2}{N_{cal,i}+N_{obs,i}}
\end{align}
where $N_{cal}$ and $N_{obs}$ are the calculated and ``observed'' numbers of events, and the subscript $i$ labels the bins. Since there are two independent parameters (mass and coupling) to fit, the $1\sigma$ and $2\sigma$ ranges correspond to $\Delta\chi^2=2.30$ and $\Delta\chi^2=6.18$, where $\Delta\chi^2=\chi^2-\chi^2_{min}$.
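Schematically, the pseudoexperiment procedure can be written as in the following Python fragment: the ``observed'' counts in each 1 GeV bin are drawn from a Poisson distribution whose mean is the prediction of the complete calculation, and the $\chi^2$ defined above is then scanned over a grid in $(m_\phi,\epsilon)$. The function \texttt{expected\_counts} is a placeholder for the binned prediction $N_{cal}$ obtained from the yield integral of the previous section; the fragment is illustrative only.
\begin{verbatim}
import numpy as np

def pseudo_data(expected_counts, m_true, eps_true, seed=0):
    # Draw 'observed' counts per 1 GeV bin from a Poisson distribution
    # whose mean is the complete-calculation prediction.
    rng = np.random.default_rng(seed)
    return rng.poisson(expected_counts(m_true, eps_true))

def chi2(n_cal, n_obs):
    # chi^2 with Poisson variances for both calculation and observation
    return np.sum((n_cal - n_obs)**2 / (n_cal + n_obs))

def scan(expected_counts, n_obs, m_grid, eps_grid):
    # Scan chi^2 over (m_phi, epsilon); the 1 sigma (2 sigma) regions
    # satisfy Delta chi^2 < 2.30 (6.18) for two fitted parameters.
    grid = np.array([[chi2(expected_counts(m, e), n_obs)
                      for e in eps_grid] for m in m_grid])
    i, j = np.unravel_index(np.argmin(grid), grid.shape)
    return grid, m_grid[i], eps_grid[j]
\end{verbatim}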
\begin{figure}
\centering
\subfigure[\;generated data]{\includegraphics[scale=1]{pseudo_exp_1_data}}
\subfigure[\;$\chi^2$ fit]{\includegraphics[scale=1.03]{pseudo_exp_1_fit}}
\caption{\label{fig:pseudo_exp_1} Assuming the scalar boson exists with $(m_\phi,\epsilon)=(110{\rm\;MeV,10^{-7}})$ and is observed in E137 with 36 times the luminosity. (a) The distribution of the number of events with respect to the energy of the scalar boson: the thin red line is obtained from the complete calculation (no approximation), and the thick black line is the ``data'' generated from a Poisson distribution with mean value given by the complete calculation. (b) The best fit point, $1\sigma$ range, and $2\sigma$ range with no, WW, and IWW approximation: the star is the ``true'' value; the circle, triangle, and squares are the best fit parameters with no, WW, and IWW approximation, respectively; the black, dashed red, and dotted blue inner (outer) loops correspond to the $1\sigma$ ($2\sigma$) ranges with no, WW, and IWW approximation, respectively; the shaded area is the excluded region with no approximation from Fig. \ref{fig:E137}. The top and bottom rows correspond to the results of two separate pseudoexperiments.}
\end{figure}
\begin{figure}
\centering
\subfigure[\;generated data]{\includegraphics[scale=1]{pseudo_exp_2_data}}
\subfigure[\;$\chi^2$ fit]{\includegraphics[scale=1]{pseudo_exp_2_fit}}
\caption{\label{fig:pseudo_exp_2} Assuming the scalar boson exists with $(m_\phi,\epsilon)=(200{\rm\;MeV,1.3\times10^{-7}})$ and is observed in E137 with 36 times the luminosity. See the caption of Fig. \ref{fig:pseudo_exp_1} for details.}
\end{figure}
\begin{figure}
\centering
\subfigure[\;generated data]{\includegraphics[scale=0.75]{pseudo_exp_3_data}}
\subfigure[\;$\chi^2$ fit]{\includegraphics[scale=0.75]{pseudo_exp_3_1_fit}}
\subfigure[\;$\chi^2$ fit (zoomed in)]{\includegraphics[scale=0.75]{pseudo_exp_3_2_fit}}
\subfigure[\;$\chi^2$ fit (change of coordinate)]{\includegraphics[scale=0.76]{pseudo_exp_3_3_fit}}
\caption{\label{fig:pseudo_exp_3} Assuming the scalar boson exists with $(m_\phi,\epsilon)=(0.3{\rm\;MeV,8\times 10^{-6}})$ and is observed in E137 with 137 times the luminosity. (a)--(c) See the caption of Fig. \ref{fig:pseudo_exp_1} for details. (d) Change of coordinates for the $\chi^2$ fit plot: $X=\ln\frac{m_0}{\rm 1\;GeV}+\ln\frac{m_\phi}{m_0}\cos\theta-\ln\frac{\epsilon}{\epsilon_0}\sin\theta$ and $Y=\ln\epsilon_0+\ln\frac{m_\phi}{m_0}\sin\theta+\ln\frac{\epsilon}{\epsilon_0}\cos\theta$, where $\theta=42.4^{\circ}$, $m_0=0.1$ MeV, and $\epsilon_0=2\times10^{-5}$. This corresponds to rotating the coordinates by $42.4^{\circ}$ about $(m_\phi,\epsilon)=(0.1{\rm\;MeV,2\times 10^{-5}})$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.8]{pseudo_exp_3_fix_coupling}
\caption{\label{fig:pseudo_exp_3_2} Assuming the scalar boson exists with $(m_\phi,\epsilon)=(0.3{\rm\;MeV,8\times 10^{-6}})$ and is observed in E137 with 137 times the luminosity. The number of events distribution is the same as in Fig. \ref{fig:pseudo_exp_3}. The value of $\chi^2$ with respect to $m_\phi$ (assuming $\epsilon$ is precisely measured): the black, dashed red, and dotted blue lines correspond to the $\chi^2$ values calculated with no, WW, and IWW approximation, respectively. The minimum of $\chi^2$ corresponds to the best-fit $m_\phi$; the circle dots $\odot$ correspond to the 1$\sigma$ range ($\Delta\chi^2=1$); the circle crosses $\otimes$ correspond to the 2$\sigma$ range ($\Delta\chi^2=4$). The gray vertical line indicates the true value of $m_\phi$.}
\end{figure}
We show the results of these pseudoexperiments with $(m_\phi,\epsilon)=(110{\rm\;MeV,10^{-7}})$ in Fig. \ref{fig:pseudo_exp_1}, $(m_\phi,\epsilon)=(200{\rm\;MeV,1.3\times10^{-7}})$ in Fig. \ref{fig:pseudo_exp_2}, and $(m_\phi,\epsilon)=(0.3{\rm\;MeV,8\times 10^{-6}})$ in Fig. \ref{fig:pseudo_exp_3}. We see that the ``true'' parameter values lie within the $1\sigma$ allowed regions when fitting with the complete calculation. On the other hand, although the approximations sometimes give a fairly good estimate of the cross section, the ``true'' parameter values lie outside the $2\sigma$ ranges obtained when fitting with them. It is worth noting that the $1\sigma$ and $2\sigma$ regions are elongated roughly along the exclusion boundary in Fig. \ref{fig:E137}, because the exclusion boundary is an isocontour of the number of events.
Next, we consider another scenario for the third pseudoexperiment with $(m_\phi,\epsilon)=(0.3{\rm\;MeV,8\times 10^{-6}})$. In this part of parameter space, the allowed coupling and mass can extend over roughly an order of magnitude. To illustrate the usefulness of the complete calculation, we perform fits to these data assuming that there is another experimental result that can sensitively measure the coupling. This would be the case if recently proposed experiments involving decays of radioactive nuclei underground see a nonzero signal~\cite{Izaguirre:2014cza,Liu:2016qwd}, in which case we could use the beam dump experiment to determine the mass precisely. For simplicity, we assume that the other experiment measures the coupling with negligible error. Since there is one parameter to fit, the $1\sigma$ and $2\sigma$ ranges correspond to $\Delta\chi^2=1$ and $\Delta\chi^2=4$. We show the results in Fig. \ref{fig:pseudo_exp_3_2}. Again, as expected, we see that the ``true'' parameter values lie within the $1\sigma$ allowed region when fitting with the complete calculation. Using the approximations, the ``true'' mass lies outside the $2\sigma$ ranges. We observe that using the complete calculation could be crucial in measuring the mass of a new particle in this region of parameter space.
\section{discussion}\label{sec:discussion}
Our results are based on a new scalar boson motivated by the proton radius puzzle~\cite{Liu:2016qwd}. However, we expect that the qualitative description remains similar for other types of particles, such as pseudoscalars and vectors. While the production amplitude, decay length, and absorption length can differ in detail for particles with different quantum numbers, they are qualitatively similar. The approximations that we have examined deal with the phase space integral and the coupling to the electromagnetic field of the target nucleus. Therefore, we expect results for other bosons to be similar to those in the scalar case. The similarity of our exclusion plot to the vector case \cite{Bjorken:2009mm,Andreas:2012mt} provides evidence in favor of this. Including a coupling to the muon may change the situation for $m_\phi>2m_\mu$~\cite{Liu:2016qwd} due to the opening of a new channel with typically a substantial partial width. A study of the production of vector particles in electron beam dumps that deals with some of the issues we have addressed can be found in Ref.~\cite{Beranek:2013yqa}.
There are some other beam dump experiments using a Cherenkov detector, such as E141 \cite{Riordan:1987aw} and Orsay \cite{Davier:1989wz}. Their exclusion plots do not extend to the region where $m_\phi<2m_e$. We show the results of the beam dump experiments E141 and Orsay in Ref.~\cite{Liu:2016qwd}.
In this work, we present a complete analysis of beam dump experiments. We show that a brute-force analytical calculation is possible. Software exists using Monte-Carlo simulations, such as \textsc{MadGraph/MadEvent}~\cite{Alwall:2007st} as used in, e.g.,~\cite{Essig:2010xa}, that can calculate the cross section without using approximations. Our work can be used as a consistency check for Monte-Carlo simulations. We show that the WW approximation can be trusted at the order-of-magnitude level in cross sections and exclusion plots. Additionally, our work allows us to understand the errors introduced by the various common approximations. In certain regions of parameter space different errors partially cancel against each other, leading to results that are sometimes accidentally more accurate than might be expected. However, as we illustrated with several pseudoexperiments in a range of masses, in the event of a nonzero signal, a complete calculation can give very different results from the approximations. This could be useful given the possibility of future electron beam dump experiments~\cite{future}.
\section*{Acknowledgement}
We acknowledge J. Detwiler and R. Essig for invaluable discussions and suggestions. The work of G. A. M. and Y.-S. L. was supported by the U. S. Department of Energy Office of Science, Office of Nuclear Physics under Award Number DE-FG02-97ER-41014. The work of D.M. was supported by the U.S. Department of Energy under Grant No. DE-SC0011637.
\label{intro}
{\sc Mercury} \citep{Chambers1999} is a general-purpose software package,
written in {\sc fortran77}, for doing N-body integrations to investigate dynamical problems in Astronomy. In a set of simulations to study accretion dynamics using {\sc Mercury} \citep{Torres2008}, it was found that the results contained fewer planets than expected. Analysis of output files and creation of scripts and movies to carefully follow the processes showed discontinuities in the number of embryos during the integrations. No events for these bodies were
being registered in the output files and their disappearance was non-physical.
\section{The Problem}
\label{problem}
In {\sc mercury}, an integer variable array called {\sc stat} keeps track of the statuses of particles: whether they are alive, have been involved in a collision or have been ejected from the system. At the beginning of an old or new integration, all the bodies must have their {\sc stat} values equal to zero (Table~\ref{stat}). When a live body suffers a collision or is ejected, its {\sc stat} value is changed accordingly and the body is removed by the subroutine called {\sc mxx\_elim}.
In the code, the {\sc stat} array is not initialised, meaning its elements may contain arbitrary values. If these values are negative, then not only are bodies that have been involved in an event removed, but also those with invalid negative values in their respective {\sc stat} elements, causing the non-physical disappearance of some bodies during the integrations.
\begin{table}
\centering
\caption{Valid values for the variable STAT in the {\sc Mercury} integrator.}
\begin{tabular}{@{}cc@{}}
\hline
{\sc stat} value & Body status\\
\hline
0 & Alive\\
-2 & Collided\\
-3 & Ejected\\
\hline
\label{stat}
\end{tabular}
\end{table}
\subsection{Characterising the problem}
\label{charac}
Not all simulations are affected by this problem. Some {\sc fortran} compilers (e.g. \textit{ifort}) implicitly initialise variables in their declaration statements \citep{Fortran}. Using one of them, an integer array would have all its elements set to zero at its declaration, even without an explicit initialisation in the source code. Other compilers (e.g. \textit{g77} and \textit{gfortran}) begin initialising the early elements of an integer array with zero when the array is beyond a certain size; this size will be compiler and machine environment dependent. With these compilers, the results will be affected by the non-initialisation of the {\sc stat} array if uninitialised elements are used; it is this behaviour that we investigate further.
The {\sc stat} array has its size initially defined by the parameter {\sc nmax}, a maximum number of bodies set by the user in the configuration file {\sc mercury.inc}. During execution of {\sc mercury}, the number of {\sc stat} elements used is equivalent to the actual number of bodies ({\sc nbod}) in the simulation. If {\sc stat} had been initialised then {\sc nmax} would need merely to be equal to or greater than {\sc nbod}. However, when using a compiler such as \textit{g77} or \textit{gfortran}, problems will be encountered unless {\sc nmax} is greater than {\sc nbod} by several hundreds.
Using a \textit{g77} compiler, simulations with 25 different initial conditions (Table~\ref{sims}), including the example given by {\sc Mercury}'s author \citep{Chamberssite}, were run one or more times with different values of {\sc nmax}. An audit was then conducted of the number of bodies, and a pattern emerged: simulations with a value of {\sc nmax} considerably higher than the number of bodies did not have problems; bodies disappeared from simulations with a comparatively small {\sc nmax} value.
A set of 12 simulations, with different numbers of bodies, was used to find the minimum limit of {\sc nmax} required for a problem-free execution (marked with a $*$ in Table~\ref{sims}). Two different environments were used: Cygwin with GNU Bash, version 3.2.39(19)-release, and Debian 4.0 with kernel 2.6.18-5-amd64. For each execution, a short integration time (about 100 years) was used. The value of {\sc nmax} was varied in each simulation until the lower limit required to avoid losing bodies in a non-physical manner was found. The lower limit was found to be proportional to the number of bodies (Figure~\ref{values}). As the number of bodies tends to zero, the lower limit of {\sc nmax} tends toward a non-zero value around 500--700, the exact value depending on the computing environment.
\begin{table*}
\centering
\begin{minipage}{160mm}
\centering
\caption{Tests made to characterise the discontinuity problem in Mercury's results. $^a$}
\begin{tabular}{@{}cccccccc@{}}
\hline
Big Bodies & Small Bodies & Giant Planets & Collisions & Central & Style & Algorithm & Times\\
\hline
0 & 1 & 0 & no & Sun & Cartesian & Hybrid & x 1\\
$*$1 & 0 & 0 & no & Sun & Cartesian & Hybrid & x 1\\
$*$1 & 1 & 0 & no & Sun & Cartesian & MVS & x 1\\
9 & 204 & 4 & no & Sun & Cartesian & BS & x 3\\
9 & 204 & 4 & yes & Sun & Cartesian & Hybrid & x 1\\
9 & 0 & 4 & no & Sun & Cartesian & Hybrid & x 1\\
$*$11 & 1 & 0 & no & Jupiter & Asteroidal & BS & x 2\\
11 & 1 & 0 & yes & Jupiter & Asteroidal & Hybrid & x 2\\
$*$11 & 4 & 0 & yes & Jupiter & Asteroidal & Hybrid & x 1\\
$*$11 & 6 & 0 & no & Jupiter & Asteroidal & Hybrid & x 1\\
14 & 6 & 4 & yes & Sun & Asteroidal & BS2 & x 2\\
$*$14 & 2 & 4 & yes & Sun & Asteroidal & BS2 & x 1\\
18 & 200 & 4 & yes & Sun & Asteroidal & MVS & x 1\\
68 & 204 & 1 & yes & Sun & Asteroidal & Hybrid & x 2\\
69 & 204 & 2 & yes & Sun & Asteroidal & Hybrid & x 3\\
72 & 204 & 1 & yes & Sun & Asteroidal & Hybrid & x 1\\
73 & 204 & 2 & yes & Sun & Asteroidal & Hybrid & x 3\\
88 & 204 & 1 & yes & Sun & Asteroidal & Hybrid & x 3\\
$*$88 & 1525 & 1 & yes & Sun & Asteroidal & Hybrid & x 2\\
89 & 204 & 2 & yes & Sun & Asteroidal & Hybrid & x 9\\
$*$89 & 500 & 2 & yes & Sun & Asteroidal & Hybrid & x 1\\
$*$89 & 800 & 2 & yes & Sun & Asteroidal & Hybrid & x 1\\
$*$89 & 1000 & 2 & yes & Sun & Asteroidal & Hybrid & x 1\\
$*$89 & 1300 & 2 & yes & Sun & Asteroidal & Hybrid & x 1\\
$*$89 & 1450 & 2 & yes & Sun & Asteroidal & Hybrid & x 1\\
\hline
\label{sims}
\end{tabular}
\footnotetext{$^a$ Columns are: number of big bodies, number of small bodies, number of giant planets, whether collisions are allowed, central body, style of data input, integrator algorithm, and the number of times the same simulation was run with small differences in the bodies' positions. Simulations used for Figure \ref{values} are marked by *.}
\end{minipage}
\end{table*}
\begin{figure*}
\centering
\epsfig{file=values,width=9.8cm,angle=270}
\caption{Minimum value of {\sc nmax} required for a problem-free execution as a function of number of bodies. The value depends somewhat on the environment; two were tested: Cygwin with GNU Bash, version 3.2.39(19)-release, and Debian 4.0 with kernel 2.6.18-5-amd64.}
\label{values}
\end{figure*}
The best fit for the points in the Cygwin environment is:
\begin{equation}
f(x) = 1.00341x + 542.708
\end{equation}
and in the Debian environment:
\begin{equation}
g(x) = 1.00009x + 637.329
\end{equation}
where $x$ is the total number of bodies. These functions are essentially the number of bodies plus an offset. These tests offer a basic way to verify whether old or recent simulations could be affected by the problem with the {\sc stat} variable's values. We advise that any simulation with a value of the parameter {\sc nmax} close to these limits have its results checked, if the compiler used was \textit{g77}, \textit{gfortran} or one with similar features.
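As a trivial illustration of this check (written in Python merely for convenience), the fits above can be wrapped in a short script; the offsets are compiler and environment dependent, so the returned value should be treated as indicative only.
\begin{verbatim}
def min_safe_nmax(n_bodies, environment="debian"):
    # Minimum NMAX suggested by the linear fits above; the offsets
    # are compiler and environment dependent, so this is indicative.
    slope, offset = {"cygwin": (1.00341, 542.708),
                     "debian": (1.00009, 637.329)}[environment]
    return slope * n_bodies + offset

# Example: for a 300-body run with g77 on Debian the fit gives ~937,
# so NMAX should be chosen comfortably above this value.
print(min_safe_nmax(300))
\end{verbatim}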
A simple test program was run, in which a variable array was declared but not initialised and its values output. The array values were mostly large positive, large negative and zero. Varying the size of the array, it was found that beyond a limiting size the values at the {\em beginning} of the array were 0. Figure~\ref{DPlot} shows this relation when using the \textit{g77} compiler in a
Debian environment; when the number of elements exceeded 696, zeros began appearing at the beginning of the array. Therefore, if {\sc nmax} is greater than the sum of this minimum limit plus the number of bodies then no bodies will disappear non-physically.
The test was repeated with the \textit{gfortran} and \textit{ifort} compilers. With \textit{gfortran} the minimum limit was found to be 628. With \textit{ifort} the minimum value was found to be 0 as the compiler initialises all array elements to 0.
\begin{figure*}
\centering
\epsfig{file=DPlot,width=9.8cm,angle=270}
\caption{The number of elements that are and are not initialised to zero as a function of the number of elements, when using the \textit{g77} compiler under a Debian environment. As the array size increases (abscissa), the number of uninitialised elements increases steadily, with a smaller number of initialised elements spread throughout the array. When the number of elements passes 696, the number uninitialised remains constant and the number initialised increases linearly. Beyond this value, initialised elements are added at the beginning of the array, pushing the uninitialised elements to the end. The result is similar when using \textit{gfortran} but the turnoff occurs at 628 elements. If {\sc Mercury} has been compiled with such a compiler then the {\sc nmax} value would have to have been greater than the turnoff value plus the number of bodies in order for bodies not to have disappeared in a non-physical manner.}
\label{DPlot}
\end{figure*}
\section{Solution}
\label{solution}
Except when the {\sc fortran} compiler used is one that performs unconditional implicit initialisation, a variable that is not explicitly initialised can behave unpredictably. In order to solve this problem, the variable array STAT must be initialised to zero at some point inside the file \textit{Mercury6\_2.for}, before the block of commands for the main calculations. We propose that this initialisation be done in the subroutine MIO\_IN, before (line 6046) or after (line 6111) the block of commands ``Check for attempts to do incompatible things''. Placing the command outside of the if statement for new and old integrations (lines 5911-6045) guarantees that it is executed in both cases. The initialisation could be done with the three lines:
\begin{tabbing}
do j=2, nbod\\
\hspace{3mm}STAT(j) = 0\\
end do\\
\end{tabbing}
With this command executed each time an integration starts afresh or from dump files, the variable {\sc stat} will not contain random values, independent of the value of the parameter {\sc nmax} (which must still respect the basic rule {\sc nmax} $\geq$ {\sc nbod}), the machine environment, or the {\sc fortran} compiler.
The initialisation could be done also in other points of the code, but one must be sure it is being done before the main calculations for both old and new integrations.
\section{Conclusions}
\label{conclusions}
We repeated all the tests shown in Section \ref{charac} with a corrected version of {\sc Mercury}, initialising the variable array {\sc stat} at the end of the subroutine MIO\_IN in the source code. No discontinuities were seen in any of the results, independent of the conditions. The new version is now being used in 40 new simulations of accretion dynamics. We believe this small change improves the program, making it more reliable for any type of N-body simulation. The corrected version can be downloaded from \url{http://www.astro.keele.ac.uk/~dra/mercury/}.
\section*{Acknowledgements}
{\small We are grateful to Sean Raymond for identifying an issue with simulations' results that led to the discovery of the bug.}
\bsp~The MNRAS class file (\copyright Blackwell Science 2001) and the MNRAS bibliography style file were used in the preparation of this paper; they are available at \url{http://www.blackwellpublishing.com/static/mnras_latex.asp}.
\bibliographystyle{mn2e}
Extrasolar planet detections and analysis of the non-detections
further our knowledge of the planet formation process and contribute
to an empirical determination of the typical planetary system. These
empirical constraints will eventually decide the ubiquity or rarity of
planetary bodies in the Universe. A variety of techniques exist to
detect extrasolar planets \citep{PER00}, and there are currently 168
extrasolar planet
discoveries\footnote{http://www.obspm.fr/encycl/catalog.html}.
Most of the extrasolar planets have been discovered using the radial
velocity technique. Radial velocity detections indicate that
$1.2\%\pm 0.3\%$ of FGK main-sequence stars in the solar neighborhood
have a ``Hot'' Jupiter-mass planet (HJ) orbiting within 0.1 AU
\citep{MAR05}. At this small separation from the central star, the
high temperatures and low disk column densities prevent in situ
formation of HJ planets \citep{BOD00}. Several mechanisms exist to
exchange angular momentum between the protoplanet and natal disk,
enabling the protoplanet to migrate from a more likely formation
separation (several AU) to within 0.1 AU \citep{TER00}. Due to tidal
circularization, HJ have nearly circular orbits with an observed
median eccentricity, $<e>=0.07$, whereas planets with larger
separations have a median eccentricity of $<e>=0.25$.
In addition to the detection statistics and planet properties, the
extrasolar planet detections indicate several physical relationships
between the stellar host properties and the frequency of extrasolar
planets. The most striking of these is that the probability for hosting
an extrasolar planet increases rapidly with stellar metal abundance,
consistent with $P\propto N_{\rm Fe}^{2}$ \citep{FIS05}. The
frequency of planets may also depend on the stellar mass.
\citet{BUT04} and \citet{BON05} point out that there exists a deficit
of $M_{J}$ planets orbiting M dwarf stars. However, the increasing
number of short-period Neptune-mass planets being discovered around M
dwarfs suggests that the overall frequency of planets (of all masses)
orbiting M dwarfs may be similar to FGK dwarfs, but the typical planet
mass is less, thereby escaping detection given the detection
limitations of the current radial velocity surveys \citep{BON05}.
Additionally, none of the M dwarfs harboring planets are metal rich
\citep{BON05}.
A coherent theory of planet formation and survival requires not only
reproducing the physical properties of the planets, but reproducing
any trends in the physical properties on the host environment.
Despite the knowledge and constraints on extrasolar planets that
radial velocity surveys provide, radial velocity surveys have their
limitations. The high resolution spectroscopic requirements of the
radial velocity technique limit its use to the solar neighborhood and
orbital periods equivalent to the lifetime of the survey. A full
consensus of the planetary formation process requires relying on
additional techniques to detect extrasolar planets in a larger variety
of conditions prevalent in the Universe.
For instance, microlensing surveys are sensitive to extrasolar planets
orbiting stars in the Galactic disk and bulge with distances of many
kpc away (\citealt{MAO91,GOU92}). Two objects consistent with
Jupiter-mass companions have been detected via the microlensing
technique \citep{BON04,UDA05}. Additional information is obtained
from studying the microlensing events that did not result in
extrasolar planet detections. Microlensing surveys limit the fraction
of M dwarfs in the Galactic bulge with Jupiter-mass companions
orbiting between 1.5 AU to 4 AU to $<33\%$ (\citealt{ALB01,GAU02}).
Although limited to the solar neighborhood, attempts to directly image
extrasolar planets are sensitive to planets with semimajor axis beyond
20 AU. The light from the parent star limits detecting planets
interior to the seeing disk. Adaptive optics observations of young
($\sim 1$ Myr) stars provide the best opportunity to directly image
extrasolar planets since the young planets are still relatively bright
while undergoing a rapid, cooling contraction. Although the
interpretation relies on theoretical modeling of these complex
planetary objects, three sources in nearby star forming regions
have been detected whose broad-band colors and spectra are consistent
with those expected from 1-42 Jupiter-mass objects \citep{NEU05,CHA05A,CHA05B}. The contrast
ratios necessary for extrasolar planet detection are difficult to reach, and
results for detecting higher mass brown dwarfs are more complete. An
analysis of the Cornell High-Order Adaptive Optics Survey (CHAOS)
derives a brown dwarf companion upper limit of 10\% orbiting between
25 and 100 AU of the parent star \citep{CAR05}. \citet{MCC04}
estimate $1\%\pm 1\%$ of G,K, and M stars have brown dwarf companions
orbiting between 75 and 300 AU, but this estimate may not account for
the full range of orbital inclination and eccentricities possible
\citep{CAR05}. At greater separations, $>$ 1000 AU, brown dwarf companions
to F-M0 main-sequence stars appear to be as common as stellar companions \citep{GIZ01}.
After the radial velocity technique, the transit technique has had the
most success in detecting extrasolar planets \citep{KON05}. The
transit technique can detect $R_{J}$ transits in any stellar
environment where $\lesssim$1\% photometry is possible. Thus, it
provides the possibility of detecting extrasolar planets in the full
range of stellar conditions present in the Galaxy: the Solar neighborhood,
the thin and thick disk, open clusters, the halo, the bulge, and globular clusters
are all potential targets for transit surveys. A major advantage of
the transit technique is the current large-format mosaic CCD imagers
which provide multiplexed photometric measurements with sufficient accuracy
across the entire field of view.
The first extrasolar planet detections
via the transit technique began with the candidate list provided by
the OGLE collaboration \citep{UDA02}. However, confirmation of the
transiting extrasolar planet candidates requires radial velocity
observations. Due to the well known equation-of-state competition
between electron degeneracy and ionic Coulomb pressure, the radius of
an object becomes insensitive to mass across the entire range from
below $M_{J}$ to the hydrogen-burning limit \citep{CHA00}. Thus,
objects revealing a $R_{J}$ companion via transits may actually have a
brown-dwarf mass companion when followed up with radial velocities.
This degeneracy is best illustrated by the planet-sized brown dwarf
companion to OGLE-TR-122 \citep{PON05}. The first radial-velocity
confirmations of planets discovered by transits \citep{KON03,BOU04}
provided a first glimpse at a population of massive, very close-in
planets with $P<$ 3 days and $M_{p}>M_{J}$ (``Very Hot Jupiters'' -
VHJ) that had not been seen by radial velocity surveys.
\citet{GAU05A} demonstrated that, after accounting for the strong
sensitivity of the transit surveys to the period of the planets, the
transit detections were likely consistent with the results from the
radial velocity surveys, implying that VHJs were intrinsically very
rare. Subsequently, in a metallicity-biased radial velocity survey,
\citet{BOU05B} discovered a VHJ with $P=2.2$ day around the bright star
HD189733 that also has observable transits.
Despite the dependence of transit detections on radial velocity
confirmation, radial velocity detections alone only result in a lower
limit on the planetary mass, and thus do not give a complete picture
of planet formation. The mass, radius information directly constrains the
theoretical models, whereas either parameter alone does little to
further constrain the important physical processes that shape the
planet properties \citep{GUI05}. For
example, the mass-radius relation for extrasolar planets
can constrain the size of the rocky core present (e.g., \citealt{LAU05}). Also, the planet
transiting across the face of its parent star provides the exciting
potential to probe the planetary atmospheric absorption lines
against the stellar spectral features \citep{CHA02,DEM05,NAR05}. Or,
in the opposite case, emission from the planetary atmosphere
can be detected when the planet orbits behind the parent star
\citep{CHA05,DEM05B}.
Despite these exciting results, the transit technique is significantly hindered by the
restricted geometrical alignment necessary for a transit to occur.
As a result, a transit survey necessarily contains at least an order of magnitude more
non-detections than detections. In addition, null results themselves can provide
important constraints. For example, the null result
in the globular cluster 47 Tucanae adds important empirical
constraints to the trend of increasing probability of having a
planetary companion with increasing metallicity \citep{GIL00,SAN04}.
Thus, understanding the sensitivity of a given transit survey, i.e.\ the
expected rate of detections and non-detections, takes on
increased importance. Several studies have taken steps toward
sophisticated Monte Carlo calculations to quantify detection
probabilities in a transit survey \citep{GIL00,WEL05,MOC05,HID05,HOO05}.
Unfortunately, these studies do not fully characterize the sources of
error and systematics present in their analysis, and therefore the
reliability of their conclusions is unknown. Furthermore, essentially
all of the previous studies have either (1) not accurately determined the number
of dwarf main-sequence stars in their sample, or (2) made simplifying
assumptions which may lead to misestimated detection probabilities, or (3)
contained serious conceptual errors in the procedure with
which they have determined detection probabilities, or (4) some
combination of the above.
As a specific and important example, most studies do not apply
identical selection criteria when searching for transits amongst the
observed light curves and when recovering injected transits as part of
determining the survey sensitivity. Removal of false-positive transit
candidates arising from systematic errors in the light curve has typically
involved subjective visual inspections, and these subjective criteria
have not been applied to the recovery of injected transits when
determining the survey sensitivity. This is statistically incorrect,
and can in principle lead to overestimating the survey sensitivity.
Even if identical selection criteria are applied to the original
transit search and in determining the survey sensitivity, some surveys
do not apply conservative enough selections to fully eliminate
false-positive transit detections.
In this paper, we address these shortcomings of previous studies
in our analysis of a 19-night photometric search for transiting
extrasolar planets in the open cluster NGC 1245. An automated
transit search algorithm with quantitative selection criteria finds
six transit candidates; none are bona fide planetary transits.
We describe our Monte Carlo calculation to robustly determine
the sensitivity of our survey, and use this to derive upper limits
on the fraction of cluster members with close-in, Jupiter-radii,
$R_{J}$, companions.
Leading up to the process of calculating the upper limit, we develop
several new analysis techniques. First, we develop a differential
photometry method that automatically selects comparison stars to
reduce the systematic errors that can mimic a transit signal. In
addition, we formulate quantitative transit selection criteria, which
completely eliminate false positives due to systematic light-curve variability
without human intervention. We characterize the survey detection
probability via Monte Carlo injection and boxcar recovery of transits.
Distributing the Monte Carlo calculation to multiple processors
enables rapid calculation of the transit detection probability for a
large number of stars.
The techniques developed here enable combining results from transit
surveys in a statistically meaningful way. This work is part of the
Survey for Transiting Extrasolar Planets in Stellar Systems (STEPSS).
The project concentrates on stellar clusters since they provide a
large sample of stars of homogeneous metallicity, age, and distance
\citep{BUR03,BUR04}. Overall, the project's goal is to assess the
frequency of close-in extrasolar planets around main-sequence stars in
several open clusters. By concentrating on main-sequence stars in
open clusters of known (and varied) age, metallicity, and stellar
density, we will gain insight into how these various properties affect
planet formation, migration, and survival.
The survey characteristics and the photometric procedure are given in
\S\ref{OBS}. We explain the automated algorithm to calculate the
differential light curves and describe the light curve noise
properties in \S\ref{LC}. In \S\ref{trandetect} we describe our
implementation of the box-fitting least squares (BLS) method
\citep{KOV02} for transit detection. In \S\ref{sec:selcrit} we
present a thorough discussion of the quantitative selection criteria
for transit detection, followed by a discussion of the objects with
sources of astrophysical variability that meet the selection criteria
in \S\ref{trncands}. We outline the Monte Carlo calculation for
determining the detection probability of the survey in
\S\ref{effcalc}. We present upper limits for a variety of companion
radii and orbital periods in \S\ref{results}. A discussion of the
random and systematic errors present in the technique is given in
\S\ref{uplimiterrsec}. We compare the final results of this study to
our expected detection rate before the survey began and discuss the
observations necessary to reach sensitivities similar to
radial velocity detection rates in \S\ref{discussion}.
Finally, \S\ref{conclusion} briefly summarizes this work.
\section{Observations and Data Reduction\label{OBS}}
\subsection{Observations}
We observed NGC 1245 for 19 nights between 24 Oct. and 11 Nov. of 2001
using the MDM 8K mosaic imager on the MDM 2.4m Hiltner telescope. The
MDM 8K imager consists of a 4x2 array of thinned, 2048x4096, SITe
ST002A CCDs \citep{CRO01}. This instrumental setup yields a
26$\arcmin$x26$\arcmin$ field of view and 0.36$\arcsec$ per pixel
resolution in 2x2 pixel binning mode. Table~\ref{obsdat24} has an
entry for each night of observations that shows the number of
exposures obtained in the cousins $I$-band filter, median
full-width-at-half-maximum (FWHM) seeing in arcseconds, and a brief
comment on the observing conditions. In total, 936 images produced
usable photometry with a typical exposure time of 300 s.
\subsection{Data Reduction}
We use the IRAF\footnote{IRAF is distributed by the National Optical
Astronomy Observatories, which are operated by the Association of
Universities for Research in Astronomy, Inc., under cooperative
agreement with the National Science Foundation.} CCDPROC task for all
CCD processing. The read noise measured in zero-second images taken
consecutively is consistent with read noise measured in zero-second
images spread through the entire observing run. Thus, the stability
of the zero-second image over the course of the 19 nights allows
median combining 95 images to determine a master, zero-second
calibration image. For master flat fields, we median combine 66
twilight sky flats taken throughout the observing run. We quantify
the errors in the master flat field by examining the night-to-night
variability between individual flat fields. The small-scale,
pixel-to-pixel variations in the master flat fields are $\sim 1\%$,
and the large-scale, illumination-pattern variations reach the 3\%
level. The large illumination-pattern error results from a
sensitivity in the illumination pattern to telescope focus. However,
such large-scale variations do not affect differential photometry with
proper reference-star selection (as described in \S\ref{LC}).
To obtain raw instrumental photometric measurements, we employ an
automated reduction pipeline that uses the DoPHOT PSF fitting package
\citep{SCH93}. Comparable quality light curves resulted from photometry via
the DAOPHOT/ALLFRAME PSF-fitting photometry packages
\citep{STE87,STE98} in the background-limited regime. DoPHOT performs
slightly better in terms of rms scatter in the differential light
curve in the source-noise limited regime. The photometric pipeline
originated from a need to produce real-time photometry of microlensing
events in order to search for anomalies indicating the presence of an
extrasolar planet around the lens \citep{ALB98}. This study uses a
variant of the original pipeline developed at The Ohio State
University and currently in use by the Microlensing Follow Up Network
\citep{YOO04}.
In brief, the pipeline takes as input a high signal-to-noise (S/N)
``template'' image. A first pass through DoPHOT identifies the
brightest, non-saturated stars on all the images. Using these
bright-star lists, an automated routine (J.~Menzies, private
communication) determines the geometric transformation between the
template image and all the other other images. A second, deeper pass
with DoPhot on the template image identifies all the stars on the
template image for photometric measurement. The photometric procedure
consists of transforming the deep-pass star list from the template
image to each frame. These transformed positions do not vary during
the photometric solution. Next, an automated routine (J.~Menzies,
private communication) determines an approximate value for the FWHM
and sky as required by DoPHOT. Finally, DoPHOT iteratively determines
a best-fit, 7-parameter analytic PSF and uses this best-fit PSF to
determine whether an object is consistent with a single star, double
star, galaxy, or artifact in addition to the photometric measurement
of the object.
\section{Differential Photometry\label{LC}}
In its simplest form, differential photometry involves the use of a
single comparison star in order to remove the time variable
atmospheric extinction signal from the raw photometric measurements
\citep{KJE92}. The process of selecting comparison stars typically
consists of identifying an ensemble of bright, isolated stars that
demonstrate long term stability over the course of the observations
\citep{GIL88}. This procedure is sufficient for studying many
variable astrophysical sources where several percent accuracy is typically
adequate. However, after applying this procedure to a subset of the
data, systematic residuals remained in the data that were similar
enough in shape, time scale, and depth to the expected signal
from a transiting companion to result in a large number of highly
significant false-positive detections.
Removing $\lesssim$0.01 mag systematic errors resembling a transit
signal requires a time consuming and iterative procedure for selecting
the comparison ensemble. Additionally, a comparison ensemble that
successfully eliminates systematic errors in the light curve for a
particular star fails to eliminate the systematic errors in the light
curve of a different star. Testing indicates that each star has a
small number of stars, or even a single star, that is best employed
as the comparison in order to reduce the level of systematics in its
light curve. On the other hand, Poisson errors in the comparison ensemble
improve as the size of the comparison ensemble increases.
Additionally, the volume of photometric data necessitates an automated
procedure for deciding on the ``best'' possible comparison ensemble.
Given its sensitivity to both systematic and Gaussian noise and its
efficient computation, we choose to minimize the standard deviation
around the mean light curve level as the figure of merit in
determining the ``best'' comparison ensemble.
\subsection{Differential Photometry Procedure}
We balance improving systematic and Poisson errors in the light curve
using the standard deviation as the figure of merit by the following
procedure. The first step in determining the light curve for a star
is to generate a large set of trial light curves using single
comparison stars. We do not limit the potential comparison stars to
the brightest or nearby stars, but calculate a light curve using all
stars on the image as a potential comparison star. All comparison
stars have measured photometry on at least 80\% of the total number of
images. A sorted list of the standard deviation around the mean
light-curve level identifies the stars with the best potential for
inclusion in the comparison ensemble. Calculation of the standard
deviation of a light curve involves 3 iterations eliminating
3-standard-deviation outliers between iterations. However, the
eliminated measurements not included in calculation of the standard
deviation remain in the final light curve.
Beginning with the comparison star that resulted in the smallest
standard deviation we continue to add in comparison stars with
increasingly larger standard deviations. At each epoch, we median
combine the results from all the comparison stars making up the
ensemble after removing the average magnitude difference between
target and comparison. We progressively increase the number of stars
in the comparison ensemble to a maximum of 30, calculating the
standard deviation of the light curve between each increase in the
size of the comparison ensemble. The final light curve is determined
using the comparison ensemble size that minimizes the standard
deviation. Less than 1\% of the stars result in the maximum of 30
comparison stars. The median number of comparison stars is 4, with a
modal value of 1. The distribution of comparison stars has a standard
deviation around the median of 4. The fact that the standard deviation
of the majority of stars is minimized using a single comparison
star emphasizes the importance of considering all stars
as possible comparisons in order to minimize systematic errors and achieve the
highest possible accuracy.
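In outline, the ensemble selection can be expressed as in the
following Python sketch, where \texttt{mags} holds the instrumental
magnitudes with one row per star and one column per epoch (missing
measurements set to NaN) and the sigma-clipped standard deviation
plays the role of the figure of merit. This is a schematic
re-implementation for illustration, not the pipeline code itself; for
brevity it omits the requirement that comparison stars be measured on
at least 80\% of the images.
\begin{verbatim}
import numpy as np

def clipped_std(x, n_iter=3, clip=3.0):
    # Standard deviation with iterative 3-sigma clipping of outliers.
    x = x[np.isfinite(x)]
    for _ in range(n_iter):
        m, s = np.mean(x), np.std(x)
        x = x[np.abs(x - m) < clip * s]
    return np.std(x)

def best_light_curve(mags, target, max_ensemble=30):
    # mags: (n_stars, n_epochs) array of instrumental magnitudes.
    # Returns the differential light curve of `target' built from the
    # ensemble size that minimises the clipped standard deviation.
    n_stars, n_epochs = mags.shape
    # Rank candidate comparisons by the scatter of the light curve
    # obtained with that single star as the comparison.
    scores = []
    for c in range(n_stars):
        if c == target:
            continue
        diff = mags[target] - mags[c]
        diff -= np.nanmean(diff)
        s = clipped_std(diff)
        if np.isfinite(s):
            scores.append((s, c))
    scores.sort()
    ranked = [c for _, c in scores]
    best = None
    for n in range(1, max_ensemble + 1):
        ens = ranked[:n]
        diffs = mags[target] - mags[ens]
        diffs -= np.nanmean(diffs, axis=1, keepdims=True)
        lc = np.nanmedian(diffs, axis=0)   # median over the ensemble
        s = clipped_std(lc)
        if best is None or s < best[0]:
            best = (s, lc)
    return best[1]
\end{verbatim}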
\subsection{Comparison to a Similar Algorithm}
Independent of this study, \citet{KOV05} developed a generalized
algorithm for eliminating systematic errors in light curves
that shares several basic properties
with the method we have just presented. They agree with the conclusion that
optimal selection of comparison stars can eliminate systematics in the
light curve. They also use the standard deviation of the light curve
as their figure of merit (see their Equation 2). However, their more
general implementation allows for the comparison star to have a real-valued,
linear correlation coefficient ($c_{i}$ in their Equation 1) in the
differential photometry, whereas the implementation outlined here forces binary
values, 0 or 1, for the linear correlation coefficient. They solve
for the real-valued, linear correlation coefficients by minimization
of the standard deviation via matrix algebra, whereas the method given here
relies on brute force minimization of the standard deviation.
A thorough comparison of the performance between these methods has not
been done. However, we emphasize that our algorithm found that the modal number of stars in the
comparison ensemble is a single star. Their algorithm restricts the
comparison ensemble to a subset of the available stars. The
restricted comparison ensemble may not capture the systematics present
in a light curve. However, their real-valued, linear correlation
coefficients may provide the degree of freedom lacking in the
algorithm of this study necessary to cancel the systematics. Both
algorithms possess an important caveat. The figure of merit cannot
distinguish between improvements in the Poisson error or systematic
error and therefore does not guarantee optimal elimination of the
systematic deviations.
\subsection{Additional Light-curve Corrections}
Although our procedure for optimally choosing comparison stars
succeeds in dramatically reducing systematics in the light
curves, we find that some additional systematic effects nevertheless
remain. We introduce several additional corrections to the light
curves to attempt to further reduce these effects.
In good seeing, brighter stars display saturation effects, whereas
in the worst seeing, some stars display light-curve deviations that
correlate with the seeing. To correct for these effects, we fit a
two-piece, third-order polynomial to the correlation of magnitude
versus seeing. The median seeing separates the two pieces of the fit.
We first fit the good-seeing piece with the values of the polynomial
coefficients unconstrained. We then fit the poor-seeing piece, but
constrain the constant term such that the fit is continuous at the
median seeing. However, we do not constrain the first or higher order
derivatives to be continuous. In performing this fit, we excise
measurements from the light curve that would lead to a
seeing-correlation correction larger than the standard deviation of
the light curve. We use this two-piece fit to correct the
measurements.
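A minimal sketch of this two-piece fit, assuming NumPy and hypothetical variable names (\texttt{dmag} holds the differential magnitudes and \texttt{seeing} the per-epoch seeing), might look as follows; for brevity the excision of strongly corrected points is applied in a single pass rather than before refitting.
\begin{verbatim}
import numpy as np

def seeing_correction(dmag, seeing, max_corr):
    """Two-piece cubic fit of differential magnitude vs. seeing,
    continuous in value (but not in slope) at the median seeing."""
    s_med = np.median(seeing)
    good = seeing <= s_med
    p_good = np.polyfit(seeing[good], dmag[good], 3)   # unconstrained cubic
    anchor = np.polyval(p_good, s_med)
    # Poor-seeing piece: cubic in (s - s_med) with the constant term fixed.
    x = seeing[~good] - s_med
    A = np.vstack([x, x**2, x**3]).T
    coeff, *_ = np.linalg.lstsq(A, dmag[~good] - anchor, rcond=None)
    xx = seeing - s_med
    poor = anchor + coeff[0]*xx + coeff[1]*xx**2 + coeff[2]*xx**3
    corr = np.where(good, np.polyval(p_good, seeing), poor)
    corr[np.abs(corr) > max_corr] = np.nan   # flag measurements for excision
    return dmag - corr
\end{verbatim}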
Measurements near bad columns on the detector also
display systematic errors that are not removed by the differential
photometry algorithm. Thus, we eliminate from the light curve any
measurement for which the stellar center lies within 6 pixels of a bad
column on the detector.
The final correction of the data consists of discarding measurements
that deviate by more than 0.5 mag from the average light curve level. This
prevents detection of companions with radii $>3.5 R_{J}$ around the
lowest mass stars of the sample.
\subsection{Light-curve Noise Properties\label{sec:noise}}
Figure~\ref{magrms} shows the logarithm of the standard deviation of
the light curves as a function of the apparent $I$-band magnitude.
Calculation of the standard deviation includes one iteration with
3-standard-deviation clipping. To maintain consistent S/N at fixed
apparent magnitude, the transformation from instrumental magnitude
to apparent $I$-band magnitude only includes a zero-point value, since
including a color term in the transformation results in stars of
varying spectral shape and thus varying S/N in the instrumental $I$
band having the same apparent $I$-band magnitude. Each individual CCD
in the 8K mosaic has its own zero point, and the transformation is
accurate to 0.05 mag.
\begin{figure}
\epsscale{1.2}
\plotone{f1_CJB.eps}
\caption{Logarithm of the light-curve standard
deviation as a function of the apparent $I$-band magnitude ({\it points}). The
depths of transits due to a 1.0 and 1.5 $R_{J}$ companion assuming
the star is a cluster member are shown as {\it dashed lines}. The
{\it solid lines} show photometric noise models that match the
empirically determined noise properties.\label{magrms}}
\end{figure}
One CCD has significantly better noise properties than the others as
evidenced by the second sequence of points with improved standard deviation
at fixed magnitude. The instrument suffers from a previously unidentified
problem with images taken in the binning mode. The data were taken
with $2\times2$ native pixels of the CCD array binned to one pixel on
readout. During readout, the control system apparently did not record
all counts in each of the four native pixels. However, the single CCD
with improved noise properties does not suffer from this problem whereas all
the other CCDs do. Subsequent observations with large positional
shifts allow photometric measurements of the same set of stars on the
affected detectors and unaffected detector. Performing these
observations in the unbinned and binned modes confirms that on the
affected detectors, 50\% of the signal went unrecorded by the data
system. This effectively reduces the quantum efficiency by half
during the binned mode of operation for seven of the eight detectors.
The two solid lines outlining the locus of points in
Figure~\ref{magrms} provide further evidence for the reduction in
quantum efficiency. These lines represent the expected noise due to a
source-noise-limited term, a background-noise-limited term,
and a 0.0015 mag noise floor. We determine the lower line by varying the
area of the seeing disk and the flat noise level until the noise model visually
matches the locus of points for the detector with the lower noise
properties. Then, the upper line results from assuming half
the quantum efficiency of the lower noise model while keeping the
noise floor the same. The excellent agreement between the higher
noise model and the noise properties of the remaining detectors
strongly supports the conclusion that half of the native pixels are
not recorded during readout. This readout error could introduce
significant errors in the limit of excellent seeing. However, only
4\% of the photometric measurements have FWHM$<$2.5 binned pixels.
Thus, even in the binning mode, we maintain sufficient sampling of the
PSF to avoid issues resulting from the readout error.
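The noise model underlying these curves can be sketched as follows; the zero point and sky level below are illustrative placeholders rather than the fitted values, and halving \texttt{qe\_scale} mimics the detectors that recorded only half of the binned counts.
\begin{verbatim}
import numpy as np

def noise_model(mag, zeropoint=24.0, sky_counts=1.0e3,
                floor=0.0015, qe_scale=1.0):
    """Expected rms (mag) from a source-noise term, a background-noise
    term over the seeing disk, and a constant noise floor."""
    counts = qe_scale * 10.0**(-0.4 * (mag - zeropoint))  # source electrons
    var = counts + qe_scale * sky_counts                  # Poisson variances
    sigma_mag = 1.0857 * np.sqrt(var) / counts            # flux error -> mag
    return np.sqrt(sigma_mag**2 + floor**2)

mags = np.linspace(14.0, 22.0, 100)
rms_unaffected = noise_model(mags)                # lower curve
rms_affected = noise_model(mags, qe_scale=0.5)    # half quantum efficiency
\end{verbatim}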
The different
noise properties between detectors do not complicate the analysis.
The transit detection method involves $\chi^{2}$ merit criteria (see
\S\ref{sec:selcrit}) that naturally handle data with varying noise
properties. Other than reducing the overall effectiveness of the
survey, the different noise properties between the detectors do not
adversely affect the results in any way.
In addition to the empirically determined noise properties, DoPhot
returns error estimates that on average result in reduced
$\chi^{2}=0.93$ for a flat light-curve model. The average reduced
$\chi^{2}$ values for all the detectors agree to within $10\%$.
Scaling errors to enforce reduced $\chi^{2}=1.0$ for each detector
independently has a negligible impact on the results, thus we choose
not to do so.
The upper and lower dashed lines in Figure~\ref{magrms} show the transit
depth assuming the star is a cluster member for 1.5 and 1.0 $R_{J}$
companions, respectively. In Figure~\ref{magrms}, 3671 stars have
light curves with a standard deviation less than the signal of a
transiting $R_{J}$ companion.
\section{Transit Detection}\label{trandetect}
In the previous section, we describe a procedure for generating light
curves that reduces systematic errors that lead to false-positive
transit detections. However, systematics nevertheless remain that result in highly
significant false-positive transit detections. This section describes
the algorithm for detecting transits and methods for eliminating false
positives based on the detected transit properties. There are two
types of false-positives we wish to eliminate. The first is
false-positive transit detections that result from systematic errors
imprinted during the signal recording and measurement process. The
second type of false-positive results from true astrophysical
variability that does not mimic a transit signal. For example,
sinusoidal variability can result in highly significant
detections in transit search algorithms. We specifically design the
selection criteria to trigger on transit-like photometric variability
that affects a minority of the measurements and makes them systematically
faint. However, the selection criteria do not eliminate
false-positive transit signals due to true astrophysical variability
that mimic the extrasolar planet transit signal we seek (grazing
eclipsing binaries, diluted eclipsing binaries, etc.).
For detecting transits we employ the box-fitting least squares (BLS)
method of \citet{KOV02}. Given a trial period, phase of transit, and
transit length, the BLS method provides an analytic solution for the
transit depth. We show in the appendix the equivalence of
the BLS method to a $\chi^{2}$ minimization. Instead of using the Signal
Residue \citep[Equation 5 in][]{KOV02} or
Signal Detection Efficiency \citep[Equation 6 in][]{KOV02} for quantifying
the significance of the detection, we use the resulting improvement in
$\chi^{2}$ of the solution relative to a constant flux fit, as outlined in the appendix.
This section begins with a discussion of the parameters affecting the
BLS transit detection algorithm. We set the BLS algorithm parameters
by balancing the needs of detecting transits accurately and of
completing the search efficiently. The next step involves developing
a set of selection criteria that automatically and robustly determines
whether the best-fit transit parameters result from bona fide
astrophysical variability that resembles a transit signal. A set of
automated selection criteria that only pass bona fide variability is a
critical component of analyzing the null-result transit survey and has
been ignored in previous analyses.
Due to the systematic errors present in the light curves, an estimate
of the statistical significance of a transit based on Gaussian noise is
not applicable. In addition, the statistical significance is difficult to
calculate given the large number of trial phases, periods, and
inclinations searched for transits. Given these limitations, we
empirically determine the selection criteria on the actual light
curves. Although it is
impossible to assign a formal false alarm probability to our selection
criteria, the exact values for the selection criteria are not
important as long as the cuts eliminate the false positives while
still maintaining the ability to detect $R_{J}$ objects, and identical
criteria are employed in the Monte Carlo detection probability
calculation.
\subsection{BLS Transit Detection Parameters}
The BLS algorithm has two parameters that determine the resolution of
the transit search. The first parameter determines the resolution of
the trial orbital periods. The BLS algorithm \citep[as implemented
by][]{KOV02} employs a period resolution with even frequency
intervals, $\frac{1}{P_{2}}=\frac{1}{P_{1}}-\eta$, where $P_{1}$ is the
previous trial orbital period, $P_{2}$ is the subsequent (longer)
trial orbital period, and $\eta$ determines the frequency spacing
between trial orbital periods. During implementation of the BLS
algorithm, we adopt an even logarithmic period resolution by
fractionally increasing the period, $P_{2}=P_{1}\times(1+\eta)$. The
original implementation by \citet{KOV02} for the orbital-period
spacing is a more appropriate procedure, since even frequency
intervals maintain constant orbital phase shifts of a measurement
between subsequent trial orbital periods. The even logarithmic period
resolution we employ results in coarser orbital phase shifts between
subsequent trial orbital periods for the shortest periods and
increasingly finer orbital phase shifts toward longer trial orbital
periods. Either period-sampling procedure remains valid with
sufficient resolution. We adopt $\eta=0.0025$,
which, given the observational baseline of 19 days, provides $<$10\%
orbital phase shifts for orbital periods as short as 0.5 day.
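The two period-sampling schemes can be written compactly; the function names below are illustrative, and note that the $\eta$ of the frequency-step scheme is a frequency spacing rather than a fractional step.
\begin{verbatim}
import numpy as np

def log_period_grid(p_min, p_max, eta=0.0025):
    """Even logarithmic spacing used here: P_{k+1} = P_k (1 + eta)."""
    n = int(np.ceil(np.log(p_max / p_min) / np.log1p(eta))) + 1
    return p_min * (1.0 + eta)**np.arange(n)

def even_frequency_grid(p_min, p_max, eta):
    """Original BLS spacing: even frequency steps, 1/P_{k+1} = 1/P_k - eta,
    with eta a frequency step (e.g. day^-1)."""
    freqs = np.arange(1.0 / p_min, 1.0 / p_max, -eta)
    return 1.0 / freqs

# With a 19 day baseline, a fractional step eta = 0.0025 shifts the phase
# of the last observation by ~ 19*eta/P cycles, i.e. <10% for P >= 0.5 day.
trial_periods = log_period_grid(0.4, 19.0)
\end{verbatim}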
The second parameter of the BLS algorithm determines the resolution in
orbital phase by binning the phase-folded light curve. Binning of
the data in orbital phase drastically improves the numerical
efficiency, but not without loss in determining the correct transit
properties. \citet{KOV02} give a thorough examination of how the
sensitivity in recovering transits varies with orbital-phase binning
resolution. To search for transit candidates in the light curves we
adopt $N_{\rm bins}=400$ orbital-phase bins. We verify with tests
that the above parameters accurately recover boxcar signals in the
light curves. After injection of boxcar signals in the
light curves, we calculate the $\chi^{2}$ of the solution returned by
the BLS method with the $\chi^{2}$ of the injected model. Tests show
that the BLS method with the above parameters returns a $\chi^{2}$
within 30\%, and typically much better, of the injected model's
$\chi^{2}$.
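For reference, the $\chi^{2}$ bookkeeping for a single trial (period, phase, duration) can be sketched as below; this is a simplified, unbinned version of the box fit, with variable names chosen for illustration.
\begin{verbatim}
import numpy as np

def box_delta_chi2(mag, err, phase, phase0, duration):
    """chi^2 improvement of a two-level (in/out of transit) boxcar model
    over a constant-flux model for a phase-folded light curve.  A
    negative best-fit depth corresponds to an `anti-transit'."""
    w = 1.0 / err**2
    in_tr = ((phase - phase0) % 1.0) < duration

    def chi2_about_mean(x, weights):
        mean = np.sum(weights * x) / np.sum(weights)
        return np.sum(weights * (x - mean)**2), mean

    chi2_flat, _ = chi2_about_mean(mag, w)
    chi2_in, level_in = chi2_about_mean(mag[in_tr], w[in_tr])
    chi2_out, level_out = chi2_about_mean(mag[~in_tr], w[~in_tr])
    depth = level_out - level_in
    return chi2_flat - (chi2_in + chi2_out), depth
\end{verbatim}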
\subsection{Selection Criteria\label{sec:selcrit}}
We apply the BLS method following the description in the previous
section to search for transit candidates in all 6787 stars with light
curves. A visual inspection of a light curve folded at the best-fit
transit period can generally be used to discriminate between bona fide astrophysical
variability and a false positive arising from systematic errors. However, a
proper statistical assessment of the sensitivity of a transit search
requires that the exact same set of selection criteria that are
applied to cull false positives are also applied when assessing the
detection probability via, e.g., Monte Carlo injection and recovery of
artificial signals. Due to the large number of artificial signals
that must be injected to calculate the detection probability properly,
using selection criteria based on visual inspection of light curves is
practically very difficult or impossible. Therefore, quantitative,
automated detection criteria that mimic the visual criteria must be
used.
We employ four selection criteria that eliminate
all false detections while still maintaining the ability to detect
$R_{J}$ companions. These four selection criteria constitute cuts
on (1) the improvement in $\chi^2$ of a transit model over a constant flux model,
(2) the ratio between the $\Delta \chi^{2}$ of the best-fit transit
model and the $\Delta \chi^{2}_-$ of the best-fit anti-transit model,
(3) the fraction of $\Delta \chi^{2}$ from a single night,
and (4) the transit period.
The first of the selection criteria is a cut on
$\Delta \chi^{2}$, the improvement in $\chi^{2}$ between a
constant flux model and a transit model. The $\Delta \chi^{2}$ is
similar to the Signal Residue, SR, of \citet{KOV02}; we derive $\Delta
\chi^{2}$ and its relation to SR in the appendix. We prefer $\Delta
\chi^{2}$ over SR as the former allows a direct comparison of the
transit detection significance between light curves with different
noise properties. Given the correlated systematics in the data, we
cannot rely on analytical formulations with a Gaussian statistics
basis for the statistical significance of a particular $\Delta
\chi^{2}$ value. We empirically determine a cut on $\Delta \chi^{2}$ in
combination with the other selection criteria to fully eliminate false
detections. For a transit detection we require $\Delta
\chi^{2}>95.0$. As shown in the appendix, this selection criterion
corresponds to a S/N$\sim$10 transit detection. Figure~\ref{selcrit}
shows the $\Delta \chi^{2}$ of the best fit transit for all light
curves along the x-axis. The vertical line designates the selection
criterion on this parameter. Even with such a strict
threshold, there are still a large number of false
positives that pass the $\Delta \chi^2$ cut.
\begin{figure}
\epsscale{1.5}
\plotone{f2_CJB.eps}
\caption{The {\it small points} show $\Delta \chi^{2}$ as a function of $\Delta
\chi^{2}_{\rm -}$ for the resulting best-fit transit parameters in all
light curves. Here $\Delta \chi^{2}$ and $\Delta
\chi^{2}_{\rm -}$ are the $\chi^{2}$ improvement between the flat
light-curve model and the best-fit transit and anti-transit model,
respectively. The {\it dotted vertical line} shows the $\Delta
\chi^{2}=95.0$ selection boundary. The {\it solid diagonal line}
shows the $\Delta \chi^{2}/\Delta \chi^{2}_{\rm -}=2.75$ selection
boundary. Objects in the lower right corner pass both selection
criteria. The {\it green diamonds} show values of $\Delta \chi^{2}$
and $\Delta \chi^{2}_{\rm -}$ for the six transit candidates. The
{\it blue stars} show the recovered values of $\Delta \chi^{2}$ and
$\Delta \chi^{2}_{\rm -}$ for the four light curves with injected
transits shown in Figure~\ref{falsetran}. The label next to the blue
stars corresponds to the label in the upper right corner of each panel
in Figure~\ref{falsetran}. These curves were created by injecting
transits into the same light curve. The blue star labeled 0 shows the
values of $\Delta \chi^{2}$ and $\Delta \chi^{2}_{\rm -}$ for this
light curve before the example transits were injected.\label{selcrit}}
\end{figure}
Systematic variations in the light curves that are characterized by
small reductions in the apparent flux of a star that are coherent over
the typical time scales of planetary transits can give rise to
false-positive transit detections. However, under the reasonable
expectation that systematics do not have a strong tendency to produce
dimming versus brightening of the apparent flux of the stars, one
would expect systematics to also result in false-positive
`anti-transit' (brightening) detections. Furthermore, most intrinsic
variables can be approximately characterized by sinusoids, which will
also result in significant transit and anti-transit detections. On the other
hand, a light curve with a true transit signal and insignificant systematics
should produce only a strong transit detection, and not a strong anti-transit
detection.
Thus, the ratio of the significance of the best-fit transit signal
relative to that of the best-fit anti-transit signal provides a rough
estimate of the degree to which a detection has the expected
properties of a bona fide transit, rather than the properties of
systematics or sinusoidal variability. In other words, a highly
significant transit signal should have a negligible anti-transit
signal, and therefore we require the best-fit transit to have a
greater significance than the best-fit anti-transit. We accomplish
this by requiring transit detections to have $\Delta \chi^{2}/\Delta
\chi^{2}_{\rm -}>2.75$, where $\Delta \chi^{2}_{\rm -}$ is the
$\chi^{2}$ improvement of the best-fit anti-transit. For a given
trial period, phase of transit, and length of transit, the BLS
algorithm returns the best-fit transit without restriction on the sign
of the transit depth. Thus, the BLS algorithm simultaneously searches
for the best-fit transit and anti-transit, and so determining $\Delta
\chi^{2}_{\rm -}$ has no impact on the numerical efficiency.
Figure~\ref{selcrit} shows the $\Delta \chi^{2}_{\rm -}$ of
the best fit anti-transit versus the $\Delta \chi^{2}$ of the
best-fit transit for our light curves. The diagonal line demonstrates the selection on
the ratio $\Delta \chi^{2}/\Delta \chi^{2}_{\rm -}=2.75$. Objects
toward the lower right corner of this Figure pass the selection
criteria. The objects with large $\Delta \chi^{2}$ typically have
correspondingly large $\Delta \chi^{2}_{\rm -}$. This occurs for
sinusoidal-like variability or strong systematics that generally have
both times of bright and faint measurements with respect to the mean light-curve level.
Requiring observations of the transit signal on separate nights also
aids in eliminating false-positive detections. We quantify the
fraction of a transit that occurs during each night based on the
fraction of the transit's $\chi^{2}$ significance that occurs during
each night. The parameters of the transit allow identification of the
data points that occur during the transit. We sum the individual
$\chi^{2}_{i}=\left(m_{i}/\sigma_{i}\right)^{2}$ values for data
points occurring during the transit to derive $\chi^{2}_{\rm tot}$,
where $m_{i}$ is the light curve measurement and $\sigma_{i}$ is its
error. Then we calculate the same sum for each night individually.
We denote this $\chi^{2}_{k\rm th\: night}$. We identify the night
for which $\chi^{2}_{k\rm th\: night}$ contributes the greatest
fraction of $\chi^{2}_{\rm tot}$, and we call this fraction
$f=\chi^{2}_{k\rm th\: night}/\chi^{2}_{\rm tot}$. Finally, we
require $f <0.65$. This corresponds to roughly seeing the transit one
and a half times assuming all observations have similar noise.
Alternatively, this criterion is also met by observing $2/3$ of a
transit on one night and $1/3$ of the transit on a separate night, or
observing a full transit on one night and $1/6$ of transit on a
separate night with three times improvement in the photometric error.
Figure~\ref{selcrit2} shows $f$ versus the best-fit period for all the
light curves. The horizontal line designates the selection on this
parameter.
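A sketch of this criterion, with hypothetical array names, is:
\begin{verbatim}
import numpy as np

def single_night_fraction(mag, err, in_transit, night_id):
    """f = largest fraction of the in-transit chi^2 contributed by a
    single night; candidate transits are required to have f < 0.65."""
    chi2_i = (mag / err)**2
    chi2_tot = np.sum(chi2_i[in_transit])
    nights = np.unique(night_id[in_transit])
    per_night = np.array([np.sum(chi2_i[in_transit & (night_id == k)])
                          for k in nights])
    return per_night.max() / chi2_tot
\end{verbatim}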
\begin{figure}
\epsscale{1.5}
\plotone{f3_CJB.eps}
\caption{The {\it black points} show $f$ as a function of the best-fit transit orbital period,
where $f$ is the fraction of the total $\chi^{2}$ improvement with the
best-fit transit model that comes from a single night. The objects that pass the $\Delta \chi^{2}>95.0$ selection
criterion are shown as {\it red points}. The {\it horizontal line}
shows the $f=0.65$ selection boundary. The {\it vertical lines}
denote orbital period regions avoided due to false-positive transit
detections. The {\it blue stars} and {\it green diamonds} are the
same as in Figure~\ref{selcrit}.\label{selcrit2}}
\end{figure}
The red points in Figure~\ref{selcrit2} show objects that pass the
$\Delta \chi^{2}>95.0$ selection. We find that most are clustered
around a 1.0 day orbital period. A histogram of the best-fit transit
periods amongst all light curves reveals a high frequency for 1.0 day
and 0.5 day periods. Visual inspection of the phased light curves
reveals a high propensity for systematic deviations to occur at the
Earth's rotational period and its 0.5 day alias. We do not fully
understand the origin of this effect, but one can readily conjecture
several effects that may arise over the course of an evening as the
telescope tracks from horizon to horizon following the Earth's diurnal
motion. In order to eliminate these false positives, we apply as our
fourth selection criterion a cut on the period. Specifically, we
require transit detections to have periods that are not within $1.0\pm
0.1$ and $0.5\pm 0.025$ day. The vertical lines in
Figure~\ref{selcrit2} designate these ranges of discarded periods.
\section{Transit Candidates}\label{trncands}
Six out of 6787 stars pass all four selection criteria. All
of these stars are likely real astrophysical variables whose
variability resembles that of planetary transit light curves.
However, we find that none are bona fide planetary transits in NGC
1245. After describing the properties of these objects we will
describe the procedure for ruling out their planetary nature.
Figure~\ref{trncand} shows the phased light curves for these six stars.
Each light-curve panel in Figure~\ref{trncand} has a different
magnitude scale with fainter flux levels being more negative. The
upper left corner of each panel gives the detected transit period as
given by the BLS method. The upper right corner of each panel gives
an internal identification number. The panels from top to bottom have
decreasing values in the ratio between the improvement of a transit
and anti-transit model, $\Delta \chi^{2}/\Delta \chi^{2}_{\rm -}$.
\begin{figure}
\epsscale{1.2}
\plotone{f4_CJB.eps}
\caption{The points show the change in magnitude as a function of orbital phase for all
stars that meet the transit candidate selection criteria. Negative
values for $\Delta$ mag are toward fainter flux levels. The phased
period is given in the upper left corner of each panel, and the number
in the upper right corner of each panel gives the internal
identification number.\label{trncand}}
\end{figure}
Table~\ref{trncandprop} lists the properties and selection criteria
values for the stars shown in Figure~\ref{trncand}. The green
diamonds in Figures~\ref{selcrit} and \ref{selcrit2} represent the
selection criteria for the six transit candidates. The photometric
and positional data in Table~\ref{trncandprop} come from
\citet{BUR04}. The $\chi^{2}_{\rm mem}$ entry in
Table~\ref{trncandprop} measures the photometric distance of a star
from the isochrone that best fits the cluster CMD. A lower value of
this parameter means a star has a position in the CMD closer to the
main sequence. Heavy points in Figure~\ref{cmd} denote stars with
$\chi^{2}_{\rm mem}<0.04$, and we designate these stars as potential
cluster members. Based on $\chi^{2}_{\rm mem}$, star 20513 and star
70718 have photometry consistent with cluster membership, thus we also
list the physical parameters of those stars in
Table~\ref{trncandprop}. \citet{BUR04} details the procedure for
determining the physical parameters of a star based solely on the
broad-band photometry and the best-fit cluster isochrone. However,
the validity of the stellar physical parameters only applies if the
star is a bona fide cluster member.
\begin{figure}
\epsscale{1.2}
\plotone{f5_CJB.eps}
\caption{The CMD of the cluster NGC 1245. Potential cluster members
having $\chi^{2}_{\rm mem}<0.04$ are shown with {\it heavy points}.
Objects that exceed the selection criteria for transit detection are
given as {\it open diamonds}.\label{cmd}}
\end{figure}
Figure~\ref{find} shows a finding chart for each star with a light
curve in Figure~\ref{trncand}. The label in each panel gives the
identification number, and the cross indicates the corresponding
object. Star 20274 is not centered in the finding chart because it is
located near the detector edge. The field of view of each panel is
54\arcsec. North is toward the right, and East is toward the bottom.
The panels for stars 20065, 20398, and 20513 (located near the cluster
center) provide a visual impression of the heaviest stellar crowding
encountered in the data. Figure~\ref{cmd} shows the $V$ and $\bv$ CMD
of the cluster field as given in \citet{BUR04}. The open diamonds
denote the locations of the objects that exceed the transit selection
criteria.
\begin{figure}
\epsscale{0.8}
\plotone{f6_CJB.eps}
\caption{Finding charts for transit candidates with light curves shown in
Figure~\ref{trncand}. Each panel is 54$\arcsec$ on a side. North is
toward the right, and East is toward the bottom.\label{find}}
\end{figure}
\subsection{Consistency of Transit Parameters with Cluster Membership}
Only stars 20513 and 70718 have $\chi^{2}_{\rm mem}$ values consistent
with cluster membership. Additionally, the transit depth in both
stars indicates potential for having a $R_{J}$ companion. However,
qualitatively, in each case the transit duration relative to the
orbital period is too long for a true planetary companion to a
cluster main-sequence star. We can use our knowledge of the physical
properties of the parent stars to quantitatively rule out planetary
companions. We do this by comparing an estimate of the stellar radius
derived from the CMD to an independent estimate of a lower limit on
the stellar radius derived from the properties of the light curve. In
both cases we find that the stellar radii derived from the CMD are
well below the lower limit on the stellar radius
based on the light curve.
To derive a lower limit on the stellar radius from the light curve,
we build on the work by \citet{SEA03}. They provide
a purely geometric relationship between the
orbital semimajor axis, $a$, and stellar radius, $R_{\star}$, for a light curve
with a given period, $P$, depth of transit, $\Delta F$, and total
duration of the transit (first to fourth contact), $\tau$, assuming
a circular orbit (see their Equation 8).
By assuming a central transit (impact parameter $b=0$), we transform their equality
into a lower limit. Using Kepler's Third Law, assuming that
the mass of the companion is much smaller than the mass of the star,
and assuming the duration of the transit is much smaller than the period ($\tau \ll P$),
we find,
\begin{equation}
R_{\star}>\frac{\pi (M_{\star}+m_{p})^{1/3}\tau}{P^{1/3}(1+\sqrt{\Delta F})},
\label{eqn:rstar}
\end{equation}
where $R_{\star}$ is in AU, $M_{\star}$ and $m_{p}$ are in units of
$M_{\odot}$, and $\tau$ and $P$ are in years.
Parameters on the right hand side of the above equation contain
substantial uncertainties. Replacing the parameters by their maximum
plausible deviation from their measured values in such a manner as to
decrease $R_{\star}$ increases the robustness of the lower limit. The
orbital period determination has the largest
uncertainty. Tests of recovering transits in the light curves reveal
a 10\% chance for the BLS method to return an orbital period, $P'$, at
the $1/2P$ and $2P$ aliases of the injected orbital period, and a
$<1$\% chance of detecting the $1/3P$ and $3P$ aliases.
Misidentification of the correct orbital period results from gaps in
the observing window function. Replacing $P$ in the above equation
with $3P'$, where $P'$ is the orbital period returned by the BLS
algorithm, provides the maximum plausible deviation of this quantity
and increases the robustness of the lower limit. In addition, the
stellar mass determination based on the CMD
potentially has contamination from a binary companion. Thus, we
replace $M_{\star}$ with $0.5 M'_{\star}$, where $M'_{\star}$ is the
stellar mass estimate from the CMD. We do not
modify $\tau$ or $\Delta F$. For the cases considered here, $\Delta F \ll 1$,
and the term $1+\sqrt{\Delta F} \simeq 1$ in Equation \ref{eqn:rstar}.
Therefore the precise value of $\Delta F$ has little effect on
the resulting limit on $R_*$. The BLS algorithm fits a boxcar transit
model to the light curve via a $\chi^{2}$ minimization. Since, in the
limit of zero noise, any non-zero boxcar height fit to a transit can
only result in an increasing $\chi^{2}$ when the length of the boxcar
exceeds the length of the transit, $\tau$ underestimates the true
transit length. Making the above replacements the lower limit on the
stellar radius is,
\begin{equation}
R_{\star}>7.3\frac{(M'_{\star}/M_{\odot})^{1/3}(\tau/\rm{1\: day})}{(P'/\rm{1\: day})^{1/3}(1+\sqrt{\Delta F})} R_{\odot}.
\end{equation}
For star 20513, the above equation requires $R_{\star}>1.04 R_{\odot}$
if the star is a cluster member. Fits to the CMD yield a stellar
radius $R_{\star}=0.80 R_{\odot}$. The lower limit for star 70718
is $R_{\star}>0.82 R_{\odot}$, whereas the CMD yields $R_{\star}=0.56
R_{\odot}$. Clearly both stars lack consistency between the stellar
radius based on the CMD location and the stellar radius based on the
transit properties.
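The limit can be evaluated directly from the tabulated transit parameters; the following helper (an illustration, not part of the analysis code) encodes the numerical form of Equation~\ref{eqn:rstar} above, with the conservative replacements already absorbed into the 7.3 coefficient.
\begin{verbatim}
def rstar_lower_limit(m_star_msun, tau_days, p_days, delta_f):
    """Lower limit on the stellar radius in solar radii."""
    return (7.3 * m_star_msun**(1.0 / 3.0) * tau_days
            / (p_days**(1.0 / 3.0) * (1.0 + delta_f**0.5)))
\end{verbatim}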
The transiting companions to 20513 and 70718 are also unlikely
to be planets if the host stars are field dwarfs.
\citet{TIN05} provide a
diagnostic to verify the planetary nature of a transit when only the
light curve is available. The diagnostic $\eta_{p}$ of \citet{TIN05} compares
the length of the observed transit to an estimate of the transit
length derived by assuming a main-sequence mass-radius relation for the central star. By
assuming a radius of the companion of $R_{p}=1.0 R_{J}$, we find
$\eta_{p}=4.0$ and $3.8$ for 20513 and 70718, respectively. Values of
$\eta_{p}\lesssim 1$ correspond to planetary transits. Therefore,
20513 and 70718 are unlikely to host planetary companions
with $R_{p} \la R_{J}$ if they are main-sequence stars.
We note that our final {\it a posteriori} criterion with which we reject cluster
transit candidates, namely the consistency between the radius of
the parent star as estimated from the CMD and the radius as estimated
from the light curve, is a conceptually different kind of selection
criterion than those that we applied to all the light curves to arrive
at our six transit candidates. The original four selection criteria
were designed to detect bona fide astrophysical variability that
resembles the signals from transiting planets, but does not
necessarily arise from a transiting planetary companion. In
principle, we could have included the radius consistency cut as an
additional selection criterion applied to all light curves. The
motivation to do this would be that imposing this additional criterion
might automatically remove some systematic false positives and so
allow us to improve our efficiency by making the other selection
criteria less stringent. We have found using limited tests that
this is not the case. We therefore chose to leave the radius check as
an {\it a posteriori} cut on the transit candidates. Nevertheless,
observing a cluster does provide an advantage over observing field
stars, as the additional constraint on the stellar radius from the
cluster CMD provides a more reliable confirmation of the planetary
nature than the light curve alone \citep{TIN05}, and furthermore
allows a more accurate assessment of the detection probability.
It is important to emphasize that all of the injected transits with
which we compute the detection probability (\S\ \ref{effcalc})
automatically pass the radius consistency criterion. A fraction of
these will be recovered at periods that differ enough from the input
period that by using the recovered period they will no longer satisfy
the radius constraint. However, we find that this fraction is
negligibly small.
\subsection{Individual Cases}
This section briefly discusses each object that met the selection
criteria as a transit candidate but does not belong to the cluster.
The V-shaped transit detected in star 30207 rules out a $R_{J}$
companion. Transiting $R_{J}$ companions result in a flat-bottomed
eclipse as the stellar disk fully encompasses the planetary
disk. A closer inspection of the light curve also reveals ellipsoidal
variations outside of the transit. This light curve matches the
properties of a grazing eclipse, which is a typical contaminant in
transit searches (e.g., \citealt{BOU05}).
The remaining stars have depths too large for a $R_{J}$ companion and
show evidence for secondary eclipses. Recall that we eliminated
data points with $|\Delta m|> 0.5$ mag in the light curves. This eliminates the eclipse
bottom for star 20065. Keeping all the data for star 20065 clearly
reveals the characteristics of a detached eclipsing binary.
The period the BLS algorithm derives for star 20065 aligns the primary
and secondary eclipses, and thus the BLS-reported period is not the
true orbital period.
The eclipses in stars 20398 and 20274 do not perfectly phase up. This
is because the resolution in period we used for the search prevents
perfect alignment of the eclipses for such short periods. This effect
is inconsequential for detecting transiting planets as they all have
orbital periods longer than 0.3 day.
Finally, we note that other
variables exist in the dataset. They were not selected
because they do not meet the $\Delta \chi^{2}/\Delta \chi^{2}_{\rm -}$
selection criterion.
A future paper will present variables that exist in this dataset using
selection criteria more appropriate for identifying quasi-sinusoidal periodic
variability (J. Pepper et al., in preparation).
\section{Detection Probability Calculation}\label{effcalc}
We did not detect any transit signals consistent with a $R_{J}$
companion. To interpret this null result in terms of the frequency of
planetary companions to stars in NGC 1245, we develop a Monte Carlo
detection probability calculation for quantifying the sensitivity of
the survey for detecting extrasolar planet transits. The calculation
provides the probability of detecting a transit in the survey as a
function of the companion semimajor axis and radius. In addition to
the photometric noise and observing window, the observed properties of
the transit signal depend sensitively on the host mass, radius,
limb-darkening parameters, and orbital inclination with respect to the
line of sight. Without accurate knowledge of the stellar parameters,
a detailed detection probability is not possible. This precludes
analyzing stars not belonging to the cluster. Given the degeneracy
between broad-band colors of dwarfs, subgiants, and giants, the
stellar radius for most field objects cannot be determined from the
CMD alone. Assuming all stars of a given color are dwarfs drastically
overestimates the number of actual dwarf stars in a transit survey
\citep{GOU03}. The minimal expenditure of observational resources
necessary for determining the stellar parameters for a cluster transit
survey provides a significant advantage over transit surveys of the
field.
Each star in the survey has a unique set of physical properties and
photometric noise, thus we calculate the detection probability for all
stars in the survey. This is the first study of its kind to do so.
Given the detection probability for each star, the distribution of
extrasolar planet semimajor axis, and frequency of extrasolar planet
occurrence, the survey should have detected,
\begin{equation}
N_{\rm det}=f_{\star} \sum_{i=1}^{N_{\star}}P_{\rm det,i},
\label{eqn:ndet}
\end{equation}
extrasolar planets, where the sum is over all stars in the survey,
\begin{equation}
P_{\rm det,i}=\int \int \frac{d^{2}p}{dR_{p} da}P_{\epsilon,i}(a,R_{p})P_{T,i}(a,R_{p})P_{\rm mem,i}dR_{p}da\label{ndeteq},
\end{equation}
$R_{p}$ is the extrasolar planet radius, $a$ is the semimajor axis,
$f_{\star}$ is the fraction of stars with planets distributed
according to the joint probability distribution of $R_{p}$ and $a$,
$\frac{d^{2}p}{dR_{p}\, da}$. The Monte Carlo detection probability
calculation provides $P_{\epsilon,i}(a,R_{p})$, the probability of
detecting a transit in a given light curve. The term
$P_{T,i}(a,R_{p})$ gives the probability for the planet to cross the
limb of the host along the line of sight, and $P_{\rm mem,i}$ gives
the probability the star is a cluster member. This framework for
calculating the expected detections of the survey follows from the
work of \citet{GAU02}. In the following subsections we describe the
procedure for calculating each of these probability terms.
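As a sketch (with assumed array shapes, not the actual code), the expected number of detections of Equations~\ref{eqn:ndet} and \ref{ndeteq} follows directly from per-star grids of $P_{\epsilon}$ and $P_{T}$ for a delta-function radius distribution and an $a^{-1}$ (log-uniform) semimajor-axis prior:
\begin{verbatim}
import numpy as np

def expected_detections(f_star, P_eps, P_T, P_mem, log_a_grid):
    """Expected detections for a single planet radius and dn ~ 1/a.
    P_eps, P_T have shape (n_stars, n_a); P_mem has shape (n_stars,)."""
    weights = np.gradient(log_a_grid)     # d(log a) per grid point
    weights /= weights.sum()              # normalized log-uniform prior
    P_det = np.sum(P_eps * P_T * weights, axis=1) * P_mem
    return f_star * P_det.sum(), P_det
\end{verbatim}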
\subsection{Calculating $P_{\epsilon,i}(a,R_{p})$}
$P_{\epsilon,i}(a,R_{p})$ is the probability of detecting a transit
around the $i$th star of the survey averaged over the orbital phase
and orbital inclination for a given companion radius and semimajor
axis. We begin this section with a description of the procedure for
injecting limb-darkened transits into light curves for recovery.
After injecting the transit, we attempt to recover the transit
employing the same BLS algorithm and selection criteria as employed
during the transit search on the original data. It is critical to
employ identical selection criteria during the recovery as
the original transit search since only then can we trust the
robustness and statistical significance of the detection. The
fraction of transits recovered for fixed semimajor axis and $R_{p}$
determines $P_{\epsilon}$. Next, we characterize the sources of error
present in $P_{\epsilon}$ and how we ensure a specified level of
accuracy. Finally, in this section we discuss the parallelization of
the calculation to obtain $P_{\epsilon}$ for all stars in the survey
in a reasonable amount of time.
In the appendix, we discuss the importance of injecting realistic
transits for recovery. \citet{MAN02} provide analytic formulas for
calculating realistic limb-darkened transits. We employ the
functional form of a transit for a quadratic limb-darkening law as
given in Section 4 of \citet{MAN02}. The quadratic limb-darkening
coefficients come from \citet{CLA00}. Specifically, we use the
$I$-band limb-darkening coefficients using the ATLAS calculation for
$\log g=4.5$, $\log {\rm [M/H]}=0.0$, and $v_{\rm turb}=2$ km s$^{-1}$.
We assume circular orbits for the companions. All known extrasolar
planets to date that orbit within 0.1 AU have eccentricities $<$0.3,
and the average eccentricity for these planets is
$\langle e\rangle=0.07$\footnote{http://www.obspm.fr/encycl/catalog.html}.
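For orientation, a small-planet approximation to a quadratic limb-darkened transit (ingress/egress neglected, which is cruder than the \citet{MAN02} formulas actually used) can be written as follows; the limb-darkening coefficients shown are illustrative values, not the adopted ones.
\begin{verbatim}
import numpy as np

def injected_transit_dmag(t, t0, period, a_over_rstar, p, cos_i,
                          u1=0.40, u2=0.26):
    """Magnitude change for a transit in the small-planet (p = Rp/R* << 1)
    approximation with a quadratic limb-darkening law.  Fainter is more
    negative, matching the sign convention of the phased light curves."""
    theta = 2.0 * np.pi * (((t - t0) / period) % 1.0)
    # Projected separation in units of the stellar radius (circular orbit).
    z = a_over_rstar * np.sqrt(np.sin(theta)**2 + (cos_i * np.cos(theta))**2)
    dflux = np.zeros(np.shape(t))
    in_tr = (z < 1.0 - p) & (np.cos(theta) > 0.0)   # planet fully on the disk
    mu = np.sqrt(1.0 - z[in_tr]**2)
    intensity = 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu)**2
    mean_intensity = 1.0 - u1 / 3.0 - u2 / 6.0
    dflux[in_tr] = p**2 * intensity / mean_intensity
    return 2.5 * np.log10(1.0 - dflux)
\end{verbatim}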
After injecting the transit, we employ the BLS algorithm to recover
the injected transit signal using the selection criteria described in
\S\ref{sec:selcrit}. For numerical efficiency, we relax the resolution of
the BLS search parameters. We adopt a fractional period step,
$\eta=0.004$, and phase space binning, $N_{\rm bins}=300$. Compared
with higher-resolution, converged solutions, the adopted parameters
yield a detection probability that is lower by only 0.003.
We correct all probabilities for this systematic even though it is at
an insignificant level compared to the other uncertainties.
Figure~\ref{falsetran} visualizes the injected transits with
increasing degrees of significance from top to bottom. This Figure
shows light curves with an injected transit phased at the period as
returned from the BLS algorithm. The solid line illustrates the
injected limb-darkened transit signal. The top two panels and the
bottom two panels illustrate 1.0 and 1.5 $R_{J}$ companions,
respectively. The transit recovery in the top panel barely
meets the selection criteria, thus giving a visual impression for the
sensitivity of the survey. The resulting selection criteria values
after recovery of the injected transits are shown in
Figures~\ref{selcrit} and \ref{selcrit2} by the blue stars, and the
labels next to the stars correspond to the panel label given in the
upper right hand corner.
The modeled transits shown in this Figure are injected into the
same light curve of a potential cluster member with $V=16.6$ and rms scatter
(before transit injection) of $\sigma=0.003$.
The blue star, labeled 0 in Figures~\ref{selcrit}
and \ref{selcrit2}, represents the values of the selection criteria
found for this light curve before injecting the transits.
\begin{figure}
\plotone{f7_CJB.eps}
\caption{Phased light curves showing the recovery of transits injected
in the light curve by the Monte Carlo calculation. The injected
limb-darkened transit signal is given by the {\it solid line}. The
top two panels and bottom two panels show results for 1.0 and 1.5
$R_{J}$ companions, respectively. The transit recovery in the top
panel barely meets the selection criteria and gives an impression for
the sensitivity of the survey. The labels in the upper right corner
of the panels correspond to the markers for the selection criteria
values shown in Figures~\ref{selcrit}
and~\ref{selcrit2}.\label{falsetran}}
\end{figure}
As opposed to previous work, we carefully examine, quantify, and control the
uncertainties present in the calculation. During injection of a transit at
fixed semimajor axis, the transit can occur during any phase of the
orbit. We use the following procedure
to ensure that we inject enough trial transits at random orbital
phases to yield convergence of $P_{\epsilon}$.
Based on binomial statistics, the error in the resulting probability
at fixed orbital period depends on the actual probability and the
number of trial transit phases, $\sigma_{\epsilon}=\sqrt{\epsilon_{\rm
act}(1-\epsilon_{\rm act})/N_{\rm trial}}$, where $N_{\rm
trial}$ is the number of trial transit phases and $\epsilon_{\rm act}$ is
the actual probability (unknown a priori). Maintaining the same error
in the detection probability for differing $\epsilon_{\rm act}$
requires a variable number of trial phases. For each semimajor axis,
we first obtain an initial
estimate for the probability, $\epsilon_{\rm est}$, using $N_{\rm
trial}=100$. We then increase $N_{\rm
trial}$ until the probability converges to
$\sigma_{\epsilon}=\sqrt{\epsilon_{\rm
est}(1-\epsilon_{\rm est})/N_{\rm trial}}\leq 0.02$. The above procedure
systematically overestimates $\epsilon_{\rm act}$ when $\epsilon_{\rm
act}\ga 0.95$ and systematically underestimates $\epsilon_{\rm act}$
when $\epsilon_{\rm act}\la 0.05$. However, these errors
are of order the adopted $\sigma_{\epsilon}=0.02$ accuracy, and so
we neglect them here.
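A sketch of this adaptive procedure, with \texttt{inject\_and\_recover} standing in for one full injection-plus-BLS-recovery trial, is:
\begin{verbatim}
import numpy as np

def detection_probability(inject_and_recover, sigma_target=0.02, n_init=100):
    """Estimate the detection probability at fixed semimajor axis to a
    binomial standard error of sigma_target."""
    hits = sum(inject_and_recover() for _ in range(n_init))
    n_trial = n_init
    eps_est = hits / n_trial
    # Trials needed so that sqrt(eps (1 - eps) / N) <= sigma_target.
    n_needed = int(np.ceil(eps_est * (1.0 - eps_est) / sigma_target**2))
    while n_trial < n_needed:
        hits += int(inject_and_recover())
        n_trial += 1
    return hits / n_trial
\end{verbatim}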
In addition to a random orbital phase, assuming a random orientation
of the orbit requires taking into account an even distribution in
$\cos i$, where $i$ is the inclination of the orbit. Only a narrow
range of inclinations, $\cos i\leq (R_{\star}+R_{p})/a$, results in a
transit. Thus, we inject the transit with an even distribution in
$\cos i$ between $0\leq \cos i \leq (R_{\star}+R_{p})/a$.
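The corresponding random draw of the transit geometry for each trial can be written as (a sketch; \texttt{rng} is a NumPy random generator):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def draw_transit_geometry(r_star, r_p, a):
    """Random orbital phase and inclination for one injected transit:
    phase uniform in [0, 1); cos i uniform in [0, (R_star + R_p)/a],
    with all lengths in the same units."""
    phase0 = rng.uniform(0.0, 1.0)
    cos_i = rng.uniform(0.0, (r_star + r_p) / a)
    return phase0, cos_i
\end{verbatim}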
The previous discussion pertains to ensuring a prescribed accuracy at
fixed semimajor axis. However, the expected detection rate also
requires an integral over semimajor axis, which must be sampled at
high enough resolution to ensure convergence of the integral. We
calculate the probability at even logarithmic intervals, $\delta \log
(a/{\rm AU})=0.011$. In comparison to high-resolution, converged
calculations, this semimajor axis resolution results in an absolute
error in the detection probability integrated over the semimajor axis
of $\sigma_{\epsilon}=0.003$. We inject transits with semimajor axis
from the larger of 0.0035 AU and $1.5 R_{\star}$ to 0.83 AU. The
best-fit isochrone to the cluster CMD determines the parent star
radius.
Generating the light curve from the raw photometric measurements is
numerically time consuming. Thus, we inject the transit after
generating the light curve. This procedure has the potential to
systematically reduce or even eliminate the transit signal, because
generating the light curve and applying a seeing decorrelation tend to
``flatten'' a light curve. To quantify the significance of this
effect, we inject transits in the raw photometric measurements before
the light curve generation procedure on several stars in the sample.
Comparing the detection probability obtained by injecting transits
before light curve generation to the detection probability obtained by
injecting the transit after light curve generation reveals that injecting
the transit after generating the light curve overestimates the
detection probability by $\sim 0.03$. We decrease the calculated
probability at fixed period by 0.03 to account for this systematic
effect.
The 0.03 systematic overestimate in the detection probability
becomes increasingly important for correctly characterizing the
detection probability at long orbital periods. For instance, the
detection probability for a star of median brightness
will be overestimated by $>$15\% for orbital periods $>4.0$ day and
1.5 $R_{J}$ companions if this systematic effect is not taken into
account. The detection probability is overestimated by $>$50\% for
orbital periods $>$8.0 day without correction. The results for 1.0
$R_{J}$ companions are even more severe. The detection probability
would be overestimated by 50\% for periods beyond 1.8 day for a star
of median brightness without correction.
Based on the CMD of NGC 1245 \citep{BUR04}, this study contains light
curves for $\sim$ 2700 stars consistent with cluster membership.
Initially, we calculate the detection probability for 2 possible
companion radii: 1.0 and 1.5 $R_{J}$. For each star, on average we
inject 50000 transits for a single companion radius at 150 different
semimajor axes. In total, we inject and attempt to recover $\sim
2.7\times 10^{8}$ transits. Current processors allow injection and
attempted recovery in roughly 1 s per transit. A single processor
requires $\sim$3000 days for the entire calculation. Fortunately, the
complete independence of a transit injection and recovery trial allows
parallelization of the calculation. We accomplish a parallel
calculation via a server and client architecture. A server injects a
transit in the current light curve and sends it to a client for
recovery.
Based on the computing resources available, we employ two different
methods for communication between the server and clients. Using a
TCP/IP UNIX socket implementation for communication between the server
and clients allows access to $\sim$40 single-processor personal
workstations connected via a local area network within the department
of astronomy at The Ohio State University. Additionally, the
department of astronomy at The Ohio State University has exclusive
access to a 48 processor Beowulf cluster via the Cluster Ohio program
run by the Ohio Supercomputer Center. The Message Passing Interface
(MPI) libraries provide communication between the server and clients
on the Beowulf cluster. A Beowulf cluster belonging to the Korean
Astronomy Observatory also provided computing resources for this
calculation. C programming source code for either client-server
communication implementation is available upon request from the
author.
The light solid line in Figure~\ref{efffig} shows the detection
probability, $P_{\epsilon}(a,R_{p})$, for three representative stars
in order of increasing apparent magnitude from top to bottom and for
the two companion radii, 1.5 and 1.0 $R_{J}$, on the left and right,
respectively. In general, the probability nears 100\% completion for
orbital periods $\la 1.0$ day and then has a power-law falloff toward
longer orbital periods. The falloff in the detection probability toward longer orbital periods
partially results from the requirement of observing more than one transit.
The large drop in the detection probability
around 0.5 and 1.0 day orbital periods results from the selection
criteria we impose. The narrow, non-zero spikes in the detection
probability near the 0.5 and 1.0 day orbital periods result from
injecting a transit at this period, but the BLS method returns a
best-fit period typically at the $\sim$0.66 day alias.
\begin{figure}
\plotone{f8_CJB.eps}
\caption{Detection probability as a function of the orbital period
is shown as the {\it heavy solid line}. This is the product of the probability for a
transit to occur ({\it dashed line}) and the probability that an
injected transit meets the selection criteria ({\it light solid
line}). The panels from top to bottom show representative stars in
order of increasing apparent magnitude. The {\it left} panels give
results for a 1.5 $R_{J}$ companion. The {\it right} panels give
results for a 1.0 $R_{J}$ companion.\label{efffig}}
\end{figure}
Figure~\ref{efffig} shows the detection probability with 3.3 times
higher resolution in orbital period and a lower, 1\%, error in the
detection probability at fixed orbital period than the actual
calculation. Thus, the figure resolves variability in the detection
probability as a function of orbital period for probabilities
$\ga$1\%. However, such fine details have negligible impact on the
results.
\subsection{Calculating $P_{T,i}(a,R_{p})$}
The probability for a transit to occur is $P_{T}=(R_{\star}+R_{p})/a$.
This transit probability assumes the transit is equally detectable for
the entire possible range of orbital inclinations that geometrically
result in a transit. As $\cos i$ for the orbit approaches
$(R_{\star}+R_{p})/a$ the transit length and depth decreases,
degrading the transit S/N. We address this when computing $P_{\epsilon}$
by injecting the transit with an even distribution in $\cos i$ between
the geometric limits for a transit to occur. Thus, $P_{T}$ represents
the overall probability for a transit with high enough inclination to
begin imparting a transit signal, while the detailed variation of the
light curve signal for varying inclination takes place when
calculating $P_{\epsilon}$. $P_{T}$ is shown as the dashed light line
in Figure~\ref{efffig}. The heavy solid line in Figure~\ref{efffig}
is the product of $P_{\epsilon}$ and $P_{T}$.
\subsection{Calculating $P_{\rm mem}$}
The Monte Carlo calculation requires knowledge of the stellar
properties, and the given properties are only valid if the star is in
fact a bona fide cluster member. An estimate of the field-star
contamination from the CMD provides only a statistical estimate of the
cluster membership probability. Based on the study of the mass
function and field contamination in \citet{BUR04}, we estimate the
cluster membership probability, $P_{\rm mem}$, as a function of
stellar mass. In brief, we start with a subsample of stars based on
their proximity to the best-fit cluster isochrone (selection on
$\chi^{2}_{\rm mem}<0.04$, see \S\ref{trncands}). This sample
contains $N_{\star}\sim 2700$ potential cluster members, and the heavy
points in Figure~\ref{cmd} mark this cluster sample in the CMD. The
best-fit isochrone allows an estimate of the stellar mass for each
member of the cluster sample, and we separate the sample into mass
bins. Repeating this procedure on the outskirts of the observed field
of view, scaled for the relative areas, provides an estimate of the
field-star contamination in a given mass bin. We fit $P_{\rm mem}$,
given in discrete mass bins, with a smooth spline for
interpolation.
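Schematically, with hypothetical inputs (star counts per mass bin near the isochrone in the cluster region and in the area-scaled control region), the membership probability and its spline interpolation could be built as:
\begin{verbatim}
import numpy as np
from scipy.interpolate import UnivariateSpline

def membership_probability(mass_bins, n_cluster_region, n_control, area_ratio):
    """P_mem per mass bin from counts near the best-fit isochrone in the
    cluster region versus an outskirts control region (scaled by the
    ratio of areas), smoothed with a spline for interpolation in mass."""
    contamination = area_ratio * n_control
    p_mem = np.clip(1.0 - contamination / n_cluster_region, 0.0, 1.0)
    return UnivariateSpline(mass_bins, p_mem, k=3)   # call result(mass)
\end{verbatim}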
The solid line in Figure~\ref{memprob} shows $P_{\rm mem}$ as a
function of stellar mass. The corresponding probability is given on
the right-hand ordinate. The open histogram shows the distribution
of the potential cluster members as a function of mass. The lower
shaded histogram shows the product of the potential cluster members
histogram and $P_{\rm mem}$. This results in
effectively $N_{\star ,\rm eff}\sim$870 cluster members in total. For
reference, the corresponding apparent $I$-band magnitude is given along
the top.
\begin{figure}
\epsscale{1.3}
\plotone{f9_CJB.eps}
\caption{Distribution of the potential cluster members as a function
of stellar mass ({\it open histogram}). The {\it solid line} shows the
membership probability (right hand ordinate) as a function of stellar
mass. The {\it shaded histogram} shows the product of the potential
cluster member histogram and the cluster membership probability. The
corresponding apparent $I$-band magnitude is given along the
top.\label{memprob}}
\end{figure}
\section{Results}\label{results}
\subsection{Results Assuming a Power-law Orbital-period Distribution}
The previous section describes the procedure for calculating the
sensitivity of the survey to detect planetary companions as a function
of semimajor axis. The results from this calculation enable us to
place an upper limit on the fraction of cluster members harboring
close-in companions given the null result. However, calculating the
upper limit over a range of orbital periods necessitates assuming a
distribution of orbital periods for the planetary companions.
Radial velocity surveys characterize the distribution of extrasolar
planets in period as $dn\propto P^{-\gamma} dP$, with $0.7\la \gamma
\la 1.0$, corresponding to $dn\propto a^{-\beta} da$, with $0.5\la
\beta \la 1.0$ \citep{STE01,TAB02}. These studies fit the entire
range of orbital periods ranging from several days to several years.
More recently, after an increase in the number of extrasolar planet
discoveries, \citet{UDR03} confirm a shortage of planets with $10\la P
\la 100$ day orbits. Thus, the period distribution may take on
different values of $\gamma$ in the $P\la 10$ day and $P\ga 100$ day
regimes.
The initial extrasolar planet discoveries via the transit technique
had periods less than 3.0 days \citep{KON04}. The detection of these
``Very Hot Jupiters'' (VHJ) contrasted with the
results from radial-velocity surveys, which demonstrated a clear
paucity of planets with $P\la 3.0$ days. After accounting for the
strong decrease in sensitivity of field transit surveys with increasing
period, \citet{GAU05A} demonstrated the consistency between the apparent lack
of VHJ companions in the radial velocity surveys and their discovery
in transit surveys. They further demonstrated that VHJ appear
to be intrinsically much rarer than Hot Jupiters (HJ;
$3 \leq P/{\rm day} \leq 9$). We will therefore treat
VHJ and HJ as distinct populations.
Due to the incomplete knowledge of the actual
period distribution of extrasolar planets and its possible dependence
on the properties of the parent star, we provide upper limits assuming
an even logarithmic distribution of semimajor axis.
Thus, we assume a
form of the joint probability distribution of the semimajor axis and
$R_{p}$ given by
\begin{equation}
\frac{d^{2}p}{dR_{p} da}=k\delta(R_{p}-R_{p}') a^{-1}\label{uplimint},
\end{equation}
where $k$ is the normalization constant, $\delta$ is the Dirac delta
function, and $R_{p}'$ is the planet radius. We initially give results
for $R_{p}'=1.0$ and $1.5$ $R_{J}$. We follow \citet{GAU05A} and show
results for the HJ ($3.0<P<9.0$ day) and VHJ ($1.0<P<3.0$ day) ranges.
In addition, we show results for a more extreme population of
companions with $P_{\rm Roche}<P<1.0$ day, where $P_{\rm Roche}$ is
the orbital period at the Roche separation limit, which we designate
as Extremely Hot Jupiters (EHJ). Assuming a negligible companion mass,
the Roche period depends solely on the density of the companion.
Jupiter, Uranus, and Neptune have nearly the same $P_{\rm Roche}\sim
0.16$ day.
Figure~\ref{powerlawfig} shows the probability for detecting a
VHJ (1.0 day $\leq P \leq$ 3.0 day) companion with an even logarithmic
distribution in semimajor axis as a function of apparent $I$-band
magnitude. The left and right panels show results for a 1.5 and 1.0
$R_{J}$ companion, respectively. The top panels of
Figure~\ref{powerlawfig} show the probability for detecting an
extrasolar planet, $P_{\rm det}$, assuming $P_{\rm mem}=1.0$. The
bottom panels show $P_{\rm det}$ after taking into account $P_{\rm
mem}$. The results for 1.0 $R_{J}$ companions broadly scatter across
the full range of detection probability. However, the 1.5 $R_{J}$
companion results delineate a tight sequence in detection probability
as a function of apparent magnitude.
\begin{figure}
\epsscale{1.2}
\plotone{f10_CJB.eps}
\caption{Probability for transit detection as a function of the
apparent $I$-band magnitude assuming an even logarithmic distribution
in semimajor axis from 1.0$<P<$3.0 day. The {\it top} panels assume
$P_{\rm mem}=1.0$. The {\it left} panels show results for a 1.5
$R_{J}$ companion. The {\it right} panels show results for a 1.0
$R_{J}$ companion. The {\it bottom} panels are the same as the top
panels, but they take into account the membership probability $P_{\rm mem}$.\label{powerlawfig}}
\end{figure}
The 1.5 $R_{J}$ companion signal lies many times above the rms scatter
in the light curve (see Figure~\ref{magrms}). Thus, a single
measurement contributes a large fraction of the S/N required for
detection. In this limit, the observing window function mainly
determines the detection probability, and, as we show in
\S\ref{thyeffdisc}, the result is similar to that obtained with the
theoretical detection probability framework of \citet{GAU00}.
However, the 1.0 $R_{J}$ companion transit signal comes closer to the
detection threshold. \citet{PEP05} describe the sensitivity of a
transit survey as a function of planet radius. The sensitivity of a
transit survey depends weakly on $R_{p}$ until a critical radius is
reached, below which the S/N of the transit falls rapidly. The sensitivity of
the survey for 1.0 $R_{J}$ is near this threshold, hence the large
scatter in the detection probability.
With the detection probabilities for all stars in the survey for the
assumed semimajor axis distribution, we can calculate the expected
number of detections scaled by the fraction of cluster members with
planets. Thus, from the Poisson distribution, a null result
is inconsistent at the $\sim$95\% level when $N_{\rm
det}\sim 3$. This allows us to solve for the 95\% confidence upper
limit on the fraction of cluster members with planets using Eq.\ \ref{eqn:ndet}.
This gives,
\begin{equation}
f_{\star} \le 3.0/\sum_{i=1}^{N_{\star}}P_{\rm det,i}\qquad {\rm (95\%~c.l.)}.\label{uplimeq}
\end{equation}
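For illustration, the following short Python sketch evaluates
Equation~\ref{uplimeq}; the per-star detection probabilities here are
hypothetical stand-ins for the Monte Carlo values computed in this survey.
\begin{verbatim}
import numpy as np

# Illustrative sketch only: hypothetical per-star detection probabilities
# stand in for the Monte Carlo values P_det,i computed in this survey.
p_det = np.random.uniform(0.0, 0.3, size=870)

# Eq. (uplimeq): a null result excludes >~3 expected detections at ~95% c.l.
f_star_upper = 3.0 / p_det.sum()
print("95% c.l. upper limit on the fraction of members with planets:",
      f_star_upper)
\end{verbatim}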
Figure~\ref{uplimit} shows the 95\% confidence upper limit on the
fraction of stars with planets in NGC 1245 for several ranges of
orbital period. The solid and dashed lines give results for 1.5 and 1.0
$R_{J}$ companions, respectively. For 1.5 $R_{J}$ companions we limit
the fraction of cluster members with companions to $<$1.5\%, $<$6.4\%,
and $<$52\% for EHJ, VHJ, and HJ companions, respectively. For 1.0
$R_{J}$ companions, we find $<$2.3\% and $<$15\% have EHJ and VHJ
companions, respectively.
\begin{figure}
\plotone{f11_CJB.eps}
\caption{Upper limit (95\% Confidence) on the fraction of stars in the
cluster with companions for several ranges in orbital period assuming
an even logarithmic distribution in semimajor axis. The {\it solid
lines} show results for a 1.5 $R_{J}$ companion. The {\it dash lines}
show results for a 1.0 $R_{J}$ companion.\label{uplimit}}
\end{figure}
The detection probability decreases rapidly with orbital period beyond
1.0 day. As a result, the survey does not reach the sensitivity
needed to place an interesting upper limit on 1.0 $R_{J}$ companions beyond $P>3.0$
day.
We further divide the VHJ period range and show upper limits for
the period ranges
$1.0<P/{\rm day}<2.0$ and $2.0<P/{\rm day}<3.0$, which we denote as $P_{12}$ and $P_{23}$.
For 1.5 $R_{J}$ companions we limit $f_{\star}$ to $<$5.2\%
and $<$11\% for $P_{12}$ and $P_{23}$, respectively. For 1.0 $R_{J}$
companions we limit $f_{\star}$ to $<$19\% and $<$47\% for $P_{12}$
and $P_{23}$, respectively. We also divide the HJ period range and
limit $f_{\star}$ for 1.5 $R_{J}$ companions in the $3.0<P/{\rm day}<6.0$ range to
$<$36\%.
\subsection{Results for Other Companion Radii\label{uplimradsec}}
Due to computing limitations we calculate detection probabilities for
the entire cluster sample only for 1.5 and 1.0 $R_{J}$ companions. In
\S\ref{uplimiterrsec} we show that an upper limit determination using
a subsample of the stars with size $N_{\star ,\rm SS}=100$
approximates the results based on the entire stellar sample. Thus, we
calculate upper limits for a variety of companion radii using
$N_{\star ,\rm SS}=100$ randomly chosen stars in the sample. Instead
of showing upper limit results over a range of orbital periods, we
derive upper limits at fixed period by replacing the semimajor axis
distribution with a $\delta(a-a_{o})$ function in
Equation~\ref{uplimint}. To obtain results at fixed period, each star
has a different $a_{o}$ that depends on the stellar mass.
Figure~\ref{uplimrad} shows the upper limit on the fraction of stars
with planets in the survey as a function of orbital period. The lines
show results for various values of the companion radius in terms of
$R_{J}$ as indicated by the label next to each line along the top of
the figure. The shaded regions denote orbital periods removed by the
selection criteria in order to eliminate false-positive transit
detections that occur around the diurnal period and 0.5 day alias. At
smaller companion radii, the transit S/N$\propto R_{p}^{2}$ drops
quickly. Toward larger companion radii the S/N of the transit
saturates and the observational window function increasingly dominates
the survey effectiveness. The survey cannot detect companions with
$R_{p}>3.5 R_{J}$ as the transit/eclipse becomes too deep given the
removal of measurements that deviate by more than 0.5 mag from the
mean light-curve level.
\begin{figure}
\plotone{f12_CJB.eps}
\caption{Upper limit (95\% Confidence) on the fraction of stars in the
cluster with companions for several companion radii as labeled along the
top. The result for a 1.0 $R_{J}$ companion is based on the entire
sample, whereas the results for the other companion radii are based on
a subsample of $N_{\star}=100$ stars. The shaded regions denote orbital periods
removed by the selection criteria in order to eliminate false-positive
transit detections that occur around the diurnal period and 0.5 day
alias.\label{uplimrad}}
\end{figure}
\section{Error in the Upper Limit}\label{uplimiterrsec}
In this section we discuss several sources of error present when
determining an upper limit on the fraction of stars with planets.
\subsection{Error When Using a Subsample}
Computing power limitations discourage calculating detection
probabilities over the entire cluster sample. Thus, we first
characterize the error associated with determining an upper limit
using only a subset of the entire cluster sample. Starting with
Equation~\ref{uplimeq}, we derive an error estimate when using a
subsample by the following means. Replacing the summation over
$P_{i,\rm det}$ with the arithmetic mean, $\langle P_{\rm det} \rangle$,
Equation~\ref{uplimeq} becomes
\begin{equation}
f_{\star}=3.0/(N_{\star}\langle P_{\rm det} \rangle)\label{uplimave}.
\end{equation}
By propagation of errors, the error in the upper limit is given by
\begin{equation}
\sigma_{f}=\frac{3.0}{N_{\star}}\frac{\sigma_{\langle P \rangle}}{\langle P_{\rm det} \rangle^{2}},\label{uplimerreq}
\end{equation}
where $\sigma_{\langle P \rangle}$ is the error in the mean detection
probability. The error in the mean detection probability scales as
$\sigma_{\langle P \rangle}=\sigma_{P}/\sqrt{N_{\star ,\rm SS}}$, where
$\sigma_{P}$ is the intrinsic standard deviation of the distribution of
$P_{i,\rm det}$ values and $N_{\star ,\rm SS}$ is the size of the subsample.
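The scaling of this estimate can be checked numerically; the following
Python sketch (again with hypothetical detection probabilities) compares
the analytic error of Equation~\ref{uplimerreq} to the empirical scatter of
upper limits computed from random subsamples.
\begin{verbatim}
import numpy as np

# Illustrative sketch only: hypothetical per-star detection probabilities.
rng = np.random.default_rng(0)
p_det = rng.uniform(0.0, 0.3, size=870)
N_star, N_ss = p_det.size, 100

# Analytic estimate, Eq. (uplimerreq), with sigma_<P> = sigma_P / sqrt(N_ss)
sigma_f_model = 3.0 / N_star * p_det.std() / np.sqrt(N_ss) / p_det.mean()**2

# Empirical scatter of upper limits from random subsamples of size N_ss
trials = [3.0 / (N_star * rng.choice(p_det, N_ss, replace=False).mean())
          for _ in range(1000)]
print(sigma_f_model, np.std(trials))
\end{verbatim}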
We empirically test this error estimate by calculating the upper limit
with subsamples of increasing size. The small points in
Figure~\ref{uplimiterr} show the upper limit on the fraction of stars
with planets as a function of the subsample size. The upper limit
calculation assumes an even logarithmic distribution of semimajor axis
for companions with $1.0\leq P\leq 3.0$ day for 1.5 and 1.0 $R_{J}$
radius companions, top and bottom panels, respectively. Neighboring
columns of upper limits differ by a factor of 2 in the subsample size.
We randomly draw stars from the full sample without replacement,
making each upper limit at fixed sample size independent of the
others. The dashed line represents the upper limit based on the full
cluster sample.
\begin{figure}
\plotone{f13_CJB.eps}
\caption{Estimates for the upper limit (95\% Confidence) on
the fraction of stars in the cluster as a function of the sample size
employed in making the estimate are shown as {\it small points}.
We have assumed an even logarithmic distribution in periods between
$1.0<P/{\rm day}<3.0$. The {\it dash
line} shows the upper limit based on the entire sample. The average
upper limit at fixed sample size is given by {\it square points}. The
sequence of {\it open stars} gives the standard deviation in the
distribution of upper limits at fixed sample size. The {\it solid
line} shows the error model estimate for the standard deviation in the
upper limit. The {\it top} panel gives results for a 1.5 $R_{J}$ companion, and the {\it bottom} panel gives results for a 1.0 $R_{J}$
companion.\label{uplimiterr}}
\end{figure}
The distribution of upper limits around the actual value possesses a
significant tail toward higher values. This tail results from the
significant number of stars with $P_{\rm tot}=0.0$. At fixed sample
size, the large square point represents the mean upper limit.
Using subsample sizes of $N_{\star ,\rm SS}\lesssim 20$ tends
to systematically overestimate the true upper limit. The open star symbol
represents the $1-\sigma$ standard deviation of the distribution at
fixed sample size. The solid line shows the error estimate from
Equation~\ref{uplimerreq}. Despite the non-Gaussian nature of the
underlying distribution, the error estimate in the upper limit
roughly corresponds with its empirical determination, especially toward
increasing $N_{\star ,\rm SS}$ where the systematic effects become
negligible. From Figure~\ref{uplimiterr}, we conclude that
adopting $N_{\star ,\rm SS}\ga 100$
provides adequate control of the random and systematic errors in
calculating an upper limit, without becoming numerically prohibitive.
This verifies the procedure for estimating the upper limit for a
variety of companion radii in \S\ref{uplimradsec}.
\subsection{Error in Determining Sample Size}
Up to this point, we have mainly addressed sources of error directly
associated with determining $P_{\epsilon}$. However, the upper limit
error budget contains an additional source of error from uncertainties
in determining $P_{\rm mem}$. This additional source of error
directly relates to the accuracy in determining the number of single
main-sequence stars in the survey.
We characterize this error as follows. At fixed orbital period,
$\langle P_{\rm det} \rangle=\langle P_{\rm mem} P_{\epsilon}
P_{T} \rangle$. Given that $P_{\rm mem}$ is nearly independent of the other
terms, the previous average is separable, such that $\langle P_{\rm det} \rangle=\langle P_{\rm mem} \rangle \langle P_{\epsilon} P_{T} \rangle $. This
separation changes the derived upper limit by a negligible 0.3\%
relative error. The separation allows us to rewrite
Equation~\ref{uplimave} as
\begin{equation}
f_{<,95}=3.0/(N_{\star ,\rm eff}\langle P_{\epsilon} P_{T} \rangle),
\end{equation}
where $N_{\star ,\rm eff}=N_{\star}\langle P_{\rm mem} \rangle$ is
the effective number of cluster members in the sample after taking
into account background contamination. Thus, $N_{\star ,\rm eff}$
carries equal weight with $\langle P_{\epsilon} P_{T} \rangle$ in the upper-limit
error budget.
The ability to determine $N_{\star ,\rm eff}$ accurately provides an
advantage for transit surveys toward a rich stellar cluster rather
than toward a random galactic field. Even though methods based on the
cluster CMD statistically determine cluster membership, they
concentrate on a narrow main-sequence region to search for planets
where the cluster counts significantly outweigh the background
contamination counts. By concentrating on the main sequence of a
cluster, this survey has only $\sim 68\%$ contamination by background
stars. In contrast, random galaxy fields contain $\gtrsim 90\%$
contamination by subgiant and giant stars for V$<$11 surveys
\citep{GOU03}. Overall, $N_{\star ,\rm eff}$ has an 8\% error, which
propagates to a relative error of 8\% in the upper limit. The error
in $N_{\star ,\rm eff}$ comes from subtracting the control field star
counts outside a 12.7$\arcmin$ radius of the cluster center from the
star counts observed within this radius. The error is larger
than the Poisson error of $N_{\star ,\rm eff}=870$ since the control
field star count is scaled to match the larger cluster field area.
\subsection{Error Due to Blends and Binaries\label{binstat}}
The final source of error we address results from stellar blends due
to physical binaries or chance line-of-sight associations. The
additional light from an unresolved blend dilutes a transit signal
from one component of the blend. Thus, we overestimate the ability to
detect a transit around blends. However, a compensatory effect arises
since the extra light from a blend results in an overestimate in the
stellar mass and radius, which in turn results in modeling a shallower
transit. Modeling such details is not possible without knowing the
binary nature for each object, but we can estimate the number of stars
affected by assuming binary star statistics as measured in the field.
Due to low stellar crowding, we estimate chance blends have a
negligible effect in comparison to physically associated binaries
\citep{KIS05}. Finding charts in Figure~\ref{find} demonstrate the
stellar crowding conditions of the survey.
The latest Coravel radial velocity survey dedicated to F7-K field
dwarfs \citep{HAL04} and the visual binary and common proper motion
pairs survey of \citet{EGG04} provide the basis for the binary star
estimates. Overall they find a binary frequency of 56\% for systems
with $\log (P/{\rm day})\leq 6.31$. However, due to the strong dependence of
luminosity on stellar mass, only systems with mass ratio $q>0.6$
contribute significant light to dilute the transit signal. For
lower mass ratios the lower mass component contributes $<20\%$ of the
total system flux. When taking binaries across the entire range of
orbital periods, the mass-ratio distribution peaks near $q\sim 0.2$ and
slowly drops toward higher $q$ \citep{DUQ91}. From Figure 10 in
\citet{DUQ91}, only $\sim 20\%$ of their binary systems have $q>0.6$.
Thus, if the binary statistics for the cluster match those of the field
dwarfs, transit dilution occurs for $\sim 11\%$ of the stellar sample.
The radial velocity survey for binaries in the Pleiades and Praesepe
clusters reveals consistency with the frequency of binaries in the
field surveys \citep{HAL04}.
In principle, the data from this survey can also answer whether the
binary statistics of the cluster match those of the field dwarfs. However,
the statistical methods and selection criteria described in this study
do not optimally detect interacting and eclipsing binaries.
Additionally, in order to reach planetary companion sensitivities, we
remove light-curve deviations beyond 0.5 mag as discrepant, which
removes the deep eclipses.
\subsection{Overall Error}
The errors involved with determining the number of cluster members
dominate the error budget in determining the upper limit. However,
as discussed in \S\ref{effcalc}, this is only true if one quantifies
and corrects for the systematic overestimate in detection probability
due to a reduction in the transit signal from the procedures of
generating and correcting the light curve. For instance, at the
median stellar brightness for this survey, the detection probability is
overestimated by $>$15\% for orbital periods $>$4.0 day and $>$1.0 day
for 1.5 and 1.0 $R_{J}$ companions, respectively, without correction.
Since we characterize this systematic effect, the error
in determining the number of cluster members dominates the error
budget.
Additionally, the potential for a large contamination of binaries
diluting the transit signal necessitates an asymmetrical error bar.
We roughly quantify the error estimate resulting from binary
contamination from the field dwarf binary statistics. From the arguments
in the previous section, we adopt 11\% as a $1-\sigma$ systematic
fractional error due to binary star contamination. Overall, combining
this systematic error with the 7\% fractional error in determining the
cluster membership, upper limits derived from the full stellar sample
contain a $^{+13\%}_{-7\%}$ fractional error.
\section{Discussion}\label{discussion}
Along with this work, several other
transit surveys have quantified their detection probability from actual
observations in an attempt to constrain the fraction of stars with
planets or quantify the consistency with the solar neighborhood radial
velocity planet discoveries \citep{GIL00,WEL05,MOC05,HID05,HOO05}.
Unfortunately, a direct comparison of upper limits from this work with these
other transit surveys cannot be made. Until this study, none of
the previous studies have quantified the random or systematic errors
present in their techniques in sufficient detail to warrant a
comparison. Additionally, previous studies do not
have quantifiable
selection criteria that completely eliminate false-positive transit
detections due to systematic errors in the light curve, a necessary
component of an automated Monte Carlo calculation.
\subsection{Initial Expectations vs. Actual Results}
In the meantime, we can discuss how the initial estimate of finding
two planets, assuming 1\% of stars have $R_{J}$ companions evenly
distributed logarithmically between 0.03 and 0.3 AU \citep{BUR03},
compares to the results from this study, which indicate that we
expected to detect only 0.1 planets. The initial estimates for the
detection rate are based on the theoretical framework of
\citet{GAU00}. Given a photometric noise model, observational window,
and S/N of the transit selection criteria, the theoretical framework
yields an estimate of the survey detection probability. This
theoretical detection probability coupled with a luminosity function
for the cluster determines the expected number of detections. As we
show next, the initial estimates did not account for the light curve
noise floor or detector saturation, and contain optimistic estimates for
the sky background and luminosity function. In addition, the initial
estimates could not have accounted for the 50\% reduction in signal
for the majority of the light curves due to the detector error
discussed in \S\ref{sec:noise}. Finally, as discussed in detail by
\citet{PEP05} and demonstrated explicitly here, the detection
probability is very sensitive to the precise error properties near the
critical threshold of detection, which for this survey is just reached
for $R_J$ companions.
The top panels of Figure~\ref{effthy} compare the detection
probability of the Monte Carlo calculation of this study to the
initial theoretical estimate. The small points replicate the
Monte Carlo results from the top panels of Figure~\ref{powerlawfig},
while the dashed line shows the detection probability based on the
initial theoretical expectations. The initial theoretical
expectations clearly overestimate the detection probability. The
bright end continues to rise because the effects of detector
saturation and the photometric noise floor were ignored. The faint end does not
cut off because the sky brightness was underestimated. The initial estimate
of the sky brightness, 19.5 mag arcsec$^{-2}$, compares optimistically
to the range of sky brightnesses encountered during the actual
observations. The sky varied between 17.5 and 19.0 mag arcsec$^{-2}$
over the course of the observations. The full lunar phase took place
near the middle of the observation, and the Moon came within 40$\degr$
of the cluster when nearly full.
\begin{figure}
\epsscale{1.3}
\plotone{f14_CJB.eps}
\caption{{\it Top}: Probability for transit detection as a function of
the apparent $I$-band magnitude assuming an even logarithmic
distribution in semimajor axis from 1.0$<P<$3.0 day and $P_{\rm
mem}=1.0$ using the Monte Carlo calculation of this study ({\it small
points}). The binned average of the Monte Carlo results is denoted by
{\it open stars}. The {\it dash line} shows the expected probability
for transit detection based on a theoretical calculation prior to this
survey. The {\it dot dash line} shows the theoretical probability for
transit detection assuming a photometric noise model appropriate for
the survey. The {\it solid line} shows the theoretical probability
for transit detection with an accurate photometric noise model for the
survey and including the effects of limb darkening. The {\it left}
panel shows 1.5 $R_{J}$ companion results. The {\it right} panel
shows 1.0 $R_{J}$ companion results. {\it Bottom}: Shows the
theoretical probability for transit detection allowing each star of
the survey to have its empirically determined photometric noise and
including the effects of limb darkening ({\it small points}). The
open stars are reproduced from the top panels.\label{effthy}}
\end{figure}
The initial estimate for the cluster
luminosity function simply selected cluster members by tracing, by eye, lines
that bracket the main sequence in the CMD. This crude
technique led to an estimated 3200 cluster members down to $I\sim$20.
A careful accounting of the field star contamination results in only
$\sim$870 cluster members in the survey. The luminosity function
overestimate, together with the optimistic expected sensitivity to transits around
the bright and faint cluster members, leads to a factor of 4-5
overestimate in the number of cluster members in the survey.
Additionally, the factor of 4-5 overestimate of the initial detection
probability, when compared to the binned average detection probability of
the Monte Carlo results (open stars in Figure~\ref{effthy}), easily
accounts for the factor of 20 difference in the overall number of expected
detections (for $R=R_J$).
\subsection{Improving Theoretical Expectations\label{thyeffdisc}}
Clearly, accurate and realistic transit detection statistics require
more detailed analysis than these early estimates, and more careful
theoretical work has already been done \citep{PEP05}. In the case of
an open cluster, delineating cluster membership by tracing the main
sequence in the CMD overestimates the number of cluster members. A
careful subtraction of the field contamination is necessary in order
to extract an accurate cluster-member count.
A photometric noise model that accurately reflects the quality of
observations is the next step in correctly calculating a theoretical
detection probability. From Figure~\ref{magrms}, we estimate the
actual photometric noise present in the data. This includes the
proper sky measurement and systematic floor in the photometric
precision. With a noise model similar to the lower solid line in
Figure~\ref{magrms}, we recalculate the theoretical detection
probability. The dot dash line in Figure~\ref{effthy} shows that the
resulting detection probability still overestimates the Monte Carlo
results. However, it does agree with the faint-end cutoff of the
Monte Carlo calculation. We impose the bright-end cutoff due to
saturation effects at the same magnitude as the observed increase in
light curve rms as shown in Figure~\ref{magrms}.
For these results we include an additional effect not taken into
account by \citet{GAU00}. We multiply the transit S/N selection
criteria, Equation 5 of \citet{GAU00}, by $\sqrt{{\rm max}(N_{\rm
obs},1.7)}$, where $N_{\rm obs}$ is the typical number of transits
detected throughout the observing run. The $N_{\rm obs}=1.7$ floor in
this factor corresponds to the requirement of observing the transit
twice multiplied by the observing efficiency. For simplicity, we take
$N_{\rm obs}=N_{\rm tot}/P\times 0.2$, where $N_{\rm tot}=16$, the
length of the observing run in days, and the factor of 0.2 accounts
for the actual observational coverage encountered during the run.
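For reference, the multiplicative factor applied to the selection
criterion of \citet{GAU00} can be evaluated as in the short sketch below,
which simply uses the run parameters quoted above.
\begin{verbatim}
import numpy as np

# Sketch of the scaling applied to the single-transit S/N criterion
# (Eq. 5 of Gaudi 2000), using the run parameters quoted in the text.
N_tot = 16.0        # length of the observing run [days]
coverage = 0.2      # fractional observational coverage during the run
periods = np.array([1.0, 2.0, 3.0, 6.0, 9.0])   # orbital periods [days]
N_obs = N_tot / periods * coverage   # typical number of observed transits
factor = np.sqrt(np.maximum(N_obs, 1.7))
print(dict(zip(periods, factor.round(2))))
\end{verbatim}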
Given that the theoretical calculation still overestimates the Monte Carlo
results, to increase the realism of the theoretical detection
probability, we include a linear limb-darkening law, which effectively
weakens the transit depth. We solve for the factor $G$, Equation 6 of
\citet{GAU00}, assuming a linear limb-darkening parameter, $\mu=0.6$,
for all stars. The inclusion of limb darkening significantly impacts
the theoretical detection probability as the solid line in
Figure~\ref{effthy} demonstrates. Although the theoretical detection
probability still overestimates the upper envelope of results from the
Monte Carlo calculation, the level of agreement, after including an
accurate photometric noise model and limb darkening, shows significant
improvement over the initial estimates.
Despite the improved agreement, the Monte Carlo detection probability
calculation shows significant scatter at fixed magnitude. The
theoretical probability treats all stars at fixed
magnitude as having the same noise properties. With the theoretical
detection probability we can address whether the scatter in detection
probability at fixed magnitudes results from the observed scatter in
noise properties at fixed magnitude as shown in Figure~\ref{magrms}.
Thus, we calculate a theoretical detection probability for each star
individually using the measured rms in the light curve for each star
to determine the theoretical transit S/N selection criteria using Equation 5 of
\citet{GAU00}. The small points in the bottom panels of
Figure~\ref{effthy} show the resulting theoretical detection probability.
Some of the scatter in detection probability results from the scatter
in noise properties as a function of magnitude. The heavy star points
represent the average Monte Carlo detection probability in 0.25
magnitude bins. In the case of the 1.5 $R_{J}$ companions, the signal
is large in comparison to the photometric noise. The left panels of
Figure~\ref{effthy} demonstrate that the theoretical detection probability
overestimates the Monte Carlo detection probability by only 20\%.
However, the closer the transit signal approaches the systematic and
rms noise, the more strongly the theoretical detection probability
overestimates the actual detection probability. In the case of 1.0
$R_{J}$ companions (right panels of Figure~\ref{effthy}), the
theoretical calculation overestimates the Monte Carlo results by 80\%.
Thus, we urge caution when relying on a theoretical detection
probability when the survey is near the critical threshold for transit
detection. Such is the case for 1.0 $R_{J}$ companions in this survey.
\subsection{Planning Future Surveys}
Even though the theoretical calculation overestimates the absolute detection
probability by a factor of $<$2, tests on a small sample of stars
with the Monte Carlo calculation reveal it provides a much higher
relative accuracy. Thus, the computationally efficient theoretical
calculation allows us to examine the relative change in the detection
probability for a given change in survey parameters. For planning
future surveys it is essential to decide between increasing the number
of stars by observing another cluster or improving the detection
probability by increasing the length of observations on a single
cluster. As shown in \S\ref{results}, the upper limit scales inversely
with the sample size; thus, keeping everything else constant,
increasing the sample size by a factor of 2 improves the
upper limit by a factor of 2.
Using the theoretical detection probability framework, we can quantify
the improvement in sensitivity for a survey twice as long. We assume
a survey twice as long consists of an observing window identical to
the current survey for the first half and repeats the observing window
of the current survey for the latter half. The upper limit improves
only by a factor of 1.3 for a logarithmic distribution of VHJ planets.
However, the upper limits for HJs with 3.0 to 9.0 day orbital periods
decrease by a factor of 2.6. Thus, not only is it more efficient to
observe this cluster twice as long, but the analysis of \citet{GAU05A}
reveals a 5-10 times larger HJ population than the VHJ population.
This strongly suggests transit surveys with a single observing site
require month long runs for maximum efficiency in detecting HJ
companions.
Figure~\ref{effthy} reveals little improvement in the detection
probability occurs for increasing the photometric precision, at least
for 1.5 $R_{J}$ companions. To first order, the photometric precision
determines the faint-end cutoff in the detection probability. Thus, a
lower sky background or improved photometric precision predominantly
affects the number of stars in the survey rather than the detection
probability. However, improving the photometric precision does lead
to increasing the sensitivity for smaller radius companions. In the
case of 1.0 $R_{J}$ companions, the rms in the light curve typically
is $\lesssim$1.8 times lower than the transit signal. As shown in the
previous section, the theoretical detection probability breaks down
for such low precision. In the case of 1.5 $R_{J}$ companions, the
rms in the light curve typically is $\lesssim$4 times lower than the
transit signal. Thus, for the 1.0 $R_{J}$ results to reach the same
sensitivity as the 1.5 $R_{J}$ results, improvement in the light curve
rms is necessary until the transit S/N is above a critical threshold
when the detection probability is weakly dependent on $R_{p}$
\citep{PEP05}.
According to a recent review of radial velocity detected planets,
$1.2\pm 0.3\%$ of solar neighborhood stars have HJ companions
\citep{MAR05}. This survey of NGC 1245 reached an upper limit of 52\%
of the stars having 1.5 $R_{J}$ HJ companions. As mentioned
previously, a survey lasting twice as long can reduce this upper limit
to 21\%. Reaching similar sensitivity as the radial velocity results
requires observing additional clusters in order to increase the number
of stars in the sample. This survey has $\sim 870$ cluster members
and $\sim 740$ of them have nonzero detection probability for 1.5
$R_{J}$ VHJ companions. Hence a total sample size of $\sim 7400$
dwarf stars observed for a month will be needed to help constrain the
fraction of stars with planets to a 2\% level (comparable to radial
velocity results). Assuming that the observed HJ frequency of $\sim 1\%$
remains valid for a variety of stellar
environments, we expect to detect one planet every 5000 dwarf stars
observed for a month. Results for 1.0 $R_{J}$ companions without
substantial improvement in the photometric precision likely will
require a small factor larger sample size.
\section{Conclusion}\label{conclusion}
In this study we complete the analysis of a 19-night search for
transiting extrasolar planets orbiting members of the open cluster NGC
1245. An automated transit search algorithm with quantitative
selection criteria finds six transit candidates; none are bona
fide planetary transits. Thus,
this work also details the procedure for analyzing the null-result
transit search in order to determine an upper limit on the fraction of
stars in the cluster harboring close-in $R_{J}$ companions. In
addition, we outline a new differential photometry technique that
reduces the level of systematic errors in the light curve.
A reliable upper limit requires quantifiable transit selection
criteria that do not rely on visual, qualitative judgments of the
significance of a transit. Thus, we develop completely quantitative
selection criteria that enable us to calculate the detection
probability of the survey via Monte Carlo techniques. We inject
realistic limb-darkened transits in the light curves and attempt their
recovery. For each star we inject 100,000 transits at a variety of
semimajor axes, orbital inclination angles, and transit phases,
to fully map the detection probability for 2700 light curves
consistent with cluster membership based on their position in the CMD.
After characterizing the field contamination, we conclude the sample
contains $\sim$870 cluster members.
When calculating a 95\% confidence upper limit on the fraction of
stars with planets, we assume companions have an even logarithmic
distribution in semimajor axis over several ranges of orbital period.
We adopt the period ranges as outlined by \citet{GAU05A}, for HJ and
VHJ companions, and an as-yet-undetected population with $P<1.0$
day, which we denote as Extremely Hot Jupiters (EHJ). For NGC 1245,
we limit the fraction of cluster members with 1.0 $R_{J}$ companions
to $<$3.2\% and $<$24\% for EHJ and VHJ companions, respectively. We
do not reach the sensitivity to place any meaningful constraints on
1.0 $R_{J}$ HJ companions. For 1.5 $R_{J}$ companions we limit the
fraction of cluster members with companions to $<$1.5\%, $<$6.4\%, and
$<$52\% for EHJ, VHJ, and HJ companions, respectively.
We also fully characterize the errors associated with calculating the
upper limit. We find the overall error budget separates into two
equal contributions from error in the total number of single dwarf
cluster members in the sample and the error in the detection
probability. After correcting the detection probability for
systematic overestimates that become increasingly important for
detecting transits toward longer orbital periods (see
\S\ref{effcalc}), we conclude that random and systematic errors in
determining the number of single dwarf stars in the sample dominate the
error budget. \S\ref{results} details the error analysis, and
overall, we assign a $^{+13\%}_{-7\%}$ fractional error in the
upper limits.
In planning future transit surveys, we demonstrate that observing NGC 1245
for twice as long will reduce the upper limits for the important HJ
period range more efficiently than observing an additional cluster of
similar richness as NGC 1245 for the same length of time as this data
set. To reach a $\sim$ 2\% upper limit on the fraction of stars with
1.5 $R_{J}$ HJ companions, where radial velocity surveys currently measure
1.3\% \citep{MAR05}, we conclude a total sample size of $\sim 7400$
dwarf stars observed for a month will be needed. If 1\% of stars have
1.5 $R_{J}$ HJ extrasolar planets, we expect to detect one planet
every 5000 dwarf stars observed for a month. Results for 1.0 $R_{J}$
companions without substantial improvement in the photometric
precision likely will require a small factor larger sample size.
\acknowledgements This publication was not possible in a timely manner
without the gracious donation of computing resources by the following
individuals: D. An, N. Andronov, M. Bentz, E. Capriotti, J. Chaname,
G. Chen, X. Dai, F. Delahaye, K. Denney, M. Dietrich, S. Dong,
S. Dorsher, J. Escude, D. Fields, S. Frank, H. Ghosh, O. Gnedin,
A. Gould, D. Grupe, J. Guangfei, C. Onken, J. Marshall, S. Mathur,
C. Morgan, N. Morgan, S. Nahar, J. Pepper, B. Peterson, J. Pizagno,
S. Poindexter, J. Prieto, B. Ryden, A. Steed, D. Terndrup, J. Tinker,
D. Weinberg, R. Williams, B. Wing, J. Yoo. We thank C. Han for the
donation of supercomputing resources belonging to the Korea Astronomy
Observatory and Astrophysical Research Center for the Structure and
Evolution of the Cosmos (ARCSEC) of Korea Science and Engineering
Foundation (KOSEF) through the Science Research Center (SRC) program.
This publication makes use of supercomputer resources through the
Cluster Ohio Project Rev3, an initiative of the Ohio Supercomputer
Center, the Ohio Board of Regents, and the OSC Statewide Users Group.
This work was supported by NASA grant NAG5-13129 and a Menzel
Fellowship from the Harvard College Observatory.
\begin{figure}
\plotone{fa1_CJB.eps}
\caption{The lines show $\Delta\chi^2_{bc}$, the difference in $\chi^2$ between a boxcar fit to a
planetary transit across a limb-darkened star and the exact model fit,
normalized by $\Delta \chi^2_0$, the difference in $\chi^2$ between
the exact model fit and a constant flux fit to the light curve.
Each band is for a different planet/star radius
ratio $R_p/R_*$, and the width of the band shows the variation in
$\Delta\chi^2_{bc}/\Delta \chi^2_0$ for a range of linear limb-darkening parameters
$u_1=0.0-0.4$.}
\label{fig:a1}
\end{figure}
|
1,314,259,995,662 | arxiv |
\section{Introduction}
Suppose $f: {\mbox{\bf R}}^n \to {\mbox{\bf R}}$ is a convex function, and
$\alpha \in {\mbox{\bf R}}$. We refer to the function $\min\{f(x),\alpha\}$
as a \emph{clipped convex function}.
In this paper we consider the problem of minimizing a sum of
clipped convex functions,
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & f_0(x) + \sum_{i=1}^m \min\{f_i(x),\alpha_i\},
\end{array}
\label{eq:main_formulation}
\end{equation}
with variable $x\in{\mbox{\bf R}}^n$, where
$f_0:{\mbox{\bf R}}^n \to {\mbox{\bf R}} \cup \{+\infty\}$
and $f_i:{\mbox{\bf R}}^n \to {\mbox{\bf R}}$ for $i=1,\ldots,m$ are closed proper convex functions,
and $\alpha_i\in{\mbox{\bf R}}$ for $i=1,\ldots,m$.
We use infinite values of $f_0$ to encode constraints on $x$, {\it i.e.}, to
constrain $x \in \mathcal X$ for a closed convex set $\mathcal X$
we let $f_0(x)=+\infty$ for all $x \not \in \mathcal X$.
When $f_i(x) > \alpha_i$, the value of the $i$th term in the sum
is \emph{clipped} to $\alpha_i$, which limits how large each term
in the objective can be.
Many practical problems can be formulated as instances
of~\eqref{eq:main_formulation};
we describe a few in~\S\ref{sec:applications}.
\paragraph{NP-hardness.}
In general, problem~\eqref{eq:main_formulation} is nonconvex and as a result
can be very difficult to solve.
Indeed,~\eqref{eq:main_formulation} is NP-hard.
We show this by giving a reduction
of the subset sum problem to an instance of~\eqref{eq:main_formulation}.
The subset sum problem involves determining whether or not there exists a nonempty subset
of a given set of integers $a_1,\ldots,a_n$ that sums to zero.
The optimal value of the problem
\begin{equation*}
\begin{array}{ll}
\mbox{minimize} & (a^Tx)^2 - n/4 + \sum_{i=1}^n \min\{x_i^2, 1/4\} + \min\{(x_i-1)^2, 1/4\} \\
\mbox{subject to} & \mathbf 1^T x \geq 1,
\end{array}
\end{equation*}
which has the form~\eqref{eq:main_formulation},
is zero if and only if there exists $x$ with $x_i \in \{0, 1\}$ for all $i$, at least one $x_i=1$,
and $a^T x = 0$; in other words, if and only if some nonempty set $\{a_i \mid x_i = 1\}$ sums to zero.
Since the subset sum problem can be reduced to
an instance of~\eqref{eq:main_formulation}, we conclude that in general
our problem is at least as hard as difficult problems
like the subset sum problem.
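To make the reduction concrete, the sketch below (in Python) evaluates the
objective of the problem above at candidate Boolean points; the objective is
zero exactly when the selected $a_i$ sum to zero.
\begin{verbatim}
import numpy as np

def reduction_objective(x, a):
    # objective of the subset-sum reduction, evaluated at a candidate x
    n = len(a)
    clipped = np.minimum(x**2, 0.25) + np.minimum((x - 1.0)**2, 0.25)
    return (a @ x)**2 - n / 4.0 + clipped.sum()

a = np.array([3.0, -1.0, -2.0, 5.0])
print(reduction_objective(np.array([1.0, 1.0, 1.0, 0.0]), a))  # 3-1-2=0 -> 0.0
print(reduction_objective(np.array([1.0, 0.0, 0.0, 1.0]), a))  # 3+5=8 -> 64.0
\end{verbatim}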
\paragraph{Global solution.}
There is a simple (exhaustive) method to solve~\eqref{eq:main_formulation} globally:
for each subset $\Omega$ of $\{1,\ldots,m\}$, we solve the convex problem
\begin{equation}\label{eq:convex-subset}
\begin{array}{ll}
\mbox{minimize} & f_0(x) + \sum_{i \in \Omega} f_i(x) + \sum_{i \not \in \Omega} \alpha _i \\
\mbox{subject to} & f_i(x) \leq \alpha_i, \quad i \in \Omega,
\end{array}
\end{equation}
with variable $x \in {\mbox{\bf R}}^n$.
The solution to~(\ref{eq:convex-subset}) with the lowest optimal
value is the solution to~\eqref{eq:main_formulation}.
This general method is not practical unless $m$ is quite small,
since it requires the solution of $2^m$ convex optimization problems.
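For small $m$, the exhaustive method is easy to implement; a minimal
\verb|cvxpy| sketch, using clipped least-squares terms and random data
purely for illustration, is given below.
\begin{verbatim}
import itertools
import cvxpy as cp
import numpy as np

# Illustrative sketch: f_0(x) = ||x||_2^2, f_i(x) = (a_i^T x - b_i)^2.
np.random.seed(0)
m, n = 4, 2                      # small m, so 2^m convex problems is cheap
A, b = np.random.randn(m, n), np.random.randn(m)
alpha = 0.5 * np.ones(m)

best_val, best_x = np.inf, None
for size in range(m + 1):
    for Omega in itertools.combinations(range(m), size):
        x = cp.Variable(n)
        obj = cp.sum_squares(x)
        obj += sum(cp.square(A[i] @ x - b[i]) for i in Omega)
        obj += sum(alpha[i] for i in range(m) if i not in Omega)
        cons = [cp.square(A[i] @ x - b[i]) <= alpha[i] for i in Omega]
        val = cp.Problem(cp.Minimize(obj), cons).solve()
        if val < best_val:
            best_val, best_x = val, x.value
print(best_val, best_x)
\end{verbatim}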
In some specific instances of problem~\eqref{eq:main_formulation},
we can cut down the search space if we know that a specific
choice of $\Omega \subseteq \{1, \dots, m\}$ implies
\[
\{x \mid f_i(x) \leq \alpha_i,\, i \in \Omega\} = \emptyset,
\]
which means that the optimal value of~(\ref{eq:convex-subset}) is
$+\infty$.
In this case, we do not have to
solve problem~\eqref{eq:convex-subset} for this choice of $\Omega$, as we know it will be infeasible.
One simple example where this happens is when the $\alpha_i$-sublevel sets of $f_i$ are pairwise disjoint, which implies that
we only have to solve $m$ convex problems (as opposed to $2^m$) to find the global solution.
This idea is used in~\cite{minimizing2019liu} to guide their proposed search algorithm.
\paragraph{Related work.}
The general problem of minimizing a sum of clipped convex functions
was recently considered in~\cite{minimizing2019liu}.
In their paper, they also show that the problem is NP-hard via a reduction from 3-SAT and
give a global solution method in a few special cases when $n$ is small.
They also provide a heuristic method based on cyclic coordinate descent,
leveraging the fact that one-dimensional problems are easy to solve.
The idea of using clipped convex functions has appeared
in multiple application areas, the most prominent being statistics.
For example,
the sum of clipped absolute values (often referred to as the \emph{capped} $\ell_1$-norm)
has been used as a sparsity-inducing regularizer
\cite{zhang2009multi, zhang2010analysis, ong2013learning}.
In particular,~\cite{zhang2009multi, ong2013learning}
make use of the fact that problem~\eqref{eq:main_formulation}
can be written as a difference-of-convex (DC) problem and can be
approximately minimized via the convex-concave procedure \cite{lipp2016variations}
(see Appendix~\ref{sec:convex_concave}).
The clipped square function (also known as the \emph{skipped-mean} loss)
was also used in~\cite{torr1998robust} to estimate
view relations, and in~\cite{portilla2015efficient} to
perform robust image restoration.
Similar approaches have been taken for clipped loss functions,
where they have been used for robust feature selection~\cite{lan2016robust},
regression~\cite{yang2010relaxed,she2011outlier},
classification~\cite{suzumura2014outlier,safari2014insensitive,xu2016robust},
and robust principal component analysis~\cite{sun2013robust}.
\paragraph{Summary.}
We begin by presenting some applications of minimizing
a sum of clipped convex functions in~\S\ref{sec:applications}
to empirical risk minimization and control.
We then provide some simple heuristics for approximately
solving~\eqref{eq:main_formulation} in~\S\ref{sec:methods},
which we have found to work well in practice.
In~\S\ref{sec:perspective_formulation}, we describe a method for
converting~\eqref{eq:main_formulation} into a mixed-integer convex program,
which is amenable to solvers for mixed-integer convex programs.
Finally, we describe an open-source Python implementation of the ideas
described in this paper
in~\S\ref{sec:implementation} and apply our implementation to a
few illustrative examples in~\S\ref{sec:examples}.
\section{Applications}
\label{sec:applications}
In this section we describe some possible
applications of minimizing a sum of clipped convex functions.
\subsection{Clipped empirical risk minimization}
\label{sec:clipped_erm}
Suppose we have data
\[
x_1,\ldots,x_N \in {\mbox{\bf R}}^n, \quad y_1,\ldots,y_N \in \mathcal Y.
\]
Here $x_i$ is the $i$th feature vector, $y_i$ is its corresponding output (or label),
and $\mathcal Y$ is the output space.
We find parameters $\theta\in{\mbox{\bf R}}^n$ of a linear model given the data
by solving the \emph{empirical risk minimization} (ERM) problem
\begin{equation}
\begin{array}{ll}
\label{eq:erm}
\mbox{minimize} & \frac{1}{N}\sum_{i=1}^N l(x_i^T\theta,y_i) + r(\theta),
\end{array}
\end{equation}
with variable $\theta$, where $l:{\mbox{\bf R}} \times \mathcal Y \to {\mbox{\bf R}}$ is the loss function,
and $r:{\mbox{\bf R}}^n \to {\mbox{\bf R}}$ is the regularization function.
Here the objective is composed of two parts:
the loss function, which measures the accuracy of the predictions,
and the regularization function, which measures the complexity of $\theta$.
We assume that $l$ is convex in its first argument and that $r$ is convex,
so the problem~(\ref{eq:erm}) is a convex optimization problem.
For a given $x\in{\mbox{\bf R}}^n$, our prediction of $y$ is
\[
\hat y = \underset{y \in \mathcal Y}{\mathop{\rm argmin}} \; l(x^T\theta^\star, y),
\]
where $\theta^\star$ is optimal for~(\ref{eq:erm}).
For example, in linear regression, $\mathcal Y = {\mbox{\bf R}}$, $l(z, w)=(z - w)^2$,
and $\hat y = x^T \theta^\star$;
in logistic regression, $\mathcal Y = \{-1,1\}$, $l(z, w)=\log(1+e^{-wz})$,
and $\hat y = \mathbf{sign}(x^T\theta^\star)$,
where $\mathbf{sign}(z)$ is equal
to $1$ if $z \geq 0$ and $-1$ otherwise.
While ERM often works well in practice,
it can perform poorly when there are outliers in the data.
One way of fixing this is to clip the loss for each data point
to a value $\alpha\in{\mbox{\bf R}}$, leading to the \emph{clipped ERM} problem,
\begin{equation}
\label{eq:clipped-erm}
\begin{array}{ll}
\mbox{minimize} & \frac{1}{N}\sum_{i=1}^N \min\{l(x_i^T\theta,y_i),\alpha\} + r(\theta).
\end{array}
\end{equation}
After solving (or approximately solving) the clipped problem,
we can label data points $(x_i,y_i)$ where $l(x_i^T\theta^\star,y_i) \geq \alpha$
as outliers.
The clipped ERM problem is an instance
of what is referred to in statistics as a \emph{redescending M-estimator}
\cite[\S 4.8]{huber2009robust},
since the derivative of the clipped loss goes to $0$ as the magnitude of its input goes to infinity.
In this terminology, the clip value $\alpha$ is referred to
as the \emph{minimum rejection point}.
In~\S\ref{sec:erm-example}, we show an example where the normal empirical risk minimization problem fails, while its clipped variant has
good performance.
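As a small concrete example, the sketch below (assuming a quadratic loss
and a sum-of-squares regularizer) evaluates the clipped ERM objective and
flags outliers for a candidate $\theta$.
\begin{verbatim}
import numpy as np

# Minimal sketch, assuming l(z, y) = (z - y)^2 and r(theta) = reg*||theta||^2.
def clipped_erm_objective(theta, X, y, alpha, reg=1e-2):
    losses = (X @ theta - y) ** 2            # l(x_i^T theta, y_i)
    clipped = np.minimum(losses, alpha)      # each term clipped at alpha
    outliers = losses >= alpha               # points whose loss is clipped
    obj = clipped.mean() + reg * np.sum(theta ** 2)
    return obj, outliers
\end{verbatim}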
\subsection{Clipped control}
Suppose we have a linear system with dynamics given by
\[
x_{t+1} = Ax_t + Bu_t, \quad t=0,\ldots,T-1,
\]
where $x_t \in {\mbox{\bf R}}^n$ is the state of the system
and $u_t\in{\mbox{\bf R}}^p$ denotes the input to the system, at time period $t$.
The dynamics matrix $A \in {\mbox{\bf R}}^{n \times n}$ and
the input matrix $B \in {\mbox{\bf R}}^{n \times p}$ are given.
We are given stage cost functions $g_t:{\mbox{\bf R}}^n \times {\mbox{\bf R}}^p \to {\mbox{\bf R}}$,
and an initial state $x^\mathrm{init}\in{\mbox{\bf R}}^n$.
The standard optimal control problem is
\[
\begin{array}{ll}
\mbox{minimize} & \sum_{t=0}^{T} g_t(x_t, u_t)\\
\mbox{subject to} & x_{t+1} = A x_t + B u_t, \quad t=0,\ldots,T-1, \\
& x_t\in\mathcal X_t, \quad u_t\in\mathcal U_t, \quad t=0,\ldots,T,\\
& x_0 = x^\mathrm{init},
\end{array}
\]
where, at time $t$, $\mathcal X_t\subseteq{\mbox{\bf R}}^n$ is the convex set of allowable states
and $\mathcal U_t\subseteq{\mbox{\bf R}}^p$ is the convex set of allowable inputs.
The variables in this problem are the states and inputs,
$x_t$ and $u_t$.
If the stage cost functions $g_t$ are convex, the optimal control
problem is a convex optimization problem.
We define a \emph{clipped optimal control} problem as
an optimal control problem in which
the stage costs can be expressed as sums of clipped convex functions,
{\it i.e.},
\[
g_t(x,u) = g_t^0(x,u) + \sum_{i=1}^K \min\{g_t^{i}(x,u),\alpha_t^i\},
\]
where, for all $t$ and $i=1,\ldots,K$, the functions $g_t^{i}:{\mbox{\bf R}}^n\times{\mbox{\bf R}}^p\to{\mbox{\bf R}}$ are convex
and $\alpha_t^i\in{\mbox{\bf R}}$.
This gives another instance of our general
problem~(\ref{eq:main_formulation}).
A simple but practical example of a clipped control problem is
described in~\S\ref{sec:control-example}. The problem is to
design a lane change trajectory for a vehicle; the stage
cost is small when the vehicle is centered in either lane,
which we express as a sum of two clipped convex functions.
\section{Heuristic methods}
\label{sec:methods}
There are many methods for
approximately solving~\eqref{eq:main_formulation}.
In this section we describe a few heuristic methods
that we have observed to work well in practice.
\paragraph{Bi-convex formulation.}
Throughout this section, we will make use of a simple reformulation
of~\eqref{eq:main_formulation} as the bi-convex problem
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & L(x,\lambda)
= f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + (1-\lambda_i)\alpha_i\\
\mbox{subject to} & 0 \le \lambda \le \mathbf 1,
\label{eq:nonlinear}
\end{array}
\end{equation}
with variables $\lambda \in {\mbox{\bf R}}^m$ and $x \in {\mbox{\bf R}}^n$.
(We note that this reformulation was also pointed out in~\cite[\S3]{yang2010relaxed}.)
The equivalence follows immediately
from the fact that
\[
\min\{a, b\} = \min_{0 \le \lambda \le 1} \left(\lambda a + (1- \lambda) b\right).
\]
\paragraph{Nonlinear programming.}
When $f_i$ are all smooth functions
and $\mathop{\bf dom} f_0$ is representable as the sublevel set of a smooth function,
it is possible to use general nonlinear solvers to (approximately)
solve~\eqref{eq:nonlinear}.
\paragraph{Alternating minimization.}
Another possibility is to perform alternating minimization on~\eqref{eq:nonlinear},
since each respective minimization is a convex optimization problem.
In alternating minimization, at iteration $k$,
we solve~\eqref{eq:nonlinear} while fixing $\lambda=\lambda^{k-1}$,
resulting in $x^k$.
We then solve~\eqref{eq:nonlinear} while fixing $x=x^k$,
resulting in $\lambda^k$.
It can be shown that
\begin{equation}
\label{eq:lam_update}
(\lambda^k)_i = \begin{cases}
1 & f_i(x^k) \leq \alpha_i \\
0 & \text{otherwise},
\end{cases}
\end{equation}
is a solution for minimization over $\lambda$ with fixed $x = x^k$.
\paragraph{Inexact alternating minimization.}
Although alternating minimization often works well, we have found that
inexact minimization over $\lambda$ works better in practice.
Instead of fully minimizing over $\lambda$, we instead
compute the gradient of the objective with respect to $\lambda$,
\[
g_i = (\nabla_\lambda L(x^k, \lambda))_i = f_i(x^k) - \alpha_i.
\]
We then perform a signed projected gradient
step on $\lambda$ with a fixed step size $\beta > 0$ (we have found $\beta=0.1$ works well in practice, though a range of values all appear to work equally well). This results in the update
\[
\lambda^{k} = \Pi_{[0,1]^m}(\lambda^{k-1} - \beta \mathbf{sign}(g)),
\]
where $\mathbf{sign}$ is applied elementwise to $g$,
and $\Pi_{[0,1]^m}$ denotes the projection onto the unit box, given by
\[
(\Pi_{[0,1]^m}(z))_i = \begin{cases}
1 & z_i \geq 1, \\
z_i & 0 < z_i < 1, \\
0 & \text{otherwise}.
\end{cases}
\]
The final algorithm is described below.
\begin{algdesc}
\label{alg:twostep}
\emph{Inexact alternating minimization.}
\begin{tabbing}
{\bf given} initial $\lambda^0=(1/2)\mathbf 1$, step size $\beta=0.1$, and tolerance $\epsilon > 0$.\\
{\bf for} $k=1,\ldots,n_\mathrm{iter}$\\
\qquad \=\ 1.\ \emph{Minimize over $x$.} Set $x^k$ to the solution of the problem\\
$
\hspace*{3.5cm} \begin{array}{ll}
\mbox{minimize} & f_0(x) + \sum_{i=1}^m \lambda^{k-1}_i f_i(x) + (1 - \lambda^{k-1}_i)\alpha_i.
\end{array}
$
\\
\qquad \=\ 2.\ \emph{Compute the gradient.} Set $g_i=f_i(x^{k}) - \alpha_i$. \\
\qquad \=\ 3.\ \emph{Update $\lambda$.} Set
$\lambda^k = \Pi_{[0,1]^m}(\lambda^{k-1} - \beta \mathbf{sign}(g))$. \\
\qquad \=\ 4.\ \emph{Check stopping criterion.} Terminate if $\|\lambda^k - \lambda^{k-1}\|_1 \le \epsilon$.\\
{\bf end for}
\end{tabbing}
\end{algdesc}
Algorithm~\ref{alg:twostep} is a descent algorithm in the sense that the
objective function of~\eqref{eq:nonlinear}
decreases after every iteration.
It is also guaranteed to terminate in a finite amount of time,
since there is a finite number of possible values of $\lambda$.
We also note that alternating minimization can be thought of
as a special case of algorithm~\ref{alg:twostep} where $\beta\geq1$.
In practice, we have found that algorithm~\ref{alg:twostep}
often finds the global optimum in simple problems and appears
to work well on more complicated cases.
We use algorithm~\ref{alg:twostep} in our
generic \verb|cvxpy| implementation (see \S\ref{sec:implementation}).
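For concreteness, a compact \verb|cvxpy| sketch of algorithm~\ref{alg:twostep}
for clipped least-squares terms (the choice of $f_i$, $f_0$, and the random
data are purely illustrative) is given below.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Illustrative data: f_0(x) = ||x||_2^2, f_i(x) = (a_i^T x - b_i)^2.
np.random.seed(0)
m, n = 20, 5
A, b = np.random.randn(m, n), np.random.randn(m)
alpha = 0.5 * np.ones(m)
beta, eps = 0.1, 1e-6

lam = 0.5 * np.ones(m)          # lambda^0 = (1/2) 1
x = cp.Variable(n)
for k in range(100):
    # 1. minimize L(x, lambda) over x
    obj = cp.sum_squares(x)
    obj += sum(lam[i] * cp.square(A[i] @ x - b[i]) + (1 - lam[i]) * alpha[i]
               for i in range(m))
    cp.Problem(cp.Minimize(obj)).solve()
    # 2. gradient of L with respect to lambda
    g = (A @ x.value - b) ** 2 - alpha
    # 3. signed projected gradient step on lambda
    lam_new = np.clip(lam - beta * np.sign(g), 0.0, 1.0)
    # 4. stopping criterion
    if np.abs(lam_new - lam).sum() <= eps:
        lam = lam_new
        break
    lam = lam_new
print(x.value, lam)
\end{verbatim}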
\section{Perspective formulation}
\label{sec:perspective_formulation}
In this section we describe the perspective formulation of~\eqref{eq:main_formulation}.
The perspective formulation is a mixed-integer convex program (MICP), for which specialized solvers with reasonable practical performance exist.
The perspective formulation can also be used to compute a lower bound
on the original objective by relaxing the integral constraints, as in~\cite{moehle2015perspective},
as well to obtain good initializations for any of the procedures described in~\S\ref{sec:methods}.
\paragraph{Perspective.}
Following~\cite[\S 8]{rockafellar1970convex}, we define the perspective (or recession) of the
closed convex function $f$ with $0 \in \mathop{\bf dom} f$
as\footnote{If $0 \not \in \mathop{\bf dom} f$, replace $\gamma f(x / \gamma)$ with $\gamma f(y + x / \gamma)$ for any $y \in \mathop{\bf dom} f$. See~\cite[Thm.\ 8.3]{rockafellar1970convex} for more details.}
\begin{equation}
\label{eq:persp}
f^\mathrm{p}(x, t) = \begin{cases}
t f(x / t) & t > 0,\\
\lim_{\gamma \downarrow 0}\,\gamma f(x/\gamma) & t = 0,\\
+\infty & \text{otherwise},
\end{cases}
\end{equation}
for $(x, t) \in {\mbox{\bf R}}^n \times {\mbox{\bf R}}_+$.
We will use the fact that the resulting function $f^\mathrm{p}$ is convex~\cite[\S3.2.6]{boyd2004convex}.
\paragraph{Superlinearity assumption.}
If $f$ is superlinear, {\it i.e.}, if for all $x \in {\mbox{\bf R}}^n \setminus \{0\}$, we have
\begin{equation}\label{eq:superlinear-assumption}
\lim_{t \to \infty} \frac{f(tx)}{t} = +\infty,
\end{equation}
then
\begin{equation}
\label{eq:superlinear}
f^\mathrm{p}(x, t) = \begin{cases}
tf(x/t) & t > 0 \\
0 & t = 0, \; x = 0,\\
+\infty & \text{otherwise},
\end{cases}
\end{equation}
since the limit in~\eqref{eq:persp} is equal to the limit in~\eqref{eq:superlinear-assumption} unless $x=0$.
There are many convex functions that satisfy this superlinearity property.
Some examples are the sum of squares function and the indicator function of a compact convex set.
Since we will make heavy use of property~\eqref{eq:superlinear} in this section,
we will assume that $f_0$ is superlinear for the remainder of this section.
If $f_0$ is not superlinear, then it can be made superlinear
by adding, {\it e.g.}, a small positive multiple of the sum of squares function.
\paragraph{Conic representation of the perspective.}
We note that representing the epigraph of the perspective of a function
is often simple if the function has a conic representation~\cite{grant2008graph}.
More specifically, if $f$ has a conic representation
\[
f(x) \le v \iff Ax + bv + c \in \mathcal K,
\]
for some closed convex cone $\mathcal K$, then the perspective of $f$ has a conic representation given by
\[
f^\mathrm{p}(x, t) \le v \iff Ax + bv + tc \in \mathcal K.
\]
This fact allows us to use a conic representation of the perspective
and avoid issues of non-differentiability and division-by-zero that
we might encounter with direct numerical implementations of the perspective~\cite[\S 2]{moehle2015perspective}.
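As an illustration, the sum of squares $f(x) = x^Tx$ has the second-order cone
representation
\[
f(x) \le v \iff \left\|(2x,\, 1 - v)\right\|_2 \le 1 + v,
\]
and scaling the constant term by $t$ gives a conic representation of its perspective,
\[
f^\mathrm{p}(x, t) \le v \iff \left\|(2x,\, t - v)\right\|_2 \le t + v,
\]
which is equivalent to $x^Tx/t \le v$ for $t > 0$, and forces $x = 0$, $v \ge 0$ when $t = 0$.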
\paragraph{Perspective formulation.}
We define the \emph{perspective formulation} of~\eqref{eq:main_formulation} as the following MICP:
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \sum_{i=1}^m f^\mathrm{p}_i(z_i, t_i) + (1 - t_i) \alpha_i +
\frac{1}{m}\left(f^\mathrm{p}_0(z_i, t_i) + f^\mathrm{p}_0(x - z_i, 1-t_i)\right) \\
\mbox{subject to} & t \in \{0, 1\}^m,
\end{array}
\label{eq:micp}
\end{equation}
with variables $x, z_i\in{\mbox{\bf R}}^n$ for $i=1,\ldots,m$ and $t\in{\mbox{\bf R}}^m$.
Any MICP solver that can handle the functions $f^\mathrm{p}_i$ for $i=0,\ldots,m$
can be used to solve~\eqref{eq:micp}.
\paragraph{Proof of equivalence.}
To show that~\eqref{eq:micp} is equivalent to the original problem~\eqref{eq:main_formulation},
first take $(x, t, z_i)$ that are feasible for~\eqref{eq:micp}.
Since $t$ is Boolean, for each $i$ we have $t_i=0$ or $t_i=1$.
Since $f^\mathrm{p}_0(z_i,t_i)$ must be finite (as this point is feasible),
then $t_i=0$ implies that $z_i=0$ (due to~\eqref{eq:superlinear}).
Similarly, when $t_i = 1$ we must have $z_i = x$, since $f^\mathrm{p}_0(x - z_i, 1-t_i) = f^\mathrm{p}_0(x - z_i, 0)$ must also be finite.
Therefore the $i$th term in the sum becomes
\[
t_if_i(x) + (1-t_i)\alpha_i + \frac{1}{m}f_0(x).
\]
Summing over the index $i$ yields that problem~\eqref{eq:micp} is equivalent to
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & f_0(x) + \sum_{i=1}^m t_i f_i(x) + (1 - t_i)\alpha_i \\
\mbox{subject to} & t \in \{0, 1\}^m.
\end{array}
\label{eq:aux-mip}
\end{equation}
Partially minimizing~\eqref{eq:aux-mip} over $t$, using
$\min_{t_i \in \{0,1\}}\left(t_i f_i(x) + (1-t_i)\alpha_i\right) = \min\{f_i(x), \alpha_i\}$,
we find that $x$ is feasible for~\eqref{eq:main_formulation} with objective value no larger than that of~\eqref{eq:micp} at $(x, t, z_i)$.
Now take $x$ feasible for~\eqref{eq:main_formulation}.
Let
\[
t_i = \begin{cases}
1 & f_i(x) \leq \alpha_i \\
0 & \text{otherwise},
\end{cases}
\quad
i=1,\ldots,m,
\]
and $z_i = t_ix$.
Then $(x, t, z_i)$ is feasible for~\eqref{eq:micp} and has the same objective value, and the problems are equivalent.
\paragraph{Lower bound via relaxation.}
Since the perspective formulation is equivalent to the original problem,
relaxing the Boolean constraint in~\eqref{eq:micp}
and solving the resulting convex optimization problem
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \sum_{i=1}^m \left[ f^\mathrm{p}_i(z_i, t_i) + (1 - t_i) \alpha_i +
\frac{1}{m}\left(f^\mathrm{p}_0(z_i, t_i) + f^\mathrm{p}_0(x - z_i, 1-t_i)\right) \right] \\
\mbox{subject to} & 0 \leq t \leq \mathbf 1,
\end{array}
\label{eq:relaxed}
\end{equation}
with variables $z_i$, $t$, and $x$,
yields a lower bound on the objective value of~\eqref{eq:main_formulation}.
That is, given any approximate solution of~\eqref{eq:main_formulation}
with objective value $p$, the optimal value $q^\star$ of~\eqref{eq:relaxed}
yields a certificate guaranteeing that the approximate solution
is suboptimal by at most $p-q^\star$.
Additionally, a solution of the relaxed problem can be used as an
initial point for any of the heuristic methods described in~\S\ref{sec:methods}.
\paragraph{Efficiently solving the relaxed problem.}
We note that~\eqref{eq:relaxed} has roughly $m+1$ times as many variables as the original problem, so it is worth considering faster solution methods.
To do so, we can convert the problem to
\emph{consensus form}~\cite[\S7.1]{boyd2011distributed};
{\it i.e.}, we introduce additional variables $y_i\in{\mbox{\bf R}}^n$ for $i=1,\ldots,m$,
and constrain $y_i=x$, resulting in the equivalent problem
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \sum_{i=1}^m \left[ f^\mathrm{p}_i(z_i, t_i) + (1 - t_i) \alpha_i +
\frac{1}{m}\left(f^\mathrm{p}_0(z_i, t_i) + f^\mathrm{p}_0(y_i - z_i, 1-t_i)\right) \right] \\
\mbox{subject to} & y_i=x,\quad i=1,\ldots,m,\\
& 0 \leq t \leq \mathbf 1.
\end{array}
\label{eq:persp-relaxation}
\end{equation}
Since the objective is separable in $(y_i,z_i,t_i)$ over $i$,
there exist many efficient distributed algorithms for solving this problem, {\it e.g.},
the alternating direction method of multipliers (ADMM)
\cite{boyd2011distributed, M2AN_1975__9_2_41_0, gabay1976dual}.
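As a sketch, following the global variable consensus form of~\cite[\S7.1]{boyd2011distributed} (one reasonable choice, not necessarily the most efficient in practice), let $F_i(y_i, z_i, t_i)$ denote the $i$th term of the objective of~\eqref{eq:persp-relaxation} plus the indicator of $0 \le t_i \le 1$, let $u_i$ be scaled dual variables, and let $\rho > 0$ be the penalty parameter. The iterations are
\[
\begin{aligned}
(y_i, z_i, t_i)^{k+1} &= \mathop{\rm argmin}_{y_i, z_i, t_i}\; F_i(y_i, z_i, t_i) + (\rho/2)\|y_i - x^k + u_i^k\|_2^2, \quad i = 1, \ldots, m,\\
x^{k+1} &= \frac{1}{m}\sum_{i=1}^m \left(y_i^{k+1} + u_i^k\right),\\
u_i^{k+1} &= u_i^k + y_i^{k+1} - x^{k+1}, \quad i = 1, \ldots, m,
\end{aligned}
\]
where the first and third steps can be carried out in parallel across $i$.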
\section{Implementation}
\label{sec:implementation}
Our Python package \verb|sccf| approximately solves generic problems
of the form~\eqref{eq:main_formulation} provided all $f_i$ can be represented as valid \verb|cvxpy| expressions and constraints.
It is available at:
\begin{center}
\texttt{https://www.github.com/cvxgrp/sccf}.
\end{center}
We provide a method \verb|sccf.minimum|, which
can be applied to a \verb|cvxpy| Expression and a scalar to create a
\verb|sccf.MinExpression|.
The user then forms an objective as a sum of \verb|sccf.MinExpression|s,
passes this objective and (possibly) constraints to a \verb|sccf.Problem|
object, and then calls the \verb|solve| method, which
implements algorithm~\ref{alg:twostep}.
We take advantage of the fact that the only parameter changing
between problems is $\lambda$ by caching the canonicalization procedure \cite{agrawal2019differentiable}.
Here is an example of using \verb|sccf| to solve a clipped least squares problem:
\begin{verbatim}
import cvxpy as cp
import sccf

# m, n and get_data are placeholders for the user's problem data:
# A is an m x n matrix, b an m-vector
A, b = get_data(m, n)

x = cp.Variable(n)
objective = 0.0
for i in range(m):
    # clipped squared residual for the i-th data point
    objective += sccf.minimum(cp.square(A[i] @ x - b[i]), 1.0)
objective += 0.01 * cp.sum_squares(x)  # quadratic regularization

prob = sccf.Problem(objective)
prob.solve()
\end{verbatim}
\section{Examples}
\label{sec:examples}
All experiments were conducted on a single core
of an Intel i7-8700K CPU clocked at 3.7 GHz.
\subsection{Clipped regression}
\label{sec:erm-example}
In this example we compare clipped regression (\S\ref{sec:clipped_erm})
with standard linear regression and
Huber regression~\cite{huber1973robust}
(a well known technique for robust regression)
on a one-dimensional dataset with outliers.
We generated data by sampling 20 data points $(x_i,y_i)$ according to
\[
x_i \sim \mathcal N(0,1), \quad y_i = x_i + (0.1)z_i,
\quad z_i\sim\mathcal N(0,1),
\quad i=1,\ldots,20.
\]
We introduced outliers in our data by flipping the sign of $y_i$
for 5 random data points.
The problems all have the form
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & L(\theta) = \sum_{i=1}^{20}\phi(x_i\theta-y_i) + (0.2)\theta^2,
\end{array}
\label{eq:clipped_regression}
\end{equation}
where $\phi:{\mbox{\bf R}}\to{\mbox{\bf R}}$ is a penalty function.
In clipped regression, $\phi(z) = \min\{z^2, 0.5\}$.
In linear regression, $\phi(z) = z^2$.
In Huber regression,
\[
\phi(z) = \begin{cases}
z^2 & |z| \leq 0.5 \\
0.5(2|z| - 0.5) & \text{otherwise}.
\end{cases}
\]
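As a rough sketch of how the three fits might be computed with \verb|cvxpy| and \verb|sccf| (the arrays \verb|x| and \verb|y| holding the data are assumed given, and \verb|cp.huber(r, 0.5)| is \verb|cvxpy|'s Huber penalty, which coincides with the $\phi$ above for threshold $0.5$):
\begin{verbatim}
import cvxpy as cp
import sccf

# x, y: length-20 arrays holding the one-dimensional data (assumed given)
theta = cp.Variable()
reg = 0.2 * cp.square(theta)

# clipped regression, solved approximately with sccf
clip_obj = 0.0
for i in range(20):
    clip_obj += sccf.minimum(cp.square(x[i] * theta - y[i]), 0.5)
sccf.Problem(clip_obj + reg).solve()
theta_clip = theta.value

# standard linear regression
ls_obj = sum(cp.square(x[i] * theta - y[i]) for i in range(20))
cp.Problem(cp.Minimize(ls_obj + reg)).solve()
theta_ls = theta.value

# Huber regression with threshold 0.5
hub_obj = sum(cp.huber(x[i] * theta - y[i], 0.5) for i in range(20))
cp.Problem(cp.Minimize(hub_obj + reg)).solve()
theta_huber = theta.value
\end{verbatim}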
\begin{figure}
\centering
\includegraphics[width=.6\linewidth]{figs/clipped_regression.pdf}
\caption{Clipped regression, linear regression, and Huber regression on a one-dimensional dataset with outliers. The outliers affect the linear regression and Huber regression models, while the clipped regression model appears to be minimally affected.}
\label{fig:clipped-regression}
\end{figure}
Let $\theta^\mathrm{clip}$ be the clipped regression model;
we deem points where $(x_i\theta^\mathrm{clip}-y_i)^2\geq 0.5$ as outliers
and the remaining points as inliers.
In figure~\ref{fig:clipped-regression} we visualize
the data points and the resulting models along with the outliers/inliers
identified by the clipped regression model.
In this figure, the clipped regression model clearly outperforms the linear and Huber
regression models since it is able to fully ignore the outliers.
Algorithm~\ref{alg:twostep} terminated in 0.13 seconds and took 8 iterations on this instance.
\paragraph{Lower bound.}
The relaxed version of the perspective formulation~\eqref{eq:relaxed}
can be used to efficiently find a lower bound on the objective value for the clipped version
of~\eqref{eq:clipped_regression}. The objective value of~\eqref{eq:clipped_regression} for clipped regression was 1.147,
while the lower bound we calculated was 0.533,
meaning our approximate solution is suboptimal by at most 0.614.
In figure~\ref{fig:perspective} we plot the clipped objective~\eqref{eq:clipped_regression}
for various values of $\theta$;
note that the function is highly nonconvex and that $\theta^\mathrm{clip}$
is the (global) solution.
We also plot the objective of the perspective relaxation as a function of $\theta$,
found by partially minimizing~\eqref{eq:relaxed} over $z_i$ and $t$; note that
the function is convex and a surprisingly good approximation of the true convex envelope.
We also note that the minimum of the perspective relaxation
and the true minimum are quite close, leading us to believe that
the solution of the perspective relaxation could be a good
initialization for heuristic methods.
\begin{figure}
\centering
\includegraphics[width=.7\linewidth]{figs/perspective.pdf}
\caption{The clipped regression loss and its perspective relaxation.}
\label{fig:perspective}
\end{figure}
\subsection{Clipped logistic regression}
In this example we apply clipped logistic regression (\S\ref{sec:clipped_erm})
to a dataset with outliers.
We generated data by sampling 1000 data points $(x_i,y_i)$
from a mixture of two Gaussian distributions in ${\mbox{\bf R}}^5$.
We randomly partitioned the data into 100 training data points
and 900 test data points and introduced outliers by flipping the sign of $y_i$
for 20 random training data points.
We (approximately) solved the \emph{clipped logistic regression} problem
\[
\begin{array}{ll}
\mbox{minimize} & \frac{1}{100}
\sum_{i=1}^{100}\min\{\log(1 + e^{-y_i (x_i^T\theta + b)}),\alpha\} + (0.1)\|\theta\|_2^2,
\end{array}
\]
with variables $\theta$ and $b$, for various values of $\alpha\in[10^{-1},10^1]$.
We also solved the problem for $\alpha=+\infty$, {\it i.e.}, the \emph{standard logistic
regression problem}.
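A sketch of the \verb|sccf| formulation of this problem is given below, with the training data assumed to be stored in a $100 \times 5$ array \verb|X| and a length-$100$ vector \verb|y| of $\pm 1$ labels; \verb|cp.logistic(u)| is \verb|cvxpy|'s $\log(1 + e^{u})$, and the $1/100$ factor is folded inside the minimum using $\frac{1}{100}\min\{a, \alpha\} = \min\{a/100, \alpha/100\}$.
\begin{verbatim}
import cvxpy as cp
import sccf

# X: 100 x 5 array of training features, y: length-100 vector of +/-1 labels
theta = cp.Variable(5)
b = cp.Variable()
alpha = 1.0  # clip value; swept over [1e-1, 1e1] in the experiment

objective = 0.0
for i in range(100):
    loss_i = cp.logistic(-y[i] * (X[i] @ theta + b))  # log(1 + exp(-y_i (x_i^T theta + b)))
    objective += sccf.minimum(loss_i / 100, alpha / 100)
objective += 0.1 * cp.sum_squares(theta)

prob = sccf.Problem(objective)
prob.solve()
\end{verbatim}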
Over the $\alpha$ values we tried, on average,
algorithm~\ref{alg:twostep} took 6.37 seconds
and terminated in 9.64 iterations.
\begin{figure}
\centering
\includegraphics[width=.7\linewidth]{figs/logistic_regression.pdf}
\vspace{-1em}
\caption{Test accuracy of clipped logistic regression (solid),
test accuracy of standard logistic regression (gray),
and the fraction of detected outliers (dot-dashed) for varying clip values $\alpha$.
Note that the fraction of detected outliers goes down as $\alpha$ goes up.
Between roughly $\alpha=10^{-.5}$ and $\alpha=10^{0.05}$, the test
accuracy of clipped
logistic regression is higher than standard logistic regression.
Clipped logistic regression converges to standard logistic regression as
$\alpha \to \infty$.
}
\label{fig:logreg}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.6\linewidth]{figs/logistic_regression_lambda.pdf}
\vspace{-1em}
\caption{A plot of $\lambda$ throughout the course of algorithm~\ref{alg:twostep}
for the clipped logistic regression example. Note that at some of the iterations
({\it e.g.}, $k=1$, $2$, or $3$),
the gradient of the loss with respect to a certain $\lambda_i$ changes sign,
causing $\lambda_i$ to be updated in the opposite direction.}
\label{fig:logreg_lambda}
\end{figure}
Figure~\ref{fig:logreg} displays the test accuracy and the fraction of detected outliers
for the range of $\alpha$ values we used.
Figure~\ref{fig:logreg_lambda} shows the trajectory of the entries of $\lambda$
during each step of the execution of algorithm~\ref{alg:twostep} for the $\alpha$ with the highest test accuracy, while figure~\ref{fig:density} shows histograms of the log logistic loss over the data points for this same $\alpha$.
\begin{figure}
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[]{figs/logreg_density.pdf}
\end{subfigure}
%
\begin{subfigure}[b]{0.7\textwidth}
\centering
\includegraphics[]{figs/clipped_density.pdf}
\end{subfigure}
\vspace{-1.5em}
\caption{Left: histogram of log logistic loss for each
data point in standard
logistic regression; right: histogram of log logistic loss
for each data point in clipped logistic regression. Note that standard logistic
regression attempts to make the loss small for all data points,
while its clipped counterpart allows the loss to be high for some of the data points.}
\label{fig:density}
\end{figure}
\subsection{Lane changing}
\label{sec:control-example}
In this example, we consider a control problem
where a vehicle traveling down a road at a fixed speed
must avoid obstacles, stay in one of two lanes,
and provide a comfortable ride.
We let $x_t\in{\mbox{\bf R}}$ denote the lateral position of the vehicle
at time $t=0,\ldots,T$ ($T$ is the time horizon).
The obstacle avoidance constraints are given as vectors
$x^\mathrm{min}, x^\mathrm{max}\in{\mbox{\bf R}}^{T+1}$ that
give lower and upper bounds on $x_t$ at each time $t$.
We can split the objective into the sum of two functions described below.
\begin{itemize}
\item \emph{Lane cost.} Suppose the two lanes are centered at $x=-1$ and $x=1$.
The lane cost is given by
\[
g^\mathrm{lane}(x) = \sum_{t=0}^T \min\{(x_t-1)^2,1\} + \min\{(x_t+1)^2,1\}.
\]
The lane cost incentivizes the vehicle to be in the center of
one of the two lanes.
The lane cost is evidently a sum of clipped convex functions.
\item \emph{Comfort cost.}
The comfort cost is given by
\[
g^\mathrm{comfort}(x) = \rho_1 \|Dx\|_2^2 + \rho_2 \|D^2x\|_2^2 + \rho_3 \|D^3x\|_2^2,
\]
where $D$ is the difference operator and $\rho_1,\rho_2,\rho_3 > 0$ are weights
to be chosen.
The comfort cost is a weighted sum of the squared lateral velocity, acceleration, and jerk.
\end{itemize}
To find the optimal lateral trajectory we solve the problem
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & g^\mathrm{lane}(x) + g^\mathrm{comfort}(x) \\
\mbox{subject to} & x_0=x^\mathrm{start}, \quad x_T=x^\mathrm{end}, \\
& x^\mathrm{min} \leq x \leq x^\mathrm{max},
\end{array}
\label{eq:clipped-control-example}
\end{equation}
where $x^\mathrm{start},x^\mathrm{end}\in{\mbox{\bf R}}$ are given starting and ending
points of the trajectory.
\begin{figure}
\centering
\includegraphics[]{figs/lane_changing.pdf}
\caption{Trajectory of a vehicle looking to avoid obstacles
(represented by boxes) while optimizing for comfort and lane position.}
\label{fig:clipped-control-complex}
\end{figure}
\paragraph{Numerical example.}
We use $T=100$, $\rho_1 = 10$, $\rho_2 = 1$, $\rho_3 = 0.1$,
$x^\mathrm{start}=1$, and $x^\mathrm{end}=-1$.
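A sketch of one way to set up this instance with \verb|cvxpy| and \verb|sccf| is given below; the obstacle bound vectors \verb|x_min| and \verb|x_max| are assumed given, the difference operators are formed explicitly as matrices (one of several equivalent constructions), and we assume \verb|sccf.Problem| accepts constraints as described in \S\ref{sec:implementation}.
\begin{verbatim}
import cvxpy as cp
import numpy as np
import sccf

T = 100
rho1, rho2, rho3 = 10.0, 1.0, 0.1
x_start, x_end = 1.0, -1.0
# x_min, x_max: length-(T+1) arrays of obstacle bounds (assumed given)

# first-, second-, and third-order difference operators acting on x in R^(T+1)
D1 = np.diff(np.eye(T + 1), n=1, axis=0)
D2 = np.diff(np.eye(T + 1), n=2, axis=0)
D3 = np.diff(np.eye(T + 1), n=3, axis=0)

x = cp.Variable(T + 1)

objective = 0.0
for t in range(T + 1):
    # lane cost: clipped squared distance to each lane center
    objective += sccf.minimum(cp.square(x[t] - 1), 1.0)
    objective += sccf.minimum(cp.square(x[t] + 1), 1.0)
# comfort cost
objective += rho1 * cp.sum_squares(D1 @ x)
objective += rho2 * cp.sum_squares(D2 @ x)
objective += rho3 * cp.sum_squares(D3 @ x)

constraints = [x[0] == x_start, x[T] == x_end, x >= x_min, x <= x_max]
prob = sccf.Problem(objective, constraints)
prob.solve()
\end{verbatim}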
In figure~\ref{fig:clipped-control-complex}
we show the trajectory resulting from an approximate solution
to~\eqref{eq:clipped-control-example} with three obstacles.
For this example, algorithm~\ref{alg:twostep} terminated in 1.2 seconds
and took 4 iterations.
We are able to find a comfortable trajectory that avoids the obstacles
and spends as little time as possible between the lanes.
\paragraph{Lower bound.}
Using the relaxed version of the perspective formulation~\eqref{eq:relaxed}, we can compute a lower bound on the objective value of the clipped control problem~\eqref{eq:clipped-control-example}. We found a lower bound value of around 103.55, while the approximate solution we found had an objective value of 119.07, indicating that our approximate solution is no more
than 15\% suboptimal.
\section*{Acknowledgments}
S. Barratt is supported by the National Science Foundation Graduate Research Fellowship
under Grant No. DGE-1656518.